Media APIs

AnyInt publishes media-generation routes through DashScope and Transtreams Kling. These are not chat-completion endpoints: each has its own request body, and the Kling and video-generation routes are task-based.

Published routes

| Route | Purpose |
| --- | --- |
| POST /dashscope/v1/services/aigc/multimodal-generation/generation | Prompt-driven image generation |
| POST /dashscope/v1/services/aigc/video-generation/video-synthesis | Prompt, image, and audio to video |
| GET /dashscope/v1/tasks/{task_id} | Query video task status and result |
| POST /transtreams/kling/v1/images/generations | Create a Kling v2.1 image task |
| GET /transtreams/kling/v1/images/generations/{task_id} | Query a Kling v2.1 image task |
| POST /transtreams/kling/v1/images/omni-image | Create a Kling V3 Omni image task |
| GET /transtreams/kling/v1/images/omni-image/{task_id} | Query a Kling V3 Omni image task |
| POST /transtreams/kling/v1/videos/omni-video | Create a Kling V3 Omni video task |
| GET /transtreams/kling/v1/videos/omni-video/{task_id} | Query a Kling V3 Omni video task |

Authentication

Authorization: Bearer <ANYINT_API_KEY>
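
In Python, the header can be assembled once and reused across the routes above. A minimal sketch, assuming the key is exported as ANYINT_API_KEY as in the curl examples on this page; `auth_headers` is a hypothetical helper name, not part of any SDK:

```python
import os

def auth_headers() -> dict:
    """Build the headers for AnyInt media routes.

    Assumes the API key is exported as ANYINT_API_KEY, matching the
    curl examples in this page. Raises KeyError if the variable is
    unset, which is preferable to sending an empty bearer token.
    """
    api_key = os.environ["ANYINT_API_KEY"]
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```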

Transtreams Kling routes

Use kling-v2-1 for the image model displayed as Kling: Image v2.1. Use kling-v3-omni for Omni image and video tasks; the video model is displayed as Kling: Video v3.0 Omni.

For exact Kling image examples, see Image Generation. For exact Kling video examples and polling behavior, see Kling video.

DashScope image generation

Core request shape

| Field | Type | Required | Meaning | Example |
| --- | --- | --- | --- | --- |
| model | string | Yes | Image-capable model ID available to your account. | qwen-image-plus |
| input.messages | array | Yes | Prompt messages for the image request. | [{"role":"user","content":[...]}] |
| input.messages[].role | string | Yes | Message role. Use user for prompts. | user |
| input.messages[].content | array | Yes | Content blocks for prompt text and optional reference media. | [{"text":"A poster..."}] |
| content[].text | string | Usually | Text prompt describing the desired image. | A cinematic product poster... |
| content[].image | string | Optional | Public HTTPS image URL or supported image input used as a reference, when the selected model supports image input. | https://your-domain.com/ref.png |
| parameters.size | string | Recommended | Output image size. Supported values depend on the selected model. | 1328*1328 |
| parameters.negative_prompt | string | Optional | Things the model should avoid generating. | low quality, blurry |
| parameters.prompt_extend | boolean | Optional | Whether the provider may expand or rewrite the prompt. | true |
| parameters.watermark | boolean | Optional | Whether to include a provider watermark when supported. | false |

cURL example

Replace the media URLs with publicly reachable HTTPS assets from your own storage.

curl https://gateway.api.anyint.ai/dashscope/v1/services/aigc/multimodal-generation/generation \
  -H "Authorization: Bearer $ANYINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-image-plus",
    "input": {
      "messages": [
        {
          "role": "user",
          "content": [
            {
              "text": "A cinematic product poster for an AI music launch, neon highlights, premium studio lighting."
            }
          ]
        }
      ]
    },
    "parameters": {
      "negative_prompt": "",
      "prompt_extend": true,
      "watermark": false,
      "size": "1328*1328"
    }
  }'

Useful response fields:

| Field | Meaning |
| --- | --- |
| output.choices[0].message.content[].image | Generated image URL or image payload returned by the provider |
| usage.width | Output width when returned |
| usage.height | Output height when returned |
| usage.image_count | Number of generated images when returned |
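
The nesting described above is consistent enough that the generated URLs can be collected with a small helper. A sketch assuming the response body has already been decoded from JSON into a dict; `extract_image_urls` is a hypothetical helper, not part of any SDK:

```python
def extract_image_urls(response: dict) -> list[str]:
    """Collect image values from a DashScope image-generation response.

    Walks output.choices[*].message.content[*] and keeps every content
    block that carries an "image" entry. Returns an empty list if the
    response has no choices, so callers can treat "no images" and
    "missing output" uniformly.
    """
    urls = []
    for choice in response.get("output", {}).get("choices", []):
        for block in choice.get("message", {}).get("content", []):
            if "image" in block:
                urls.append(block["image"])
    return urls
```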

DashScope video generation

Core request shape

| Field | Type | Required | Meaning | Example |
| --- | --- | --- | --- | --- |
| model | string | Yes | Video-capable model ID available to your account. | wan2.5-i2v-preview |
| input.prompt | string | Yes | Text description of the video to generate. | A neon street scene with slow camera movement. |
| input.img_url | string | Optional | Public HTTPS source image URL for image-to-video flows. | https://your-domain.com/source-image.png |
| input.audio_url | string | Optional | Public HTTPS audio URL when the selected model supports audio-guided video. | https://your-domain.com/source-audio.mp3 |
| parameters.resolution | string | Optional | Output resolution. Supported values depend on the selected model. | 720P |
| parameters.duration | integer | Optional | Target duration in seconds. | 10 |
| parameters.audio | boolean | Optional | Whether to include or use audio when supported. | true |
| parameters.shot_type | string | Optional | Camera or shot behavior when supported. | multi |
| parameters.prompt_extend | boolean | Optional | Whether the provider may expand or rewrite the prompt. | true |

cURL example

curl https://gateway.api.anyint.ai/dashscope/v1/services/aigc/video-generation/video-synthesis \
  -H "Authorization: Bearer $ANYINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "wan2.5-i2v-preview",
    "input": {
      "prompt": "A graffiti rapper comes to life under a railway bridge at night and performs to the beat.",
      "img_url": "https://your-domain.com/source-image.png",
      "audio_url": "https://your-domain.com/source-audio.mp3"
    },
    "parameters": {
      "resolution": "720P",
      "prompt_extend": true,
      "duration": 10,
      "audio": true,
      "shot_type": "multi"
    }
  }'

Response pattern

This is a task-based API. The response returns:

| Field | Meaning |
| --- | --- |
| request_id | Request tracing identifier when returned |
| output.task_id | Task ID to store and use for polling |
| output.task_status | Initial task state, commonly PENDING |
| metadata.provider | Provider metadata when returned |
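
Since output.task_id is the one value you need from this response, a small guard helps catch rejected requests early instead of polling with a missing ID. A sketch over the decoded JSON body; `task_id_from_create` is a hypothetical helper name:

```python
def task_id_from_create(response: dict) -> str:
    """Return output.task_id from a video-synthesis create response.

    Raises a clear error when the task ID is absent, e.g. if the
    request was rejected before a task was scheduled.
    """
    task_id = response.get("output", {}).get("task_id")
    if not task_id:
        raise ValueError(f"no task_id in response: {response!r}")
    return task_id
```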

Video task query

The current catalog shows a task query path in the form:

GET /dashscope/v1/tasks/{task_id}

Apifox currently includes one sample task ID directly in the example path. Treat that as a path template rather than a fixed ID.

Query example

curl https://gateway.api.anyint.ai/dashscope/v1/tasks/your-task-id \
  -H "Authorization: Bearer $ANYINT_API_KEY"

Query response fields

| Field | Meaning |
| --- | --- |
| output.task_status | Current task state |
| output.video_url | Generated video URL after success |
| output.actual_prompt | Prompt used after provider-side expansion |
| usage.duration | Generated duration |
| usage.video_count | Number of outputs |

In the published examples, video tasks move from PENDING to SUCCEEDED.
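
That PENDING-to-SUCCEEDED transition suggests a simple polling loop. A sketch with the HTTP call injected as a callable so the loop logic stays independent of any particular client; `fetch_task` stands in for whatever code performs GET /dashscope/v1/tasks/{task_id} and decodes the JSON body. The FAILED state and the retry limits here are assumptions, not documented values:

```python
import time
from typing import Callable

# SUCCEEDED is the terminal state shown in the published examples;
# FAILED is an assumed failure state. Anything else counts as still
# in progress.
TERMINAL_STATES = {"SUCCEEDED", "FAILED"}

def poll_task(
    fetch_task: Callable[[str], dict],
    task_id: str,
    interval_s: float = 5.0,
    max_attempts: int = 60,
) -> dict:
    """Poll a DashScope task until it reaches a terminal state.

    fetch_task(task_id) must return the decoded JSON body of the
    task-query endpoint. Returns the final response body, or raises
    TimeoutError if the task never settles within max_attempts polls.
    """
    for attempt in range(max_attempts):
        body = fetch_task(task_id)
        status = body.get("output", {}).get("task_status")
        if status in TERMINAL_STATES:
            return body
        if attempt < max_attempts - 1:
            time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} not finished after {max_attempts} polls")
```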

HappyHorse video generation

HappyHorse is available through the DashScope-compatible video synthesis API. Create a video task first, then poll the task endpoint until output.task_status becomes SUCCEEDED. Use happyhorse-1.0-t2v as the model. The request can include a prompt, a source image URL, and a source audio URL. The initial response returns request_id for tracing and output.task_id for result polling.

Create a video task

Replace the media URLs with publicly reachable HTTPS assets from your own storage.

curl --location --request POST 'https://gateway.api.anyint.ai/dashscope/v1/services/aigc/video-generation/video-synthesis' \
  --header "Authorization: Bearer $ANYINT_API_KEY" \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "model": "happyhorse-1.0-t2v",
    "input": {
      "prompt": "An urban fantasy scene in cinematic style. A graffiti-painted teenage rapper comes alive from a concrete wall under a railway bridge at night, performs a fast English rap, and strikes an energetic classic rapper pose. The only audio is the rap, with no extra dialogue or noise.",
      "img_url": "https://your-domain.com/source-image.png",
      "audio_url": "https://your-domain.com/source-audio.mp3"
    },
    "parameters": {
      "resolution": "720P",
      "prompt_extend": true,
      "duration": 10,
      "audio": true,
      "shot_type": "multi"
    }
  }'

Response example:

{
  "request_id": "570dd72c-52ae-9151-b819-411edf66069e",
  "output": {
    "task_id": "c25723f0-8f2b-401a-b843-7ae05ae4e5e3",
    "task_status": "PENDING"
  }
}

Query the generated video

Use the output.task_id from the previous response:

curl --location --request GET "https://gateway.api.anyint.ai/dashscope/v1/tasks/$TASK_ID" \
  --header "Authorization: Bearer $ANYINT_API_KEY"

Response example:

{
  "request_id": "1e1fb115-c22a-9769-af2c-056451a8f692",
  "output": {
    "task_id": "a54b0585-aa49-45be-8786-fc46414634da",
    "task_status": "SUCCEEDED",
    "submit_time": "2026-04-27 17:01:39.657",
    "scheduled_time": "2026-04-27 17:01:39.689",
    "end_time": "2026-04-27 17:07:10.758",
    "orig_prompt": "An urban fantasy scene in cinematic style...",
    "video_url": "https://example.com/generated-video.mp4"
  },
  "usage": {
    "duration": 10,
    "input_video_duration": 0,
    "output_video_duration": 10,
    "video_count": 1,
    "SR": 720,
    "ratio": "16:9"
  }
}

Copy output.video_url into a browser to preview or download the generated video. Video URLs are temporary, so save the file if you need to keep it.
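
Because the URL expires, a download step usually follows a successful poll. A minimal standard-library sketch; the function name and the byte-count return value are illustrative choices:

```python
import urllib.request

def save_video(video_url: str, path: str) -> int:
    """Download output.video_url to a local file.

    Video URLs are temporary, so this should run soon after the task
    succeeds. Returns the number of bytes written.
    """
    with urllib.request.urlopen(video_url) as resp, open(path, "wb") as out:
        data = resp.read()
        out.write(data)
    return len(data)
```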

When to use media APIs vs AI Music

Use media APIs for:

  • image generation
  • prompt-to-video
  • image-plus-audio-to-video

Use AI Music for:

  • song generation
  • cover workflows
  • personas
  • lyrics
  • stems
  • music video generation tied to Suno tasks

Common mistakes

  • Treating the example task ID in Apifox as a fixed route
  • Assuming video generation returns the final MP4 in the initial response
  • Reusing Gemini image-generation payloads against DashScope routes
