# TikHub-AI-Proxy

## Docs

- [Overview (PLEASE READ)](https://ai-docs.tikhub.io/8111961m0.md)
- [Streaming API](https://ai-docs.tikhub.io/8175517m0.md)

## API Docs

- OpenAI [OpenAI response](https://ai-docs.tikhub.io/421251331e0.md): Creates a model response for the given input. This is the newer, simplified API for generating responses from OpenAI models.
- OpenAI [OpenAI embeddings](https://ai-docs.tikhub.io/421251332e0.md): Creates an embedding vector representing the input text. Embeddings are useful for search, clustering, recommendations, and other machine learning tasks.
- OpenAI [OpenAI audio transcription](https://ai-docs.tikhub.io/421251333e0.md): Transcribes audio into the input language using the Whisper model. Supports various audio formats, including mp3, mp4, mpeg, mpga, m4a, ogg, wav, and webm.
- OpenAI [OpenAI chat completion](https://ai-docs.tikhub.io/411636868e0.md): Creates a chat completion using a conversational message format. This endpoint supports multi-turn conversations with system, user, and assistant messages.
- Claude [Claude chat completion](https://ai-docs.tikhub.io/421251334e0.md): Creates a model response using Claude models via an OpenAI-compatible format. This endpoint provides a familiar interface for developers already using OpenAI APIs.
- Claude [Claude message](https://ai-docs.tikhub.io/421251335e0.md): Creates a message using Claude models via the native Anthropic Messages API format. This endpoint follows the official Anthropic API specification.
- DeepSeek [DeepSeek chat completion](https://ai-docs.tikhub.io/411641537e0.md): Creates a model response using DeepSeek models. Supports chat, reasoning, and specialized models for various tasks.
- Sora [Sora video generation](https://ai-docs.tikhub.io/411630954e0.md): Creates a video generation request using OpenAI Sora models. Generate high-quality videos from text prompts with customizable size and duration.
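The OpenAI, Claude, and DeepSeek chat completion endpoints above all accept the OpenAI-style message format described by the `ChatCompletionRequest` schema. A minimal sketch of assembling such a request body follows; the base URL, route, and model name here are illustrative assumptions, not values taken from these docs:

```python
import json

# NOTE: base URL and route are placeholders for illustration only;
# see the linked endpoint docs for the real values.
BASE_URL = "https://example-proxy.invalid"
CHAT_PATH = "/v1/chat/completions"  # assumed OpenAI-compatible route

def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble an OpenAI-style chat completion body with a system
    prompt and one user turn (for multi-turn, append more messages)."""
    return {
        "model": model,  # placeholder model name
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

body = build_chat_request("gpt-4o-mini", "Hello!")
payload = json.dumps(body)
# To send it, you would POST `payload` to BASE_URL + CHAT_PATH with your
# TikHub API key in the Authorization header (e.g. via requests.post).
```

The same body shape works for the Claude and DeepSeek chat completion routes, since they expose an OpenAI-compatible interface.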
- Sora [Get Sora video status](https://ai-docs.tikhub.io/411630955e0.md): Retrieves the status of a video generation request by video ID. Use this to check whether your video is ready for download.
- Sora [Download Sora video content](https://ai-docs.tikhub.io/411630956e0.md): Downloads the generated video file, thumbnail, or spritesheet. The video must have status `completed` before content can be downloaded.
- Sora [Remix Sora video](https://ai-docs.tikhub.io/411635792e0.md): Takes an existing video and makes targeted adjustments without regenerating everything from scratch. Works best with single, well-defined changes such as color shifts or style modifications.
- Sora [List Sora videos](https://ai-docs.tikhub.io/411635793e0.md): Enumerates your videos with optional pagination and sorting. Returns a list of all video generation jobs.
- Sora [Delete Sora video](https://ai-docs.tikhub.io/411635794e0.md): Removes a video from OpenAI's storage. This action cannot be undone.
- Gemini [Gemini content](https://ai-docs.tikhub.io/411630957e0.md): Generates a model response for a given input using Google Gemini models. Supports text chat and image generation capabilities.
- Seedance [Seedance video generation](https://ai-docs.tikhub.io/419974982e0.md): Creates a video generation task using ByteDance Seedance models. Supports text-to-video and image-to-video generation with configurable resolution, aspect ratio, duration, and audio.
- Seedance [Retrieve Seedance task](https://ai-docs.tikhub.io/419974983e0.md): Retrieves the information of a Seedance video generation task. Poll this endpoint until the status is `succeeded` to get the video URL.
- Kling [Kling text-to-video](https://ai-docs.tikhub.io/419974984e0.md): Creates a text-to-video generation task using Kling AI models. Supports multiple model versions, video modes, camera control, and sound generation.
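The Sora and Seedance video endpoints above share the same asynchronous pattern: create a task, then poll a status endpoint until it reaches a terminal state (`completed` for Sora, `succeeded` for Seedance). A generic polling helper, sketched against a stubbed fetcher since the real HTTP calls and any response fields beyond `status` are assumptions:

```python
import time
from typing import Callable, Dict

def poll_until_done(fetch_status: Callable[[], Dict],
                    terminal=("completed", "succeeded", "failed"),
                    interval: float = 0.0,
                    max_attempts: int = 30) -> Dict:
    """Poll a task-status endpoint until it reports a terminal state."""
    for _ in range(max_attempts):
        task = fetch_status()
        if task.get("status") in terminal:
            return task
        time.sleep(interval)  # in real use, wait a few seconds between polls
    raise TimeoutError("task did not reach a terminal state")

# Stub simulating a Seedance task that succeeds on the third poll;
# in real code, fetch_status would GET the retrieve-task endpoint.
_states = iter(["queued", "running", "succeeded"])
result = poll_until_done(lambda: {"status": next(_states)})
```

The same loop works for the Sora status endpoint by checking for `completed` before calling the download endpoint.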
- Kling [Retrieve Kling text-to-video task](https://ai-docs.tikhub.io/419974985e0.md): Retrieves the status and result of a text-to-video generation task. Use either `task_id` or `external_task_id` in the path.
- Kling [Kling image-to-video](https://ai-docs.tikhub.io/419974986e0.md): Creates an image-to-video generation task using Kling AI models. Provide a reference image along with a text prompt to generate a video.
- Kling [Retrieve Kling image-to-video task](https://ai-docs.tikhub.io/419974987e0.md): Retrieves the status and result of an image-to-video generation task. Use either `task_id` or `external_task_id` in the path.
- Veo [Veo video generation](https://ai-docs.tikhub.io/421718379e0.md): Creates a video generation task using Google Veo models. The task is asynchronous; use the fetchPredictOperation endpoint to poll for status until `done` is `true`.
- Veo [Fetch Veo video generation status](https://ai-docs.tikhub.io/421718380e0.md): Retrieves the status of a Veo video generation operation. Poll this endpoint until `done` is `true` to get the generated video.
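The Kling retrieval endpoints accept either `task_id` or `external_task_id` as the path parameter, but not both. A small helper that enforces the either/or choice when building the request path; the route prefix shown is a hypothetical placeholder, not the documented path:

```python
from typing import Optional

def kling_task_path(kind: str,
                    task_id: Optional[str] = None,
                    external_task_id: Optional[str] = None) -> str:
    """Build a Kling task-retrieval path from exactly one of the two IDs.

    The '/kling/v1/videos/{kind}/{id}' shape is an illustrative assumption;
    consult the linked endpoint docs for the real route.
    """
    if (task_id is None) == (external_task_id is None):
        raise ValueError("provide exactly one of task_id or external_task_id")
    ident = task_id if task_id is not None else external_task_id
    return f"/kling/v1/videos/{kind}/{ident}"

print(kling_task_path("text2video", task_id="abc123"))
# -> /kling/v1/videos/text2video/abc123
```

Using `external_task_id` instead lets you retrieve a task by the identifier you supplied at creation time rather than the one Kling assigned.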
## Schemas

- [ChatCompletionRequest](https://ai-docs.tikhub.io/242588990d0.md)
- [ChatMessage](https://ai-docs.tikhub.io/242588991d0.md)
- [Tool](https://ai-docs.tikhub.io/242588992d0.md)
- [ToolCall](https://ai-docs.tikhub.io/242588993d0.md)
- [ChatCompletionResponse](https://ai-docs.tikhub.io/242588994d0.md)
- [ChatCompletionChoice](https://ai-docs.tikhub.io/242588995d0.md)
- [ContentFilterResults](https://ai-docs.tikhub.io/242588996d0.md)
- [UsageInfo](https://ai-docs.tikhub.io/242588997d0.md)
- [EmbeddingRequest](https://ai-docs.tikhub.io/242591198d0.md)
- [EmbeddingResponse](https://ai-docs.tikhub.io/242591199d0.md)
- [TranscriptionRequest](https://ai-docs.tikhub.io/242598099d0.md)
- [TranscriptionResponse](https://ai-docs.tikhub.io/242598100d0.md)
- [ClaudeMessageRequest](https://ai-docs.tikhub.io/242598101d0.md)
- [ClaudeMessageResponse](https://ai-docs.tikhub.io/242598102d0.md)
- [VideoCreateRequest](https://ai-docs.tikhub.io/242598103d0.md)
- [VideoResponse](https://ai-docs.tikhub.io/242598104d0.md)
- [VideoRemixRequest](https://ai-docs.tikhub.io/242600404d0.md)
- [GeminiGenerateContentRequest](https://ai-docs.tikhub.io/242598105d0.md)
- [VideoListResponse](https://ai-docs.tikhub.io/242600405d0.md)
- [GeminiContent](https://ai-docs.tikhub.io/242598106d0.md)
- [VideoDeleteResponse](https://ai-docs.tikhub.io/242600406d0.md)
- [GeminiGenerationConfig](https://ai-docs.tikhub.io/242598107d0.md)
- [ResponseRequest](https://ai-docs.tikhub.io/242600407d0.md)
- [GeminiGenerateContentResponse](https://ai-docs.tikhub.io/242598108d0.md)
- [ResponseObject](https://ai-docs.tikhub.io/242600408d0.md)
- [SeedanceTaskRequest](https://ai-docs.tikhub.io/248267798d0.md)
- [SeedanceTaskCreateResponse](https://ai-docs.tikhub.io/248267799d0.md)
- [SeedanceTaskResponse](https://ai-docs.tikhub.io/248267800d0.md)
- [KlingText2VideoRequest](https://ai-docs.tikhub.io/248267801d0.md)
- [KlingImage2VideoRequest](https://ai-docs.tikhub.io/248267802d0.md)
- [KlingTaskResponse](https://ai-docs.tikhub.io/248267803d0.md)
- [KlingTaskDetailResponse](https://ai-docs.tikhub.io/248267804d0.md)
- [KlingTaskListResponse](https://ai-docs.tikhub.io/248267805d0.md)
- [VeoGenerateRequest](https://ai-docs.tikhub.io/249487292d0.md)
- [VeoOperationResponse](https://ai-docs.tikhub.io/249487293d0.md)
- [VeoFetchOperationRequest](https://ai-docs.tikhub.io/249487294d0.md)
- [VeoFetchOperationResponse](https://ai-docs.tikhub.io/249487295d0.md)