

    TikHub Streaming AI API Documentation#

    TikHub AI Proxy provides a unified gateway to stream responses from leading AI models — including OpenAI, Anthropic Claude, and Google Gemini — through a single base URL: https://ai.tikhub.io.
    Streaming allows you to receive model output incrementally as it's generated, reducing perceived latency and enabling real-time display in your applications.

    Base URL#

    https://ai.tikhub.io

    Supported Providers & Endpoints#

    Provider  | Endpoint Path                                        | Auth Header                 | Streaming Format
    OpenAI    | /v1/responses                                        | Authorization: Bearer <key> | SSE
    Anthropic | /v1/messages                                         | x-api-key: <key>            | SSE
    Gemini    | /v1beta/models/{MODEL}:streamGenerateContent?alt=sse | x-goog-api-key: <key>       | SSE
    All streaming responses use the Server-Sent Events (SSE) protocol. Set Accept: text/event-stream and handle data:-prefixed lines accordingly.

    Authentication#

    Use your TikHub API key in the provider-specific header:
    # OpenAI-compatible
    Authorization: Bearer sk-xxxxxxxxxxxx
    
    # Anthropic-compatible
    x-api-key: sk-xxxxxxxxxxxx
    
    # Gemini-compatible
    x-goog-api-key: sk-xxxxxxxxxxxx

    OpenAI Streaming#

    Endpoint#

    POST https://ai.tikhub.io/v1/responses

    Request Parameters#

    Parameter | Type    | Required | Description
    model     | string  | ✅       | Model name (e.g., gpt-4o, gpt-5)
    input     | string  | ✅       | The user prompt
    stream    | boolean | ✅       | Must be true for streaming

    Example (Python)#
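A minimal sketch using the third-party `requests` library (assumed installed). The API key is a placeholder, and the `response.output_text.delta` event type follows OpenAI's Responses API streaming format; verify both against your account before relying on them.

```python
import json

import requests  # third-party HTTP client (pip install requests)

BASE_URL = "https://ai.tikhub.io"
API_KEY = "sk-xxxxxxxxxxxx"  # placeholder TikHub API key


def stream_openai(prompt: str, model: str = "gpt-4o") -> None:
    """Stream a /v1/responses completion and print text deltas as they arrive."""
    resp = requests.post(
        f"{BASE_URL}/v1/responses",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "input": prompt, "stream": True},
        stream=True,  # keep the connection open; iterate SSE lines below
    )
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip blank keep-alives and 'event:' header lines
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        event = json.loads(payload)
        # Responses API streaming emits typed events; text arrives as deltas
        if event.get("type") == "response.output_text.delta":
            print(event.get("delta", ""), end="", flush=True)


if __name__ == "__main__":
    stream_openai("Say hello in one sentence.")
```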


    Anthropic Claude Streaming#

    Endpoint#

    POST https://ai.tikhub.io/v1/messages

    Request Parameters#

    Parameter  | Type    | Required | Description
    model      | string  | ✅       | Model name (e.g., claude-opus-4-6, claude-sonnet-4-5-20250929)
    max_tokens | integer | ✅       | Maximum number of tokens to generate
    stream     | boolean | ✅       | Must be true for streaming
    messages   | array   | ✅       | Conversation messages in {role, content} format

    SSE Event Types#

    Event Type          | Description
    message_start       | Initial message metadata
    content_block_start | Start of a content block
    content_block_delta | Incremental text chunk (text_delta)
    content_block_stop  | End of a content block
    message_delta       | Final message metadata (stop reason, usage)
    message_stop        | Stream complete

    Example (Python)#
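A minimal sketch with `requests`, handling the event types listed above. The API key is a placeholder, and the `anthropic-version` header follows Anthropic's own convention; whether the proxy requires it is an assumption to verify.

```python
import json

import requests  # third-party HTTP client (pip install requests)

BASE_URL = "https://ai.tikhub.io"
API_KEY = "sk-xxxxxxxxxxxx"  # placeholder TikHub API key


def stream_claude(prompt: str, model: str = "claude-sonnet-4-5-20250929") -> None:
    """Stream a /v1/messages completion and print text_delta chunks."""
    resp = requests.post(
        f"{BASE_URL}/v1/messages",
        headers={
            "x-api-key": API_KEY,
            # Anthropic convention; may or may not be required by the proxy
            "anthropic-version": "2023-06-01",
        },
        json={
            "model": model,
            "max_tokens": 256,
            "stream": True,
            "messages": [{"role": "user", "content": prompt}],
        },
        stream=True,
    )
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip blank keep-alives and 'event:' header lines
        event = json.loads(line[len("data: "):])
        if event["type"] == "content_block_delta":
            # incremental text arrives inside a text_delta payload
            print(event["delta"].get("text", ""), end="", flush=True)
        elif event["type"] == "message_stop":
            break  # stream complete


if __name__ == "__main__":
    stream_claude("Say hello in one sentence.")
```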


    Google Gemini Streaming#

    Endpoint#

    POST https://ai.tikhub.io/v1beta/models/{MODEL}:streamGenerateContent?alt=sse

    Request Parameters#

    Parameter | Type  | Required | Description
    contents  | array | ✅       | Conversation turns in {role, parts} format

    Supported Models#

    gemini-2.5-pro
    gemini-2.5-flash
    gemini-2.0-flash

    SSE Response Structure#

    Each SSE data: payload contains a JSON object with the following structure:
    {
      "candidates": [
        {
          "content": {
            "parts": [{"text": "..."}],
            "role": "model"
          }
        }
      ]
    }

    Example (Python)#
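A minimal sketch with `requests` that walks the candidates/parts structure shown above. The API key is a placeholder, and the `[DONE]` guard is defensive: it only matters if the gateway emits that sentinel.

```python
import json

import requests  # third-party HTTP client (pip install requests)

BASE_URL = "https://ai.tikhub.io"
API_KEY = "sk-xxxxxxxxxxxx"  # placeholder TikHub API key


def stream_gemini(prompt: str, model: str = "gemini-2.5-flash") -> None:
    """Stream :streamGenerateContent?alt=sse and print each text part."""
    resp = requests.post(
        f"{BASE_URL}/v1beta/models/{model}:streamGenerateContent",
        params={"alt": "sse"},
        headers={"x-goog-api-key": API_KEY},
        json={"contents": [{"role": "user", "parts": [{"text": prompt}]}]},
        stream=True,
    )
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip blank keep-alives and 'event:' header lines
        payload = line[len("data: "):]
        if payload == "[DONE]":  # sentinel, if the gateway emits one
            break
        chunk = json.loads(payload)
        # each chunk mirrors the SSE response structure documented above
        for candidate in chunk.get("candidates", []):
            for part in candidate.get("content", {}).get("parts", []):
                print(part.get("text", ""), end="", flush=True)


if __name__ == "__main__":
    stream_gemini("Say hello in one sentence.")
```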


    General Notes#

    SSE Protocol: All streaming responses follow the Server-Sent Events format. Each event line is prefixed with data: followed by a JSON payload. The stream ends with data: [DONE] (OpenAI/Gemini) or a message_stop event (Anthropic).
    Error Handling: Always check HTTP status codes before processing the stream. Non-2xx responses will return a JSON error body instead of SSE events.
    Timeouts: Streaming connections may remain open for extended periods. Configure your HTTP client with appropriate read timeouts.
    Rate Limits: Standard TikHub API rate limits apply. Monitor response headers for rate limit information.
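The SSE conventions in the notes above can be consolidated into one small helper. This is a sketch: the `[DONE]` handling assumes the OpenAI/Gemini behavior described here, while Anthropic's `message_stop` event is yielded like any other payload for the caller to act on.

```python
import json
from typing import Iterable, Iterator


def iter_sse_json(lines: Iterable[str]) -> Iterator[dict]:
    """Yield parsed JSON payloads from an iterable of SSE lines.

    Stops at the 'data: [DONE]' sentinel (OpenAI/Gemini); Anthropic streams
    end with a message_stop event instead, which is yielded like any other.
    """
    for line in lines:
        if not line or not line.startswith("data: "):
            continue  # blank keep-alives and 'event:' lines carry no JSON
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```

Pass it `resp.iter_lines(decode_unicode=True)` from a `requests` response opened with `stream=True`.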

    Support#

    For questions or issues, contact us at:
    Website: tikhub.io
    Email: tikhub.io@proton.me
    Modified at 2026-02-09 19:32:03