POST /v1/chat/completions
OpenAI chat completions
curl --request POST \
  --url http://sandbox.mintlify.com/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "<string>"
    }
  ],
  "temperature": 123,
  "stream": true
}
'
{
  "id": "<string>",
  "object": "chat.completion",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "system",
        "content": "<string>"
      },
      "finish_reason": "<string>"
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  }
}
Creates a model response for a chat conversation. This endpoint is still the safest default when you need compatibility with existing OpenAI SDKs, chat clients, or legacy chat-completion workflows. Supported fields vary by model, especially for reasoning, tool use, and multimodal inputs.

Integration guidance

  • Authenticate with Authorization: Bearer {API_KEY}
  • Use this as the default entry point for existing OpenAI-style chat integrations
  • If you want a more unified interface for structured output, multimodal input, and tools, prefer /v1/responses
  • Streaming clients should handle SSE chunks incrementally instead of waiting for one final JSON response
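The streaming point above can be sketched without a live connection: each SSE `data:` line carries a `chat.completion.chunk` JSON whose `choices[0].delta` fragment must be merged into the growing message. The chunk payloads below are illustrative, not captured output.

```python
# Minimal sketch of incremental SSE handling: accumulate delta
# fragments from each chunk instead of waiting for one final JSON.
import json

def merge_stream(sse_lines):
    """Merge choices[0].delta.content fragments into one string."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # ignore comments, blank keep-alives, etc.
        payload = line[len("data: "):]
        if payload == "[DONE]":  # sentinel marking end of stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content") is not None:
            parts.append(delta["content"])
    return "".join(parts)

# Hypothetical chunk stream for illustration:
stream = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    "data: [DONE]",
]
print(merge_stream(stream))  # -> Hello, world
```

Real clients would read these lines from the HTTP response body as they arrive; the merge logic is the same.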

Request highlights

  • messages is required and carries the conversation history
  • model is required and selects the target model
  • temperature and top_p both affect sampling, but most integrations should tune only one of them
  • If you need token-level probabilities, combine logprobs with top_logprobs
  • For caching and safety attribution, prefer prompt_cache_key and safety_identifier
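The request rules above can be folded into a small payload builder. This is a sketch, not an SDK call: the helper name and defaults are assumptions, and it only covers the required fields plus `temperature` (tune that or `top_p`, not both).

```python
# Sketch: assemble a chat.completions request body from the
# required fields (model, messages) plus optional sampling knobs.
def build_request(model, user_text, system_prompt=None, temperature=0.7):
    """Build the JSON body; messages carries the conversation history."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_text})
    return {
        "model": model,            # required: selects the target model
        "messages": messages,      # required: ordered conversation turns
        "temperature": temperature,  # sampling randomness; leave top_p alone
    }

body = build_request("gpt-4o", "Summarize SSE in one line.",
                     system_prompt="You are concise.")
print(body["model"])  # -> gpt-4o
```

POST the returned dict as `application/json` with the `Authorization: Bearer {API_KEY}` header described above.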

Response highlights

  • Plain text is usually read from choices[0].message.content
  • Tool calls can be read from message.tool_calls
  • Streaming responses arrive as SSE chunks and must be merged incrementally
  • Usage accounting is exposed through usage, including more detailed token breakdowns
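For non-streaming calls, the read paths above reduce to a few dictionary lookups. The sample response below mirrors the documented shape with illustrative values; `extract` is a hypothetical helper, not part of any SDK.

```python
# Sketch: pull the plain-text reply, any tool calls, and token
# usage out of a (non-streaming) chat completion response.
sample = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hi there."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12},
}

def extract(resp):
    """Read choices[0].message.content, tool_calls, and usage totals."""
    msg = resp["choices"][0]["message"]
    return {
        "text": msg.get("content"),             # plain-text reply
        "tool_calls": msg.get("tool_calls", []),  # empty if none requested
        "total_tokens": resp.get("usage", {}).get("total_tokens"),
    }

print(extract(sample)["text"])  # -> Hi there.
```

Streaming responses never arrive in this shape; merge their delta chunks first, then apply the same lookups to the assembled message.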

Authorizations

  • Authorization (string, header, required): Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json

  • model (string, required). Example: "gpt-4o"
  • messages (object[], required)
  • temperature (number)
  • stream (boolean)

Response

Successful chat completion response

  • id (string)
  • object (string). Example: "chat.completion"
  • choices (object[])
  • usage (object)