POST /v1/responses
OpenAI responses
curl --request POST \
  --url http://sandbox.mintlify.com/v1/responses \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "gpt-4.1",
  "input": "<string>",
  "instructions": "<string>",
  "max_output_tokens": 123
}
'
{
  "id": "<string>",
  "object": "response",
  "status": "<string>",
  "output_text": "<string>",
  "output": [
    {}
  ],
  "usage": {
    "input_tokens": 123,
    "output_tokens": 123,
    "total_tokens": 123
  }
}
Creates a unified response object for text generation, structured output, tool calls, and multimodal input. Compared with chat.completions, this endpoint is a better fit for new integrations because text, images, reasoning controls, and tools all share one response model. Supported fields still vary by model.

Integration guidance

  • Authenticate with Authorization: Bearer {API_KEY}
  • Prefer this endpoint when you want one surface for text, JSON output, tool calls, and future multimodal workflows
  • Use previous_response_id to continue a conversation without resending the full history
  • For reasoning-model workflows, centralize reasoning, max_output_tokens, and tools in your server-side gateway
  • Streaming clients should consume incremental events instead of waiting for one final payload
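The guidance above can be sketched as a small client-side helper. This is a hedged sketch, not an official client: the URL and field names (`model`, `input`, `previous_response_id`, `max_output_tokens`) come from this page's curl example and field list; the helper function itself is hypothetical.

```python
import json

# Endpoint URL taken from the curl example on this page.
API_URL = "http://sandbox.mintlify.com/v1/responses"

def build_request(api_key, model, user_input,
                  previous_response_id=None, max_output_tokens=None):
    """Assemble headers and a JSON body for POST /v1/responses.

    previous_response_id chains turns without resending the full history.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "input": user_input}
    if previous_response_id is not None:
        body["previous_response_id"] = previous_response_id
    if max_output_tokens is not None:
        body["max_output_tokens"] = max_output_tokens
    return headers, json.dumps(body)
```

A server-side gateway can wrap this helper so that reasoning controls and token limits are set in one place rather than by each caller.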

Request highlights

  • input is the primary input field and can carry text or multimodal content blocks
  • model selects the target model for the response
  • previous_response_id is the main way to chain turns across a conversation
  • For structured output, declare explicit JSON formatting requirements and validate server-side
  • For tool use, pass tools and handle tool call outputs explicitly
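The structured-output advice above ("declare explicit JSON formatting requirements and validate server-side") might look like the following sketch. The validation helper is hypothetical; only the `model`, `input`, and `instructions` fields are taken from this page.

```python
import json

def build_structured_request(model, user_input, required_keys):
    """Put the JSON contract in instructions, so the model knows the shape."""
    instructions = (
        "Respond with a single JSON object and nothing else. "
        f"Required keys: {', '.join(required_keys)}."
    )
    return {"model": model, "input": user_input, "instructions": instructions}

def validate_structured_output(output_text, required_keys):
    """Server-side check: parse the returned text and confirm the contract."""
    data = json.loads(output_text)  # raises ValueError if not valid JSON
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"structured output missing keys: {missing}")
    return data
```

Validating on the server, rather than trusting the model's text, turns a malformed reply into an explicit error you can retry or log.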

Response highlights

  • Simple text can often be read from output_text
  • Richer results should be read from output[]
  • Tool calls, reasoning traces, and multimodal outputs all share the same response object
  • Usage and status metadata should be read from the response object rather than inferred from text alone
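A reader for the response object, following the highlights above, might look like this. The top-level fields (`output_text`, `output`, `usage`, `status`) match this page's response schema; the assumption that text-bearing `output[]` items carry a `"text"` field is ours, since the schema shows `output` only as `object[]`.

```python
def extract_text(response):
    """Prefer the output_text convenience field; fall back to output[] items."""
    text = response.get("output_text")
    if text:
        return text
    parts = []
    for item in response.get("output", []):
        # Assumed item shape: text-bearing items expose a "text" field.
        if "text" in item:
            parts.append(item["text"])
    return "".join(parts)

def summarize_usage(response):
    """Read token counts from the usage object, never inferred from text."""
    usage = response.get("usage", {})
    return (
        usage.get("input_tokens", 0),
        usage.get("output_tokens", 0),
        usage.get("total_tokens", 0),
    )
```

Checking `status` and `usage` on the object itself keeps billing and error handling independent of whatever text the model produced.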

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
model
string
required
Example:

"gpt-4.1"

input
required
instructions
string
max_output_tokens
integer

Response

Successful response payload

id
string
object
string
Example:

"response"

status
string
output_text
string
output
object[]
usage
object