
Minimal request

Send a POST request to /v1/chat/completions with a model name and a list of messages:

{
  "model": "gpt-4.1",
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "Summarize Api.Go in three sentences." }
  ],
  "temperature": 0.7
}
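For reference, a successful response from an OpenAI-compatible chat completions endpoint typically has the shape below. This is an abridged, illustrative example; exact IDs, token counts, and content will differ:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4.1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Api.Go is an API gateway. ..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 30, "completion_tokens": 42, "total_tokens": 72 }
}
```

The examples below read the assistant's reply from choices[0].message.content.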

cURL example

curl https://mass.apigo.ai/v1/chat/completions \
  -H "Authorization: Bearer $TIDEMIND_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      { "role": "system", "content": "You are a concise assistant." },
      { "role": "user", "content": "Summarize Api.Go in three sentences." }
    ],
    "temperature": 0.7
  }'

Python example

import os

from openai import OpenAI

client = OpenAI(
    base_url="https://mass.apigo.ai/v1",
    api_key=os.environ["TIDEMIND_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize Api.Go in three sentences."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)

Node.js example

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://mass.apigo.ai/v1",
  apiKey: process.env.TIDEMIND_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4.1",
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Summarize Api.Go in three sentences." }
  ],
  temperature: 0.7,
});

console.log(response.choices[0].message.content);

Best practices

  • Start with chat.completions when you need compatibility with existing OpenAI-style clients.
  • Keep the system instruction server-side so end users cannot inspect or override it.
  • Centralize retries, timeout handling, and conversation history in one place before migrating to the Responses API.
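One way to centralize retry handling is a small wrapper around whatever callable issues the request. The sketch below is illustrative, not part of the Api.Go SDK: the helper name, attempt count, and backoff parameters are assumptions, and the demonstration uses a fake flaky callable in place of a real API call.

```python
import time


def with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on any exception.

    Re-raises the last exception once all attempts are exhausted.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))


# Demonstration: a callable that fails twice, then succeeds.
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"


result = with_retries(flaky, attempts=3, base_delay=0.01)
```

In practice you would wrap the SDK call itself, e.g. `with_retries(lambda: client.chat.completions.create(...))`, so that every call site shares the same retry and timeout policy.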