
Recommended endpoint

Minimal request

{
  "model": "gpt-4.1",
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "Introduce Api.Go in three sentences." }
  ],
  "temperature": 0.7
}

cURL example

curl https://mass.apigo.ai/v1/chat/completions \
  -H "Authorization: Bearer $TIDEMIND_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      { "role": "system", "content": "You are a concise assistant." },
      { "role": "user", "content": "Introduce Api.Go in three sentences." }
    ],
    "temperature": 0.7
  }'

Python example

from openai import OpenAI

client = OpenAI(
    base_url="https://mass.apigo.ai/v1",
    api_key="<TIDEMIND_API_KEY>",
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Introduce Api.Go in three sentences."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)

Node.js example

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://mass.apigo.ai/v1",
  apiKey: process.env.TIDEMIND_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4.1",
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Introduce Api.Go in three sentences." }
  ],
  temperature: 0.7,
});

console.log(response.choices[0].message.content);

Best practices

  • When you need compatibility with older SDKs or existing front-end chat components, start with chat.completions
  • Pin system instructions on the server side; don't let the front end override them at will
  • Centralize multi-turn context, retries, and timeouts at the gateway first, then consider switching to responses
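The last two points can be sketched as a thin server-side wrapper: the system prompt lives only in gateway code, and retries with backoff sit in one helper instead of in every client. This is a minimal illustration, not part of any SDK — `build_messages` and `with_retries` are hypothetical helper names.

```python
import time

# Pinned on the server; the front end never sends or overrides this.
FIXED_SYSTEM_PROMPT = "You are a concise assistant."


def build_messages(user_content, history=None):
    """Build the messages array with the server-pinned system prompt first.

    The front end only supplies user turns (and optionally prior history),
    so it cannot replace the system instruction.
    """
    messages = [{"role": "system", "content": FIXED_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_content})
    return messages


def with_retries(call, attempts=3, backoff=1.0):
    """Run `call` with simple exponential backoff.

    Retry policy lives here at the gateway, in one place, rather than being
    re-implemented in each client.
    """
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(backoff * (2 ** i))
```

With the OpenAI client from the Python example above, a gateway handler might then call something like `with_retries(lambda: client.chat.completions.create(model="gpt-4.1", messages=build_messages(user_text), timeout=30))`, keeping both the prompt and the failure handling out of the front end.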