Recommended endpoint
POST https://maas.apigo.ai/v1/chat/completions

Minimal request
{
  "model": "gpt-4.1",
  "messages": [
    { "role": "user", "content": "Explain SSE streaming while streaming the answer." }
  ],
  "stream": true
}
cURL example
curl https://maas.apigo.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      { "role": "user", "content": "Explain SSE streaming while streaming the answer." }
    ],
    "stream": true
  }'
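With `-N` (no buffering), curl prints the raw Server-Sent Events stream: each chunk arrives as a `data:` line carrying a JSON delta, and the stream ends with `data: [DONE]`. A minimal sketch of parsing that wire format by hand; the `parse_sse_events` helper and the sample payload below are illustrative, not part of any SDK:

```python
import json

def parse_sse_events(raw: str) -> list:
    # Collect the JSON payload of every `data:` line, stopping at the
    # `data: [DONE]` terminator. Assumes the OpenAI-style SSE framing
    # where each event carries a single `data:` field.
    payloads = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separator lines and other fields
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        payloads.append(json.loads(data))
    return payloads

# Illustrative sample of what the raw curl output looks like:
sample = (
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n\n'
    'data: {"choices":[{"delta":{"content":"lo"}}]}\n\n'
    'data: [DONE]\n\n'
)
text = "".join(p["choices"][0]["delta"]["content"] for p in parse_sse_events(sample))
```

In practice the SDKs below do this parsing for you; hand-rolling it is only needed when consuming the stream from plain HTTP.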
Python example
from openai import OpenAI

client = OpenAI(
    base_url="https://maas.apigo.ai/v1",
    api_key="<YOUR_API_KEY>",
)

stream = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "user", "content": "Explain SSE streaming while streaming the answer."}
    ],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
Node.js example
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://maas.apigo.ai/v1",
  apiKey: process.env.YOUR_API_KEY,
});

const stream = await client.chat.completions.create({
  model: "gpt-4.1",
  messages: [
    { role: "user", content: "Explain SSE streaming while streaming the answer." }
  ],
  stream: true
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
Best practices
- Render output incrementally as streamed chunks arrive.
- If you plan to add tools or structured outputs later, consider responses streaming.
- Handle reconnection and chunk assembly on the server.
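The reconnection advice above can be sketched as a retry wrapper around any streaming call. `stream_with_retry` and its parameters are hypothetical names for illustration; a production version would resume from the last received chunk rather than restarting the whole request:

```python
import time

def stream_with_retry(make_stream, max_retries=3, backoff_s=1.0):
    # make_stream: zero-argument callable returning an iterable of text
    # deltas (e.g. a closure around client.chat.completions.create).
    # On a connection error, the partial chunks are discarded and the
    # request is retried from scratch with exponential backoff.
    for attempt in range(max_retries + 1):
        parts = []
        try:
            for delta in make_stream():
                parts.append(delta)
            return "".join(parts)  # assemble chunks into the final text
        except ConnectionError:
            if attempt == max_retries:
                raise
            time.sleep(backoff_s * 2 ** attempt)
```

For example, `stream_with_retry(lambda: (c.choices[0].delta.content or "" for c in client.chat.completions.create(..., stream=True)))` would return the fully assembled answer even if one attempt drops mid-stream.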