# Chat Completions

## Endpoint
```
POST /v1/chat/completions
```

## Request Body

| Field | Type | Default | Description |
|---|---|---|---|
| `model` | string | `"auto"` | Model identifier. Use `"auto"` for health-aware round-robin across all providers, or specify a model from `GET /v1/models`. |
| `messages` | array | required | Conversation history. Each item has `role`, `content`, and an optional `name`. |
| `stream` | boolean | `false` | When `true`, responses are streamed as server-sent events. |
| `temperature` | number | — | Sampling temperature (0–2). Higher values produce more varied output. |
| `max_tokens` | number | — | Maximum number of tokens to generate. |
| `reasoning_effort` | string | — | Reasoning budget for supported models. One of `"auto"`, `"low"`, `"medium"`, `"high"`. |
| `project_id` | string | — | Optional tag for grouping requests in analytics. |
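Putting the fields above together, a complete request body might look like this (all values are illustrative):

```json
{
  "model": "auto",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is the capital of France?" }
  ],
  "temperature": 0.7,
  "max_tokens": 256,
  "reasoning_effort": "medium",
  "project_id": "docs-demo"
}
```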
## Message Object

```json
{
  "role": "user",
  "content": "What is the capital of France?",
  "name": "alice"
}
```

| Field | Type | Description |
|---|---|---|
| `role` | string | One of `"system"`, `"user"`, `"assistant"`. |
| `content` | string | Message text. |
| `name` | string | Optional display name for the message author. |
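As a sketch, the constraints in the table above can be checked client-side before sending a request. `isValidMessage` is a hypothetical helper, not part of the gateway API:

```javascript
// Hypothetical client-side check that a message object matches the
// Message Object table: allowed role, string content, optional string name.
function isValidMessage(msg) {
  const roles = ['system', 'user', 'assistant'];
  return (
    typeof msg === 'object' && msg !== null &&
    roles.includes(msg.role) &&
    typeof msg.content === 'string' &&
    (msg.name === undefined || typeof msg.name === 'string')
  );
}
```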
## Non-Streaming Response

A standard OpenAI-compatible response object is returned. The gateway adds extra diagnostic headers:

| Header | Description |
|---|---|
| `x-gateway-provider` | The backend provider that served the request (e.g. `groq`, `gemini`). |
| `x-gateway-model` | The exact model used by the provider. |
| `x-gateway-attempts` | Number of provider attempts before a successful response. |
| `x-gateway-request-id` | Unique request identifier for support and log correlation. |
| `x-gateway-reasoning-effort` | The reasoning effort level applied to the request. |
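For logging or debugging, the diagnostic headers above can be collected from a `fetch()` response in one pass. `gatewayDiagnostics` is a hypothetical helper, not something the gateway provides:

```javascript
// Hypothetical helper: gather the gateway's diagnostic headers from a
// fetch() Response's Headers object into a plain object.
function gatewayDiagnostics(headers) {
  return {
    provider: headers.get('x-gateway-provider'),
    model: headers.get('x-gateway-model'),
    attempts: Number(headers.get('x-gateway-attempts') ?? '1'),
    requestId: headers.get('x-gateway-request-id'),
    reasoningEffort: headers.get('x-gateway-reasoning-effort'),
  };
}
```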
## Example Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1714000000,
  "model": "llama-3.3-70b-versatile",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 9,
    "total_tokens": 24
  }
}
```

## Streaming Response
When `stream: true`, the response is a stream of `data:` lines in SSE format, each containing a JSON delta object. The stream is terminated by `data: [DONE]`.
```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1714000000,"model":"llama-3.3-70b-versatile","choices":[{"index":0,"delta":{"role":"assistant","content":"The"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1714000000,"model":"llama-3.3-70b-versatile","choices":[{"index":0,"delta":{"content":" capital"},"finish_reason":null}]}

data: [DONE]
```
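The `content` fragments in successive delta objects concatenate into the full assistant reply. A minimal sketch (the helper name is ours, not part of the gateway):

```javascript
// Hypothetical helper: reassemble the assistant's text from parsed
// chat.completion.chunk objects. Chunks without a content delta
// (e.g. the initial role-only delta) contribute nothing.
function joinChunks(chunks) {
  return chunks.map((c) => c.choices?.[0]?.delta?.content ?? '').join('');
}
```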
## Examples

### Non-Streaming
```shell
curl https://your-gateway.workers.dev/v1/chat/completions \
  -H "Authorization: Bearer <GATEWAY_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "What is the capital of France?" }
    ],
    "temperature": 0.7,
    "max_tokens": 256
  }'
```

```javascript
const response = await fetch('https://your-gateway.workers.dev/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <GATEWAY_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'auto',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is the capital of France?' },
    ],
    temperature: 0.7,
    max_tokens: 256,
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);

// Inspect gateway headers
console.log('Provider:', response.headers.get('x-gateway-provider'));
console.log('Model:', response.headers.get('x-gateway-model'));
```
### Streaming

```shell
curl https://your-gateway.workers.dev/v1/chat/completions \
  -H "Authorization: Bearer <GATEWAY_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [{ "role": "user", "content": "Tell me a short story." }],
    "stream": true
  }'
```

```javascript
const response = await fetch('https://your-gateway.workers.dev/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <GATEWAY_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'auto',
    messages: [{ role: 'user', content: 'Tell me a short story.' }],
    stream: true,
  }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // SSE events can be split across network chunks, so buffer partial lines.
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep the last (possibly incomplete) line

  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice(6).trim();
    if (payload === '[DONE]') continue;

    const delta = JSON.parse(payload);
    const text = delta.choices?.[0]?.delta?.content ?? '';
    process.stdout.write(text);
  }
}
```
### With Reasoning Effort

```shell
curl https://your-gateway.workers.dev/v1/chat/completions \
  -H "Authorization: Bearer <GATEWAY_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [{ "role": "user", "content": "Solve: x^2 - 5x + 6 = 0" }],
    "reasoning_effort": "high"
  }'
```

```javascript
const response = await fetch('https://your-gateway.workers.dev/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <GATEWAY_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'auto',
    messages: [{ role: 'user', content: 'Solve: x^2 - 5x + 6 = 0' }],
    reasoning_effort: 'high',
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```