Chat Completions (OpenAI-compatible)

POST /api/chat/completions
Example request:

curl --request POST \
  --url https://easy-peasy.ai/api/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <api-key>' \
  --data '
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms."
    }
  ],
  "model": "gemini-3-flash",
  "temperature": 0.7,
  "max_tokens": 1000
}
'
Example response:

{
  "id": "chatcmpl-1741234567890",
  "object": "chat.completion",
  "created": 1741234567,
  "model": "gemini-3-flash",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing is..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  }
}
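The fields you usually want are choices[0].message.content and usage.total_tokens. A minimal TypeScript sketch of reading them from a parsed response (readCompletion is a hypothetical helper, and the types cover only the fields used here):

```typescript
// Minimal shape of the OpenAI-style completion response (only the fields read below).
interface ChatCompletion {
  id: string;
  model: string;
  choices: {
    index: number;
    message: { role: string; content: string };
    finish_reason: string;
  }[];
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Pull the assistant's reply and the total token count out of a completion.
function readCompletion(c: ChatCompletion): { text: string; tokens: number } {
  return { text: c.choices[0].message.content, tokens: c.usage.total_tokens };
}

const example: ChatCompletion = {
  id: "chatcmpl-1741234567890",
  model: "gemini-3-flash",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "Quantum computing is..." },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 25, completion_tokens: 150, total_tokens: 175 },
};

console.log(readCompletion(example)); // logs the reply text and the 175-token total
```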

OpenAI SDK Compatibility

This endpoint is fully compatible with the OpenAI SDK. Just change the baseURL and apiKey:
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_EASY_PEASY_API_KEY',
  baseURL: 'https://easy-peasy.ai/api',
});

// Non-streaming
const response = await client.chat.completions.create({
  model: 'gemini-3-flash',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});
console.log(response.choices[0].message.content);

// Streaming
const stream = await client.chat.completions.create({
  model: 'gemini-3-flash',
  messages: [{ role: 'user', content: 'Tell me a story.' }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Authentication

This endpoint supports two authentication methods:
  • x-api-key header: x-api-key: YOUR_API_KEY
  • Authorization header: Authorization: Bearer YOUR_API_KEY (OpenAI SDK default)
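For a raw HTTP client, either method is just one header on the request. A TypeScript sketch of both forms (authHeaders is a hypothetical helper, not part of any SDK):

```typescript
// Build request headers for either authentication style.
// `apiKey` stands in for your real Easy-Peasy API key.
function authHeaders(
  apiKey: string,
  style: "x-api-key" | "bearer",
): Record<string, string> {
  const base = { "Content-Type": "application/json" };
  return style === "x-api-key"
    ? { ...base, "x-api-key": apiKey }
    : { ...base, Authorization: `Bearer ${apiKey}` };
}

console.log(authHeaders("YOUR_API_KEY", "bearer").Authorization); // "Bearer YOUR_API_KEY"
```

The OpenAI SDK sends the Bearer form by default, which is why only the baseURL and apiKey need to change.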

Supported Models

Provider    Model ID            Description
Google      gemini-3-flash      Gemini 3 Flash — fast and efficient (default)
Google      gemini-3-pro        Gemini 3 Pro — advanced reasoning
Google      gemini-3.1-pro      Gemini 3.1 Pro — latest Gemini
Anthropic   claude-opus-4-6     Claude Opus 4.6 — most capable
Anthropic   claude-sonnet-4-6   Claude Sonnet 4.6 — balanced
Anthropic   claude-haiku-4-5    Claude Haiku 4.5 — fast
OpenAI      gpt-5               GPT-5 — latest flagship
OpenAI      gpt-5-mini          GPT-5 Mini — smaller, fast
OpenAI      gpt-5.4-instant     GPT-5.4 Instant — fast
OpenAI      gpt-5.4-thinking    GPT-5.4 Thinking — reasoning
OpenAI      gpt-5.4-pro         GPT-5.4 Pro — most capable
DeepSeek    deepseek-v3         DeepSeek V3
Kimi        kimi-k2.5           Kimi K2.5
GLM         glm-5               GLM-5
MiniMax     minimax-m2p5        MiniMax M2.5
xAI         grok-4              Grok 4

Multimodal Messages

You can send images and audio alongside text using the OpenAI multimodal message format.

Vision (Image Input)

Send images as URLs or base64 data URIs:
const response = await client.chat.completions.create({
  model: 'gemini-3-flash',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        {
          type: 'image_url',
          image_url: { url: 'https://example.com/photo.jpg' },
        },
      ],
    },
  ],
});
Base64 images are also supported:
{
  "type": "image_url",
  "image_url": {
    "url": "data:image/png;base64,iVBORw0KGgo..."
  }
}
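A data URI is simply the base64 of the raw image bytes behind a MIME-type prefix. A sketch using Node's built-in Buffer (toDataUri is a hypothetical helper; the MIME type must match the actual image format):

```typescript
// Encode raw image bytes as a data URI suitable for the image_url field.
function toDataUri(bytes: Uint8Array, mimeType: string): string {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString("base64")}`;
}

// PNG files begin with the magic bytes 0x89 'P' 'N' 'G', which is why
// base64-encoded PNGs start with "iVBOR" as in the example above.
const pngMagic = new Uint8Array([0x89, 0x50, 0x4e, 0x47]);
console.log(toDataUri(pngMagic, "image/png")); // "data:image/png;base64,iVBORw=="
```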

Audio Input

Send audio as base64-encoded data (mp3, wav, webm, mp4):
{
  "role": "user",
  "content": [
    { "type": "text", "text": "Transcribe this audio." },
    {
      "type": "input_audio",
      "input_audio": {
        "data": "base64-encoded-audio-data...",
        "format": "mp3"
      }
    }
  ]
}
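Unlike images, the data field here is plain base64 of the audio bytes, not a data URI. A TypeScript sketch of building such a content part (audioPart is a hypothetical helper):

```typescript
// Container formats accepted for input_audio, per the list above.
type AudioFormat = "mp3" | "wav" | "webm" | "mp4";

// Build an input_audio content part from raw audio bytes.
function audioPart(bytes: Uint8Array, format: AudioFormat) {
  return {
    type: "input_audio" as const,
    input_audio: { data: Buffer.from(bytes).toString("base64"), format },
  };
}

// MP3 files with metadata start with the ASCII bytes "ID3".
const part = audioPart(new Uint8Array([0x49, 0x44, 0x33]), "mp3");
console.log(part.input_audio.data); // "SUQz"
```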

Streaming

When stream: true is set, the response is delivered as Server-Sent Events using the OpenAI chunk format:
data: {"id":"chatcmpl-...","object":"chat.completion.chunk","created":...,"model":"gemini-3-flash","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-...","object":"chat.completion.chunk","created":...,"model":"gemini-3-flash","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
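If you read the stream without the SDK, each data: line can be parsed by hand. A sketch of a per-line parser (deltaFromLine is a hypothetical helper; it returns null for the finish chunk and the [DONE] sentinel):

```typescript
// Extract the text delta from one SSE line, or null when the line
// carries no content (non-data line, finish chunk, or the [DONE] sentinel).
function deltaFromLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length);
  if (payload === "[DONE]") return null;
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}

const sample =
  'data: {"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}';
console.log(deltaFromLine(sample)); // "Hello"
console.log(deltaFromLine("data: [DONE]")); // null
```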

Authorizations

x-api-key
string
header
required

API key for authentication. Get yours at https://easy-peasy.ai/settings/api

Headers

x-api-key
string

Your API key. Alternatively, use the Authorization: Bearer header.

Authorization
string

Bearer token authentication (alternative to x-api-key). Format: Bearer YOUR_API_KEY

Body

application/json
messages
object[]
required

Array of message objects for the conversation

model
enum<string>
default:gemini-3-flash

Model to use for the completion. See the Supported Models table above for the full list.

Available options:
gemini-3-flash,
gemini-3-pro,
gemini-3.1-pro,
claude-opus-4-6,
claude-sonnet-4-6,
claude-haiku-4-5,
gpt-5,
gpt-5-mini,
gpt-5.4-instant,
gpt-5.4-thinking,
gpt-5.4-pro,
deepseek-v3,
kimi-k2.5,
glm-5,
minimax-m2p5,
grok-4
stream
boolean
default:false

Enable Server-Sent Events streaming

temperature
number

Sampling temperature (0-2)

max_tokens
integer

Maximum tokens to generate

top_p
number

Nucleus sampling parameter

stop

Stop sequences

Response

Chat completion response

id
string

Unique identifier for the completion

object
enum<string>

Object type

Available options:
chat.completion
created
integer

Unix timestamp of creation

model
string

Model used for the completion

choices
object[]

Array of completion choices; each contains an index, a message, and a finish_reason

usage
object

Token usage for the request: prompt_tokens, completion_tokens, and total_tokens