o3-mini

Neural Network

GPT o3-mini is a neural network from OpenAI, released on January 31, 2025. It focuses on programming, mathematics, and analytics tasks, giving users fast, accurate answers even in complex areas, which saves time and simplifies decision-making.

Max answer length (in tokens): 100,000
Context size (in tokens): 200,000
Prompt cost (per 1M tokens): 129.64
Answer cost (per 1M tokens): 518.57
Image prompt (per 1K tokens): 0

*Prices for using the API.
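To see what the per-1M-token prices above mean for a single request, here is a minimal sketch of the arithmetic. It assumes the listed prompt and answer prices (129.64 and 518.57 per 1M tokens, in the platform's billing units); `requestCost` is an illustrative helper, not part of any SDK.

```javascript
// Rough per-request cost estimate from the price table above.
// Assumption: prices are per 1,000,000 tokens.
function requestCost(promptTokens, answerTokens) {
  const PROMPT_PRICE = 129.64; // per 1M prompt tokens
  const ANSWER_PRICE = 518.57; // per 1M answer tokens
  return (promptTokens / 1e6) * PROMPT_PRICE
       + (answerTokens / 1e6) * ANSWER_PRICE;
}

// e.g. a 10,000-token prompt with a 2,000-token answer:
console.log(requestCost(10000, 2000).toFixed(4));
```

So a typical request costs only a small fraction of the headline per-1M price.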
Providers for o3-mini

On BotHub, you can select your own providers for requests. If you haven't made a selection, we will automatically find suitable providers that can handle the size and parameters of your request.
Code example and API for o3-mini

We offer full access to the OpenAI API through our service. All our endpoints are fully compatible with OpenAI endpoints and can be used both with plugins and when developing your own software through the SDK.
JavaScript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: '<your bothub access token>',
  baseURL: 'https://openai.bothub.chat/v1'
});

// Text generation (non-streaming): wait for the full answer
async function generate() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'o3-mini',
  });
  console.log(chatCompletion.choices[0].message.content);
}

// Text generation (streaming): print tokens as they arrive
async function generateStream() {
  const stream = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'o3-mini',
    stream: true,
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0].delta?.content ?? '');
  }
}

generate();

How does o3-mini work?

The key advantages of GPT o3-mini include a 200,000-token context for in-depth dialogues, three levels of reasoning (low, medium, high), and an affordable cost compared to GPT-4o. In tests, the model is 24% faster than o1-mini and demonstrates high accuracy in mathematical and scientific tasks. Its built-in JSON handling facilitates automation, and its function calling feature simplifies application integration. Ultimately, users save resources, resolve complex tasks quickly, and enhance the quality of their projects.
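The reasoning levels and JSON handling described above map onto request parameters in the OpenAI-compatible API. The sketch below assumes the `reasoning_effort` and `response_format` chat-completion parameters; `buildRequest` is an illustrative helper, not part of the SDK, and the resulting object would be passed to `openai.chat.completions.create(...)` as in the code example earlier on this page.

```javascript
// Build chat-completion options for o3-mini.
// effort: one of the three reasoning levels 'low' | 'medium' | 'high'
// json:   when true, ask for JSON-mode output (built-in JSON handling)
function buildRequest(prompt, effort = 'medium', json = false) {
  const request = {
    model: 'o3-mini',
    messages: [{ role: 'user', content: prompt }],
    reasoning_effort: effort,
  };
  if (json) {
    request.response_format = { type: 'json_object' };
  }
  return request;
}

console.log(JSON.stringify(buildRequest('List three primes as JSON.', 'high', true)));
```

Raising the effort level trades latency for deeper reasoning, so 'low' suits quick lookups while 'high' suits hard mathematical or scientific tasks.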