llama-3.3-nemotron-super-49b-v1.5

Neural Network

Max answer length (in tokens): 43,253
Context size (in tokens): 131,072
Prompt cost (per 1M tokens): 11.79
Answer cost (per 1M tokens): 47.14
Image prompt (per 1K tokens): 0

*Prices for using the API.
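As a rough illustration of the pricing above, the per-1M-token rates can be turned into a per-request estimate. This is a sketch that assumes the listed prices apply linearly per token; actual billing rules are BotHub's:

```javascript
// Estimate a request's cost from the per-1M-token prices listed above.
// Assumes linear per-token billing (an assumption, not a billing spec).
const PROMPT_COST_PER_M = 11.79;  // prompt cost per 1M tokens
const ANSWER_COST_PER_M = 47.14;  // answer cost per 1M tokens

function estimateCost(promptTokens, answerTokens) {
  return (promptTokens / 1e6) * PROMPT_COST_PER_M
       + (answerTokens / 1e6) * ANSWER_COST_PER_M;
}

// e.g. a 10,000-token prompt with a 2,000-token answer
console.log(estimateCost(10000, 2000).toFixed(4)); // ~0.2122
```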
Providers

On BotHub, you can select your own providers for llama-3.3-nemotron-super-49b-v1.5 requests. If you haven't made a selection, we will automatically find suitable providers that can handle the size and parameters of your request.
Code example and API for llama-3.3-nemotron-super-49b-v1.5

We offer full access to the OpenAI API through our service. All our endpoints are fully compatible with OpenAI's endpoints and can be used both with plugins and when developing your own software through the SDK.
JavaScript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: '<your bothub access token>',
  baseURL: 'https://bothub.chat/api/v2/openai/v1'
});

// Text generation (non-streaming)
async function generate() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama-3.3-nemotron-super-49b-v1.5',
  });
  console.log(chatCompletion.choices[0].message.content);
}

// Text generation (streaming)
async function generateStream() {
  const stream = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama-3.3-nemotron-super-49b-v1.5',
    stream: true
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0].delta?.content ?? '');
  }
}

generate();
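Since the model has a finite context window (131,072 tokens per the table above), it can help to sanity-check a request before sending it. The sketch below uses a crude characters-per-token heuristic; this is an assumption for illustration, not the model's real tokenizer:

```javascript
// Pre-flight check that prompt + requested answer fit the context window.
// The chars/4 token estimate is a rough heuristic, not the real tokenizer.
const CONTEXT_SIZE = 131072; // context size in tokens, from the table above

function fitsContext(prompt, maxAnswerTokens) {
  const estimatedPromptTokens = Math.ceil(prompt.length / 4);
  return estimatedPromptTokens + maxAnswerTokens <= CONTEXT_SIZE;
}

console.log(fitsContext('Say this is a test', 1024)); // true
```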

How does llama-3.3-nemotron-super-49b-v1.5 work?

BotHub gathers information...