Gemini models are accessible through the OpenAI libraries (Python and TypeScript/JavaScript) as well as the REST API. All you need to do is update three lines of code and use your Gemini API key. If you are not already using the OpenAI libraries, we recommend calling the Gemini API directly.
Python
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
response = client.chat.completions.create(
model="gemini-2.5-flash",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{
"role": "user",
"content": "Explain to me how AI works"
}
]
)
print(response.choices[0].message)
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});
const response = await openai.chat.completions.create({
model: "gemini-2.0-flash",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{
role: "user",
content: "Explain to me how AI works",
},
],
});
console.log(response.choices[0].message);
REST
curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
"model": "gemini-2.0-flash",
"messages": [
{"role": "user", "content": "Explain to me how AI works"}
]
}'
What changed? Just three lines!
api_key="GEMINI_API_KEY": Replace "GEMINI_API_KEY" with your actual Gemini API key, which you can get in Google AI Studio (see the sketch below for reading it from an environment variable instead of hardcoding it).
base_url="https://generativelanguage.googleapis.com/v1beta/openai/": This tells the OpenAI library to send requests to the Gemini API endpoint instead of the default URL.
model="gemini-2.0-flash": Choose a compatible Gemini model.
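If you prefer not to hardcode the key, here is a minimal Python sketch that reads it from an environment variable instead (the variable name GEMINI_API_KEY is just a convention used here):

import os
from openai import OpenAI

# Read the API key from the environment rather than embedding it in source code.
client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)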
Thinking
Gemini 2.5 models are trained to think through complex problems, which leads to significantly improved reasoning. The Gemini API includes a "thinking budget" parameter that gives you fine-grained control over how much the model will think.
Unlike the Gemini API, the OpenAI API offers three levels of thinking control: "low", "medium", and "high", which map to 1,024, 8,192, and 24,576 tokens, respectively.
If you want to disable thinking, you can set reasoning_effort to "none" (note that reasoning cannot be turned off for 2.5 Pro models).
Python
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
response = client.chat.completions.create(
model="gemini-2.5-flash",
reasoning_effort="low",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{
"role": "user",
"content": "Explain to me how AI works"
}
]
)
print(response.choices[0].message)
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});
const response = await openai.chat.completions.create({
model: "gemini-2.5-flash",
reasoning_effort: "low",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{
role: "user",
content: "Explain to me how AI works",
},
],
});
console.log(response.choices[0].message);
REST
curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"reasoning_effort": "low",
"messages": [
{"role": "user", "content": "Explain to me how AI works"}
]
}'
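To turn thinking off entirely on models that allow it, the same request works with reasoning_effort set to "none"; a minimal Python sketch (same client setup as above):

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

# Disable thinking entirely (not supported on 2.5 Pro models).
response = client.chat.completions.create(
    model="gemini-2.5-flash",
    reasoning_effort="none",
    messages=[{"role": "user", "content": "Explain to me how AI works"}]
)
print(response.choices[0].message)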
Gemini thinking models also produce thought summaries and can use exact thinking budgets. You can use the extra_body field to include these fields in your request.
Note that reasoning_effort and thinking_budget overlap in functionality, so they can't be used at the same time.
Python
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
response = client.chat.completions.create(
model="gemini-2.5-flash",
messages=[{"role": "user", "content": "Explain to me how AI works"}],
extra_body={
'extra_body': {
"google": {
"thinking_config": {
"thinking_budget": 800,
"include_thoughts": True
}
}
}
}
)
print(response.choices[0].message)
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});
const response = await openai.chat.completions.create({
model: "gemini-2.5-flash",
messages: [{role: "user", content: "Explain to me how AI works",}],
extra_body: {
"google": {
"thinking_config": {
"thinking_budget": 800,
"include_thoughts": true
}
}
}
});
console.log(response.choices[0].message);
REST
curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"messages": [{"role": "user", "content": "Explain to me how AI works"}],
"extra_body": {
"google": {
"thinking_config": {
"include_thoughts": true
}
}
}
}'
Streaming
The Gemini API supports streaming responses.
Python
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
response = client.chat.completions.create(
model="gemini-2.0-flash",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
stream=True
)
for chunk in response:
    print(chunk.choices[0].delta)
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});
async function main() {
const completion = await openai.chat.completions.create({
model: "gemini-2.0-flash",
messages: [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
stream: true,
});
for await (const chunk of completion) {
console.log(chunk.choices[0].delta.content);
}
}
main();
REST
curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
"model": "gemini-2.0-flash",
"messages": [
{"role": "user", "content": "Explain to me how AI works"}
],
"stream": true
}'
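The streamed chunks carry incremental deltas; if you also want the full text at the end, you can accumulate the deltas as they arrive. A minimal Python sketch, assuming the same client configuration as above:

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

stream = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True
)

# Collect the incremental text deltas into one string.
full_text = ""
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        full_text += delta
print(full_text)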
Function calling
Function calling makes it easier for you to get structured data outputs from generative models and is supported in the Gemini API.
Python
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. Chicago, IL",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
}
}
]
messages = [{"role": "user", "content": "What's the weather like in Chicago today?"}]
response = client.chat.completions.create(
model="gemini-2.0-flash",
messages=messages,
tools=tools,
tool_choice="auto"
)
print(response)
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});
async function main() {
const messages = [{"role": "user", "content": "What's the weather like in Chicago today?"}];
const tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. Chicago, IL",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
}
}
];
const response = await openai.chat.completions.create({
model: "gemini-2.0-flash",
messages: messages,
tools: tools,
tool_choice: "auto",
});
console.log(response);
}
main();
REST
curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
"model": "gemini-2.0-flash",
"messages": [
{
"role": "user",
"content": "What'\''s the weather like in Chicago today?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. Chicago, IL"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}'
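The response above only contains the model's request to call get_weather; your code still has to run the function and send its result back so the model can produce a final answer. A minimal Python sketch of that round trip, assuming the client, messages, and tools from the Python example above and a hypothetical in-line weather lookup:

import json

# First call: the model decides whether to call the declared tool.
response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Hypothetical result of actually looking up the weather.
weather = {"location": args["location"], "forecast": "sunny, 22 degrees Celsius"}

# Append the assistant's tool call and the tool result, then ask for the final answer.
messages.append(response.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps(weather)
})
final = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=messages,
    tools=tools
)
print(final.choices[0].message.content)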
Image understanding
Gemini models are natively multimodal and provide best-in-class performance on many common vision tasks.
Python
import base64
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
# Function to encode the image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')
# Getting the base64 string
base64_image = encode_image("Path/to/agi/image.jpeg")
response = client.chat.completions.create(
model="gemini-2.0-flash",
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "What is in this image?",
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{base64_image}"
},
},
],
}
],
)
print(response.choices[0])
JavaScript
import OpenAI from "openai";
import fs from 'fs/promises';
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});
async function encodeImage(imagePath) {
try {
const imageBuffer = await fs.readFile(imagePath);
return imageBuffer.toString('base64');
} catch (error) {
console.error("Error encoding image:", error);
return null;
}
}
async function main() {
const imagePath = "Path/to/agi/image.jpeg";
const base64Image = await encodeImage(imagePath);
const messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What is in this image?",
},
{
"type": "image_url",
"image_url": {
"url": `data:image/jpeg;base64,${base64Image}`
},
},
],
}
];
try {
const response = await openai.chat.completions.create({
model: "gemini-2.0-flash",
messages: messages,
});
console.log(response.choices[0]);
} catch (error) {
console.error("Error calling Gemini API:", error);
}
}
main();
REST
bash -c '
base64_image=$(base64 -i "Path/to/agi/image.jpeg");
curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d "{
\"model\": \"gemini-2.0-flash\",
\"messages\": [
{
\"role\": \"user\",
\"content\": [
{ \"type\": \"text\", \"text\": \"What is in this image?\" },
{
\"type\": \"image_url\",
\"image_url\": { \"url\": \"data:image/jpeg;base64,${base64_image}\" }
}
]
}
]
}"
'
Image generation
Generate an image:
Python
import base64
from openai import OpenAI
from PIL import Image
from io import BytesIO
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)
response = client.images.generate(
model="imagen-3.0-generate-002",
prompt="a portrait of a sheepadoodle wearing a cape",
response_format='b64_json',
n=1,
)
for image_data in response.data:
    image = Image.open(BytesIO(base64.b64decode(image_data.b64_json)))
    image.show()
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});
async function main() {
const image = await openai.images.generate(
{
model: "imagen-3.0-generate-002",
prompt: "a portrait of a sheepadoodle wearing a cape",
response_format: "b64_json",
n: 1,
}
);
console.log(image.data);
}
main();
REST
curl "https://generativelanguage.googleapis.com/v1beta/openai/images/generations" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
"model": "imagen-3.0-generate-002",
"prompt": "a portrait of a sheepadoodle wearing a cape",
"response_format": "b64_json",
"n": 1,
}'
Audio understanding
Analyze audio input:
Python
import base64
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
with open("/path/to/your/audio/file.wav", "rb") as audio_file:
    base64_audio = base64.b64encode(audio_file.read()).decode('utf-8')
response = client.chat.completions.create(
model="gemini-2.0-flash",
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "Transcribe this audio",
},
{
"type": "input_audio",
"input_audio": {
"data": base64_audio,
"format": "wav"
}
}
],
}
],
)
print(response.choices[0].message.content)
JavaScript
import fs from "fs";
import OpenAI from "openai";
const client = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});
const audioFile = fs.readFileSync("/path/to/your/audio/file.wav");
const base64Audio = Buffer.from(audioFile).toString("base64");
async function main() {
const response = await client.chat.completions.create({
model: "gemini-2.0-flash",
messages: [
{
role: "user",
content: [
{
type: "text",
text: "Transcribe this audio",
},
{
type: "input_audio",
input_audio: {
data: base64Audio,
format: "wav",
},
},
],
},
],
});
console.log(response.choices[0].message.content);
}
main();
REST
bash -c '
base64_audio=$(base64 -i "/path/to/your/audio/file.wav");
curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d "{
\"model\": \"gemini-2.0-flash\",
\"messages\": [
{
\"role\": \"user\",
\"content\": [
{ \"type\": \"text\", \"text\": \"Transcribe this audio file.\" },
{
\"type\": \"input_audio\",
\"input_audio\": {
\"data\": \"${base64_audio}\",
\"format\": \"wav\"
}
}
]
}
]
}"
'
Structured output
Gemini models can output JSON objects in any structure you define.
Python
from pydantic import BaseModel
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]
completion = client.beta.chat.completions.parse(
model="gemini-2.0-flash",
messages=[
{"role": "system", "content": "Extract the event information."},
{"role": "user", "content": "John and Susan are going to an AI conference on Friday."},
],
response_format=CalendarEvent,
)
print(completion.choices[0].message.parsed)
JavaScript
import OpenAI from "openai";
import { zodResponseFormat } from "openai/helpers/zod";
import { z } from "zod";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai"
});
const CalendarEvent = z.object({
name: z.string(),
date: z.string(),
participants: z.array(z.string()),
});
const completion = await openai.chat.completions.parse({
model: "gemini-2.0-flash",
messages: [
{ role: "system", content: "Extract the event information." },
{ role: "user", content: "John and Susan are going to an AI conference on Friday" },
],
response_format: zodResponseFormat(CalendarEvent, "event"),
});
const event = completion.choices[0].message.parsed;
console.log(event);
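The parsed field in the Python example is an instance of your Pydantic model, so you can work with it like any typed object; a small usage sketch:

# completion comes from the client.beta.chat.completions.parse call above.
event = completion.choices[0].message.parsed  # a CalendarEvent instance
print(event.name)
print(event.date)
print(event.participants)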
Embeddings
Text embeddings measure how closely text strings are related and can be generated with the Gemini API.
Python
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
response = client.embeddings.create(
input="Your text string goes here",
model="gemini-embedding-001"
)
print(response.data[0].embedding)
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});
async function main() {
const embedding = await openai.embeddings.create({
model: "gemini-embedding-001",
input: "Your text string goes here",
});
console.log(embedding);
}
main();
REST
curl "https://generativelanguage.googleapis.com/v1beta/openai/embeddings" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
"input": "Your text string goes here",
"model": "gemini-embedding-001"
}'
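Because embeddings are meant for measuring how related two strings are, a common next step is comparing two vectors with cosine similarity. A minimal Python sketch, assuming the same client configuration as above:

import math
from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

def embed(text):
    return client.embeddings.create(input=text, model="gemini-embedding-001").data[0].embedding

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Higher values mean the two strings are more closely related.
print(cosine_similarity(embed("How do I bake bread?"), embed("Bread baking instructions")))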
Batch API
You can create batch jobs, submit them, and check their status using the OpenAI library.
You must prepare the JSONL file in the OpenAI input format. For example:
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gemini-2.5-flash", "messages": [{"role": "user", "content": "Tell me a one-sentence joke."}]}}
{"custom_id": "request-2", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gemini-2.5-flash", "messages": [{"role": "user", "content": "Why is the sky blue?"}]}}
OpenAI compatibility for Batch supports creating a batch, monitoring the job status, and viewing batch results.
Compatibility for upload and download is not currently supported, so the following example uses the genai client for file uploads and downloads instead, just as you would when using the Gemini Batch API.
Python
from openai import OpenAI
# Regular genai client for uploads & downloads
from google import genai
from google.genai import types
import time

client = genai.Client()
openai_client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

# Upload the JSONL file in OpenAI input format, using regular genai SDK
uploaded_file = client.files.upload(
    file='my-batch-requests.jsonl',
    config=types.UploadFileConfig(display_name='my-batch-requests', mime_type='jsonl')
)

# Create batch
batch = openai_client.batches.create(
    input_file_id=uploaded_file.name,
    endpoint="/v1/chat/completions",
    completion_window="24h"
)

# Wait for batch to finish (up to 24h)
while True:
    batch = openai_client.batches.retrieve(batch.id)
    if batch.status in ('completed', 'failed', 'cancelled', 'expired'):
        break
    print(f"Batch not finished. Current state: {batch.status}. Waiting 30 seconds...")
    time.sleep(30)
print(f"Batch finished: {batch}")

# Download results in OpenAI output format, using regular genai SDK
file_content = client.files.download(file=batch.output_file_id).decode('utf-8')

# See batch_output JSONL in OpenAI output format
for line in file_content.splitlines():
    print(line)
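Each line of the downloaded file is one result in the OpenAI batch output format. A minimal sketch for pulling the generated text out of each line (assuming the standard OpenAI output schema with a response.body payload):

import json

# file_content comes from the download step above.
for line in file_content.splitlines():
    result = json.loads(line)
    body = result.get("response", {}).get("body", {})
    if body:
        print(result["custom_id"], body["choices"][0]["message"]["content"])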
The OpenAI SDK also supports generating embeddings with the Batch API. To do so, replace the endpoint field of the create method with an embeddings endpoint, and switch the JSONL file to the embeddings url, model, and input fields:
# JSONL file using embeddings model and endpoint
# {"custom_id": "request-1", "method": "POST", "url": "/v1/embeddings", "body": {"model": "ggemini-embedding-001", "messages": [{"role": "user", "content": "Tell me a one-sentence joke."}]}}
# {"custom_id": "request-2", "method": "POST", "url": "/v1/embeddings", "body": {"model": "gemini-embedding-001", "messages": [{"role": "user", "content": "Why is the sky blue?"}]}}
# ...
# Create batch step with embeddings endpoint
batch = openai_client.batches.create(
input_file_id=uploaded_file.name,
endpoint="/v1/embeddings",
completion_window="24h"
)
For a complete example, see the Batch embedding generation section in the OpenAI compatibility cookbook.
extra_body
There are several features supported by Gemini that are not available in OpenAI models but can be enabled using the extra_body field.
extra_body features
cached_content: Corresponds to Gemini's GenerateContentRequest.cached_content.
thinking_config: Corresponds to Gemini's ThinkingConfig.
cached_content
Here is an example of using extra_body to set cached_content:
Python
from openai import OpenAI
client = OpenAI(
api_key=MY_API_KEY,
base_url="https://generativelanguage.googleapis.com/v1beta/"
)
stream = client.chat.completions.create(
model="gemini-2.5-pro",
n=1,
messages=[
{
"role": "user",
"content": "Summarize the video"
}
],
stream=True,
stream_options={'include_usage': True},
extra_body={
'extra_body':
{
'google': {
'cached_content': "cachedContents/0000aaaa1111bbbb2222cccc3333dddd4444eeee"
}
}
}
)
for chunk in stream:
    print(chunk)
    # usage may be missing on intermediate chunks
    if chunk.usage:
        print(chunk.usage.to_dict())
List models
Get a list of available Gemini models:
Python
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
models = client.models.list()
for model in models:
    print(model.id)
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});
async function main() {
const list = await openai.models.list();
for await (const model of list) {
console.log(model);
}
}
main();
REST
curl https://generativelanguage.googleapis.com/v1beta/openai/models \
-H "Authorization: Bearer GEMINI_API_KEY"
Retrieve a model
Retrieve a Gemini model:
Python
from openai import OpenAI
client = OpenAI(
api_key="GEMINI_API_KEY",
base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
model = client.models.retrieve("gemini-2.0-flash")
print(model.id)
JavaScript
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "GEMINI_API_KEY",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});
async function main() {
const model = await openai.models.retrieve("gemini-2.0-flash");
console.log(model.id);
}
main();
REST
curl https://generativelanguage.googleapis.com/v1beta/openai/models/gemini-2.0-flash \
-H "Authorization: Bearer GEMINI_API_KEY"
Current limitations
Support for the OpenAI libraries is still in beta while we extend feature support.
If you have questions about supported parameters or upcoming features, or if you run into any issues getting started with Gemini, join our developer forum.
What's next
You can find more detailed examples in our OpenAI Compatibility Colab.