The Interactions API is a unified interface for interacting with Gemini models and agents. It simplifies state management, tool orchestration, and long-running tasks. For a complete view of the API schema, see the API reference.
The following example shows how to call the Interactions API with a text prompt.
Python
from google import genai
client = genai.Client()
interaction = client.interactions.create(
model="gemini-3-pro-preview",
input="Tell me a short joke about programming."
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
model: 'gemini-3-pro-preview',
input: 'Tell me a short joke about programming.',
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-3-pro-preview",
"input": "Tell me a short joke about programming."
}'
Basic interactions
The Interactions API is available through our existing SDKs. The simplest way to interact with the model is to provide a text prompt. input can be a string, a list of content objects, or a list of turns with roles and content objects.
Python
from google import genai
client = genai.Client()
interaction = client.interactions.create(
model="gemini-2.5-flash",
input="Tell me a short joke about programming."
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'Tell me a short joke about programming.',
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "Tell me a short joke about programming."
}'
Conversation
You can build multi-turn conversations in two ways:
- Stateful, by referencing a previous interaction
- Stateless, by providing the full conversation history
Stateful conversation
Pass the id of the previous interaction in the previous_interaction_id parameter to continue a conversation.
Python
from google import genai
client = genai.Client()
# 1. First turn
interaction1 = client.interactions.create(
model="gemini-2.5-flash",
input="Hi, my name is Phil."
)
print(f"Model: {interaction1.outputs[-1].text}")
# 2. Second turn (passing previous_interaction_id)
interaction2 = client.interactions.create(
model="gemini-2.5-flash",
input="What is my name?",
previous_interaction_id=interaction1.id
)
print(f"Model: {interaction2.outputs[-1].text}")
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
// 1. First turn
const interaction1 = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'Hi, my name is Phil.'
});
console.log(`Model: ${interaction1.outputs[interaction1.outputs.length - 1].text}`);
// 2. Second turn (passing previous_interaction_id)
const interaction2 = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'What is my name?',
previous_interaction_id: interaction1.id
});
console.log(`Model: ${interaction2.outputs[interaction2.outputs.length - 1].text}`);
REST
# 1. First turn
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "Hi, my name is Phil."
}'
# 2. Second turn (Replace INTERACTION_ID with the ID from the previous interaction)
# curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
# -H "Content-Type: application/json" \
# -H "x-goog-api-key: $GEMINI_API_KEY" \
# -d '{
# "model": "gemini-2.5-flash",
# "input": "What is my name?",
# "previous_interaction_id": "INTERACTION_ID"
# }'
Retrieve previous stateful interactions
Use the interaction id to retrieve previous turns of the conversation.
Python
previous_interaction = client.interactions.get("<YOUR_INTERACTION_ID>")
print(previous_interaction)
JavaScript
const previous_interaction = await client.interactions.get("<YOUR_INTERACTION_ID>");
console.log(previous_interaction);
REST
curl -X GET "https://generativelanguage.googleapis.com/v1beta/interactions/<YOUR_INTERACTION_ID>" \
-H "x-goog-api-key: $GEMINI_API_KEY"
Stateless conversation
You can manage the conversation history manually on the client.
Python
from google import genai
client = genai.Client()
conversation_history = [
{
"role": "user",
"content": "What are the three largest cities in Spain?"
}
]
interaction1 = client.interactions.create(
model="gemini-2.5-flash",
input=conversation_history
)
print(f"Model: {interaction1.outputs[-1].text}")
conversation_history.append({"role": "model", "content": interaction1.outputs})
conversation_history.append({
"role": "user",
"content": "What is the most famous landmark in the second one?"
})
interaction2 = client.interactions.create(
model="gemini-2.5-flash",
input=conversation_history
)
print(f"Model: {interaction2.outputs[-1].text}")
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const conversationHistory = [
{
role: 'user',
content: "What are the three largest cities in Spain?"
}
];
const interaction1 = await client.interactions.create({
model: 'gemini-2.5-flash',
input: conversationHistory
});
console.log(`Model: ${interaction1.outputs[interaction1.outputs.length - 1].text}`);
conversationHistory.push({ role: 'model', content: interaction1.outputs });
conversationHistory.push({
role: 'user',
content: "What is the most famous landmark in the second one?"
});
const interaction2 = await client.interactions.create({
model: 'gemini-2.5-flash',
input: conversationHistory
});
console.log(`Model: ${interaction2.outputs[interaction2.outputs.length - 1].text}`);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": [
{
"role": "user",
"content": "What are the three largest cities in Spain?"
},
{
"role": "model",
"content": "The three largest cities in Spain are Madrid, Barcelona, and Valencia."
},
{
"role": "user",
"content": "What is the most famous landmark in the second one?"
}
]
}'
Multimodal capabilities
You can use the Interactions API for multimodal use cases, such as image understanding or image generation.
Multimodal understanding
You can provide multimodal data either inline as base64-encoded data or through the Files API for larger files.
Image understanding
Python
import base64
from pathlib import Path
from google import genai
client = genai.Client()
# Read and encode the image
with open(Path(__file__).parent / "car.png", "rb") as f:
base64_image = base64.b64encode(f.read()).decode('utf-8')
interaction = client.interactions.create(
model="gemini-2.5-flash",
input=[
{"type": "text", "text": "Describe the image."},
{"type": "image", "data": base64_image, "mime_type": "image/png"}
]
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
import * as fs from 'fs';
const client = new GoogleGenAI({});
const base64Image = fs.readFileSync('car.png', { encoding: 'base64' });
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: [
{ type: 'text', text: 'Describe the image.' },
{ type: 'image', data: base64Image, mime_type: 'image/png' }
]
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": [
{"type": "text", "text": "Describe the image."},
{"type": "image", "data": "'"$(base64 -w0 car.png)"'", "mime_type": "image/png"}
]
}'
Audio understanding
Python
import base64
from pathlib import Path
from google import genai
client = genai.Client()
# Read and encode the audio
with open(Path(__file__).parent / "speech.wav", "rb") as f:
base64_audio = base64.b64encode(f.read()).decode('utf-8')
interaction = client.interactions.create(
model="gemini-2.5-flash",
input=[
{"type": "text", "text": "What does this audio say?"},
{"type": "audio", "data": base64_audio, "mime_type": "audio/wav"}
]
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
import * as fs from 'fs';
const client = new GoogleGenAI({});
const base64Audio = fs.readFileSync('speech.wav', { encoding: 'base64' });
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: [
{ type: 'text', text: 'What does this audio say?' },
{ type: 'audio', data: base64Audio, mime_type: 'audio/wav' }
]
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": [
{"type": "text", "text": "What does this audio say?"},
{"type": "audio", "data": "'"$(base64 -w0 speech.wav)"'", "mime_type": "audio/wav"}
]
}'
Video understanding
Python
import base64
from pathlib import Path
from google import genai
client = genai.Client()
# Read and encode the video
with open(Path(__file__).parent / "video.mp4", "rb") as f:
base64_video = base64.b64encode(f.read()).decode('utf-8')
print("Analyzing video...")
interaction = client.interactions.create(
model="gemini-2.5-flash",
input=[
{"type": "text", "text": "What is happening in this video? Provide a timestamped summary."},
{"type": "video", "data": base64_video, "mime_type": "video/mp4" }
]
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
import * as fs from 'fs';
const client = new GoogleGenAI({});
const base64Video = fs.readFileSync('video.mp4', { encoding: 'base64' });
console.log('Analyzing video...');
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: [
{ type: 'text', text: 'What is happening in this video? Provide a timestamped summary.' },
{ type: 'video', data: base64Video, mime_type: 'video/mp4'}
]
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": [
{"type": "text", "text": "What is happening in this video?"},
{"type": "video", "mime_type": "video/mp4", "data": "'"$(base64 -w0 video.mp4)"'"}
]
}'
Document understanding (PDF)
Python
import base64
from google import genai
client = genai.Client()
with open("sample.pdf", "rb") as f:
base64_pdf = base64.b64encode(f.read()).decode('utf-8')
interaction = client.interactions.create(
model="gemini-2.5-flash",
input=[
{"type": "text", "text": "What is this document about?"},
{"type": "document", "data": base64_pdf, "mime_type": "application/pdf"}
]
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
import * as fs from 'fs';
const client = new GoogleGenAI({});
const base64Pdf = fs.readFileSync('sample.pdf', { encoding: 'base64' });
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: [
{ type: 'text', text: 'What is this document about?' },
{ type: 'document', data: base64Pdf, mime_type: 'application/pdf' }
],
});
console.log(interaction.outputs[0].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": [
{"type": "text", "text": "What is this document about?"},
{"type": "document", "data": "'"$(base64 -w0 sample.pdf)"'", "mime_type": "application/pdf"}
]
}'
Multimodal generation
You can use the Interactions API to generate multimodal outputs.
Image generation
Python
import base64
from google import genai
client = genai.Client()
interaction = client.interactions.create(
model="gemini-3-pro-image-preview",
input="Generate an image of a futuristic city.",
response_modalities=["IMAGE"]
)
for output in interaction.outputs:
if output.type == "image":
print(f"Generated image with mime_type: {output.mime_type}")
# Save the image
with open("generated_city.png", "wb") as f:
f.write(base64.b64decode(output.data))
JavaScript
import { GoogleGenAI } from '@google/genai';
import * as fs from 'fs';
const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
model: 'gemini-3-pro-image-preview',
input: 'Generate an image of a futuristic city.',
response_modalities: ['IMAGE']
});
for (const output of interaction.outputs) {
if (output.type === 'image') {
console.log(`Generated image with mime_type: ${output.mime_type}`);
// Save the image
fs.writeFileSync('generated_city.png', Buffer.from(output.data, 'base64'));
}
}
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-3-pro-image-preview",
"input": "Generate an image of a futuristic city.",
"response_modalities": ["IMAGE"]
}'
Agentic capabilities
The Interactions API is designed for building and communicating with agents, and supports function calling, built-in tools, structured outputs, and the Model Context Protocol (MCP).
Agents
You can use specialized agents, such as deep-research-pro-preview-12-2025, for complex tasks. To learn more about the Gemini Deep Research agent, see the Deep Research guide.
Python
import time
from google import genai
client = genai.Client()
# 1. Start the Deep Research Agent
initial_interaction = client.interactions.create(
input="Research the history of the Google TPUs with a focus on 2025 and 2026.",
agent="deep-research-pro-preview-12-2025",
background=True
)
print(f"Research started. Interaction ID: {initial_interaction.id}")
# 2. Poll for results
while True:
interaction = client.interactions.get(initial_interaction.id)
print(f"Status: {interaction.status}")
if interaction.status == "completed":
print("\nFinal Report:\n", interaction.outputs[-1].text)
break
elif interaction.status in ["failed", "cancelled"]:
print(f"Failed with status: {interaction.status}")
break
time.sleep(10)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
// 1. Start the Deep Research Agent
const initialInteraction = await client.interactions.create({
input: 'Research the history of the Google TPUs with a focus on 2025 and 2026.',
agent: 'deep-research-pro-preview-12-2025',
background: true
});
console.log(`Research started. Interaction ID: ${initialInteraction.id}`);
// 2. Poll for results
while (true) {
const interaction = await client.interactions.get(initialInteraction.id);
console.log(`Status: ${interaction.status}`);
if (interaction.status === 'completed') {
console.log('\nFinal Report:\n', interaction.outputs[interaction.outputs.length - 1].text);
break;
} else if (['failed', 'cancelled'].includes(interaction.status)) {
console.log(`Failed with status: ${interaction.status}`);
break;
}
await new Promise(resolve => setTimeout(resolve, 10000));
}
REST
# 1. Start the Deep Research Agent
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"input": "Research the history of the Google TPUs with a focus on 2025 and 2026.",
"agent": "deep-research-pro-preview-12-2025",
"background": true
}'
# 2. Poll for results (Replace INTERACTION_ID with the ID from the previous interaction)
# curl -X GET "https://generativelanguage.googleapis.com/v1beta/interactions/INTERACTION_ID" \
# -H "x-goog-api-key: $GEMINI_API_KEY"
Tools and function calling
This section explains how to use function calling to define custom tools and how to use Google's built-in tools with the Interactions API.
Function calling
Python
from google import genai
client = genai.Client()
# 1. Define the tool
def get_weather(location: str):
"""Gets the weather for a given location."""
return f"The weather in {location} is sunny."
weather_tool = {
"type": "function",
"name": "get_weather",
"description": "Gets the weather for a given location.",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}
},
"required": ["location"]
}
}
# 2. Send the request with tools
interaction = client.interactions.create(
model="gemini-2.5-flash",
input="What is the weather in Paris?",
tools=[weather_tool]
)
# 3. Handle the tool call
for output in interaction.outputs:
if output.type == "function_call":
print(f"Tool Call: {output.name}({output.arguments})")
# Execute tool
result = get_weather(**output.arguments)
# Send result back
interaction = client.interactions.create(
model="gemini-2.5-flash",
previous_interaction_id=interaction.id,
input=[{
"type": "function_result",
"name": output.name,
"call_id": output.id,
"result": result
}]
)
print(f"Response: {interaction.outputs[-1].text}")
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
// 1. Define the tool
const weatherTool = {
type: 'function',
name: 'get_weather',
description: 'Gets the weather for a given location.',
parameters: {
type: 'object',
properties: {
location: { type: 'string', description: 'The city and state, e.g. San Francisco, CA' }
},
required: ['location']
}
};
// 2. Send the request with tools
let interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'What is the weather in Paris?',
tools: [weatherTool]
});
// 3. Handle the tool call
for (const output of interaction.outputs) {
if (output.type === 'function_call') {
console.log(`Tool Call: ${output.name}(${JSON.stringify(output.arguments)})`);
// Execute tool (Mocked)
const result = `The weather in ${output.arguments.location} is sunny.`;
// Send result back
interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
previous_interaction_id: interaction.id,
input: [{
type: 'function_result',
name: output.name,
call_id: output.id,
result: result
}]
});
console.log(`Response: ${interaction.outputs[interaction.outputs.length - 1].text}`);
}
}
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "What is the weather in Paris?",
"tools": [{
"type": "function",
"name": "get_weather",
"description": "Gets the weather for a given location.",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}
},
"required": ["location"]
}
}]
}'
# Handle the tool call and send result back (Replace INTERACTION_ID and CALL_ID)
# curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
# -H "Content-Type: application/json" \
# -H "x-goog-api-key: $GEMINI_API_KEY" \
# -d '{
# "model": "gemini-2.5-flash",
# "previous_interaction_id": "INTERACTION_ID",
# "input": [{
# "type": "function_result",
# "name": "get_weather",
# "call_id": "FUNCTION_CALL_ID",
# "result": "The weather in Paris is sunny."
# }]
# }'
Function calling with client-side state
If you don't want to use server-side state, you can manage everything on the client.
Python
from google import genai
client = genai.Client()
functions = [
{
"type": "function",
"name": "schedule_meeting",
"description": "Schedules a meeting with specified attendees at a given time and date.",
"parameters": {
"type": "object",
"properties": {
"attendees": {"type": "array", "items": {"type": "string"}},
"date": {"type": "string", "description": "Date of the meeting (e.g., 2024-07-29)"},
"time": {"type": "string", "description": "Time of the meeting (e.g., 15:00)"},
"topic": {"type": "string", "description": "The subject of the meeting."},
},
"required": ["attendees", "date", "time", "topic"],
},
}
]
history = [{"role": "user","content": [{"type": "text", "text": "Schedule a meeting for 2025-11-01 at 10 am with Peter and Amir about the Next Gen API."}]}]
# 1. Model decides to call the function
interaction = client.interactions.create(
model="gemini-2.5-flash",
input=history,
tools=functions
)
# add model interaction back to history
history.append({"role": "model", "content": interaction.outputs})
for output in interaction.outputs:
if output.type == "function_call":
print(f"Function call: {output.name} with arguments {output.arguments}")
# 2. Execute the function and get a result
# In a real app, you would call your function here.
# call_result = schedule_meeting(**json.loads(output.arguments))
call_result = "Meeting scheduled successfully."
# 3. Send the result back to the model
history.append({"role": "user", "content": [{"type": "function_result", "name": output.name, "call_id": output.id, "result": call_result}]})
interaction2 = client.interactions.create(
model="gemini-2.5-flash",
input=history,
)
print(f"Final response: {interaction2.outputs[-1].text}")
else:
print(f"Output: {output}")
JavaScript
// 1. Define the tool
const functions = [
{
type: 'function',
name: 'schedule_meeting',
description: 'Schedules a meeting with specified attendees at a given time and date.',
parameters: {
type: 'object',
properties: {
attendees: { type: 'array', items: { type: 'string' } },
date: { type: 'string', description: 'Date of the meeting (e.g., 2024-07-29)' },
time: { type: 'string', description: 'Time of the meeting (e.g., 15:00)' },
topic: { type: 'string', description: 'The subject of the meeting.' },
},
required: ['attendees', 'date', 'time', 'topic'],
},
},
];
const history = [
{ role: 'user', content: [{ type: 'text', text: 'Schedule a meeting for 2025-11-01 at 10 am with Peter and Amir about the Next Gen API.' }] }
];
// 2. Model decides to call the function
let interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: history,
tools: functions
});
// add model interaction back to history
history.push({ role: 'model', content: interaction.outputs });
for (const output of interaction.outputs) {
if (output.type === 'function_call') {
console.log(`Function call: ${output.name} with arguments ${JSON.stringify(output.arguments)}`);
// 3. Send the result back to the model
history.push({ role: 'user', content: [{ type: 'function_result', name: output.name, call_id: output.id, result: 'Meeting scheduled successfully.' }] });
const interaction2 = await client.interactions.create({
model: 'gemini-2.5-flash',
input: history,
});
console.log(`Final response: ${interaction2.outputs[interaction2.outputs.length - 1].text}`);
}
}
Built-in tools
Gemini includes built-in tools such as Grounding with Google Search, Code Execution, and URL Context.
Grounding with Google Search
Python
from google import genai
client = genai.Client()
interaction = client.interactions.create(
model="gemini-2.5-flash",
input="Who won the last Super Bowl?",
tools=[{"type": "google_search"}]
)
# Find the text output (not the GoogleSearchResultContent)
text_output = next((o for o in interaction.outputs if o.type == "text"), None)
if text_output:
print(text_output.text)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'Who won the last Super Bowl?',
tools: [{ type: 'google_search' }]
});
// Find the text output (not the GoogleSearchResultContent)
const textOutput = interaction.outputs.find(o => o.type === 'text');
if (textOutput) console.log(textOutput.text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "Who won the last Super Bowl?",
"tools": [{"type": "google_search"}]
}'
Code execution
Python
from google import genai
client = genai.Client()
interaction = client.interactions.create(
model="gemini-2.5-flash",
input="Calculate the 50th Fibonacci number.",
tools=[{"type": "code_execution"}]
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'Calculate the 50th Fibonacci number.',
tools: [{ type: 'code_execution' }]
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "Calculate the 50th Fibonacci number.",
"tools": [{"type": "code_execution"}]
}'
URL context
Python
from google import genai
client = genai.Client()
interaction = client.interactions.create(
model="gemini-2.5-flash",
input="Summarize the content of https://www.wikipedia.org/",
tools=[{"type": "url_context"}]
)
# Find the text output (not the URLContextResultContent)
text_output = next((o for o in interaction.outputs if o.type == "text"), None)
if text_output:
print(text_output.text)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'Summarize the content of https://www.wikipedia.org/',
tools: [{ type: 'url_context' }]
});
// Find the text output (not the URLContextResultContent)
const textOutput = interaction.outputs.find(o => o.type === 'text');
if (textOutput) console.log(textOutput.text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "Summarize the content of https://www.wikipedia.org/",
"tools": [{"type": "url_context"}]
}'
Remote Model Context Protocol (MCP)
The remote MCP integration simplifies agent development by letting the Gemini API call external tools hosted on remote servers directly.
Python
from google import genai
client = genai.Client()
mcp_server = {
"type": "mcp_server",
"name": "weather_service",
"url": "https://gemini-api-demos.uc.r.appspot.com/mcp"
}
interaction = client.interactions.create(
model="gemini-2.5-flash",
input="What is the weather like in New York today?",
tools=[mcp_server]
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const mcpServer = {
type: 'mcp_server',
name: 'weather_service',
url: 'https://gemini-api-demos.uc.r.appspot.com/mcp'
};
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'What is the weather like in New York today?',
tools: [mcpServer]
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "What is the weather like in New York today?",
"tools": [{
"type": "mcp_server",
"name": "weather_service",
"url": "https://gemini-api-demos.uc.r.appspot.com/mcp"
}]
}'
Structured output (JSON schema)
To enforce a specific JSON output format, provide a JSON schema in the response_format parameter. This is useful for tasks such as moderation, classification, or data extraction.
Python
from google import genai
from pydantic import BaseModel, Field
from typing import Literal, Union
client = genai.Client()
class SpamDetails(BaseModel):
reason: str = Field(description="The reason why the content is considered spam.")
spam_type: Literal["phishing", "scam", "unsolicited promotion", "other"]
class NotSpamDetails(BaseModel):
summary: str = Field(description="A brief summary of the content.")
is_safe: bool = Field(description="Whether the content is safe for all audiences.")
class ModerationResult(BaseModel):
decision: Union[SpamDetails, NotSpamDetails]
interaction = client.interactions.create(
model="gemini-2.5-flash",
input="Moderate the following content: 'Congratulations! You've won a free cruise. Click here to claim your prize: www.definitely-not-a-scam.com'",
response_format=ModerationResult.model_json_schema(),
)
parsed_output = ModerationResult.model_validate_json(interaction.outputs[-1].text)
print(parsed_output)
JavaScript
import { GoogleGenAI } from '@google/genai';
import { z } from 'zod';
const client = new GoogleGenAI({});
const moderationSchema = z.object({
decision: z.union([
z.object({
reason: z.string().describe('The reason why the content is considered spam.'),
spam_type: z.enum(['phishing', 'scam', 'unsolicited promotion', 'other']).describe('The type of spam.'),
}).describe('Details for content classified as spam.'),
z.object({
summary: z.string().describe('A brief summary of the content.'),
is_safe: z.boolean().describe('Whether the content is safe for all audiences.'),
}).describe('Details for content classified as not spam.'),
]),
});
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: "Moderate the following content: 'Congratulations! You've won a free cruise. Click here to claim your prize: www.definitely-not-a-scam.com'",
response_format: z.toJSONSchema(moderationSchema),
});
console.log(interaction.outputs[0].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "Moderate the following content: 'Congratulations! You've won a free cruise. Click here to claim your prize: www.definitely-not-a-scam.com'",
"response_format": {
"type": "object",
"properties": {
"decision": {
"type": "object",
"properties": {
"reason": {"type": "string", "description": "The reason why the content is considered spam."},
"spam_type": {"type": "string", "description": "The type of spam."}
},
"required": ["reason", "spam_type"]
}
},
"required": ["decision"]
}
}'
Combining tools and structured outputs
Combine built-in tools with structured outputs to get a reliable JSON object based on information retrieved by a tool.
Python
from google import genai
from pydantic import BaseModel, Field
from typing import Literal, Union
client = genai.Client()
class SpamDetails(BaseModel):
reason: str = Field(description="The reason why the content is considered spam.")
spam_type: Literal["phishing", "scam", "unsolicited promotion", "other"]
class NotSpamDetails(BaseModel):
summary: str = Field(description="A brief summary of the content.")
is_safe: bool = Field(description="Whether the content is safe for all audiences.")
class ModerationResult(BaseModel):
decision: Union[SpamDetails, NotSpamDetails]
interaction = client.interactions.create(
model="gemini-3-pro-preview",
input="Moderate the following content: 'Congratulations! You've won a free cruise. Click here to claim your prize: www.definitely-not-a-scam.com'",
response_format=ModerationResult.model_json_schema(),
tools=[{"type": "url_context"}]
)
parsed_output = ModerationResult.model_validate_json(interaction.outputs[-1].text)
print(parsed_output)
JavaScript
import { GoogleGenAI } from '@google/genai';
import { z } from 'zod'; // Assuming zod is used for schema generation, or define manually
const client = new GoogleGenAI({});
const obj = z.object({
winning_team: z.string(),
score: z.string(),
});
const schema = z.toJSONSchema(obj);
const interaction = await client.interactions.create({
model: 'gemini-3-pro-preview',
input: 'Who won the last euro?',
tools: [{ type: 'google_search' }],
response_format: schema,
});
console.log(interaction.outputs[0].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-3-pro-preview",
"input": "Who won the last euro?",
"tools": [{"type": "google_search"}],
"response_format": {
"type": "object",
"properties": {
"winning_team": {"type": "string"},
"score": {"type": "string"}
}
}
}'
Advanced features
Additional advanced features give you more flexibility when working with the Interactions API.
Streaming
Receive responses incrementally as they are generated.
Python
from google import genai
client = genai.Client()
stream = client.interactions.create(
model="gemini-2.5-flash",
input="Explain quantum entanglement in simple terms.",
stream=True
)
for chunk in stream:
if chunk.event_type == "content.delta":
if chunk.delta.type == "text":
print(chunk.delta.text, end="", flush=True)
elif chunk.delta.type == "thought":
print(chunk.delta.thought, end="", flush=True)
elif chunk.event_type == "interaction.complete":
print(f"\n\n--- Stream Finished ---")
print(f"Total Tokens: {chunk.interaction.usage.total_tokens}")
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const stream = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'Explain quantum entanglement in simple terms.',
stream: true,
});
for await (const chunk of stream) {
if (chunk.event_type === 'content.delta') {
if (chunk.delta.type === 'text' && 'text' in chunk.delta) {
process.stdout.write(chunk.delta.text);
} else if (chunk.delta.type === 'thought' && 'thought' in chunk.delta) {
process.stdout.write(chunk.delta.thought);
}
} else if (chunk.event_type === 'interaction.complete') {
console.log('\n\n--- Stream Finished ---');
console.log(`Total Tokens: ${chunk.interaction.usage.total_tokens}`);
}
}
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions?alt=sse" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "Explain quantum entanglement in simple terms.",
"stream": true
}'
Configuration
Customize model behavior with generation_config.
Python
from google import genai
client = genai.Client()
interaction = client.interactions.create(
model="gemini-2.5-flash",
input="Tell me a story about a brave knight.",
generation_config={
"temperature": 0.7,
"max_output_tokens": 500,
"thinking_level": "low",
}
)
print(interaction.outputs[-1].text)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: 'Tell me a story about a brave knight.',
generation_config: {
temperature: 0.7,
max_output_tokens: 500,
thinking_level: 'low',
}
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": "Tell me a story about a brave knight.",
"generation_config": {
"temperature": 0.7,
"max_output_tokens": 500,
"thinking_level": "low"
}
}'
Working with files
Working with remote files
Access files via remote URLs directly in the API call.
Python
from google import genai
client = genai.Client()
interaction = client.interactions.create(
model="gemini-2.5-flash",
input=[
{
"type": "image",
"uri": "https://github.com/<github-path>/cats-and-dogs.jpg",
},
{"type": "text", "text": "Describe what you see."}
],
)
for output in interaction.outputs:
if output.type == "text":
print(output.text)
JavaScript
import { GoogleGenAI } from '@google/genai';
const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: [
{
type: 'image',
uri: 'https://github.com/<github-path>/cats-and-dogs.jpg',
},
{ type: 'text', text: 'Describe what you see.' }
],
});
for (const output of interaction.outputs) {
if (output.type === 'text') {
console.log(output.text);
}
}
REST
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": [
{
"type": "image",
"uri": "https://github.com/<github-path>/cats-and-dogs.jpg"
},
{"type": "text", "text": "Describe what you see."}
]
}'
Working with the Gemini Files API
Upload files to the Gemini Files API before using them.
Python
from google import genai
import time
import requests
client = genai.Client()
# 1. Download the file
url = "https://github.com/philschmid/gemini-samples/raw/refs/heads/main/assets/cats-and-dogs.jpg"
response = requests.get(url)
with open("cats-and-dogs.jpg", "wb") as f:
f.write(response.content)
# 2. Upload to Gemini Files API
file = client.files.upload(file="cats-and-dogs.jpg")
# 3. Wait for processing
while client.files.get(name=file.name).state != "ACTIVE":
time.sleep(2)
# 4. Use in Interaction
interaction = client.interactions.create(
model="gemini-2.5-flash",
input=[
{
"type": "image",
"uri": file.uri,
},
{"type": "text", "text": "Describe what you see."}
],
)
for output in interaction.outputs:
if output.type == "text":
print(output.text)
JavaScript
import { GoogleGenAI } from '@google/genai';
import * as fs from 'fs';
import fetch from 'node-fetch';
const client = new GoogleGenAI({});
// 1. Download the file
const url = 'https://github.com/philschmid/gemini-samples/raw/refs/heads/main/assets/cats-and-dogs.jpg';
const filename = 'cats-and-dogs.jpg';
const response = await fetch(url);
const buffer = await response.buffer();
fs.writeFileSync(filename, buffer);
// 2. Upload to Gemini Files API
const myfile = await client.files.upload({ file: filename, config: { mimeType: 'image/jpeg' } });
// 3. Wait for processing
while ((await client.files.get({ name: myfile.name })).state !== 'ACTIVE') {
await new Promise(resolve => setTimeout(resolve, 2000));
}
// 4. Use in Interaction
const interaction = await client.interactions.create({
model: 'gemini-2.5-flash',
input: [
{ type: 'image', uri: myfile.uri, },
{ type: 'text', text: 'Describe what you see.' }
],
});
for (const output of interaction.outputs) {
if (output.type === 'text') {
console.log(output.text);
}
}
REST
# 1. Upload the file (Requires File API setup)
# See https://ai.google.dev/gemini-api/docs/files for details.
# Assume FILE_URI is obtained from the upload step.
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
"model": "gemini-2.5-flash",
"input": [
{"type": "image", "uri": "FILE_URI"},
{"type": "text", "text": "Describe what you see."}
]
}'
Data model
You can learn more about the data model in the API Reference. The following is a high-level overview of the main components.
Interaction
| Property | Type | Description |
|---|---|---|
| id | string | The unique identifier of the interaction. |
| model/agent | string | The model or agent that was used. Only one may be provided. |
| input | Content[] | The inputs provided. |
| outputs | Content[] | The model's responses. |
| tools | Tool[] | The tools that were used. |
| previous_interaction_id | string | The ID of the previous interaction, used for context. |
| stream | boolean | Whether the interaction is streamed. |
| status | string | Status: completed, in_progress, requires_action, failed, etc. |
| background | boolean | Whether the interaction runs in background mode. |
| store | boolean | Whether to store the interaction. Default: true. Set it to false to opt out. |
| usage | Usage | Token usage for the interaction request. |
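The properties above map directly onto the object the SDKs return. As a quick, minimal sketch (field access follows the table above; the exact response shape may evolve while the API is in beta):
Python
from google import genai

client = genai.Client()
interaction = client.interactions.create(
    model="gemini-2.5-flash",
    input="Say hello."
)

# Inspect a few of the Interaction fields described above.
print(interaction.id)                  # unique identifier
print(interaction.status)              # e.g. "completed"
print(interaction.outputs[-1].text)    # final model output
print(interaction.usage.total_tokens)  # token usage for the request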
Supported models and agents
| Model name | Type | Model ID |
|---|---|---|
| Gemini 2.5 Pro | Model | gemini-2.5-pro |
| Gemini 2.5 Flash | Model | gemini-2.5-flash |
| Gemini 2.5 Flash-Lite | Model | gemini-2.5-flash-lite |
| Gemini 3 Pro Preview | Model | gemini-3-pro-preview |
| Deep Research Preview | Agent | deep-research-pro-preview-12-2025 |
How the Interactions API works
The Interactions API is designed around one central resource: the Interaction.
An Interaction represents a complete turn in a conversation or task. It acts as a session record, containing the full history of an interaction, including all user inputs, model thoughts, tool calls, tool results, and final model outputs.
When you call interactions.create, you create a new Interaction resource.
Optionally, you can use this resource's id in a subsequent call with the previous_interaction_id parameter to continue the conversation. The server uses this ID to retrieve the full context, which saves you from resending the entire chat history. This server-side state management is optional; you can also operate statelessly by sending the full conversation history with every request.
Data storage and retention
By default, all Interaction objects are stored (store=true) to simplify the use of server-side state management (with previous_interaction_id), background execution (with background=true), and observability.
- Paid tier: Interactions are retained for 55 days.
- Free tier: Interactions are retained for 1 day.
If you don't want this, you can set store=false in your request. This control is independent of state management, and you can disable storage for any interaction. However, note that store=false is not compatible with background=true and prevents the use of previous_interaction_id in subsequent turns.
You can delete stored interactions at any time with the delete method described in the API Reference. You can only delete interactions whose ID you know.
Once the retention period expires, your data is deleted automatically.
Interaction objects are processed in accordance with the applicable terms.
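For example, you can opt out of storage on a per-request basis and delete stored interactions you no longer need. The following is a minimal sketch; it assumes the SDK exposes the store field as a keyword argument and exposes the deletion method from the API Reference as client.interactions.delete, which may differ in your SDK version.
Python
from google import genai

client = genai.Client()

# Opt out of storage for this request. Note that store=False cannot be
# combined with background=True and prevents reuse via previous_interaction_id.
ephemeral = client.interactions.create(
    model="gemini-2.5-flash",
    input="Summarize the benefits of unit tests in one sentence.",
    store=False
)
print(ephemeral.outputs[-1].text)

# Delete a stored interaction by ID (assumed SDK surface for the delete
# method described in the API Reference).
# client.interactions.delete("<YOUR_INTERACTION_ID>")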
Best practices
- Cache hit rate: Using previous_interaction_id to continue conversations makes it easier for the system to take advantage of implicit caching for the conversation history, which improves performance and reduces costs.
- Mixing interactions: You have the flexibility to mix agent and model interactions within one conversation. For example, you can use a specialized agent, such as the Deep Research agent, for initial data gathering, then use a standard Gemini model for follow-up tasks such as summarizing or reformatting, linking these steps with previous_interaction_id (see the sketch after this list).
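A minimal sketch of this mixing pattern might look like the following; it reuses the Deep Research agent and previous_interaction_id exactly as shown in earlier sections, with simplified polling for completion.
Python
import time
from google import genai

client = genai.Client()

# 1. Use a specialized agent for the initial research step.
research = client.interactions.create(
    agent="deep-research-pro-preview-12-2025",
    input="Research the history of the Google TPUs.",
    background=True
)

# Poll until the background task finishes (simplified; see the Agents section).
while client.interactions.get(research.id).status not in ["completed", "failed", "cancelled"]:
    time.sleep(10)

# 2. Follow up with a standard Gemini model, linked via previous_interaction_id.
summary = client.interactions.create(
    model="gemini-2.5-flash",
    input="Summarize the research above in three bullet points.",
    previous_interaction_id=research.id
)
print(summary.outputs[-1].text)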
SDK
You can use the latest version of the Google Generative AI SDKs to access the Interactions API.
- In Python, this is the google-genai package, starting with version 1.55.0.
- In JavaScript, this is the @google/genai package, starting with version 1.33.0.
You can learn more about installing the SDKs on the Libraries page.
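If you're unsure which SDK version you have installed, a quick standard-library check like the following confirms that the Python package meets the minimum version:
Python
from importlib.metadata import version

# The Interactions API requires google-genai >= 1.55.0.
print(version("google-genai"))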
Limitations
- Beta status: The Interactions API is in beta/preview. Features and schemas may change.
- Unsupported features: The following features are not yet fully supported, but are coming soon:
  - Output ordering: The order of content from built-in tools (google_search and url_context) can sometimes be incorrect, with text appearing before the tool execution and result. This is a known issue and a fix is in progress.
  - Tool combinations: Combining MCP, function calling, and built-in tools is not yet supported, but is coming soon.
  - Remote MCP: Gemini 3 does not support remote MCP. This feature is coming soon.
Breaking changes
The Interactions API is currently in an early beta stage. We are actively developing and refining the API's capabilities, resource schemas, and SDK interfaces based on real-world usage and developer feedback.
As a result, breaking changes may occur. Updates may include changes to:
- Input and output schemas
- SDK method signatures and object structures
- Feature-specific behaviors
For production workloads, you should continue to use the standard generateContent API. It remains the recommended path for stable deployments and will continue to be actively developed and maintained.
Feedback
Your feedback is critical to the development of the Interactions API. Share your ideas, report bugs, or request features on our Google AI developer community forum.
What's next
- Learn more about the Gemini Deep Research agent.