Using tools lets the Live API go beyond conversation: it can perform actions in the real world and pull in external context while maintaining a real-time connection. You can define tools such as function calling and Google Search with the Live API.
Overview of supported tools
Here's a brief overview of the tools available for Live API models:
| Tool | gemini-2.5-flash-native-audio-preview-09-2025 |
|---|---|
| Search | Yes |
| Function calling | Yes |
| Google Maps | No |
| Code execution | No |
| URL context | No |
Function calling
The Live API supports function calling, just like regular content generation requests. Function calling lets the Live API interact with external data and programs, greatly expanding what your applications can accomplish.
You can define function declarations as part of the session configuration. After receiving a tool call, the client should respond with a list of FunctionResponse objects using the session.send_tool_response method.
See the function calling tutorial to learn more.
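The declarations in the examples below are kept minimal (a name only), but a declaration can also carry a description and a parameters schema so the model knows when and how to call the function. Here's a minimal sketch; the set_light_brightness function and its brightness parameter are hypothetical placeholders, not part of the example below.
Python
# Hypothetical declaration with a description and typed parameters
set_light_brightness = {
    "name": "set_light_brightness",
    "description": "Set the brightness of the smart lights.",
    "parameters": {
        "type": "object",
        "properties": {
            "brightness": {
                "type": "integer",
                "description": "Brightness level from 0 to 100",
            },
        },
        "required": ["brightness"],
    },
}
tools = [{"function_declarations": [set_light_brightness]}]
When the model calls such a function, the arguments arrive on the args field of each function call, and your FunctionResponse would carry the real result instead of the hard-coded "ok" used below.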
Python
import asyncio
import wave
from google import genai
from google.genai import types
client = genai.Client()
model = "gemini-2.5-flash-native-audio-preview-09-2025"
# Simple function definitions
turn_on_the_lights = {"name": "turn_on_the_lights"}
turn_off_the_lights = {"name": "turn_off_the_lights"}
tools = [{"function_declarations": [turn_on_the_lights, turn_off_the_lights]}]
config = {"response_modalities": ["AUDIO"], "tools": tools}
async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        prompt = "Turn on the lights please"
        await session.send_client_content(turns={"parts": [{"text": prompt}]})
        wf = wave.open("audio.wav", "wb")
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(24000)  # Output is 24kHz
        async for response in session.receive():
            if response.data is not None:
                wf.writeframes(response.data)
            elif response.tool_call:
                print("The tool was called")
                function_responses = []
                for fc in response.tool_call.function_calls:
                    function_response = types.FunctionResponse(
                        id=fc.id,
                        name=fc.name,
                        response={"result": "ok"}  # simple, hard-coded function response
                    )
                    function_responses.append(function_response)
                await session.send_tool_response(function_responses=function_responses)
        wf.close()
if __name__ == "__main__":
    asyncio.run(main())
JavaScript
import { GoogleGenAI, Modality } from '@google/genai';
import * as fs from "node:fs";
import pkg from 'wavefile'; // npm install wavefile
const { WaveFile } = pkg;
const ai = new GoogleGenAI({});
const model = 'gemini-2.5-flash-native-audio-preview-09-2025';
// Simple function definitions
const turn_on_the_lights = { name: "turn_on_the_lights" } // , description: '...', parameters: { ... }
const turn_off_the_lights = { name: "turn_off_the_lights" }
const tools = [{ functionDeclarations: [turn_on_the_lights, turn_off_the_lights] }]
const config = {
  responseModalities: [Modality.AUDIO],
  tools: tools
}
async function live() {
  const responseQueue = [];
  async function waitMessage() {
    let done = false;
    let message = undefined;
    while (!done) {
      message = responseQueue.shift();
      if (message) {
        done = true;
      } else {
        await new Promise((resolve) => setTimeout(resolve, 100));
      }
    }
    return message;
  }
  async function handleTurn() {
    const turns = [];
    let done = false;
    while (!done) {
      const message = await waitMessage();
      turns.push(message);
      if (message.serverContent && message.serverContent.turnComplete) {
        done = true;
      } else if (message.toolCall) {
        done = true;
      }
    }
    return turns;
  }
  const session = await ai.live.connect({
    model: model,
    callbacks: {
      onopen: function () {
        console.debug('Opened');
      },
      onmessage: function (message) {
        responseQueue.push(message);
      },
      onerror: function (e) {
        console.debug('Error:', e.message);
      },
      onclose: function (e) {
        console.debug('Close:', e.reason);
      },
    },
    config: config,
  });
  const inputTurns = 'Turn on the lights please';
  session.sendClientContent({ turns: inputTurns });
  let turns = await handleTurn();
  for (const turn of turns) {
    if (turn.toolCall) {
      console.debug('A tool was called');
      const functionResponses = [];
      for (const fc of turn.toolCall.functionCalls) {
        functionResponses.push({
          id: fc.id,
          name: fc.name,
          response: { result: "ok" } // simple, hard-coded function response
        });
      }
      console.debug('Sending tool response...\n');
      session.sendToolResponse({ functionResponses: functionResponses });
    }
  }
  // Check again for new messages
  turns = await handleTurn();
  // Combine audio data strings and save as wave file
  const combinedAudio = turns.reduce((acc, turn) => {
    if (turn.data) {
      const buffer = Buffer.from(turn.data, 'base64');
      const intArray = new Int16Array(buffer.buffer, buffer.byteOffset, buffer.byteLength / Int16Array.BYTES_PER_ELEMENT);
      return acc.concat(Array.from(intArray));
    }
    return acc;
  }, []);
  const audioBuffer = new Int16Array(combinedAudio);
  const wf = new WaveFile();
  wf.fromScratch(1, 24000, '16', audioBuffer); // output is 24kHz
  fs.writeFileSync('audio.wav', wf.toBuffer());
  session.close();
}
async function main() {
  await live().catch((e) => console.error('got error', e));
}
main();
From a single prompt, the model can generate multiple function calls and the code necessary to chain their outputs. This code executes in a sandbox environment, generating subsequent BidiGenerateContentToolCall messages.
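Because chained calls can arrive as further tool-call messages, one convenient client pattern is to answer every tool_call as it arrives within the turn. The sketch below assumes a connected session as in the example above; execute_locally is a hypothetical stand-in for your own function dispatch.
Python
from google.genai import types
def execute_locally(name, args):
    # Hypothetical dispatcher: route the call to your own implementation.
    return "ok"
async def answer_tool_calls(session):
    # Read one turn, answering every tool call as it arrives.
    async for response in session.receive():
        if response.data is not None:
            pass  # handle audio as in the example above
        if response.tool_call:
            function_responses = [
                types.FunctionResponse(
                    id=fc.id,
                    name=fc.name,
                    response={"result": execute_locally(fc.name, fc.args or {})},
                )
                for fc in response.tool_call.function_calls
            ]
            await session.send_tool_response(function_responses=function_responses)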
Asynchronous function calling
By default, function calls are executed sequentially: execution pauses until the result of each call is available. This ensures the functions are processed in order, but it also means you can't keep interacting with the model while a function is running.
If you don't want to block the conversation, you can tell the model to run the functions asynchronously. To do so, first add a behavior to the function definitions:
Python
# Non-blocking function definitions
turn_on_the_lights = {"name": "turn_on_the_lights", "behavior": "NON_BLOCKING"} # turn_on_the_lights will run asynchronously
turn_off_the_lights = {"name": "turn_off_the_lights"} # turn_off_the_lights will still pause all interactions with the model
JavaScript
import { GoogleGenAI, Modality, Behavior } from '@google/genai';
// Non-blocking function definitions
const turn_on_the_lights = {name: "turn_on_the_lights", behavior: Behavior.NON_BLOCKING}
// Blocking function definitions
const turn_off_the_lights = {name: "turn_off_the_lights"}
const tools = [{ functionDeclarations: [turn_on_the_lights, turn_off_the_lights] }]
NON_BLOCKING ensures the function runs asynchronously while you can continue interacting with the model.
Then, you need to tell the model how to behave when it receives the FunctionResponse, using the scheduling parameter. It can either:
- Interrupt what it's doing and tell you about the response it got right away (scheduling="INTERRUPT"),
- Wait until it's finished with what it's currently doing (scheduling="WHEN_IDLE"),
- Or do nothing and use that knowledge later on in the conversation (scheduling="SILENT")
The snippets below show where scheduling goes in the function response; a fuller end-to-end sketch follows them.
Python
# for a non-blocking function definition, apply scheduling in the function response:
function_response = types.FunctionResponse(
    id=fc.id,
    name=fc.name,
    response={
        "result": "ok",
        "scheduling": "INTERRUPT"  # Can also be WHEN_IDLE or SILENT
    }
)
JavaScript
import { GoogleGenAI, Modality, Behavior, FunctionResponseScheduling } from '@google/genai';
// for a non-blocking function definition, apply scheduling in the function response:
const functionResponse = {
  id: fc.id,
  name: fc.name,
  response: {
    result: "ok",
    scheduling: FunctionResponseScheduling.INTERRUPT // Can also be WHEN_IDLE or SILENT
  }
}
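Putting the pieces together, here's a minimal end-to-end sketch of a non-blocking function combined with a scheduled response. It reuses the placeholder turn_on_the_lights declaration and the hard-coded "ok" result from the examples above, and omits audio handling for brevity.
Python
import asyncio
from google import genai
from google.genai import types
client = genai.Client()
model = "gemini-2.5-flash-native-audio-preview-09-2025"
# turn_on_the_lights runs asynchronously, so the conversation is not blocked while it executes.
turn_on_the_lights = {"name": "turn_on_the_lights", "behavior": "NON_BLOCKING"}
tools = [{"function_declarations": [turn_on_the_lights]}]
config = {"response_modalities": ["AUDIO"], "tools": tools}
async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        await session.send_client_content(
            turns={"parts": [{"text": "Turn on the lights please"}]}
        )
        async for response in session.receive():
            # Audio handling omitted for brevity; see the complete example above.
            if response.tool_call:
                function_responses = [
                    types.FunctionResponse(
                        id=fc.id,
                        name=fc.name,
                        response={
                            "result": "ok",  # hard-coded placeholder result
                            "scheduling": "WHEN_IDLE",  # report back once the model is idle
                        },
                    )
                    for fc in response.tool_call.function_calls
                ]
                await session.send_tool_response(function_responses=function_responses)
if __name__ == "__main__":
    asyncio.run(main())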
Grounding with Google Search
You can enable Grounding with Google Search as part of the session configuration. This increases the Live API's accuracy and helps prevent hallucinations. See the grounding tutorial to learn more.
Python
import asyncio
import wave
from google import genai
from google.genai import types
client = genai.Client()
model = "gemini-2.5-flash-native-audio-preview-09-2025"
tools = [{'google_search': {}}]
config = {"response_modalities": ["AUDIO"], "tools": tools}
async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        prompt = "When did the last Brazil vs. Argentina soccer match happen?"
        await session.send_client_content(turns={"parts": [{"text": prompt}]})
        wf = wave.open("audio.wav", "wb")
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(24000)  # Output is 24kHz
        async for chunk in session.receive():
            if chunk.server_content:
                if chunk.data is not None:
                    wf.writeframes(chunk.data)
                # The model might generate and execute Python code to use Search
                model_turn = chunk.server_content.model_turn
                if model_turn:
                    for part in model_turn.parts:
                        if part.executable_code is not None:
                            print(part.executable_code.code)
                        if part.code_execution_result is not None:
                            print(part.code_execution_result.output)
        wf.close()
if __name__ == "__main__":
    asyncio.run(main())
JavaScript
import { GoogleGenAI, Modality } from '@google/genai';
import * as fs from "node:fs";
import pkg from 'wavefile'; // npm install wavefile
const { WaveFile } = pkg;
const ai = new GoogleGenAI({});
const model = 'gemini-2.5-flash-native-audio-preview-09-2025';
const tools = [{ googleSearch: {} }]
const config = {
  responseModalities: [Modality.AUDIO],
  tools: tools
}
async function live() {
  const responseQueue = [];
  async function waitMessage() {
    let done = false;
    let message = undefined;
    while (!done) {
      message = responseQueue.shift();
      if (message) {
        done = true;
      } else {
        await new Promise((resolve) => setTimeout(resolve, 100));
      }
    }
    return message;
  }
  async function handleTurn() {
    const turns = [];
    let done = false;
    while (!done) {
      const message = await waitMessage();
      turns.push(message);
      if (message.serverContent && message.serverContent.turnComplete) {
        done = true;
      } else if (message.toolCall) {
        done = true;
      }
    }
    return turns;
  }
  const session = await ai.live.connect({
    model: model,
    callbacks: {
      onopen: function () {
        console.debug('Opened');
      },
      onmessage: function (message) {
        responseQueue.push(message);
      },
      onerror: function (e) {
        console.debug('Error:', e.message);
      },
      onclose: function (e) {
        console.debug('Close:', e.reason);
      },
    },
    config: config,
  });
  const inputTurns = 'When did the last Brazil vs. Argentina soccer match happen?';
  session.sendClientContent({ turns: inputTurns });
  let turns = await handleTurn();
  let combinedData = '';
  for (const turn of turns) {
    if (turn.serverContent && turn.serverContent.modelTurn && turn.serverContent.modelTurn.parts) {
      for (const part of turn.serverContent.modelTurn.parts) {
        if (part.executableCode) {
          console.debug('executableCode: %s\n', part.executableCode.code);
        }
        else if (part.codeExecutionResult) {
          console.debug('codeExecutionResult: %s\n', part.codeExecutionResult.output);
        }
        else if (part.inlineData && typeof part.inlineData.data === 'string') {
          combinedData += atob(part.inlineData.data);
        }
      }
    }
  }
  // Convert the base64-encoded string of bytes into a Buffer.
  const buffer = Buffer.from(combinedData, 'binary');
  // The buffer contains raw bytes. For 16-bit audio, we need to interpret every 2 bytes as a single sample.
  const intArray = new Int16Array(buffer.buffer, buffer.byteOffset, buffer.byteLength / Int16Array.BYTES_PER_ELEMENT);
  const wf = new WaveFile();
  // The API returns 16-bit PCM audio at a 24kHz sample rate.
  wf.fromScratch(1, 24000, '16', intArray);
  fs.writeFileSync('audio.wav', wf.toBuffer());
  session.close();
}
async function main() {
  await live().catch((e) => console.error('got error', e));
}
main();
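Grounded answers may also come with grounding metadata (search queries and web sources). Whether and how this surfaces in the Live API depends on the model and SDK version; the grounding_metadata field name below is an assumption to verify against the SDK reference, so the sketch reads it defensively inside the Python receive loop above.
Python
# Inside the `async for chunk in session.receive():` loop of the Python example above.
# grounding_metadata is an assumed field name; verify it in the SDK reference.
metadata = getattr(chunk.server_content, "grounding_metadata", None)
if metadata is not None:
    # web_search_queries mirrors the non-live grounding response; also an assumption.
    print(getattr(metadata, "web_search_queries", None))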
Combining multiple tools
You can combine multiple tools within the Live API, increasing your application's capabilities even more (a combined receive loop is sketched after the snippets below):
Python
prompt = """
Hey, I need you to do two things for me.
1. Use Google Search to look up information about the largest earthquake in California the week of Dec 5 2024?
2. Then turn on the lights
Thanks!
"""
tools = [
    {"google_search": {}},
    {"function_declarations": [turn_on_the_lights, turn_off_the_lights]},
]
config = {"response_modalities": ["AUDIO"], "tools": tools}
# ... remaining model call
JavaScript
const prompt = `Hey, I need you to do two things for me.
1. Use Google Search to look up information about the largest earthquake in California the week of Dec 5 2024?
2. Then turn on the lights
Thanks!
`
const tools = [
  { googleSearch: {} },
  { functionDeclarations: [turn_on_the_lights, turn_off_the_lights] }
]
const config = {
  responseModalities: [Modality.AUDIO],
  tools: tools
}
// ... remaining model call
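The receive loop for this combined configuration is simply the earlier loops merged: write the audio, print any Search-related code execution parts, and answer tool calls as they arrive. A minimal sketch, assuming the prompt, tools, and config from the Python snippet above and the light-switch declarations from the function calling section.
Python
import asyncio
import wave
from google import genai
from google.genai import types
client = genai.Client()
model = "gemini-2.5-flash-native-audio-preview-09-2025"
async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        await session.send_client_content(turns={"parts": [{"text": prompt}]})
        wf = wave.open("audio.wav", "wb")
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(24000)  # Output is 24kHz
        async for response in session.receive():
            # Audio from the model, as in the earlier examples.
            if response.data is not None:
                wf.writeframes(response.data)
            # Code the model generated and executed to use Search.
            if response.server_content and response.server_content.model_turn:
                for part in response.server_content.model_turn.parts:
                    if part.executable_code is not None:
                        print(part.executable_code.code)
                    if part.code_execution_result is not None:
                        print(part.code_execution_result.output)
            # Function calls for the light-switch declarations.
            if response.tool_call:
                function_responses = [
                    types.FunctionResponse(
                        id=fc.id, name=fc.name, response={"result": "ok"}
                    )
                    for fc in response.tool_call.function_calls
                ]
                await session.send_tool_response(function_responses=function_responses)
        wf.close()
if __name__ == "__main__":
    asyncio.run(main())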
What's next
- See the tool use cookbook for more examples of using tools.
- See the Live API capabilities guide for the full story on capabilities and configurations.