Tuning

Fine-tuning support in the Gemini API provides a mechanism for curating output when you have a small dataset of input/output examples. For more details, see the model tuning guide and tutorial.

Method: tunedModels.create

Creates a tuned model. Check intermediate tuning progress (if any) through the google.longrunning.Operations service.

Access status and results through the Operations service. Example: GET /v1/tunedModels/az2mb0bpw6i/operations/000-111-222

Endpoint

post https://generativelanguage.googleapis.com/v1beta/tunedModels

Query parameters

tunedModelId string

Optional. The unique ID for the tuned model, if specified. This value must be up to 40 characters long, the first character must be a letter, and the last character may be a letter or a number. The ID must match the regular expression: [a-z]([a-z0-9-]{0,38}[a-z0-9])?
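As a quick illustration (not part of the API), a candidate ID can be checked locally against the same regular expression; the helper below is hypothetical:

import re

# Pattern copied from the tunedModelId rules above.
TUNED_MODEL_ID = re.compile(r"[a-z]([a-z0-9-]{0,38}[a-z0-9])?")

for candidate in ["number-generator-01", "My-Model", "a", "ends-with-dash-"]:
    print(candidate, bool(TUNED_MODEL_ID.fullmatch(candidate)))
# number-generator-01 True, My-Model False, a True, ends-with-dash- False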

Request body

The request body contains an instance of TunedModel.

Fields
displayName string

Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.

description string

Optional. A short description of this model.

tuningTask object (TuningTask)

Required. The tuning task that creates the tuned model.

readerProjectNumbers[] string (int64 format)

Optional. List of project numbers that have read access to the tuned model.

source_model Union type
The model used as the starting point for tuning. source_model can be only one of the following:
tunedModelSource object (TunedModelSource)

Optional. A TunedModel to use as the starting point for training the new model.

baseModel string

Immutable. The name of the Model to tune. Example: models/gemini-1.5-flash-001

temperature number

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.

This value specifies the default to be the one used by the base model while creating the model.

topP number

Optional. For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least topP.

This value specifies the default to be the one used by the base model while creating the model.

topK integer

Optional. For Top-k sampling.

Top-k sampling considers the set of the topK most probable tokens. This value specifies the default to be used by the backend while making the call to the model.

This value specifies the default to be the one used by the base model while creating the model.

Example request

Python
# With Gemini 2 we're launching a new SDK. See the following doc for details.
# https://ai.google.dev/gemini-api/docs/migrate
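Since the placeholder above only points at the SDK migration guide, here is a minimal REST sketch of this method built from the endpoint, query parameter, and fields documented above. The tuningTask payload shape (hyperparameters plus trainingData.examples.examples[]) follows the public tuning quickstart and should be treated as an assumption; verify it against the current API reference.

import os
import requests

# Minimal sketch: create a tuned model via the endpoint above.
# The tuning_task shape is an assumption taken from the tuning quickstart.
url = "https://generativelanguage.googleapis.com/v1beta/tunedModels"
body = {
    "display_name": "number generator",  # displayName field above
    "base_model": "models/gemini-1.5-flash-001-tuning",  # baseModel field above
    "tuning_task": {  # tuningTask field above
        "hyperparameters": {"epoch_count": 5, "batch_size": 2, "learning_rate": 0.001},
        "training_data": {
            "examples": {
                "examples": [
                    {"text_input": "1", "output": "2"},
                    {"text_input": "3", "output": "4"},
                ]
            }
        },
    },
}
resp = requests.post(
    url,
    params={"key": os.environ["GEMINI_API_KEY"], "tunedModelId": "number-generator-01"},
    json=body,
)
resp.raise_for_status()
operation = resp.json()  # a google.longrunning.Operation
print(operation["name"])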

Response body

If successful, the response body contains a newly created instance of Operation.
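Because tuning runs asynchronously, the returned Operation is typically polled until done is true, as described in the method overview. A minimal sketch, reusing the example operation name from the method description (a placeholder value):

import os
import time
import requests

BASE = "https://generativelanguage.googleapis.com/v1beta"
KEY = {"key": os.environ["GEMINI_API_KEY"]}

# Placeholder operation name from the method description above.
op_name = "tunedModels/az2mb0bpw6i/operations/000-111-222"

while True:
    op = requests.get(f"{BASE}/{op_name}", params=KEY).json()
    if op.get("done"):
        break
    # metadata may carry intermediate tuning progress, when available.
    print("tuning...", op.get("metadata", {}))
    time.sleep(10)

print(op.get("response") or op.get("error"))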

Method: tunedModels.generateContent

Generates a model response given an input GenerateContentRequest. Refer to the text generation guide for detailed usage information. Input capabilities differ between models, including tuned models. See the model guide and tuning guide for details.

Endpoint

post https://generativelanguage.googleapis.com/v1beta/{model=tunedModels/*}:generateContent

Path parameters

model string

Required. The name of the Model to use for generating the completion.

Format: models/{model}. It takes the form tunedModels/{tunedmodel}.

Request body

The request body contains data with the following structure:

Fields
contents[] object (Content)

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries like chat, this is a repeated field that contains the conversation history and the latest request.

tools[] object (Tool)

Optional. A list of Tools the Model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the Model. Supported Tools are Function and codeExecution. Refer to the Function calling and Code execution guides to learn more.

toolConfig object (ToolConfig)

Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.

safetySettings[] object (SafetySetting)

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safetySettings. If there is no SafetySetting for a given SafetyCategory in the list, the API will use the default safety setting for that category. The harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, and HARM_CATEGORY_CIVIC_INTEGRITY are supported. Refer to the guide for detailed information on available safety settings, and to the safety guidance to learn how to incorporate safety considerations in your AI applications.

systemInstruction object (Content)

Optional. Developer-set system instruction. Currently, text only.

generationConfig object (GenerationConfig)

Optional. Configuration options for model generation and outputs.

cachedContent string

Optional. The name of the cached content to use as context to serve the prediction. Format: cachedContents/{cachedContent}
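Note that the request samples below all target base models such as models/gemini-2.0-flash; calling this method against a tuned model only changes the model name. A minimal sketch using the google-genai Python SDK seen below, assuming a hypothetical tuned model named tunedModels/number-generator-01:

from google import genai

client = genai.Client()
# "tunedModels/number-generator-01" is a placeholder; substitute the name
# of a tuned model you created earlier.
response = client.models.generate_content(
    model="tunedModels/number-generator-01",
    contents="4",
)
print(response.text)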

Example request

from google import genai

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash", contents="Write a story about a magic backpack."
)
print(response.text)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Write a story about a magic backpack.",
});
console.log(response.text);
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}
contents := []*genai.Content{
	genai.NewContentFromText("Write a story about a magic backpack.", "user"),
}
response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, nil)
if err != nil {
	log.Fatal(err)
}
printResponse(response)
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[{"text": "Write a story about a magic backpack."}]
        }]
       }' 2> /dev/null
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val prompt = "Write a story about a magic backpack."
val response = generativeModel.generateContent(prompt)
print(response.text)
let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

let prompt = "Write a story about a magic backpack."
let response = try await generativeModel.generateContent(prompt)
if let text = response.text {
  print(text)
}
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'Write a story about a magic backpack.';

final response = await model.generateContent([Content.text(prompt)]);
print(response.text);
// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content =
    new Content.Builder().addText("Write a story about a magic backpack.").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);
from google import genai
import PIL.Image

client = genai.Client()
organ = PIL.Image.open(media / "organ.jpg")
response = client.models.generate_content(
    model="gemini-2.0-flash", contents=["Tell me about this instrument", organ]
)
print(response.text)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const organ = await ai.files.upload({
  file: path.join(media, "organ.jpg"),
});

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Tell me about this instrument", 
      createPartFromUri(organ.uri, organ.mimeType)
    ]),
  ],
});
console.log(response.text);
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx, 
	filepath.Join(getMedia(), "organ.jpg"), 
	&genai.UploadFileConfig{
		MIMEType : "image/jpeg",
	},
)
if err != nil {
	log.Fatal(err)
}
parts := []*genai.Part{
	genai.NewPartFromText("Tell me about this instrument"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}
contents := []*genai.Content{
	genai.NewContentFromParts(parts, "user"),
}

response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, nil)
if err != nil {
	log.Fatal(err)
}
printResponse(response)
# Use a temporary file to hold the base64 encoded image data
TEMP_B64=$(mktemp)
trap 'rm -f "$TEMP_B64"' EXIT
base64 $B64FLAGS $IMG_PATH > "$TEMP_B64"

# Use a temporary file to hold the JSON payload
TEMP_JSON=$(mktemp)
# Extend the EXIT trap so both temporary files are cleaned up.
trap 'rm -f "$TEMP_B64" "$TEMP_JSON"' EXIT

cat > "$TEMP_JSON" << EOF
{
  "contents": [{
    "parts":[
      {"text": "Tell me about this instrument"},
      {
        "inline_data": {
          "mime_type":"image/jpeg",
          "data": "$(cat "$TEMP_B64")"
        }
      }
    ]
  }]
}
EOF

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d "@$TEMP_JSON" 2> /dev/null
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val image: Bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.image)
val inputContent = content {
  image(image)
  text("What's in this picture?")
}

val response = generativeModel.generateContent(inputContent)
print(response.text)
let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

guard let image = UIImage(systemName: "cloud.sun") else { fatalError() }

let prompt = "What's in this picture?"

let response = try await generativeModel.generateContent(image, prompt)
if let text = response.text {
  print(text)
}
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);

Future<DataPart> fileToPart(String mimeType, String path) async {
  return DataPart(mimeType, await File(path).readAsBytes());
}

final prompt = 'Describe how this product might be manufactured.';
final image = await fileToPart('image/jpeg', 'resources/jetpack.jpg');

final response = await model.generateContent([
  Content.multi([TextPart(prompt), image])
]);
print(response.text);
// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Bitmap image = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);

Content content =
    new Content.Builder()
        .addText("What's different between these pictures?")
        .addImage(image)
        .build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);
Python / Node.js / Shell
from google import genai

client = genai.Client()
sample_audio = client.files.upload(file=media / "sample.mp3")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=["Give me a summary of this audio file.", sample_audio],
)
print(response.text)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const audio = await ai.files.upload({
  file: path.join(media, "sample.mp3"),
});

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Give me a summary of this audio file.",
      createPartFromUri(audio.uri, audio.mimeType),
    ]),
  ],
});
console.log(response.text);
# Use the File API to upload the audio data for the request.
MIME_TYPE=$(file -b --mime-type "${AUDIO_PATH}")
NUM_BYTES=$(wc -c < "${AUDIO_PATH}")
DISPLAY_NAME=AUDIO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${AUDIO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "audio/mpeg", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json
Python / Node.js / Go / Shell
from google import genai
import time

client = genai.Client()
# Video clip (CC BY 3.0) from https://peach.blender.org/download/
myfile = client.files.upload(file=media / "Big_Buck_Bunny.mp4")
print(f"{myfile=}")

# Poll until the video file is completely processed (state becomes ACTIVE).
while not myfile.state or myfile.state.name != "ACTIVE":
    print("Processing video...")
    print("File state:", myfile.state)
    time.sleep(5)
    myfile = client.files.get(name=myfile.name)

response = client.models.generate_content(
    model="gemini-2.0-flash", contents=[myfile, "Describe this video clip"]
)
print(f"{response.text=}")
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

let video = await ai.files.upload({
  file: path.join(media, 'Big_Buck_Bunny.mp4'),
});

// Poll until the video file is completely processed (state becomes ACTIVE).
while (!video.state || video.state.toString() !== 'ACTIVE') {
  console.log('Processing video...');
  console.log('File state: ', video.state);
  await sleep(5000);
  video = await ai.files.get({name: video.name});
}

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Describe this video clip",
      createPartFromUri(video.uri, video.mimeType),
    ]),
  ],
});
console.log(response.text);
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx, 
	filepath.Join(getMedia(), "Big_Buck_Bunny.mp4"), 
	&genai.UploadFileConfig{
		MIMEType : "video/mp4",
	},
)
if err != nil {
	log.Fatal(err)
}

// Poll until the video file is completely processed (state becomes ACTIVE).
for file.State == genai.FileStateUnspecified || file.State != genai.FileStateActive {
	fmt.Println("Processing video...")
	fmt.Println("File state:", file.State)
	time.Sleep(5 * time.Second)

	file, err = client.Files.Get(ctx, file.Name, nil)
	if err != nil {
		log.Fatal(err)
	}
}

parts := []*genai.Part{
	genai.NewPartFromText("Describe this video clip"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}

contents := []*genai.Content{
	genai.NewContentFromParts(parts, "user"),
}

response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, nil)
if err != nil {
	log.Fatal(err)
}
printResponse(response)
# Use the File API to upload the video data for the request.
MIME_TYPE=$(file -b --mime-type "${VIDEO_PATH}")
NUM_BYTES=$(wc -c < "${VIDEO_PATH}")
DISPLAY_NAME=VIDEO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D "${tmp_header_file}" \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${VIDEO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

# Use -r so jq emits raw strings without the surrounding quotes.
state=$(jq -r ".file.state" file_info.json)
echo state=$state

name=$(jq -r ".file.name" file_info.json)
echo name=$name

while [[ "$state" == *"PROCESSING"* ]];
do
  echo "Processing video..."
  sleep 5
  # Fetch the file to re-check its state; $name already includes the "files/" prefix.
  curl "https://generativelanguage.googleapis.com/v1beta/$name?key=$GEMINI_API_KEY" > file_info.json
  state=$(jq -r ".file.state" file_info.json)
done

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Transcribe the audio from this video, giving timestamps for salient events in the video. Also provide visual descriptions."},
          {"file_data":{"mime_type": "video/mp4", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json
Python / Shell
from google import genai

client = genai.Client()
sample_pdf = client.files.upload(file=media / "test.pdf")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=["Give me a summary of this document:", sample_pdf],
)
print(f"{response.text=}")
MIME_TYPE=$(file -b --mime-type "${PDF_PATH}")
NUM_BYTES=$(wc -c < "${PDF_PATH}")
DISPLAY_NAME=TEXT


echo $MIME_TYPE
tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${PDF_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

# Now generate content using that file
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Can you add a few more lines to this poem?"},
          {"file_data":{"mime_type": "application/pdf", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json
from google import genai
from google.genai import types

client = genai.Client()
# Pass initial history using the "history" argument
chat = client.chats.create(
    model="gemini-2.0-flash",
    history=[
        types.Content(role="user", parts=[types.Part(text="Hello")]),
        types.Content(
            role="model",
            parts=[
                types.Part(
                    text="Great to meet you. What would you like to know?"
                )
            ],
        ),
    ],
)
response = chat.send_message(message="I have 2 dogs in my house.")
print(response.text)
response = chat.send_message(message="How many paws are in my house?")
print(response.text)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const chat = ai.chats.create({
  model: "gemini-2.0-flash",
  history: [
    {
      role: "user",
      parts: [{ text: "Hello" }],
    },
    {
      role: "model",
      parts: [{ text: "Great to meet you. What would you like to know?" }],
    },
  ],
});

const response1 = await chat.sendMessage({
  message: "I have 2 dogs in my house.",
});
console.log("Chat response 1:", response1.text);

const response2 = await chat.sendMessage({
  message: "How many paws are in my house?",
});
console.log("Chat response 2:", response2.text);
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

// Pass initial history using the History field.
history := []*genai.Content{
	genai.NewContentFromText("Hello", "user"),
	genai.NewContentFromText("Great to meet you. What would you like to know?", "model"),
}

chat, err := client.Chats.Create(ctx, "gemini-2.0-flash", nil, history)
if err != nil {
	log.Fatal(err)
}

firstResp, err := chat.SendMessage(ctx, genai.Part{Text: "I have 2 dogs in my house."})
if err != nil {
	log.Fatal(err)
}
fmt.Println(firstResp.Text())

secondResp, err := chat.SendMessage(ctx, genai.Part{Text: "How many paws are in my house?"})
if err != nil {
	log.Fatal(err)
}
fmt.Println(secondResp.Text())
curl https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Hello"}]},
        {"role": "model",
         "parts":[{
           "text": "Great to meet you. What would you like to know?"}]},
        {"role":"user",
         "parts":[{
           "text": "I have two dogs in my house. How many paws are in my house?"}]},
      ]
    }' 2> /dev/null | grep "text"
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val chat =
    generativeModel.startChat(
        history =
            listOf(
                content(role = "user") { text("Hello, I have 2 dogs in my house.") },
                content(role = "model") {
                  text("Great to meet you. What would you like to know?")
                }))

val response = chat.sendMessage("How many paws are in my house?")
print(response.text)
let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = generativeModel.startChat(history: history)

// To generate text output, call sendMessage and pass in the message
let response = try await chat.sendMessage("How many paws are in my house?")
if let text = response.text {
  print(text)
}
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final chat = model.startChat(history: [
  Content.text('hello'),
  Content.model([TextPart('Great to meet you. What would you like to know?')])
]);
var response =
    await chat.sendMessage(Content.text('I have 2 dogs in my house.'));
print(response.text);
response =
    await chat.sendMessage(Content.text('How many paws are in my house?'));
print(response.text);
// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder userMessageBuilder = new Content.Builder();
userMessageBuilder.setRole("user");
userMessageBuilder.addText("How many paws are in my house?");
Content userMessage = userMessageBuilder.build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(userMessage);

Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);
Python / Node.js
from google import genai
from google.genai import types

client = genai.Client()
document = client.files.upload(file=media / "a11.txt")
model_name = "gemini-1.5-flash-001"

cache = client.caches.create(
    model=model_name,
    config=types.CreateCachedContentConfig(
        contents=[document],
        system_instruction="You are an expert analyzing transcripts.",
    ),
)
print(cache)

response = client.models.generate_content(
    model=model_name,
    contents="Please summarize this transcript",
    config=types.GenerateContentConfig(cached_content=cache.name),
)
print(response.text)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const filePath = path.join(media, "a11.txt");
const document = await ai.files.upload({
  file: filePath,
  config: { mimeType: "text/plain" },
});
console.log("Uploaded file name:", document.name);
const modelName = "gemini-1.5-flash-001";

const contents = [
  createUserContent(createPartFromUri(document.uri, document.mimeType)),
];

const cache = await ai.caches.create({
  model: modelName,
  config: {
    contents: contents,
    systemInstruction: "You are an expert analyzing transcripts.",
  },
});
console.log("Cache created:", cache);

const response = await ai.models.generateContent({
  model: modelName,
  contents: "Please summarize this transcript",
  config: { cachedContent: cache.name },
});
console.log("Response text:", response.text);
Python
# With Gemini 2 we're launching a new SDK. See the following doc for details.
# https://ai.google.dev/gemini-api/docs/migrate
from google import genai
from google.genai import types
from typing_extensions import TypedDict

class Recipe(TypedDict):
    recipe_name: str
    ingredients: list[str]

client = genai.Client()
result = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="List a few popular cookie recipes.",
    config=types.GenerateContentConfig(
        response_mime_type="application/json", response_schema=list[Recipe]
    ),
)
print(result)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "List a few popular cookie recipes.",
  config: {
    responseMimeType: "application/json",
    responseSchema: {
      type: "array",
      items: {
        type: "object",
        properties: {
          recipeName: { type: "string" },
          ingredients: { type: "array", items: { type: "string" } },
        },
        required: ["recipeName", "ingredients"],
      },
    },
  },
});
console.log(response.text);
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"), 
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

schema := &genai.Schema{
	Type: genai.TypeArray,
	Items: &genai.Schema{
		Type: genai.TypeObject,
		Properties: map[string]*genai.Schema{
			"recipe_name": {Type: genai.TypeString},
			"ingredients": {
				Type:  genai.TypeArray,
				Items: &genai.Schema{Type: genai.TypeString},
			},
		},
		Required: []string{"recipe_name"},
	},
}

config := &genai.GenerateContentConfig{
	ResponseMIMEType: "application/json",
	ResponseSchema:   schema,
}

response, err := client.Models.GenerateContent(
	ctx,
	"gemini-2.0-flash",
	genai.Text("List a few popular cookie recipes."),
	config,
)
if err != nil {
	log.Fatal(err)
}
printResponse(response)
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
    "contents": [{
      "parts":[
        {"text": "List 5 popular cookie recipes"}
        ]
    }],
    "generationConfig": {
        "response_mime_type": "application/json",
        "response_schema": {
          "type": "ARRAY",
          "items": {
            "type": "OBJECT",
            "properties": {
              "recipe_name": {"type":"STRING"},
            }
          }
        }
    }
}' 2> /dev/null | head
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-pro",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey,
        generationConfig = generationConfig {
            responseMimeType = "application/json"
            responseSchema = Schema(
                name = "recipes",
                description = "List of recipes",
                type = FunctionType.ARRAY,
                items = Schema(
                    name = "recipe",
                    description = "A recipe",
                    type = FunctionType.OBJECT,
                    properties = mapOf(
                        "recipeName" to Schema(
                            name = "recipeName",
                            description = "Name of the recipe",
                            type = FunctionType.STRING,
                            nullable = false
                        ),
                    ),
                    required = listOf("recipeName")
                ),
            )
        })

val prompt = "List a few popular cookie recipes."
val response = generativeModel.generateContent(prompt)
print(response.text)
let jsonSchema = Schema(
  type: .array,
  description: "List of recipes",
  items: Schema(
    type: .object,
    properties: [
      "recipeName": Schema(type: .string, description: "Name of the recipe", nullable: false),
    ],
    requiredProperties: ["recipeName"]
  )
)

let generativeModel = GenerativeModel(
  // Specify a model that supports controlled generation like Gemini 1.5 Pro
  name: "gemini-1.5-pro",
  // Access your API key from your on-demand resource .plist file (see "Set up your API key"
  // above)
  apiKey: APIKey.default,
  generationConfig: GenerationConfig(
    responseMIMEType: "application/json",
    responseSchema: jsonSchema
  )
)

let prompt = "List a few popular cookie recipes."
let response = try await generativeModel.generateContent(prompt)
if let text = response.text {
  print(text)
}
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final schema = Schema.array(
    description: 'List of recipes',
    items: Schema.object(properties: {
      'recipeName':
          Schema.string(description: 'Name of the recipe.', nullable: false)
    }, requiredProperties: [
      'recipeName'
    ]));

final model = GenerativeModel(
    model: 'gemini-1.5-pro',
    apiKey: apiKey,
    generationConfig: GenerationConfig(
        responseMimeType: 'application/json', responseSchema: schema));

final prompt = 'List a few popular cookie recipes.';
final response = await model.generateContent([Content.text(prompt)]);
print(response.text);
Schema<List<String>> schema =
    new Schema(
        /* name */ "recipes",
        /* description */ "List of recipes",
        /* format */ null,
        /* nullable */ false,
        /* list */ null,
        /* properties */ null,
        /* required */ null,
        /* items */ new Schema(
            /* name */ "recipe",
            /* description */ "A recipe",
            /* format */ null,
            /* nullable */ false,
            /* list */ null,
            /* properties */ Map.of(
                "recipeName",
                new Schema(
                    /* name */ "recipeName",
                    /* description */ "Name of the recipe",
                    /* format */ null,
                    /* nullable */ false,
                    /* list */ null,
                    /* properties */ null,
                    /* required */ null,
                    /* items */ null,
                    /* type */ FunctionType.STRING)),
            /* required */ null,
            /* items */ null,
            /* type */ FunctionType.OBJECT),
        /* type */ FunctionType.ARRAY);

GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.responseMimeType = "application/json";
configBuilder.responseSchema = schema;

GenerationConfig generationConfig = configBuilder.build();

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-pro",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig */ generationConfig);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content = new Content.Builder().addText("List a few popular cookie recipes.").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);
Python / Kotlin / Java
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",
    contents=(
        "Write and execute code that calculates the sum of the first 50 prime numbers. "
        "Ensure that only the executable code and its resulting output are generated."
    ),
)
# Each part may contain text, executable code, or an execution result.
for part in response.candidates[0].content.parts:
    print(part, "\n")

print("-" * 80)
# The .text accessor concatenates the parts into a markdown-formatted text.
print("\n", response.text)

val model = GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    modelName = "gemini-1.5-pro",
    // Access your API key as a Build Configuration variable (see "Set up your API key" above)
    apiKey = BuildConfig.apiKey,
    tools = listOf(Tool.CODE_EXECUTION)
)

val response = model.generateContent("What is the sum of the first 50 prime numbers?")

// Each `part` either contains `text`, `executable_code` or an `execution_result`
println(response.candidates[0].content.parts.joinToString("\n"))

// Alternatively, you can use the `text` accessor which joins the parts into a markdown compatible
// text representation
println(response.text)
// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
        new GenerativeModel(
                /* modelName */ "gemini-1.5-pro",
                // Access your API key as a Build Configuration variable (see "Set up your API key"
                // above)
                /* apiKey */ BuildConfig.apiKey,
                /* generationConfig */ null,
                /* safetySettings */ null,
                /* requestOptions */ new RequestOptions(),
                /* tools */ Collections.singletonList(Tool.CODE_EXECUTION));
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content inputContent =
        new Content.Builder().addText("What is the sum of the first 50 prime numbers?").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(inputContent);
Futures.addCallback(
        response,
        new FutureCallback<GenerateContentResponse>() {
            @Override
            public void onSuccess(GenerateContentResponse result) {
                // Each `part` either contains `text`, `executable_code` or an
                // `execution_result`
                Candidate candidate = result.getCandidates().get(0);
                for (Part part : candidate.getContent().getParts()) {
                    System.out.println(part);
                }

                // Alternatively, you can use the `text` accessor which joins the parts into a
                // markdown compatible text representation
                String resultText = result.getText();
                System.out.println(resultText);
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        },
        executor);
from google import genai
from google.genai import types

client = genai.Client()

def add(a: float, b: float) -> float:
    """returns a + b."""
    return a + b

def subtract(a: float, b: float) -> float:
    """returns a - b."""
    return a - b

def multiply(a: float, b: float) -> float:
    """returns a * b."""
    return a * b

def divide(a: float, b: float) -> float:
    """returns a / b."""
    return a / b

# Create a chat session; function calling (via tools) is enabled in the config.
chat = client.chats.create(
    model="gemini-2.0-flash",
    config=types.GenerateContentConfig(tools=[add, subtract, multiply, divide]),
)
response = chat.send_message(
    message="I have 57 cats, each owns 44 mittens, how many mittens is that in total?"
)
print(response.text)
  // Make sure to include the following import:
  // import {GoogleGenAI} from '@google/genai';
  const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

  /**
   * The add function returns the sum of two numbers.
   * @param {number} a
   * @param {number} b
   * @returns {number}
   */
  function add(a, b) {
    return a + b;
  }

  /**
   * The subtract function returns the difference (a - b).
   * @param {number} a
   * @param {number} b
   * @returns {number}
   */
  function subtract(a, b) {
    return a - b;
  }

  /**
   * The multiply function returns the product of two numbers.
   * @param {number} a
   * @param {number} b
   * @returns {number}
   */
  function multiply(a, b) {
    return a * b;
  }

  /**
   * The divide function returns the quotient of a divided by b.
   * @param {number} a
   * @param {number} b
   * @returns {number}
   */
  function divide(a, b) {
    return a / b;
  }

  const addDeclaration = {
    name: "addNumbers",
    parameters: {
      type: "object",
      description: "Return the result of adding two numbers.",
      properties: {
        firstParam: {
          type: "number",
          description:
            "The first parameter which can be an integer or a floating point number.",
        },
        secondParam: {
          type: "number",
          description:
            "The second parameter which can be an integer or a floating point number.",
        },
      },
      required: ["firstParam", "secondParam"],
    },
  };

  const subtractDeclaration = {
    name: "subtractNumbers",
    parameters: {
      type: "object",
      description:
        "Return the result of subtracting the second number from the first.",
      properties: {
        firstParam: {
          type: "number",
          description: "The first parameter.",
        },
        secondParam: {
          type: "number",
          description: "The second parameter.",
        },
      },
      required: ["firstParam", "secondParam"],
    },
  };

  const multiplyDeclaration = {
    name: "multiplyNumbers",
    parameters: {
      type: "object",
      description: "Return the product of two numbers.",
      properties: {
        firstParam: {
          type: "number",
          description: "The first parameter.",
        },
        secondParam: {
          type: "number",
          description: "The second parameter.",
        },
      },
      required: ["firstParam", "secondParam"],
    },
  };

  const divideDeclaration = {
    name: "divideNumbers",
    parameters: {
      type: "object",
      description:
        "Return the quotient of dividing the first number by the second.",
      properties: {
        firstParam: {
          type: "number",
          description: "The first parameter.",
        },
        secondParam: {
          type: "number",
          description: "The second parameter.",
        },
      },
      required: ["firstParam", "secondParam"],
    },
  };

  // Step 1: Call generateContent with function calling enabled.
  const generateContentResponse = await ai.models.generateContent({
    model: "gemini-2.0-flash",
    contents:
      "I have 57 cats, each owns 44 mittens, how many mittens is that in total?",
    config: {
      toolConfig: {
        functionCallingConfig: {
          mode: FunctionCallingConfigMode.ANY,
        },
      },
      tools: [
        {
          functionDeclarations: [
            addDeclaration,
            subtractDeclaration,
            multiplyDeclaration,
            divideDeclaration,
          ],
        },
      ],
    },
  });

  // Step 2: Extract the function call.
  // Assuming the response contains a 'functionCalls' array.
  const functionCall =
    generateContentResponse.functionCalls &&
    generateContentResponse.functionCalls[0];
  console.log(functionCall);

  // Parse the arguments.
  const args = functionCall.args;
  // Expected args format: { firstParam: number, secondParam: number }

  // Step 3: Invoke the actual function based on the function name.
  const functionMapping = {
    addNumbers: add,
    subtractNumbers: subtract,
    multiplyNumbers: multiply,
    divideNumbers: divide,
  };
  const func = functionMapping[functionCall.name];
  if (!func) {
    console.error("Unimplemented error:", functionCall.name);
    return generateContentResponse;
  }
  const resultValue = func(args.firstParam, args.secondParam);
  console.log("Function result:", resultValue);

  // Step 4: Use the chat API to send the result as the final answer.
  const chat = ai.chats.create({ model: "gemini-2.0-flash" });
  const chatResponse = await chat.sendMessage({
    message: "The final result is " + resultValue,
  });
  console.log(chatResponse.text);
  return chatResponse;
}

cat > tools.json << EOF
{
  "function_declarations": [
    {
      "name": "enable_lights",
      "description": "Turn on the lighting system."
    },
    {
      "name": "set_light_color",
      "description": "Set the light color. Lights must be enabled for this to work.",
      "parameters": {
        "type": "object",
        "properties": {
          "rgb_hex": {
            "type": "string",
            "description": "The light color as a 6-digit hex string, e.g. ff0000 for red."
          }
        },
        "required": [
          "rgb_hex"
        ]
      }
    },
    {
      "name": "stop_lights",
      "description": "Turn off the lighting system."
    }
  ]
} 
EOF

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -d @<(echo '
  {
    "system_instruction": {
      "parts": {
        "text": "You are a helpful lighting system bot. You can turn lights on and off, and you can set the color. Do not perform any other tasks."
      }
    },
    "tools": ['$(cat tools.json)'],

    "tool_config": {
      "function_calling_config": {"mode": "auto"}
    },

    "contents": {
      "role": "user",
      "parts": {
        "text": "Turn on the lights please."
      }
    }
  }
') 2>/dev/null |sed -n '/"content"/,/"finishReason"/p'
fun multiply(a: Double, b: Double) = a * b

val multiplyDefinition = defineFunction(
    name = "multiply",
    description = "returns the product of the provided numbers.",
    parameters = listOf(
    Schema.double("a", "First number"),
    Schema.double("b", "Second number")
    )
)

val usableFunctions = listOf(multiplyDefinition)

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey,
        // List the functions definitions you want to make available to the model
        tools = listOf(Tool(usableFunctions))
    )

val chat = generativeModel.startChat()
val prompt = "I have 57 cats, each owns 44 mittens, how many mittens is that in total?"

// Send the message to the generative model
var response = chat.sendMessage(prompt)

// Check if the model responded with a function call
response.functionCalls.first { it.name == "multiply" }.apply {
    val a: String by args
    val b: String by args

    val result = JSONObject(mapOf("result" to multiply(a.toDouble(), b.toDouble())))
    response = chat.sendMessage(
        content(role = "function") {
            part(FunctionResponsePart("multiply", result))
        }
    )
}

// Whenever the model responds with text, show it in the UI
response.text?.let { modelResponse ->
    println(modelResponse)
}
// Calls a hypothetical API to control a light bulb and returns the values that were set.
func controlLight(brightness: Double, colorTemperature: String) -> JSONObject {
  return ["brightness": .number(brightness), "colorTemperature": .string(colorTemperature)]
}

let generativeModel =
  GenerativeModel(
    // Use a model that supports function calling, like a Gemini 1.5 model
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    tools: [Tool(functionDeclarations: [
      FunctionDeclaration(
        name: "controlLight",
        description: "Set the brightness and color temperature of a room light.",
        parameters: [
          "brightness": Schema(
            type: .number,
            format: "double",
            description: "Light level from 0 to 100. Zero is off and 100 is full brightness."
          ),
          "colorTemperature": Schema(
            type: .string,
            format: "enum",
            description: "Color temperature of the light fixture.",
            enumValues: ["daylight", "cool", "warm"]
          ),
        ],
        requiredParameters: ["brightness", "colorTemperature"]
      ),
    ])]
  )

let chat = generativeModel.startChat()

let prompt = "Dim the lights so the room feels cozy and warm."

// Send the message to the model.
let response1 = try await chat.sendMessage(prompt)

// Check if the model responded with a function call.
// For simplicity, this sample uses the first function call found.
guard let functionCall = response1.functionCalls.first else {
  fatalError("Model did not respond with a function call.")
}
// Print an error if the returned function was not declared
guard functionCall.name == "controlLight" else {
  fatalError("Unexpected function called: \(functionCall.name)")
}
// Verify that the names and types of the parameters match the declaration
guard case let .number(brightness) = functionCall.args["brightness"] else {
  fatalError("Missing argument: brightness")
}
guard case let .string(colorTemperature) = functionCall.args["colorTemperature"] else {
  fatalError("Missing argument: colorTemperature")
}

// Call the executable function named in the FunctionCall with the arguments specified in the
// FunctionCall and let it call the hypothetical API.
let apiResponse = controlLight(brightness: brightness, colorTemperature: colorTemperature)

// Send the API response back to the model so it can generate a text response that can be
// displayed to the user.
let response2 = try await chat.sendMessage([ModelContent(
  role: "function",
  parts: [.functionResponse(FunctionResponse(name: "controlLight", response: apiResponse))]
)])

if let text = response2.text {
  print(text)
}
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
Map<String, Object?> setLightValues(Map<String, Object?> args) {
  return args;
}

final controlLightFunction = FunctionDeclaration(
    'controlLight',
    'Set the brightness and color temperature of a room light.',
    Schema.object(properties: {
      'brightness': Schema.number(
          description:
              'Light level from 0 to 100. Zero is off and 100 is full brightness.',
          nullable: false),
      'colorTemperature': Schema.string(
          description:
              'Color temperature of the light fixture which can be `daylight`, `cool`, or `warm`',
          nullable: false),
    }));

final functions = {controlLightFunction.name: setLightValues};
FunctionResponse dispatchFunctionCall(FunctionCall call) {
  final function = functions[call.name]!;
  final result = function(call.args);
  return FunctionResponse(call.name, result);
}

final model = GenerativeModel(
  model: 'gemini-1.5-pro',
  apiKey: apiKey,
  tools: [
    Tool(functionDeclarations: [controlLightFunction])
  ],
);

final prompt = 'Dim the lights so the room feels cozy and warm.';
final content = [Content.text(prompt)];
var response = await model.generateContent(content);

List<FunctionCall> functionCalls;
while ((functionCalls = response.functionCalls.toList()).isNotEmpty) {
  var responses = <FunctionResponse>[
    for (final functionCall in functionCalls)
      dispatchFunctionCall(functionCall)
  ];
  content
    ..add(response.candidates.first.content)
    ..add(Content.functionResponses(responses));
  response = await model.generateContent(content);
}
print('Response: ${response.text}');
FunctionDeclaration multiplyDefinition =
    defineFunction(
        /* name  */ "multiply",
        /* description */ "returns a * b.",
        /* parameters */ Arrays.asList(
            Schema.numDouble("a", "First parameter"),
            Schema.numDouble("b", "Second parameter")),
        /* required */ Arrays.asList("a", "b"));

Tool tool = new Tool(Arrays.asList(multiplyDefinition), null);

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig (optional) */ null,
        /* safetySettings (optional) */ null,
        /* requestOptions (optional) */ new RequestOptions(),
        /* functionDeclarations (optional) */ Arrays.asList(tool));
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// Create prompt
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText(
    "I have 57 cats, each owns 44 mittens, how many mittens is that in total?");
Content userMessage = userContentBuilder.build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

// Initialize the chat
ChatFutures chat = model.startChat();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(userMessage);

Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        if (!result.getFunctionCalls().isEmpty()) {
          handleFunctionCall(result);
        }
        if (!result.getText().isEmpty()) {
          System.out.println(result.getText());
        }
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }

      private void handleFunctionCall(GenerateContentResponse result) {
        FunctionCallPart multiplyFunctionCallPart =
            result.getFunctionCalls().stream()
                .filter(fun -> fun.getName().equals("multiply"))
                .findFirst()
                .get();
        double a = Double.parseDouble(multiplyFunctionCallPart.getArgs().get("a"));
        double b = Double.parseDouble(multiplyFunctionCallPart.getArgs().get("b"));

        try {
          // `multiply(a, b)` is a regular java function defined in another class
          FunctionResponsePart functionResponsePart =
              new FunctionResponsePart(
                  "multiply", new JSONObject().put("result", multiply(a, b)));

          // Build the function response message with a new builder and the
          // "function" role, then send it back to the chat.
          Content.Builder functionResponseBuilder = new Content.Builder();
          functionResponseBuilder.setRole("function");
          functionResponseBuilder.addPart(functionResponsePart);
          Content functionResponseMessage = functionResponseBuilder.build();

          chat.sendMessage(functionResponseMessage);
        } catch (JSONException e) {
          throw new RuntimeException(e);
        }
      }
    },
    executor);
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Tell me a story about a magic backpack.",
    config=types.GenerateContentConfig(
        candidate_count=1,
        stop_sequences=["x"],
        max_output_tokens=20,
        temperature=1.0,
    ),
)
print(response.text)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Tell me a story about a magic backpack.",
  config: {
    candidateCount: 1,
    stopSequences: ["x"],
    maxOutputTokens: 20,
    temperature: 1.0,
  },
});

console.log(response.text);
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

// Create local variables for parameters.
candidateCount := int32(1)
maxOutputTokens := int32(20)
temperature := float32(1.0)

response, err := client.Models.GenerateContent(
	ctx,
	"gemini-2.0-flash",
	genai.Text("Tell me a story about a magic backpack."),
	&genai.GenerateContentConfig{
		CandidateCount:  candidateCount,
		StopSequences:   []string{"x"},
		MaxOutputTokens: maxOutputTokens,
		Temperature:     &temperature,
	},
)
if err != nil {
	log.Fatal(err)
}

printResponse(response)
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
        "contents": [{
            "parts":[
                {"text": "Explain how AI works"}
            ]
        }],
        "generationConfig": {
            "stopSequences": [
                "Title"
            ],
            "temperature": 1.0,
            "maxOutputTokens": 800,
            "topP": 0.8,
            "topK": 10
        }
    }'  2> /dev/null | grep "text"
val config = generationConfig {
  temperature = 0.9f
  topK = 16
  topP = 0.1f
  maxOutputTokens = 200
  stopSequences = listOf("red")
}

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        generationConfig = config)
let config = GenerationConfig(
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
  candidateCount: 1,
  maxOutputTokens: 200,
  stopSequences: ["red", "orange"]
)

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    generationConfig: config
  )
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'Tell me a story about a magic backpack.';

final response = await model.generateContent(
  [Content.text(prompt)],
  generationConfig: GenerationConfig(
    candidateCount: 1,
    stopSequences: ['x'],
    maxOutputTokens: 20,
    temperature: 1.0,
  ),
);
print(response.text);
GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.temperature = 0.9f;
configBuilder.topK = 16;
configBuilder.topP = 0.1f;
configBuilder.maxOutputTokens = 200;
configBuilder.stopSequences = Arrays.asList("red");

GenerationConfig generationConfig = configBuilder.build();

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel("gemini-1.5-flash", BuildConfig.apiKey, generationConfig);

GenerativeModelFutures model = GenerativeModelFutures.from(gm);
from google import genai
from google.genai import types

client = genai.Client()
unsafe_prompt = (
    "I support Martians Soccer Club and I think Jupiterians Football Club sucks! "
    "Write a ironic phrase about them including expletives."
)
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=unsafe_prompt,
    config=types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
                category="HARM_CATEGORY_HATE_SPEECH",
                threshold="BLOCK_MEDIUM_AND_ABOVE",
            ),
            types.SafetySetting(
                category="HARM_CATEGORY_HARASSMENT", threshold="BLOCK_ONLY_HIGH"
            ),
        ]
    ),
)
try:
    print(response.text)
except Exception:
    print("No information generated by the model.")

print(response.candidates[0].safety_ratings)
  // Make sure to include the following import:
  // import {GoogleGenAI} from '@google/genai';
  const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
  const unsafePrompt =
    "I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them including expletives.";

  const response = await ai.models.generateContent({
    model: "gemini-2.0-flash",
    contents: unsafePrompt,
    config: {
      safetySettings: [
        {
          category: "HARM_CATEGORY_HATE_SPEECH",
          threshold: "BLOCK_MEDIUM_AND_ABOVE",
        },
        {
          category: "HARM_CATEGORY_HARASSMENT",
          threshold: "BLOCK_ONLY_HIGH",
        },
      ],
    },
  });

  try {
    console.log("Generated text:", response.text);
  } catch (error) {
    console.log("No information generated by the model.");
  }
  console.log("Safety ratings:", response.candidates[0].safetyRatings);
  return response;
}
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

unsafePrompt := "I support Martians Soccer Club and I think Jupiterians Football Club sucks! " +
	"Write a ironic phrase about them including expletives."

config := &genai.GenerateContentConfig{
	SafetySettings: []*genai.SafetySetting{
		{
			Category:  "HARM_CATEGORY_HATE_SPEECH",
			Threshold: "BLOCK_MEDIUM_AND_ABOVE",
		},
		{
			Category:  "HARM_CATEGORY_HARASSMENT",
			Threshold: "BLOCK_ONLY_HIGH",
		},
	},
}
contents := []*genai.Content{
	genai.NewContentFromText(unsafePrompt, "user"),
}
response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, config)
if err != nil {
	log.Fatal(err)
}

// Print the generated text.
text := response.Text()
fmt.Println("Generated text:", text)

// Print the finish reason and safety ratings from the first candidate.
if len(response.Candidates) > 0 {
	fmt.Println("Finish reason:", response.Candidates[0].FinishReason)
	safetyRatings, err := json.MarshalIndent(response.Candidates[0].SafetyRatings, "", "  ")
	if err != nil {
		return err
	}
	fmt.Println("Safety ratings:", string(safetyRatings))
} else {
	fmt.Println("No candidate returned.")
}
echo '{
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}
    ],
    "contents": [{
        "parts":[{
            "text": "'I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them.'"}]}]}' > request.json

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d @request.json 2> /dev/null
val harassmentSafety = SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.ONLY_HIGH)

val hateSpeechSafety = SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE)

val generativeModel =
    GenerativeModel(
        // The Gemini 1.5 models are versatile and work with most use cases
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        safetySettings = listOf(harassmentSafety, hateSpeechSafety))
let safetySettings = [
  SafetySetting(harmCategory: .dangerousContent, threshold: .blockLowAndAbove),
  SafetySetting(harmCategory: .harassment, threshold: .blockMediumAndAbove),
  SafetySetting(harmCategory: .hateSpeech, threshold: .blockOnlyHigh),
]

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    safetySettings: safetySettings
  )
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'I support Martians Soccer Club and I think '
    'Jupiterians Football Club sucks! Write an ironic phrase telling '
    'them how I feel about them.';

final response = await model.generateContent(
  [Content.text(prompt)],
  safetySettings: [
    SafetySetting(HarmCategory.harassment, HarmBlockThreshold.medium),
    SafetySetting(HarmCategory.hateSpeech, HarmBlockThreshold.low),
  ],
);
try {
  print(response.text);
} catch (e) {
  print(e);
  for (final SafetyRating(:category, :probability)
      in response.candidates.first.safetyRatings!) {
    print('Safety Rating: $category - $probability');
  }
}
SafetySetting harassmentSafety =
    new SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.ONLY_HIGH);

SafetySetting hateSpeechSafety =
    new SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE);

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        "gemini-1.5-flash",
        BuildConfig.apiKey,
        null, // generation config is optional
        Arrays.asList(harassmentSafety, hateSpeechSafety));

GenerativeModelFutures model = GenerativeModelFutures.from(gm);
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Good morning! How are you?",
    config=types.GenerateContentConfig(
        system_instruction="You are a cat. Your name is Neko."
    ),
)
print(response.text)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Good morning! How are you?",
  config: {
    systemInstruction: "You are a cat. Your name is Neko.",
  },
});
console.log(response.text);
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

// Construct the user message contents.
contents := []*genai.Content{
	genai.NewContentFromText("Good morning! How are you?", "user"),
}

// Set the system instruction as a *genai.Content.
config := &genai.GenerateContentConfig{
	SystemInstruction: genai.NewContentFromText("You are a cat. Your name is Neko.", "user"),
}

response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, config)
if err != nil {
	log.Fatal(err)
}
printResponse(response)
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{ "system_instruction": {
    "parts":
      { "text": "You are a cat. Your name is Neko."}},
    "contents": {
      "parts": {
        "text": "Hello there"}}}'
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        systemInstruction = content { text("You are a cat. Your name is Neko.") },
    )
let generativeModel =
  GenerativeModel(
    // Specify a model that supports system instructions, like a Gemini 1.5 model
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    systemInstruction: ModelContent(role: "system", parts: "You are a cat. Your name is Neko.")
  )
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
  systemInstruction: Content.system('You are a cat. Your name is Neko.'),
);
final prompt = 'Good morning! How are you?';

final response = await model.generateContent([Content.text(prompt)]);
print(response.text);
GenerativeModel model =
    new GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        /* modelName */ "gemini-1.5-flash",
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig (optional) */ null,
        /* safetySettings (optional) */ null,
        /* requestOptions (optional) */ new RequestOptions(),
        /* tools (optional) */ null,
        /* toolsConfig (optional) */ null,
        /* systemInstruction (optional) */ new Content.Builder()
            .addText("You are a cat. Your name is Neko.")
            .build());

Response body

If successful, the response body contains an instance of GenerateContentResponse.

Method: tunedModels.streamGenerateContent

Generates a streamed response from the model given an input GenerateContentRequest.

Endpoint

post https://generativelanguage.googleapis.com/v1beta/{model=tunedModels/*}:streamGenerateContent

Path parameters

model string

Required. The name of the Model to use for generating the completion.

Format: models/{model}. It takes the form tunedModels/{tunedmodel}.

Request body

The request body contains data with the following structure:

Fields
contents[] object (Content)

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries such as chat, this is a repeated field that contains the conversation history and the latest request.

tools[] object (Tool)

Optional. A list of Tools the Model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the Model. The supported Tools are Function and codeExecution. Refer to the Function calling and Code execution guides to learn more.

toolConfig object (ToolConfig)

Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.

safetySettings[] object (SafetySetting)

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This is enforced on GenerateContentRequest.contents and GenerateContentResponse.candidates. There should be at most one setting for each SafetyCategory type. The API blocks any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safetySettings. If the list contains no SafetySetting for a given SafetyCategory, the API uses the default safety setting for that category. The harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, and HARM_CATEGORY_CIVIC_INTEGRITY are supported. Refer to the guide for detailed information on available safety settings, and to the safety guidance for how to incorporate safety considerations into your AI applications.

systemInstruction object (Content)

Optional. Developer-set system instructions. Currently, text only.

generationConfig object (GenerationConfig)

Optional. Configuration options for model generation and outputs.

cachedContent string

Optional. The name of the cached content to use as context to serve the prediction. Format: cachedContents/{cachedContent}

Example request

from google import genai

client = genai.Client()
response = client.models.generate_content_stream(
    model="gemini-2.0-flash", contents="Write a story about a magic backpack."
)
for chunk in response:
    print(chunk.text)
    print("_" * 80)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContentStream({
  model: "gemini-2.0-flash",
  contents: "Write a story about a magic backpack.",
});
let text = "";
for await (const chunk of response) {
  console.log(chunk.text);
  text += chunk.text;
}
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}
contents := []*genai.Content{
	genai.NewContentFromText("Write a story about a magic backpack.", "user"),
}
for response, err := range client.Models.GenerateContentStream(
	ctx,
	"gemini-2.0-flash",
	contents,
	nil,
) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(response.Candidates[0].Content.Parts[0].Text)
}
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=${GEMINI_API_KEY}" \
        -H 'Content-Type: application/json' \
        --no-buffer \
        -d '{ "contents":[{"parts":[{"text": "Write a story about a magic backpack."}]}]}'
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val prompt = "Write a story about a magic backpack."
// Use streaming with text-only input
generativeModel.generateContentStream(prompt).collect { chunk -> print(chunk.text) }
let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

let prompt = "Write a story about a magic backpack."
// Use streaming with text-only input
for try await response in generativeModel.generateContentStream(prompt) {
  if let text = response.text {
    print(text)
  }
}
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'Write a story about a magic backpack.';

final responses = model.generateContentStream([Content.text(prompt)]);
await for (final response in responses) {
  print(response.text);
}
// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content =
    new Content.Builder().addText("Write a story about a magic backpack.").build();

Publisher<GenerateContentResponse> streamingResponse = model.generateContentStream(content);

StringBuilder outputContent = new StringBuilder();

streamingResponse.subscribe(
    new Subscriber<GenerateContentResponse>() {
      @Override
      public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        outputContent.append(chunk);
      }

      @Override
      public void onComplete() {
        System.out.println(outputContent);
      }

      @Override
      public void onError(Throwable t) {
        t.printStackTrace();
      }

      @Override
      public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE);
      }
    });
from google import genai
import PIL.Image

client = genai.Client()
organ = PIL.Image.open(media / "organ.jpg")
response = client.models.generate_content_stream(
    model="gemini-2.0-flash", contents=["Tell me about this instrument", organ]
)
for chunk in response:
    print(chunk.text)
    print("_" * 80)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const organ = await ai.files.upload({
  file: path.join(media, "organ.jpg"),
});

const response = await ai.models.generateContentStream({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Tell me about this instrument", 
      createPartFromUri(organ.uri, organ.mimeType)
    ]),
  ],
});
let text = "";
for await (const chunk of response) {
  console.log(chunk.text);
  text += chunk.text;
}
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}
file, err := client.Files.UploadFromPath(
	ctx, 
	filepath.Join(getMedia(), "organ.jpg"), 
	&genai.UploadFileConfig{
		MIMEType : "image/jpeg",
	},
)
if err != nil {
	log.Fatal(err)
}
parts := []*genai.Part{
	genai.NewPartFromText("Tell me about this instrument"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}
contents := []*genai.Content{
	genai.NewContentFromParts(parts, "user"),
}
for response, err := range client.Models.GenerateContentStream(
	ctx,
	"gemini-2.0-flash",
	contents,
	nil,
) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(response.Candidates[0].Content.Parts[0].Text)
}
cat > "$TEMP_JSON" << EOF
{
  "contents": [{
    "parts":[
      {"text": "Tell me about this instrument"},
      {
        "inline_data": {
          "mime_type":"image/jpeg",
          "data": "$(cat "$TEMP_B64")"
        }
      }
    ]
  }]
}
EOF

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d "@$TEMP_JSON" 2> /dev/null
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val image: Bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.image)
val inputContent = content {
  image(image)
  text("What's in this picture?")
}

generativeModel.generateContentStream(inputContent).collect { chunk -> print(chunk.text) }
let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

guard let image = UIImage(systemName: "cloud.sun") else { fatalError() }

let prompt = "What's in this picture?"

for try await response in generativeModel.generateContentStream(image, prompt) {
  if let text = response.text {
    print(text)
  }
}
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);

Future<DataPart> fileToPart(String mimeType, String path) async {
  return DataPart(mimeType, await File(path).readAsBytes());
}

final prompt = 'Describe how this product might be manufactured.';
final image = await fileToPart('image/jpeg', 'resources/jetpack.jpg');

final responses = model.generateContentStream([
  Content.multi([TextPart(prompt), image])
]);
await for (final response in responses) {
  print(response.text);
}
// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Bitmap image1 = BitmapFactory.decodeResource(context.getResources(), R.drawable.image1);
Bitmap image2 = BitmapFactory.decodeResource(context.getResources(), R.drawable.image2);

Content content =
    new Content.Builder()
        .addText("What's different between these pictures?")
        .addImage(image1)
        .addImage(image2)
        .build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

Publisher<GenerateContentResponse> streamingResponse = model.generateContentStream(content);

StringBuilder outputContent = new StringBuilder();

streamingResponse.subscribe(
    new Subscriber<GenerateContentResponse>() {
      @Override
      public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        outputContent.append(chunk);
      }

      @Override
      public void onComplete() {
        System.out.println(outputContent);
      }

      @Override
      public void onError(Throwable t) {
        t.printStackTrace();
      }

      @Override
      public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE);
      }
    });
from google import genai

client = genai.Client()
sample_audio = client.files.upload(file=media / "sample.mp3")
response = client.models.generate_content_stream(
    model="gemini-2.0-flash",
    contents=["Give me a summary of this audio file.", sample_audio],
)
for chunk in response:
    print(chunk.text)
    print("_" * 80)
# Use the File API to upload audio data for the API request.
MIME_TYPE=$(file -b --mime-type "${AUDIO_PATH}")
NUM_BYTES=$(wc -c < "${AUDIO_PATH}")
DISPLAY_NAME=AUDIO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${AUDIO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "audio/mpeg", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo
from google import genai
import time

client = genai.Client()
# Video clip (CC BY 3.0) from https://peach.blender.org/download/
myfile = client.files.upload(file=media / "Big_Buck_Bunny.mp4")
print(f"{myfile=}")

# Poll until the video file is completely processed (state becomes ACTIVE).
while not myfile.state or myfile.state.name != "ACTIVE":
    print("Processing video...")
    print("File state:", myfile.state)
    time.sleep(5)
    myfile = client.files.get(name=myfile.name)

response = client.models.generate_content_stream(
    model="gemini-2.0-flash", contents=[myfile, "Describe this video clip"]
)
for chunk in response:
    print(chunk.text)
    print("_" * 80)
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

let video = await ai.files.upload({
  file: path.join(media, 'Big_Buck_Bunny.mp4'),
});

// Poll until the video file is completely processed (state becomes ACTIVE).
while (!video.state || video.state.toString() !== 'ACTIVE') {
  console.log('Processing video...');
  console.log('File state: ', video.state);
  await sleep(5000);
  video = await ai.files.get({name: video.name});
}

const response = await ai.models.generateContentStream({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Describe this video clip",
      createPartFromUri(video.uri, video.mimeType),
    ]),
  ],
});
let text = "";
for await (const chunk of response) {
  console.log(chunk.text);
  text += chunk.text;
}
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx, 
	filepath.Join(getMedia(), "Big_Buck_Bunny.mp4"), 
	&genai.UploadFileConfig{
		MIMEType : "video/mp4",
	},
)
if err != nil {
	log.Fatal(err)
}

// Poll until the video file is completely processed (state becomes ACTIVE).
for file.State != genai.FileStateActive {
	fmt.Println("Processing video...")
	fmt.Println("File state:", file.State)
	time.Sleep(5 * time.Second)

	file, err = client.Files.Get(ctx, file.Name, nil)
	if err != nil {
		log.Fatal(err)
	}
}

parts := []*genai.Part{
	genai.NewPartFromText("Describe this video clip"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}

contents := []*genai.Content{
	genai.NewContentFromParts(parts, "user"),
}

for result, err := range client.Models.GenerateContentStream(
	ctx,
	"gemini-2.0-flash",
	contents,
	nil,
) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(result.Candidates[0].Content.Parts[0].Text)
}
# Use the File API to upload video data for the API request.
MIME_TYPE=$(file -b --mime-type "${VIDEO_PATH}")
NUM_BYTES=$(wc -c < "${VIDEO_PATH}")
DISPLAY_NAME=VIDEO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${VIDEO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

state=$(jq ".file.state" file_info.json)
echo state=$state

# The resource name (e.g. "files/abc-123") is needed to poll the file.
name=$(jq -r ".file.name" file_info.json)

while [[ "$state" == *"PROCESSING"* ]];
do
  echo "Processing video..."
  sleep 5
  # Get the file of interest to check state
  curl "https://generativelanguage.googleapis.com/v1beta/$name?key=$GEMINI_API_KEY" > file_info.json
  state=$(jq ".file.state" file_info.json)
done

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "video/mp4", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo
from google import genai

client = genai.Client()
sample_pdf = client.files.upload(file=media / "test.pdf")
response = client.models.generate_content_stream(
    model="gemini-2.0-flash",
    contents=["Give me a summary of this document:", sample_pdf],
)

for chunk in response:
    print(chunk.text)
    print("_" * 80)
MIME_TYPE=$(file -b --mime-type "${PDF_PATH}")
NUM_BYTES=$(wc -c < "${PDF_PATH}")
DISPLAY_NAME=TEXT


echo $MIME_TYPE
tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${PDF_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

# Now generate content using that file
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Can you add a few more lines to this poem?"},
          {"file_data":{"mime_type": "application/pdf", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo
from google import genai
from google.genai import types

client = genai.Client()
chat = client.chats.create(
    model="gemini-2.0-flash",
    history=[
        types.Content(role="user", parts=[types.Part(text="Hello")]),
        types.Content(
            role="model",
            parts=[
                types.Part(
                    text="Great to meet you. What would you like to know?"
                )
            ],
        ),
    ],
)
response = chat.send_message_stream(message="I have 2 dogs in my house.")
for chunk in response:
    print(chunk.text)
    print("_" * 80)
response = chat.send_message_stream(message="How many paws are in my house?")
for chunk in response:
    print(chunk.text)
    print("_" * 80)

print(chat.get_history())
// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const chat = ai.chats.create({
  model: "gemini-2.0-flash",
  history: [
    {
      role: "user",
      parts: [{ text: "Hello" }],
    },
    {
      role: "model",
      parts: [{ text: "Great to meet you. What would you like to know?" }],
    },
  ],
});

console.log("Streaming response for first message:");
const stream1 = await chat.sendMessageStream({
  message: "I have 2 dogs in my house.",
});
for await (const chunk of stream1) {
  console.log(chunk.text);
  console.log("_".repeat(80));
}

console.log("Streaming response for second message:");
const stream2 = await chat.sendMessageStream({
  message: "How many paws are in my house?",
});
for await (const chunk of stream2) {
  console.log(chunk.text);
  console.log("_".repeat(80));
}

console.log(chat.getHistory());
ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

history := []*genai.Content{
	genai.NewContentFromText("Hello", "user"),
	genai.NewContentFromText("Great to meet you. What would you like to know?", "model"),
}
chat, err := client.Chats.Create(ctx, "gemini-2.0-flash", nil, history)
if err != nil {
	log.Fatal(err)
}

for chunk, err := range chat.SendMessageStream(ctx, genai.Part{Text: "I have 2 dogs in my house."}) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(chunk.Text())
	fmt.Println(strings.Repeat("_", 64))
}

for chunk, err := range chat.SendMessageStream(ctx, genai.Part{Text: "How many paws are in my house?"}) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(chunk.Text())
	fmt.Println(strings.Repeat("_", 64))
}

fmt.Println(chat.History(false))
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Hello"}]},
        {"role": "model",
         "parts":[{
           "text": "Great to meet you. What would you like to know?"}]},
        {"role":"user",
         "parts":[{
           "text": "I have two dogs in my house. How many paws are in my house?"}]},
      ]
    }' 2> /dev/null | grep "text"
// Use streaming with multi-turn conversations (like chat)
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val chat =
    generativeModel.startChat(
        history =
            listOf(
                content(role = "user") { text("Hello, I have 2 dogs in my house.") },
                content(role = "model") {
                  text("Great to meet you. What would you like to know?")
                }))

chat.sendMessageStream("How many paws are in my house?").collect { chunk -> print(chunk.text) }
let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = generativeModel.startChat(history: history)

// To stream generated text output, call sendMessageStream and pass in the message
let contentStream = chat.sendMessageStream("How many paws are in my house?")
for try await chunk in contentStream {
  if let text = chunk.text {
    print(text)
  }
}
// Make sure to include this import:
// import 'package:google_generative_ai/google_generative_ai.dart';
final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final chat = model.startChat(history: [
  Content.text('hello'),
  Content.model([TextPart('Great to meet you. What would you like to know?')])
]);
var responses =
    chat.sendMessageStream(Content.text('I have 2 dogs in my house.'));
await for (final response in responses) {
  print(response.text);
  print('_' * 80);
}
responses =
    chat.sendMessageStream(Content.text('How many paws are in my house?'));
await for (final response in responses) {
  print(response.text);
  print('_' * 80);
}
// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder userMessageBuilder = new Content.Builder();
userMessageBuilder.setRole("user");
userMessageBuilder.addText("How many paws are in my house?");
Content userMessage = userMessageBuilder.build();

// Use streaming within the chat so the prior history is included
Publisher<GenerateContentResponse> streamingResponse = chat.sendMessageStream(userMessage);

StringBuilder outputContent = new StringBuilder();

streamingResponse.subscribe(
    new Subscriber<GenerateContentResponse>() {
      @Override
      public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        outputContent.append(chunk);
      }

      @Override
      public void onComplete() {
        System.out.println(outputContent);
      }

      @Override
      public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE);
      }

      @Override
      public void onError(Throwable t) {
        t.printStackTrace();
      }

    });

Response body

If successful, the response body contains a stream of GenerateContentResponse instances.

Method: tunedModels.get

Gets information about a specific TunedModel.

Endpoint

get https://generativelanguage.googleapis.com/v1beta/{name=tunedModels/*}

Path parameters

name string

Required. The resource name of the model.

Format: tunedModels/my-model-id. It takes the form tunedModels/{tunedmodel}.

Request body

The request body must be empty.

Example request

Python
# With Gemini 2 we're launching a new SDK. See the following doc for details.
# https://ai.google.dev/gemini-api/docs/migrate
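
The SDK sample above is stubbed out by the migration note, so the following is a minimal sketch of the same call made directly against the documented REST endpoint. It assumes the requests package and a GEMINI_API_KEY environment variable; tunedModels/my-model-id is a placeholder ID.

import os
import requests

# Placeholder ID; substitute the name returned by tunedModels.create.
MODEL_NAME = "tunedModels/my-model-id"

resp = requests.get(
    f"https://generativelanguage.googleapis.com/v1beta/{MODEL_NAME}",
    params={"key": os.environ["GEMINI_API_KEY"]},
)
resp.raise_for_status()
tuned_model = resp.json()  # a TunedModel instance
print(tuned_model["name"], tuned_model.get("state"))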

Response body

If successful, the response body contains an instance of TunedModel.

Method: tunedModels.list

Lists created tuned models.

Endpoint

get https://generativelanguage.googleapis.com/v1beta/tunedModels

Query parameters

pageSize integer

Optional. The maximum number of TunedModels to return per page. The service may return fewer tuned models.

If unspecified, at most 10 tuned models are returned. This method returns at most 1000 models per page, even if you pass a larger pageSize.

pageToken string

Optional. A page token received from a previous tunedModels.list call.

Provide the pageToken returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to tunedModels.list must match the call that provided the page token.

filter string

Optional. A filter is a full-text search over the tuned model's description and display name. By default, results do not include tuned models shared with everyone.

Additional operators: - owner:me - writers:me - readers:me - readers:everyone

Examples: "owner:me" returns all tuned models for which the caller has the owner role; "readers:me" returns all tuned models for which the caller has the reader role; "readers:everyone" returns all tuned models shared with everyone.

Request body

The request body must be empty.

Example request

Python
# With Gemini 2 we're launching a new SDK. See the following doc for details.
# https://ai.google.dev/gemini-api/docs/migrate
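
As above, the SDK sample is stubbed out, so the sketch below lists tuned models through the documented REST endpoint and follows nextPageToken to page through results. It assumes the requests package and a GEMINI_API_KEY environment variable.

import os
import requests

BASE_URL = "https://generativelanguage.googleapis.com/v1beta/tunedModels"
params = {"key": os.environ["GEMINI_API_KEY"], "pageSize": 10}
# Optionally restrict results with a filter, e.g. params["filter"] = "owner:me".

while True:
    resp = requests.get(BASE_URL, params=params)
    resp.raise_for_status()
    page = resp.json()
    for model in page.get("tunedModels", []):
        print(model["name"], model.get("displayName"))
    token = page.get("nextPageToken")
    if not token:  # no nextPageToken means this was the last page
        break
    params["pageToken"] = token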

Response body

Response from tunedModels.list containing a paginated list of models.

If successful, the response body contains data with the following structure:

Fields
tunedModels[] object (TunedModel)

The returned models.

nextPageToken string

A token that can be sent as pageToken to retrieve the next page.

If this field is omitted, there are no more pages.

JSON representation
{
  "tunedModels": [
    {
      object (TunedModel)
    }
  ],
  "nextPageToken": string
}

Method: tunedModels.patch

Updates a tuned model.

Endpoint

patch https://generativelanguage.googleapis.com/v1beta/{tunedModel.name=tunedModels/*}

Path parameters

tunedModel.name string

Output only. The tuned model name. A unique name is generated at creation time. Example: tunedModels/az2mb0bpw6i. If displayName is set at creation, the name is formed by concatenating the words of the display name with hyphens and adding a random string for uniqueness.

Example:

  • displayName = Sentence Translator
  • name = tunedModels/sentence-translator-u3b7m It takes the form tunedModels/{tunedmodel}.

Query parameters

updateMask string (FieldMask format)

Optional. The list of fields to update.

This is a comma-separated list of fully qualified field names. Example: "user.displayName,photo".

Request body

The request body contains an instance of TunedModel.

Fields
displayName string

Optional. The name to display for this model in user interfaces. The display name must be at most 40 characters, including spaces.

description string

Optional. A short description of this model.

tuningTask object (TuningTask)

Required. The tuning task that creates the tuned model.

readerProjectNumbers[] string (int64 format)

Optional. List of project numbers that have read access to the tuned model.

source_model Union type
The model used as the starting point for tuning. source_model can be only one of the following:
tunedModelSource object (TunedModelSource)

Optional. The TunedModel to use as the starting point for training the new model.

temperature number

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 produces more varied responses, while a value closer to 0.0 typically results in less surprising responses from the model.

This value defaults to the one used by the base model when the model was created.

topP number

Optional. For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least topP.

This value defaults to the one used by the base model when the model was created.

topK integer

Optional. For Top-k sampling.

Top-k sampling considers the set of the topK most probable tokens. This value specifies the default used by the backend when calling the model.

This value defaults to the one used by the base model when the model was created.
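
Example request

No SDK sample is provided for this method here, so the following is a hedged REST sketch of a patch call, assuming the requests package and a GEMINI_API_KEY environment variable; tunedModels/my-model-id and the new field values are placeholders. Only the fields named in updateMask are modified.

import os
import requests

MODEL_NAME = "tunedModels/my-model-id"  # placeholder ID

resp = requests.patch(
    f"https://generativelanguage.googleapis.com/v1beta/{MODEL_NAME}",
    params={
        "key": os.environ["GEMINI_API_KEY"],
        "updateMask": "displayName,description",  # fields to change
    },
    json={
        "displayName": "Sentence Translator v2",
        "description": "Updated description for the tuned model.",
    },
)
resp.raise_for_status()
print(resp.json()["name"])  # the updated TunedModel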

Response body

If successful, the response body contains an instance of TunedModel.

Method: tunedModels.delete

Deletes a tuned model.

Endpoint

delete https://generativelanguage.googleapis.com/v1beta/{name=tunedModels/*}

Path parameters

name string

Required. The resource name of the model. Format: tunedModels/my-model-id. It takes the form tunedModels/{tunedmodel}.

Request body

The request body must be empty.
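
Example request

A hedged REST sketch of the delete call, assuming the requests package and a GEMINI_API_KEY environment variable; tunedModels/my-model-id is a placeholder ID.

import os
import requests

MODEL_NAME = "tunedModels/my-model-id"  # placeholder ID

resp = requests.delete(
    f"https://generativelanguage.googleapis.com/v1beta/{MODEL_NAME}",
    params={"key": os.environ["GEMINI_API_KEY"]},
)
resp.raise_for_status()
print(resp.json())  # an empty JSON object on success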

Response body

If successful, the response body is an empty JSON object.

REST Resource: tunedModels

Resource: TunedModel

A fine-tuned model created using ModelService.CreateTunedModel.

Fields
name string

Output only. The tuned model name. A unique name is generated at creation time. Example: tunedModels/az2mb0bpw6i. If displayName is set at creation, the name is formed by concatenating the words of the display name with hyphens and adding a random string for uniqueness.

Example:

  • displayName = Sentence Translator
  • name = tunedModels/sentence-translator-u3b7m
displayName string

Optional. The name to display for this model in user interfaces. The display name must be at most 40 characters, including spaces.

description string

Optional. A short description of this model.

state enum (State)

Output only. The state of the tuned model.

createTime string (Timestamp format)

Output only. The timestamp when this model was created.

Uses RFC 3339, where generated output is always Z-normalized and uses 0, 3, 6, or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z", or "2014-10-02T15:01:23+05:30".

updateTime string (Timestamp format)

Output only. The timestamp when this model was updated.

Uses RFC 3339, where generated output is always Z-normalized and uses 0, 3, 6, or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z", or "2014-10-02T15:01:23+05:30".

tuningTask object (TuningTask)

Required. The tuning task that creates the tuned model.

readerProjectNumbers[] string (int64 format)

Optional. List of project numbers that have read access to the tuned model.

source_model Union type
The model used as the starting point for tuning. source_model can be only one of the following:
tunedModelSource object (TunedModelSource)

Optional. The TunedModel to use as the starting point for training the new model.

baseModel string

Immutable. The name of the Model to tune. Example: models/gemini-1.5-flash-001

temperature number

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 produces more varied responses, while a value closer to 0.0 typically results in less surprising responses from the model.

This value defaults to the one used by the base model when the model was created.

topP number

Optional. For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least topP.

This value defaults to the one used by the base model when the model was created.

topK integer

Optional. For Top-k sampling.

Top-k sampling considers the set of the topK most probable tokens. This value specifies the default used by the backend when calling the model.

This value defaults to the one used by the base model when the model was created.

JSON representation
{
  "name": string,
  "displayName": string,
  "description": string,
  "state": enum (State),
  "createTime": string,
  "updateTime": string,
  "tuningTask": {
    object (TuningTask)
  },
  "readerProjectNumbers": [
    string
  ],

  // source_model
  "tunedModelSource": {
    object (TunedModelSource)
  },
  "baseModel": string
  // Union type
  "temperature": number,
  "topP": number,
  "topK": integer
}

TunedModelSource

A tuned model used as a source for training a new model.

Fields
tunedModel string

Immutable. The name of the TunedModel to use as the starting point for training the new model. Example: tunedModels/my-tuned-model

baseModel string

Output only. The name of the base Model this TunedModel was tuned from. Example: models/gemini-1.5-flash-001

JSON representation
{
  "tunedModel": string,
  "baseModel": string
}

State

The state of the tuned model.

Enums
STATE_UNSPECIFIED The default value. This value is unused.
CREATING The model is being created.
ACTIVE The model is ready to be used.
FAILED The model failed to be created.

TuningTask

Tuning task that creates the tuned model.

Fields
startTime string (Timestamp format)

Output only. The timestamp when tuning this model started.

Uses RFC 3339, where generated output is always Z-normalized and uses 0, 3, 6, or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z", or "2014-10-02T15:01:23+05:30".

completeTime string (Timestamp format)

Output only. The timestamp when tuning this model completed.

Uses RFC 3339, where generated output is always Z-normalized and uses 0, 3, 6, or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z", or "2014-10-02T15:01:23+05:30".

snapshots[] object (TuningSnapshot)

Output only. Metrics collected during tuning.

trainingData object (Dataset)

Required. Input only. Immutable. The model training data.

hyperparameters object (Hyperparameters)

Immutable. Hyperparameters controlling the tuning process. If not provided, default values are used.

JSON representation
{
  "startTime": string,
  "completeTime": string,
  "snapshots": [
    {
      object (TuningSnapshot)
    }
  ],
  "trainingData": {
    object (Dataset)
  },
  "hyperparameters": {
    object (Hyperparameters)
  }
}

TuningSnapshot

Record for a single tuning step.

Fields
step integer

Output only. The tuning step.

epoch integer

Output only. The epoch this step was part of.

meanLoss number

Output only. The mean loss of the training examples for this step.

computeTime string (Timestamp format)

Output only. The timestamp when this metric was computed.

Uses RFC 3339, where generated output is always Z-normalized and uses 0, 3, 6, or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z", or "2014-10-02T15:01:23+05:30".

JSON representation
{
  "step": integer,
  "epoch": integer,
  "meanLoss": number,
  "computeTime": string
}

Dataset

Dataset for training or validation.

Fields
dataset Union type
Inline data or a reference to the data. dataset can be only one of the following:
examples object (TuningExamples)

Optional. Inline examples with simple input/output text.

JSON representation
{

  // dataset
  "examples": {
    object (TuningExamples)
  }
  // Union type
}

TuningExamples

A set of tuning examples. Can be training or validation data.

Fields
examples[] object (TuningExample)

The examples. Example input can be text or discussion, but all examples in a set must be of the same type.

JSON representation
{
  "examples": [
    {
      object (TuningExample)
    }
  ]
}

TuningExample

A single example for tuning.

Fields
output string

Required. The expected model output.

model_input Union type
The input to the model for this example. model_input can be only one of the following:
textInput string

Optional. Text model input.

JSON representation
{
  "output": string,

  // model_input
  "textInput": string
  // Union type
}
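
To make the nesting of Dataset, TuningExamples, and TuningExample concrete, here is a sketch of an inline trainingData value as it would appear inside a tuningTask; the example strings are invented for illustration.

# Inline Dataset -> TuningExamples -> list of TuningExample, as used in
# tuningTask.trainingData of a tunedModels.create request body.
training_data = {
    "examples": {          # Dataset.examples (a TuningExamples object)
        "examples": [      # TuningExamples.examples (TuningExample list)
            {"textInput": "1", "output": "2"},
            {"textInput": "3", "output": "4"},
            {"textInput": "-3", "output": "-2"},
        ]
    }
}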

Hyperparameters

Hyperparameters controlling the tuning process. Read more at https://ai.google.dev/docs/model_tuning_guidance

Fields
learning_rate_option Union type
Options for specifying the learning rate during tuning. learning_rate_option can be only one of the following:
learningRate number

Optional. Immutable. The learning rate hyperparameter for tuning. If not set, a default of 0.001 or 0.0002 is calculated based on the number of training examples.

learningRateMultiplier number

Optional. Immutable. The learning rate multiplier is used to calculate a final learningRate based on the default (recommended) value. Actual learning rate := learningRateMultiplier * default learning rate. The default learning rate depends on the base model and dataset size. If not set, a default of 1.0 is used.

epochCount integer

Immutable. The number of training epochs. An epoch is one pass through the training data. If not set, a default of 5 is used.

batchSize integer

Immutable. The batch size hyperparameter for tuning. If not set, a default of 4 or 16 is used based on the number of training examples.
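
As a quick arithmetic check of the learningRateMultiplier formula, assuming a hypothetical default learning rate of 0.001 (one of the documented defaults):

default_learning_rate = 0.001   # chosen by the service from the dataset size
learning_rate_multiplier = 2.0  # hypothetical value set in Hyperparameters

# Actual learning rate := learningRateMultiplier * default learning rate
actual_learning_rate = learning_rate_multiplier * default_learning_rate
print(actual_learning_rate)  # 0.002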

JSON representation
{

  // learning_rate_option
  "learningRate": number,
  "learningRateMultiplier": number
  // Union type
  "epochCount": integer,
  "batchSize": integer
}