Tuning

The Gemini API’s fine-tuning support provides a mechanism for curating output when you have a small dataset of input/output examples. For more details, check out the Model tuning guide and tutorial.

Method: tunedModels.create

Creates a tuned model. Check intermediate tuning progress (if any) through the google.longrunning.Operations service.

Access status and results through the Operations service. Example: GET /v1/tunedModels/az2mb0bpw6i/operations/000-111-222
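
For example, a minimal sketch of polling the operation from the command line (authentication is omitted here; supply whatever credentials your setup requires):

# Poll the tuning operation until the response contains "done": true.
curl "https://generativelanguage.googleapis.com/v1/tunedModels/az2mb0bpw6i/operations/000-111-222"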

Endpoint

post https://generativelanguage.googleapis.com/v1beta/tunedModels

Query parameters

tunedModelId string

Optional. The unique ID for the tuned model, if specified. This value can be up to 40 characters; the first character must be a letter, and the last can be a letter or a number. The ID must match the regular expression: [a-z]([a-z0-9-]{0,38}[a-z0-9])?.

Request body

The request body contains an instance of TunedModel.

Fields
displayName string

Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.

description string

Optional. A short description of this model.

tuningTask object (TuningTask)

Required. The tuning task that creates the tuned model.

readerProjectNumbers[] string (int64 format)

Optional. List of project numbers that have read access to the tuned model.

Union field source_model. The model used as the starting point for tuning. source_model can be only one of the following:
tunedModelSource object (TunedModelSource)

Optional. TunedModel to use as the starting point for training the new model.

baseModel string

Immutable. The name of the Model to tune. Example: models/gemini-1.5-flash-001

temperature number

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.

If unset, this value defaults to the one used by the base model when the model was created.

topP number

Optional. For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least topP.

If unset, this value defaults to the one used by the base model when the model was created.

topK integer

Optional. For Top-k sampling.

Top-k sampling considers the set of topK most probable tokens. This value specifies the default the backend uses when calling the model.

If unset, this value defaults to the one used by the base model when the model was created.
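
Putting these fields together, a request body might look like the sketch below. The TuningTask layout (hyperparameters and training_data) is not described on this page and is assumed here from the tuning tutorial; the tunedModelId query parameter is optional and authentication is omitted, so adjust both to your setup.

curl -X POST "https://generativelanguage.googleapis.com/v1beta/tunedModels?tunedModelId=my-increment-model" \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "increment",
    "base_model": "models/gemini-1.5-flash-001-tuning",
    "tuning_task": {
      "hyperparameters": {
        "batch_size": 4,
        "learning_rate": 0.001,
        "epoch_count": 20
      },
      "training_data": {
        "examples": {
          "examples": [
            {"text_input": "1", "output": "2"},
            {"text_input": "seven", "output": "eight"}
          ]
        }
      }
    }
  }'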

Example request

Python

import time

base_model = "models/gemini-1.5-flash-001-tuning"
training_data = [
    {"text_input": "1", "output": "2"},
    # ... more examples ...
    # ...
    {"text_input": "seven", "output": "eight"},
]
operation = genai.create_tuned_model(
    # You can use a tuned model here too. Set `source_model="tunedModels/..."`
    display_name="increment",
    source_model=base_model,
    epoch_count=20,
    batch_size=4,
    learning_rate=0.001,
    training_data=training_data,
)

for status in operation.wait_bar():
    time.sleep(10)

result = operation.result()
print(result)
# You can plot the loss curve with:
# snapshots = pd.DataFrame(result.tuning_task.snapshots)
# sns.lineplot(data=snapshots, x='epoch', y='mean_loss')

model = genai.GenerativeModel(model_name=result.name)
result = model.generate_content("III")
print(result.text)  # IV

Response body

This resource represents a long-running operation that is the result of a network API call.

If successful, the response body contains data with the following structure:

Fields
name string

The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.

metadata object

Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.

done boolean

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.

Union field result. The operation result, which can be either an error or a valid response. If done == false, neither error nor response is set. If done == true, exactly one of error or response can be set. Some services might not provide the result. result can be only one of the following:
error object (Status)

The error result of the operation in case of failure or cancellation.

response object

The normal, successful response of the operation. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.

JSON representation
{
  "name": string,
  "metadata": {
    "@type": string,
    field1: ...,
    ...
  },
  "done": boolean,

  // Union field result can be only one of the following:
  "error": {
    object (Status)
  },
  "response": {
    "@type": string,
    field1: ...,
    ...
  }
  // End of list of possible types for union field result.
}

Method: tunedModels.generateContent

Generates a model response given an input GenerateContentRequest. Refer to the text generation guide for detailed usage information. Input capabilities differ between models, including tuned models. Refer to the model guide and tuning guide for details.

Endpoint

post https://generativelanguage.googleapis.com/v1beta/{model=tunedModels/*}:generateContent

Path parameters

model string

Required. The name of the Model to use for generating the completion.

Format: name=models/{model}. For this method it takes the form tunedModels/{tunedmodel}.

Request body

The request body contains data with the following structure:

Fields
contents[] object (Content)

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries like chat, this is a repeated field that contains the conversation history and the latest request.

tools[] object (Tool)

Optional. A list of Tools the Model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the Model. Supported Tools are Function and codeExecution. Refer to the Function calling and the Code execution guides to learn more.

toolConfig object (ToolConfig)

Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.

safetySettings[] object (SafetySetting)

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.

systemInstruction object (Content)

Optional. Developer-set system instruction(s). Currently, text only.

generationConfig object (GenerationConfig)

Optional. Configuration options for model generation and outputs.

cachedContent string

Optional. The name of the content cached to use as context to serve the prediction. Format: cachedContents/{cachedContent}
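
Before the per-language examples, here is a minimal sketch of a raw request to a tuned model that combines several of the fields above. The model name tunedModels/my-increment-model is reused from the tuned-model example later on this page, and the systemInstruction, safetySettings, and generationConfig values are purely illustrative.

curl "https://generativelanguage.googleapis.com/v1beta/tunedModels/my-increment-model:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "systemInstruction": {"parts": [{"text": "Answer with a single Roman numeral."}]},
      "contents": [{"role": "user", "parts": [{"text": "III"}]}],
      "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"}
      ],
      "generationConfig": {"temperature": 0.0, "maxOutputTokens": 16}
    }'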

Example request

Text

Python

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Write a story about a magic backpack.")
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const prompt = "Write a story about a magic backpack.";

const result = await model.generateContent(prompt);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-flash")
resp, err := model.GenerateContent(ctx, genai.Text("Write a story about a magic backpack."))
if err != nil {
	log.Fatal(err)
}

printResponse(resp)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[{"text": "Write a story about a magic backpack."}]
        }]
       }' 2> /dev/null

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val prompt = "Write a story about a magic backpack."
val response = generativeModel.generateContent(prompt)
print(response.text)

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

let prompt = "Write a story about a magic backpack."
let response = try await generativeModel.generateContent(prompt)
if let text = response.text {
  print(text)
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'Write a story about a magic backpack.';

final response = await model.generateContent([Content.text(prompt)]);
print(response.text);

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content =
    new Content.Builder().addText("Write a story about a magic backpack.").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);

Image

Python

import PIL.Image

model = genai.GenerativeModel("gemini-1.5-flash")
organ = PIL.Image.open(media / "organ.jpg")
response = model.generate_content(["Tell me about this instrument", organ])
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
// import fs from "fs";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

function fileToGenerativePart(path, mimeType) {
  return {
    inlineData: {
      data: Buffer.from(fs.readFileSync(path)).toString("base64"),
      mimeType,
    },
  };
}

const prompt = "Describe how this product might be manufactured.";
// Note: The only accepted mime types are some image types, image/*.
const imagePart = fileToGenerativePart(
  `${mediaPath}/jetpack.jpg`,
  "image/jpeg",
);

const result = await model.generateContent([prompt, imagePart]);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-flash")

imgData, err := os.ReadFile(filepath.Join(testDataDir, "organ.jpg"))
if err != nil {
	log.Fatal(err)
}

resp, err := model.GenerateContent(ctx,
	genai.Text("Tell me about this instrument"),
	genai.ImageData("jpeg", imgData))
if err != nil {
	log.Fatal(err)
}

printResponse(resp)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
            {"text": "Tell me about this instrument"},
            {
              "inline_data": {
                "mime_type":"image/jpeg",
                "data": "'$(base64 $B64FLAGS $IMG_PATH)'"
              }
            }
        ]
        }]
       }' 2> /dev/null

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val image: Bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.image)
val inputContent = content {
  image(image)
  text("What's in this picture?")
}

val response = generativeModel.generateContent(inputContent)
print(response.text)

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

guard let image = UIImage(systemName: "cloud.sun") else { fatalError() }

let prompt = "What's in this picture?"

let response = try await generativeModel.generateContent(image, prompt)
if let text = response.text {
  print(text)
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);

Future<DataPart> fileToPart(String mimeType, String path) async {
  return DataPart(mimeType, await File(path).readAsBytes());
}

final prompt = 'Describe how this product might be manufactured.';
final image = await fileToPart('image/jpeg', 'resources/jetpack.jpg');

final response = await model.generateContent([
  Content.multi([TextPart(prompt), image])
]);
print(response.text);

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Bitmap image = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);

Content content =
    new Content.Builder()
        .addText("What's different between these pictures?")
        .addImage(image)
        .build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);

Audio

Python

model = genai.GenerativeModel("gemini-1.5-flash")
sample_audio = genai.upload_file(media / "sample.mp3")
response = model.generate_content(["Give me a summary of this audio file.", sample_audio])
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
// import fs from "fs";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

function fileToGenerativePart(path, mimeType) {
  return {
    inlineData: {
      data: Buffer.from(fs.readFileSync(path)).toString("base64"),
      mimeType,
    },
  };
}

const prompt = "Give me a summary of this audio file.";
// Note: The only accepted mime types are some audio types, audio/*.
const audioPart = fileToGenerativePart(
  `${mediaPath}/samplesmall.mp3`,
  "audio/mp3",
);

const result = await model.generateContent([prompt, audioPart]);
console.log(result.response.text());

Shell

# Use the File API to upload the audio data for the request.
MIME_TYPE=$(file -b --mime-type "${AUDIO_PATH}")
NUM_BYTES=$(wc -c < "${AUDIO_PATH}")
DISPLAY_NAME=AUDIO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${AUDIO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "audio/mpeg", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

Video

Python

import time

# Video clip (CC BY 3.0) from https://peach.blender.org/download/
myfile = genai.upload_file(media / "Big_Buck_Bunny.mp4")
print(f"{myfile=}")

# Videos need to be processed before you can use them.
while myfile.state.name == "PROCESSING":
    print("processing video...")
    time.sleep(5)
    myfile = genai.get_file(myfile.name)

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content([myfile, "Describe this video clip"])
print(f"{response.text=}")

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
// import { GoogleAIFileManager, FileState } from "@google/generative-ai/server";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const fileManager = new GoogleAIFileManager(process.env.API_KEY);

const uploadResult = await fileManager.uploadFile(
  `${mediaPath}/Big_Buck_Bunny.mp4`,
  { mimeType: "video/mp4" },
);

let file = await fileManager.getFile(uploadResult.file.name);
while (file.state === FileState.PROCESSING) {
  process.stdout.write(".");
  // Sleep for 10 seconds
  await new Promise((resolve) => setTimeout(resolve, 10_000));
  // Fetch the file from the API again
  file = await fileManager.getFile(uploadResult.file.name);
}

if (file.state === FileState.FAILED) {
  throw new Error("Video processing failed.");
}

const prompt = "Describe this video clip";
const videoPart = {
  fileData: {
    fileUri: uploadResult.file.uri,
    mimeType: uploadResult.file.mimeType,
  },
};

const result = await model.generateContent([prompt, videoPart]);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-flash")

file, err := client.UploadFileFromPath(ctx, filepath.Join(testDataDir, "earth.mp4"), nil)
if err != nil {
	log.Fatal(err)
}
defer client.DeleteFile(ctx, file.Name)

// Videos need to be processed before you can use them.
for file.State == genai.FileStateProcessing {
	log.Printf("processing %s", file.Name)
	time.Sleep(5 * time.Second)
	var err error
	if file, err = client.GetFile(ctx, file.Name); err != nil {
		log.Fatal(err)
	}
}
if file.State != genai.FileStateActive {
	log.Fatalf("uploaded file has state %s, not active", file.State)
}

resp, err := model.GenerateContent(ctx,
	genai.Text("Describe this video clip"),
	genai.FileData{URI: file.URI})
if err != nil {
	log.Fatal(err)
}

printResponse(resp)

Shell

# Use the File API to upload the video data for the request.
MIME_TYPE=$(file -b --mime-type "${VIDEO_PATH}")
NUM_BYTES=$(wc -c < "${VIDEO_PATH}")
DISPLAY_NAME=VIDEO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${VIDEO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

state=$(jq ".file.state" file_info.json)
echo state=$state

name=$(jq -r ".file.name" file_info.json)
echo name=$name

while [[ "$state" = *"PROCESSING"* ]];
do
  echo "Processing video..."
  sleep 5
  # Get the file of interest to check state
  curl "https://generativelanguage.googleapis.com/v1beta/$name?key=$GOOGLE_API_KEY" > file_info.json
  state=$(jq ".file.state" file_info.json)
done

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "video/mp4", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

PDF

Python

model = genai.GenerativeModel("gemini-1.5-flash")
sample_pdf = genai.upload_file(media / "test.pdf")
response = model.generate_content(["Give me a summary of this document:", sample_pdf])
print(f"{response.text=}")

Shell

MIME_TYPE=$(file -b --mime-type "${PDF_PATH}")
NUM_BYTES=$(wc -c < "${PDF_PATH}")
DISPLAY_NAME=TEXT


echo $MIME_TYPE
tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${PDF_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

# Now generate content using that file
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Can you add a few more lines to this poem?"},
          {"file_data":{"mime_type": "application/pdf", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

Chat

Python

model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat(
    history=[
        {"role": "user", "parts": "Hello"},
        {"role": "model", "parts": "Great to meet you. What would you like to know?"},
    ]
)
response = chat.send_message("I have 2 dogs in my house.")
print(response.text)
response = chat.send_message("How many paws are in my house?")
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const chat = model.startChat({
  history: [
    {
      role: "user",
      parts: [{ text: "Hello" }],
    },
    {
      role: "model",
      parts: [{ text: "Great to meet you. What would you like to know?" }],
    },
  ],
});
let result = await chat.sendMessage("I have 2 dogs in my house.");
console.log(result.response.text());
result = await chat.sendMessage("How many paws are in my house?");
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-flash")
cs := model.StartChat()

cs.History = []*genai.Content{
	{
		Parts: []genai.Part{
			genai.Text("Hello, I have 2 dogs in my house."),
		},
		Role: "user",
	},
	{
		Parts: []genai.Part{
			genai.Text("Great to meet you. What would you like to know?"),
		},
		Role: "model",
	},
}

res, err := cs.SendMessage(ctx, genai.Text("How many paws are in my house?"))
if err != nil {
	log.Fatal(err)
}
printResponse(res)

Shell

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Hello"}]},
        {"role": "model",
         "parts":[{
           "text": "Great to meet you. What would you like to know?"}]},
        {"role":"user",
         "parts":[{
           "text": "I have two dogs in my house. How many paws are in my house?"}]},
      ]
    }' 2> /dev/null | grep "text"

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val chat =
    generativeModel.startChat(
        history =
            listOf(
                content(role = "user") { text("Hello, I have 2 dogs in my house.") },
                content(role = "model") {
                  text("Great to meet you. What would you like to know?")
                }))

val response = chat.sendMessage("How many paws are in my house?")
print(response.text)

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = generativeModel.startChat(history: history)

// To generate text output, call sendMessage and pass in the message
let response = try await chat.sendMessage("How many paws are in my house?")
if let text = response.text {
  print(text)
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final chat = model.startChat(history: [
  Content.text('hello'),
  Content.model([TextPart('Great to meet you. What would you like to know?')])
]);
var response =
    await chat.sendMessage(Content.text('I have 2 dogs in my house.'));
print(response.text);
response =
    await chat.sendMessage(Content.text('How many paws are in my house?'));
print(response.text);

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder userMessageBuilder = new Content.Builder();
userMessageBuilder.setRole("user");
userMessageBuilder.addText("How many paws are in my house?");
Content userMessage = userMessageBuilder.build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(userMessage);

Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);

Cache

Python

document = genai.upload_file(path=media / "a11.txt")
model_name = "gemini-1.5-flash-001"
cache = genai.caching.CachedContent.create(
    model=model_name,
    system_instruction="You are an expert analyzing transcripts.",
    contents=[document],
)
print(cache)

model = genai.GenerativeModel.from_cached_content(cache)
response = model.generate_content("Please summarize this transcript")
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleAICacheManager, GoogleAIFileManager } from "@google/generative-ai/server";
// import { GoogleGenerativeAI } from "@google/generative-ai";
const cacheManager = new GoogleAICacheManager(process.env.API_KEY);
const fileManager = new GoogleAIFileManager(process.env.API_KEY);

const uploadResult = await fileManager.uploadFile(`${mediaPath}/a11.txt`, {
  mimeType: "text/plain",
});

const cacheResult = await cacheManager.create({
  model: "models/gemini-1.5-flash-001",
  contents: [
    {
      role: "user",
      parts: [
        {
          fileData: {
            fileUri: uploadResult.file.uri,
            mimeType: uploadResult.file.mimeType,
          },
        },
      ],
    },
  ],
});

console.log(cacheResult);

const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModelFromCachedContent(cacheResult);
const result = await model.generateContent(
  "Please summarize this transcript.",
);
console.log(result.response.text());

Tuned Model

Python

model = genai.GenerativeModel(model_name="tunedModels/my-increment-model")
result = model.generate_content("III")
print(result.text)  # "IV"

JSON Mode

Python

import typing_extensions as typing

class Recipe(typing.TypedDict):
    recipe_name: str
    ingredients: list[str]

model = genai.GenerativeModel("gemini-1.5-pro-latest")
result = model.generate_content(
    "List a few popular cookie recipes.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json", response_schema=list[Recipe]
    ),
)
print(result)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI, SchemaType } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

const schema = {
  description: "List of recipes",
  type: SchemaType.ARRAY,
  items: {
    type: SchemaType.OBJECT,
    properties: {
      recipeName: {
        type: SchemaType.STRING,
        description: "Name of the recipe",
        nullable: false,
      },
    },
    required: ["recipeName"],
  },
};

const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  generationConfig: {
    responseMimeType: "application/json",
    responseSchema: schema,
  },
});

const result = await model.generateContent(
  "List a few popular cookie recipes.",
);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-pro-latest")
// Ask the model to respond with JSON.
model.ResponseMIMEType = "application/json"
// Specify the schema.
model.ResponseSchema = &genai.Schema{
	Type:  genai.TypeArray,
	Items: &genai.Schema{Type: genai.TypeString},
}
resp, err := model.GenerateContent(ctx, genai.Text("List a few popular cookie recipes using this JSON schema."))
if err != nil {
	log.Fatal(err)
}
for _, part := range resp.Candidates[0].Content.Parts {
	if txt, ok := part.(genai.Text); ok {
		var recipes []string
		if err := json.Unmarshal([]byte(txt), &recipes); err != nil {
			log.Fatal(err)
		}
		fmt.Println(recipes)
	}
}

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
    "contents": [{
      "parts":[
        {"text": "List 5 popular cookie recipes"}
        ]
    }],
    "generationConfig": {
        "response_mime_type": "application/json",
        "response_schema": {
          "type": "ARRAY",
          "items": {
            "type": "OBJECT",
            "properties": {
              "recipe_name": {"type":"STRING"},
            }
          }
        }
    }
}' 2> /dev/null | head

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-pro",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey,
        generationConfig = generationConfig {
            responseMimeType = "application/json"
            responseSchema = Schema(
                name = "recipes",
                description = "List of recipes",
                type = FunctionType.ARRAY,
                items = Schema(
                    name = "recipe",
                    description = "A recipe",
                    type = FunctionType.OBJECT,
                    properties = mapOf(
                        "recipeName" to Schema(
                            name = "recipeName",
                            description = "Name of the recipe",
                            type = FunctionType.STRING,
                            nullable = false
                        ),
                    ),
                    required = listOf("recipeName")
                ),
            )
        })

val prompt = "List a few popular cookie recipes."
val response = generativeModel.generateContent(prompt)
print(response.text)

Swift

let jsonSchema = Schema(
  type: .array,
  description: "List of recipes",
  items: Schema(
    type: .object,
    properties: [
      "recipeName": Schema(type: .string, description: "Name of the recipe", nullable: false),
    ],
    requiredProperties: ["recipeName"]
  )
)

let generativeModel = GenerativeModel(
  // Specify a model that supports controlled generation like Gemini 1.5 Pro
  name: "gemini-1.5-pro",
  // Access your API key from your on-demand resource .plist file (see "Set up your API key"
  // above)
  apiKey: APIKey.default,
  generationConfig: GenerationConfig(
    responseMIMEType: "application/json",
    responseSchema: jsonSchema
  )
)

let prompt = "List a few popular cookie recipes."
let response = try await generativeModel.generateContent(prompt)
if let text = response.text {
  print(text)
}

Dart

final schema = Schema.array(
    description: 'List of recipes',
    items: Schema.object(properties: {
      'recipeName':
          Schema.string(description: 'Name of the recipe.', nullable: false)
    }, requiredProperties: [
      'recipeName'
    ]));

final model = GenerativeModel(
    model: 'gemini-1.5-pro',
    apiKey: apiKey,
    generationConfig: GenerationConfig(
        responseMimeType: 'application/json', responseSchema: schema));

final prompt = 'List a few popular cookie recipes.';
final response = await model.generateContent([Content.text(prompt)]);
print(response.text);

Java

Schema<List<String>> schema =
    new Schema(
        /* name */ "recipes",
        /* description */ "List of recipes",
        /* format */ null,
        /* nullable */ false,
        /* list */ null,
        /* properties */ null,
        /* required */ null,
        /* items */ new Schema(
            /* name */ "recipe",
            /* description */ "A recipe",
            /* format */ null,
            /* nullable */ false,
            /* list */ null,
            /* properties */ Map.of(
                "recipeName",
                new Schema(
                    /* name */ "recipeName",
                    /* description */ "Name of the recipe",
                    /* format */ null,
                    /* nullable */ false,
                    /* list */ null,
                    /* properties */ null,
                    /* required */ null,
                    /* items */ null,
                    /* type */ FunctionType.STRING)),
            /* required */ null,
            /* items */ null,
            /* type */ FunctionType.OBJECT),
        /* type */ FunctionType.ARRAY);

GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.responseMimeType = "application/json";
configBuilder.responseSchema = schema;

GenerationConfig generationConfig = configBuilder.build();

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-pro",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig */ generationConfig);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content = new Content.Builder().addText("List a few popular cookie recipes.").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);

Code execution

Python

model = genai.GenerativeModel(model_name="gemini-1.5-flash", tools="code_execution")
response = model.generate_content(
    (
        "What is the sum of the first 50 prime numbers? "
        "Generate and run code for the calculation, and make sure you get all 50."
    )
)

# Each `part` either contains `text`, `executable_code` or an `execution_result`
for part in response.candidates[0].content.parts:
    print(part, "\n")

print("-" * 80)
# The `.text` accessor joins the parts into a markdown compatible text representation.
print("\n\n", response.text)

Kotlin


val model = GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    modelName = "gemini-1.5-pro",
    // Access your API key as a Build Configuration variable (see "Set up your API key" above)
    apiKey = BuildConfig.apiKey,
    tools = listOf(Tool.CODE_EXECUTION)
)

val response = model.generateContent("What is the sum of the first 50 prime numbers?")

// Each `part` either contains `text`, `executable_code` or an `execution_result`
println(response.candidates[0].content.parts.joinToString("\n"))

// Alternatively, you can use the `text` accessor which joins the parts into a markdown compatible
// text representation
println(response.text)

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
        new GenerativeModel(
                /* modelName */ "gemini-1.5-pro",
                // Access your API key as a Build Configuration variable (see "Set up your API key"
                // above)
                /* apiKey */ BuildConfig.apiKey,
                /* generationConfig */ null,
                /* safetySettings */ null,
                /* requestOptions */ new RequestOptions(),
                /* tools */ Collections.singletonList(Tool.CODE_EXECUTION));
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content inputContent =
        new Content.Builder().addText("What is the sum of the first 50 prime numbers?").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(inputContent);
Futures.addCallback(
        response,
        new FutureCallback<GenerateContentResponse>() {
            @Override
            public void onSuccess(GenerateContentResponse result) {
                // Each `part` either contains `text`, `executable_code` or an
                // `execution_result`
                Candidate candidate = result.getCandidates().get(0);
                for (Part part : candidate.getContent().getParts()) {
                    System.out.println(part);
                }

                // Alternatively, you can use the `text` accessor which joins the parts into a
                // markdown compatible text representation
                String resultText = result.getText();
                System.out.println(resultText);
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        },
        executor);

Function Calling

Python

def add(a: float, b: float):
    """returns a + b."""
    return a + b

def subtract(a: float, b: float):
    """returns a - b."""
    return a - b

def multiply(a: float, b: float):
    """returns a * b."""
    return a * b

def divide(a: float, b: float):
    """returns a / b."""
    return a / b

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash", tools=[add, subtract, multiply, divide]
)
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message(
    "I have 57 cats, each owns 44 mittens, how many mittens is that in total?"
)
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
async function setLightValues(brightness, colorTemperature) {
  // This mock API returns the requested lighting values
  return {
    brightness,
    colorTemperature,
  };
}

const controlLightFunctionDeclaration = {
  name: "controlLight",
  parameters: {
    type: "OBJECT",
    description: "Set the brightness and color temperature of a room light.",
    properties: {
      brightness: {
        type: "NUMBER",
        description:
          "Light level from 0 to 100. Zero is off and 100 is full brightness.",
      },
      colorTemperature: {
        type: "STRING",
        description:
          "Color temperature of the light fixture which can be `daylight`, `cool` or `warm`.",
      },
    },
    required: ["brightness", "colorTemperature"],
  },
};

// Executable function code. Put it in a map keyed by the function name
// so that you can call it once you get the name string from the model.
const functions = {
  controlLight: ({ brightness, colorTemperature }) => {
    return setLightValues(brightness, colorTemperature);
  },
};

const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  tools: { functionDeclarations: [controlLightFunctionDeclaration] },
});
const chat = model.startChat();
const prompt = "Dim the lights so the room feels cozy and warm.";

// Send the message to the model.
const result = await chat.sendMessage(prompt);

// For simplicity, this uses the first function call found.
const call = result.response.functionCalls()[0];

if (call) {
  // Call the executable function named in the function call
  // with the arguments specified in the function call and
  // let it call the hypothetical API.
  const apiResponse = await functions[call.name](call.args);

  // Send the API response back to the model so it can generate
  // a text response that can be displayed to the user.
  const result2 = await chat.sendMessage([
    {
      functionResponse: {
        name: "controlLight",
        response: apiResponse,
      },
    },
  ]);

  // Log the text response.
  console.log(result2.response.text());
}

Shell


cat > tools.json << EOF
{
  "function_declarations": [
    {
      "name": "enable_lights",
      "description": "Turn on the lighting system.",
      "parameters": { "type": "object" }
    },
    {
      "name": "set_light_color",
      "description": "Set the light color. Lights must be enabled for this to work.",
      "parameters": {
        "type": "object",
        "properties": {
          "rgb_hex": {
            "type": "string",
            "description": "The light color as a 6-digit hex string, e.g. ff0000 for red."
          }
        },
        "required": [
          "rgb_hex"
        ]
      }
    },
    {
      "name": "stop_lights",
      "description": "Turn off the lighting system.",
      "parameters": { "type": "object" }
    }
  ]
} 
EOF

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent?key=$GOOGLE_API_KEY" \
  -H 'Content-Type: application/json' \
  -d @<(echo '
  {
    "system_instruction": {
      "parts": {
        "text": "You are a helpful lighting system bot. You can turn lights on and off, and you can set the color. Do not perform any other tasks."
      }
    },
    "tools": ['$(source "$tools")'],

    "tool_config": {
      "function_calling_config": {"mode": "none"}
    },

    "contents": {
      "role": "user",
      "parts": {
        "text": "What can you do?"
      }
    }
  }
') 2>/dev/null |sed -n '/"content"/,/"finishReason"/p'

Kotlin

fun multiply(a: Double, b: Double) = a * b

val multiplyDefinition = defineFunction(
    name = "multiply",
    description = "returns the product of the provided numbers.",
    parameters = listOf(
    Schema.double("a", "First number"),
    Schema.double("b", "Second number")
    )
)

val usableFunctions = listOf(multiplyDefinition)

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey,
        // List the functions definitions you want to make available to the model
        tools = listOf(Tool(usableFunctions))
    )

val chat = generativeModel.startChat()
val prompt = "I have 57 cats, each owns 44 mittens, how many mittens is that in total?"

// Send the message to the generative model
var response = chat.sendMessage(prompt)

// Check if the model responded with a function call
response.functionCalls.first { it.name == "multiply" }.apply {
    val a: String by args
    val b: String by args

    val result = JSONObject(mapOf("result" to multiply(a.toDouble(), b.toDouble())))
    response = chat.sendMessage(
        content(role = "function") {
            part(FunctionResponsePart("multiply", result))
        }
    )
}

// Whenever the model responds with text, show it in the UI
response.text?.let { modelResponse ->
    println(modelResponse)
}

Swift

// Calls a hypothetical API to control a light bulb and returns the values that were set.
func controlLight(brightness: Double, colorTemperature: String) -> JSONObject {
  return ["brightness": .number(brightness), "colorTemperature": .string(colorTemperature)]
}

let generativeModel =
  GenerativeModel(
    // Use a model that supports function calling, like a Gemini 1.5 model
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    tools: [Tool(functionDeclarations: [
      FunctionDeclaration(
        name: "controlLight",
        description: "Set the brightness and color temperature of a room light.",
        parameters: [
          "brightness": Schema(
            type: .number,
            format: "double",
            description: "Light level from 0 to 100. Zero is off and 100 is full brightness."
          ),
          "colorTemperature": Schema(
            type: .string,
            format: "enum",
            description: "Color temperature of the light fixture.",
            enumValues: ["daylight", "cool", "warm"]
          ),
        ],
        requiredParameters: ["brightness", "colorTemperature"]
      ),
    ])]
  )

let chat = generativeModel.startChat()

let prompt = "Dim the lights so the room feels cozy and warm."

// Send the message to the model.
let response1 = try await chat.sendMessage(prompt)

// Check if the model responded with a function call.
// For simplicity, this sample uses the first function call found.
guard let functionCall = response1.functionCalls.first else {
  fatalError("Model did not respond with a function call.")
}
// Print an error if the returned function was not declared
guard functionCall.name == "controlLight" else {
  fatalError("Unexpected function called: \(functionCall.name)")
}
// Verify that the names and types of the parameters match the declaration
guard case let .number(brightness) = functionCall.args["brightness"] else {
  fatalError("Missing argument: brightness")
}
guard case let .string(colorTemperature) = functionCall.args["colorTemperature"] else {
  fatalError("Missing argument: colorTemperature")
}

// Call the executable function named in the FunctionCall with the arguments specified in the
// FunctionCall and let it call the hypothetical API.
let apiResponse = controlLight(brightness: brightness, colorTemperature: colorTemperature)

// Send the API response back to the model so it can generate a text response that can be
// displayed to the user.
let response2 = try await chat.sendMessage([ModelContent(
  role: "function",
  parts: [.functionResponse(FunctionResponse(name: "controlLight", response: apiResponse))]
)])

if let text = response2.text {
  print(text)
}

Dart

Map<String, Object?> setLightValues(Map<String, Object?> args) {
  return args;
}

final controlLightFunction = FunctionDeclaration(
    'controlLight',
    'Set the brightness and color temperature of a room light.',
    Schema.object(properties: {
      'brightness': Schema.number(
          description:
              'Light level from 0 to 100. Zero is off and 100 is full brightness.',
          nullable: false),
      'colorTemperature': Schema.string(
          description:
              'Color temperature of the light fixture which can be `daylight`, `cool`, or `warm`',
          nullable: false),
    }));

final functions = {controlLightFunction.name: setLightValues};
FunctionResponse dispatchFunctionCall(FunctionCall call) {
  final function = functions[call.name]!;
  final result = function(call.args);
  return FunctionResponse(call.name, result);
}

final model = GenerativeModel(
  model: 'gemini-1.5-pro',
  apiKey: apiKey,
  tools: [
    Tool(functionDeclarations: [controlLightFunction])
  ],
);

final prompt = 'Dim the lights so the room feels cozy and warm.';
final content = [Content.text(prompt)];
var response = await model.generateContent(content);

List<FunctionCall> functionCalls;
while ((functionCalls = response.functionCalls.toList()).isNotEmpty) {
  var responses = <FunctionResponse>[
    for (final functionCall in functionCalls)
      dispatchFunctionCall(functionCall)
  ];
  content
    ..add(response.candidates.first.content)
    ..add(Content.functionResponses(responses));
  response = await model.generateContent(content);
}
print('Response: ${response.text}');

Java

FunctionDeclaration multiplyDefinition =
    defineFunction(
        /* name  */ "multiply",
        /* description */ "returns a * b.",
        /* parameters */ Arrays.asList(
            Schema.numDouble("a", "First parameter"),
            Schema.numDouble("b", "Second parameter")),
        /* required */ Arrays.asList("a", "b"));

Tool tool = new Tool(Arrays.asList(multiplyDefinition), null);

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig (optional) */ null,
        /* safetySettings (optional) */ null,
        /* requestOptions (optional) */ new RequestOptions(),
        /* functionDeclarations (optional) */ Arrays.asList(tool));
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// Create prompt
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText(
    "I have 57 cats, each owns 44 mittens, how many mittens is that in total?");
Content userMessage = userContentBuilder.build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

// Initialize the chat
ChatFutures chat = model.startChat();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(userMessage);

Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        if (!result.getFunctionCalls().isEmpty()) {
          handleFunctionCall(result);
        }
        if (!result.getText().isEmpty()) {
          System.out.println(result.getText());
        }
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }

      private void handleFunctionCall(GenerateContentResponse result) {
        FunctionCallPart multiplyFunctionCallPart =
            result.getFunctionCalls().stream()
                .filter(fun -> fun.getName().equals("multiply"))
                .findFirst()
                .get();
        double a = Double.parseDouble(multiplyFunctionCallPart.getArgs().get("a"));
        double b = Double.parseDouble(multiplyFunctionCallPart.getArgs().get("b"));

        try {
          // `multiply(a, b)` is a regular java function defined in another class
          FunctionResponsePart functionResponsePart =
              new FunctionResponsePart(
                  "multiply", new JSONObject().put("result", multiply(a, b)));

          // Create the function response message
          Content.Builder functionResponseBuilder = new Content.Builder();
          functionResponseBuilder.setRole("user");
          functionResponseBuilder.addPart(functionResponsePart);
          Content functionResponseMessage = functionResponseBuilder.build();

          chat.sendMessage(functionResponseMessage);
        } catch (JSONException e) {
          throw new RuntimeException(e);
        }
      }
    },
    executor);

Generation config

Python

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Tell me a story about a magic backpack.",
    generation_config=genai.types.GenerationConfig(
        # Only one candidate for now.
        candidate_count=1,
        stop_sequences=["x"],
        max_output_tokens=20,
        temperature=1.0,
    ),
)

print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  generationConfig: {
    candidateCount: 1,
    stopSequences: ["x"],
    maxOutputTokens: 20,
    temperature: 1.0,
  },
});

const result = await model.generateContent(
  "Tell me a story about a magic backpack.",
);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-pro-latest")
model.SetTemperature(0.9)
model.SetTopP(0.5)
model.SetTopK(20)
model.SetMaxOutputTokens(100)
model.SystemInstruction = genai.NewUserContent(genai.Text("You are Yoda from Star Wars."))
model.ResponseMIMEType = "application/json"
resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
if err != nil {
	log.Fatal(err)
}
printResponse(resp)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
        "contents": [{
            "parts":[
                {"text": "Write a story about a magic backpack."}
            ]
        }],
        "safetySettings": [
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_ONLY_HIGH"
            }
        ],
        "generationConfig": {
            "stopSequences": [
                "Title"
            ],
            "temperature": 1.0,
            "maxOutputTokens": 800,
            "topP": 0.8,
            "topK": 10
        }
    }'  2> /dev/null | grep "text"

Kotlin

val config = generationConfig {
  temperature = 0.9f
  topK = 16
  topP = 0.1f
  maxOutputTokens = 200
  stopSequences = listOf("red")
}

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        generationConfig = config)

Swift

let config = GenerationConfig(
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
  candidateCount: 1,
  maxOutputTokens: 200,
  stopSequences: ["red", "orange"]
)

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    generationConfig: config
  )

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'Tell me a story about a magic backpack.';

final response = await model.generateContent(
  [Content.text(prompt)],
  generationConfig: GenerationConfig(
    candidateCount: 1,
    stopSequences: ['x'],
    maxOutputTokens: 20,
    temperature: 1.0,
  ),
);
print(response.text);

Java

GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.temperature = 0.9f;
configBuilder.topK = 16;
configBuilder.topP = 0.1f;
configBuilder.maxOutputTokens = 200;
configBuilder.stopSequences = Arrays.asList("red");

GenerationConfig generationConfig = configBuilder.build();

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel("gemini-1.5-flash", BuildConfig.apiKey, generationConfig);

GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Safety Settings

Python

model = genai.GenerativeModel("gemini-1.5-flash")
unsafe_prompt = "I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write an ironic phrase about them."
response = model.generate_content(
    unsafe_prompt,
    safety_settings={
        "HATE": "MEDIUM",
        "HARASSMENT": "BLOCK_ONLY_HIGH",
    },
)
# If you want to set all the safety_settings to the same value you can just pass that value:
response = model.generate_content(unsafe_prompt, safety_settings="MEDIUM")
try:
    print(response.text)
except ValueError:
    # response.text raises ValueError when the response was blocked or empty
    print("No information generated by the model.")

print(response.candidates[0].safety_ratings)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI, HarmCategory, HarmBlockThreshold } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
      threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
  ],
});

const unsafePrompt =
  "I support Martians Soccer Club and I think " +
  "Jupiterians Football Club sucks! Write an ironic phrase telling " +
  "them how I feel about them.";

const result = await model.generateContent(unsafePrompt);

try {
  result.response.text();
} catch (e) {
  console.error(e);
  console.log(result.response.candidates[0].safetyRatings);
}

Go

model := client.GenerativeModel("gemini-1.5-flash")
model.SafetySettings = []*genai.SafetySetting{
	{
		Category:  genai.HarmCategoryDangerousContent,
		Threshold: genai.HarmBlockLowAndAbove,
	},
	{
		Category:  genai.HarmCategoryHarassment,
		Threshold: genai.HarmBlockMediumAndAbove,
	},
}
resp, err := model.GenerateContent(ctx, genai.Text("I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write an ironic phrase about them."))
if err != nil {
	log.Fatal(err)
}
printResponse(resp)

Shell

echo '{
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}
    ],
    "contents": [{
        "parts":[{
            "text": "'I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them.'"}]}]}' > request.json

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d @request.json 2> /dev/null

Kotlin

val harassmentSafety = SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.ONLY_HIGH)

val hateSpeechSafety = SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE)

val generativeModel =
    GenerativeModel(
        // The Gemini 1.5 models are versatile and work with most use cases
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        safetySettings = listOf(harassmentSafety, hateSpeechSafety))

Swift

let safetySettings = [
  SafetySetting(harmCategory: .dangerousContent, threshold: .blockLowAndAbove),
  SafetySetting(harmCategory: .harassment, threshold: .blockMediumAndAbove),
  SafetySetting(harmCategory: .hateSpeech, threshold: .blockOnlyHigh),
]

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    safetySettings: safetySettings
  )

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'I support Martians Soccer Club and I think '
    'Jupiterians Football Club sucks! Write an ironic phrase telling '
    'them how I feel about them.';

final response = await model.generateContent(
  [Content.text(prompt)],
  safetySettings: [
    SafetySetting(HarmCategory.harassment, HarmBlockThreshold.medium),
    SafetySetting(HarmCategory.hateSpeech, HarmBlockThreshold.low),
  ],
);
try {
  print(response.text);
} catch (e) {
  print(e);
  for (final SafetyRating(:category, :probability)
      in response.candidates.first.safetyRatings!) {
    print('Safety Rating: $category - $probability');
  }
}

Java

SafetySetting harassmentSafety =
    new SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.ONLY_HIGH);

SafetySetting hateSpeechSafety =
    new SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE);

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        "gemini-1.5-flash",
        BuildConfig.apiKey,
        null, // generation config is optional
        Arrays.asList(harassmentSafety, hateSpeechSafety));

GenerativeModelFutures model = GenerativeModelFutures.from(gm);

System Instruction

Python

model = genai.GenerativeModel(
    "models/gemini-1.5-flash",
    system_instruction="You are a cat. Your name is Neko.",
)
response = model.generate_content("Good morning! How are you?")
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  systemInstruction: "You are a cat. Your name is Neko.",
});

const prompt = "Good morning! How are you?";

const result = await model.generateContent(prompt);
const response = result.response;
const text = response.text();
console.log(text);

Go

model := client.GenerativeModel("gemini-1.5-flash")
model.SystemInstruction = genai.NewUserContent(genai.Text("You are a cat. Your name is Neko."))
resp, err := model.GenerateContent(ctx, genai.Text("Good morning! How are you?"))
if err != nil {
	log.Fatal(err)
}
printResponse(resp)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{ "system_instruction": {
    "parts":
      { "text": "You are a cat. Your name is Neko."}},
    "contents": {
      "parts": {
        "text": "Hello there"}}}'

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        systemInstruction = content { text("You are a cat. Your name is Neko.") },
    )

Swift

let generativeModel =
  GenerativeModel(
    // Specify a model that supports system instructions, like a Gemini 1.5 model
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    systemInstruction: ModelContent(role: "system", parts: "You are a cat. Your name is Neko.")
  )

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
  systemInstruction: Content.system('You are a cat. Your name is Neko.'),
);
final prompt = 'Good morning! How are you?';

final response = await model.generateContent([Content.text(prompt)]);
print(response.text);

Java

GenerativeModel model =
    new GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        /* modelName */ "gemini-1.5-flash",
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig (optional) */ null,
        /* safetySettings (optional) */ null,
        /* requestOptions (optional) */ new RequestOptions(),
        /* tools (optional) */ null,
        /* toolsConfig (optional) */ null,
        /* systemInstruction (optional) */ new Content.Builder()
            .addText("You are a cat. Your name is Neko.")
            .build());

Response body

If successful, the response body contains an instance of GenerateContentResponse.

Method: tunedModels.get

Gets information about a specific TunedModel.

Endpoint

get https://generativelanguage.googleapis.com/v1beta/{name=tunedModels/*}

Path parameters

name string

Required. The resource name of the model.

Format: tunedModels/my-model-id. It takes the form tunedModels/{tunedmodel}.

Request body

The request body must be empty.

Example request

Python

model_info = genai.get_model("tunedModels/my-increment-model")
print(model_info)

Response body

If successful, the response body contains an instance of TunedModel.

Method: tunedModels.list

Lists created tuned models.

Endpoint

get https://generativelanguage.googleapis.com/v1beta/tunedModels

Query parameters

pageSize integer

Optional. The maximum number of TunedModels to return (per page). The service may return fewer tuned models.

If unspecified, at most 10 tuned models will be returned. This method returns at most 1000 models per page, even if you pass a larger pageSize.

pageToken string

Optional. A page token, received from a previous tunedModels.list call.

Provide the pageToken returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to tunedModels.list must match the call that provided the page token.

filter string

Optional. A filter is a full text search over the tuned model's description and display name. By default, results will not include tuned models shared with everyone.

Additional operators:

  • owner:me
  • writers:me
  • readers:me
  • readers:everyone

Examples:

  • "owner:me" returns all tuned models for which the caller has the owner role
  • "readers:me" returns all tuned models for which the caller has the reader role
  • "readers:everyone" returns all tuned models that are shared with everyone

Request body

The request body must be empty.

Example request

Python

for model_info in genai.list_tuned_models():
    print(model_info.name)

Response body

Response from tunedModels.list containing a paginated list of Models.

If successful, the response body contains data with the following structure:

Fields
tunedModels[] object (TunedModel)

The returned Models.

nextPageToken string

A token, which can be sent as pageToken to retrieve the next page.

If this field is omitted, there are no more pages.

JSON representation
{
  "tunedModels": [
    {
      object (TunedModel)
    }
  ],
  "nextPageToken": string
}

Method: tunedModels.patch

Updates a tuned model.

Endpoint

patch https://generativelanguage.googleapis.com/v1beta/{tunedModel.name=tunedModels/*}

Path parameters

tunedModel.name string

Output only. The tuned model name. A unique name will be generated on create. Example: tunedModels/az2mb0bpw6i. If displayName is set on create, the id portion of the name will be set by concatenating the words of the displayName with hyphens and adding a random portion for uniqueness.

Example:

  • displayName = Sentence Translator
  • name = tunedModels/sentence-translator-u3b7m

It takes the form tunedModels/{tunedmodel}.

Query parameters

updateMask string (FieldMask format)

Required. The list of fields to update.

This is a comma-separated list of fully qualified names of fields. Example: "user.displayName,photo".

Request body

The request body contains an instance of TunedModel.

Fields
displayName string

Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.

description string

Optional. A short description of this model.

tuningTask object (TuningTask)

Required. The tuning task that creates the tuned model.

readerProjectNumbers[] string (int64 format)

Optional. List of project numbers that have read access to the tuned model.

Union field source_model. The model used as the starting point for tuning. source_model can be only one of the following:
tunedModelSource object (TunedModelSource)

Optional. TunedModel to use as the starting point for training the new model.

temperature number

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.

If unset, this value defaults to the one used by the base model when the model was created.

topP number

Optional. For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least topP.

If unset, this value defaults to the one used by the base model when the model was created.

topK integer

Optional. For Top-k sampling.

Top-k sampling considers the set of topK most probable tokens. If unset, the backend uses a default value when calling the model.

If unset, this value defaults to the one used by the base model when the model was created.
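
For illustration, a minimal curl sketch (not part of the original reference) that updates only the displayName. The model id and new name are placeholders, updateMask restricts the patch to that single field, and API-key auth is assumed as in the other Shell examples; OAuth may be required in your setup.

curl -X PATCH "https://generativelanguage.googleapis.com/v1beta/tunedModels/my-model-id?updateMask=displayName&key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -d '{"displayName": "My renamed model"}'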

Response body

If successful, the response body contains an instance of TunedModel.

Method: tunedModels.delete

Deletes a tuned model.

Endpoint

delete https://generativelanguage.googleapis.com/v1beta/{name=tunedModels/*}

Path parameters

name string

Required. The resource name of the model. Format: tunedModels/my-model-id. It takes the form tunedModels/{tunedmodel}.

Request body

The request body must be empty.
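
For illustration, a minimal curl sketch (not part of the original reference). The model id is a placeholder and API-key auth is assumed as in the other Shell examples; OAuth may be required in your setup. Deletion is permanent.

curl -X DELETE "https://generativelanguage.googleapis.com/v1beta/tunedModels/my-model-id?key=$GOOGLE_API_KEY"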

Response body

If successful, the response body is empty.

REST Resource: tunedModels

Resource: TunedModel

A fine-tuned model created using ModelService.CreateTunedModel.

Fields
name string

Output only. The tuned model name. A unique name will be generated on create. Example: tunedModels/az2mb0bpw6i. If displayName is set on create, the id portion of the name will be set by concatenating the words of the displayName with hyphens and adding a random portion for uniqueness.

Example:

  • displayName = Sentence Translator
  • name = tunedModels/sentence-translator-u3b7m
displayName string

Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.

description string

Optional. A short description of this model.

state enum (State)

Output only. The state of the tuned model.

createTime string (Timestamp format)

Output only. The timestamp when this model was created.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

updateTime string (Timestamp format)

Output only. The timestamp when this model was updated.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

tuningTask object (TuningTask)

Required. The tuning task that creates the tuned model.

readerProjectNumbers[] string (int64 format)

Optional. List of project numbers that have read access to the tuned model.

Union field source_model. The model used as the starting point for tuning. source_model can be only one of the following:
tunedModelSource object (TunedModelSource)

Optional. TunedModel to use as the starting point for training the new model.

baseModel string

Immutable. The name of the Model to tune. Example: models/gemini-1.5-flash-001

temperature number

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.

If unset, this value defaults to the one used by the base model when the model was created.

topP number

Optional. For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least topP.

If unset, this value defaults to the one used by the base model when the model was created.

topK integer

Optional. For Top-k sampling.

Top-k sampling considers the set of topK most probable tokens. If unset, the backend uses a default value when calling the model.

If unset, this value defaults to the one used by the base model when the model was created.

JSON representation
{
  "name": string,
  "displayName": string,
  "description": string,
  "state": enum (State),
  "createTime": string,
  "updateTime": string,
  "tuningTask": {
    object (TuningTask)
  },
  "readerProjectNumbers": [
    string
  ],

  // Union field source_model can be only one of the following:
  "tunedModelSource": {
    object (TunedModelSource)
  },
  "baseModel": string
  // End of list of possible types for union field source_model.
  "temperature": number,
  "topP": number,
  "topK": integer
}

TunedModelSource

Tuned model as a source for training a new model.

Fields
tunedModel string

Immutable. The name of the TunedModel to use as the starting point for training the new model. Example: tunedModels/my-tuned-model

baseModel string

Output only. The name of the base Model this TunedModel was tuned from. Example: models/gemini-1.5-flash-001

JSON representation
{
  "tunedModel": string,
  "baseModel": string
}

State

The state of the tuned model.

Enums
STATE_UNSPECIFIED The default value. This value is unused.
CREATING The model is being created.
ACTIVE The model is ready to be used.
FAILED The model failed to be created.

TuningTask

Tuning tasks that create tuned models.

Fields
startTime string (Timestamp format)

Output only. The timestamp when tuning this model started.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

completeTime string (Timestamp format)

Output only. The timestamp when tuning this model completed.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

snapshots[] object (TuningSnapshot)

Output only. Metrics collected during tuning.

trainingData object (Dataset)

Required. Input only. Immutable. The model training data.

hyperparameters object (Hyperparameters)

Immutable. Hyperparameters controlling the tuning process. If not provided, default values will be used.

JSON representation
{
  "startTime": string,
  "completeTime": string,
  "snapshots": [
    {
      object (TuningSnapshot)
    }
  ],
  "trainingData": {
    object (Dataset)
  },
  "hyperparameters": {
    object (Hyperparameters)
  }
}

TuningSnapshot

Record for a single tuning step.

Fields
step integer

Output only. The tuning step.

epoch integer

Output only. The epoch this step was part of.

meanLoss number

Output only. The mean loss of the training examples for this step.

computeTime string (Timestamp format)

Output only. The timestamp when this metric was computed.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

JSON representation
{
  "step": integer,
  "epoch": integer,
  "meanLoss": number,
  "computeTime": string
}

Dataset

Dataset for training or validation.

Fields
Union field dataset. Inline data or a reference to the data. dataset can be only one of the following:
examples object (TuningExamples)

Optional. Inline examples.

JSON representation
{

  // Union field dataset can be only one of the following:
  "examples": {
    object (TuningExamples)
  }
  // End of list of possible types for union field dataset.
}

TuningExamples

A set of tuning examples. Can be training or validation data.

Fields
examples[] object (TuningExample)

Required. The examples. Example input can be for a text or a chat (discuss) model, but all examples in a set must be of the same type.

JSON representation
{
  "examples": [
    {
      object (TuningExample)
    }
  ]
}

TuningExample

A single example for tuning.

Fields
output string

Required. The expected model output.

Union field model_input. The input to the model for this example. model_input can be only one of the following:
textInput string

Optional. Text model input.

JSON representation
{
  "output": string,

  // Union field model_input can be only one of the following:
  "textInput": string
  // End of list of possible types for union field model_input.
}
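
Putting Dataset, TuningExamples, and TuningExample together, an inline training dataset might look like the following (the input/output pairs are purely illustrative):

{
  "examples": {
    "examples": [
      { "textInput": "hello", "output": "bonjour" },
      { "textInput": "goodbye", "output": "au revoir" }
    ]
  }
}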

Hyperparameters

Hyperparameters controlling the tuning process. Read more at https://ai.google.dev/docs/model_tuning_guidance

Fields
Union field learning_rate_option. Options for specifying learning rate during tuning. learning_rate_option can be only one of the following:
learningRate number

Optional. Immutable. The learning rate hyperparameter for tuning. If not set, a default of 0.001 or 0.0002 will be calculated based on the number of training examples.

learningRateMultiplier number

Optional. Immutable. The learning rate multiplier is used to calculate a final learningRate based on the default (recommended) value: actual learning rate := learningRateMultiplier * default learning rate. The default learning rate depends on the base model and dataset size; for example, with a default learning rate of 0.0002 and a multiplier of 2.0, the actual learning rate would be 0.0004. If not set, a default of 1.0 will be used.

epochCount integer

Immutable. The number of training epochs. An epoch is one pass through the training data. If not set, a default of 5 will be used.

batchSize integer

Immutable. The batch size hyperparameter for tuning. If not set, a default of 4 or 16 will be used based on the number of training examples.

JSON representation
{

  // Union field learning_rate_option can be only one of the following:
  "learningRate": number,
  "learningRateMultiplier": number
  // End of list of possible types for union field learning_rate_option.
  "epochCount": integer,
  "batchSize": integer
}
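
As a sketch of how these schemas nest inside a tunedModels.create request body (illustrative values only; the base model follows the baseModel example above, and API-key auth is assumed as in the other Shell examples, though OAuth may be required in your setup):

curl -X POST "https://generativelanguage.googleapis.com/v1beta/tunedModels?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -d '{
        "displayName": "english-to-french",
        "baseModel": "models/gemini-1.5-flash-001",
        "tuningTask": {
          "hyperparameters": {
            "epochCount": 5,
            "batchSize": 4,
            "learningRateMultiplier": 1.0
          },
          "trainingData": {
            "examples": {
              "examples": [
                { "textInput": "hello", "output": "bonjour" },
                { "textInput": "goodbye", "output": "au revoir" }
              ]
            }
          }
        }
      }'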