Generating content

The Gemini API supports content generation with images, audio, code, tools, and more. For details on each of these features, read the task-focused sample code or check out the comprehensive guides.

Method: models.generateContent

Generates a model response given an input GenerateContentRequest. Refer to the text generation guide for detailed usage information. Input capabilities differ between models, including tuned models. Refer to the model guide and the tuning guide for details.

Endpoint

POST https://generativelanguage.googleapis.com/v1beta/{model=models/*}:generateContent

Path parameters

model string

Required. The name of the Model to use for generating the completion.

Format: name=models/{model}

Request body

The request body contains data with the following structure:

Fields
contents[] object (Content)

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries such as chat, this is a repeated field that contains the conversation history and the latest request.
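
For illustration, a minimal Python sketch of passing a multi-turn contents list directly (assuming the google.generativeai SDK configured as in the examples below; the conversation text is illustrative):

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    [
        # Conversation history, followed by the latest request
        {"role": "user", "parts": ["Hello"]},
        {"role": "model", "parts": ["Great to meet you. What would you like to know?"]},
        {"role": "user", "parts": ["Summarize our conversation so far."]},
    ]
)
print(response.text)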

tools[] object (Tool)

Optional. A list of Tools the Model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the Model. The supported Tools are Function and codeExecution. Refer to the function calling and code execution guides for more details.

toolConfig object (ToolConfig)

Optional. Tool configuration for any Tool specified in the request. Refer to the function calling guide for a usage example.
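
As a rough sketch of how tools and toolConfig map onto the Python SDK arguments (hedged: the add function is a stand-in, and the exact tool_config dictionary shape and mode value should be checked against your SDK version; full function calling examples appear later on this page):

def add(a: float, b: float):
    """returns a + b."""
    return a + b

model = genai.GenerativeModel("gemini-1.5-flash", tools=[add])
response = model.generate_content(
    "What can you do?",
    # tool_config controls how the model may use the declared tools;
    # mode "NONE" asks the model not to call any function in this request.
    tool_config={"function_calling_config": {"mode": "NONE"}},
)
print(response.text)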

safetySettings[] object (SafetySetting)

Optional. A list of unique SafetySetting instances for blocking unsafe content.

These will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, and HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the safety guidance to learn how to incorporate safety considerations in your AI applications.

systemInstruction object (Content)

Optional. Developer-set system instruction. Currently, text only.

generationConfig object (GenerationConfig)

Optional. Configuration options for model generation and outputs.

cachedContent string

Optional. The name of cached content to use as context for serving the prediction. Format: cachedContents/{cachedContent}
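
A hedged Python sketch of reusing previously cached content by its cachedContents/{cachedContent} name (the cache id below is a placeholder, and CachedContent.get is assumed from the google.generativeai caching module; see the Cache example later on this page for creating a cache):

cache = genai.caching.CachedContent.get("cachedContents/your-cache-id")  # placeholder name
model = genai.GenerativeModel.from_cached_content(cache)
response = model.generate_content("Please summarize this transcript")
print(response.text)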

Example request

Text

Python

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Write a story about a magic backpack.")
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const prompt = "Write a story about a magic backpack.";

const result = await model.generateContent(prompt);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-flash")
resp, err := model.GenerateContent(ctx, genai.Text("Write a story about a magic backpack."))
if err != nil {
	log.Fatal(err)
}

printResponse(resp)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[{"text": "Write a story about a magic backpack."}]
        }]
       }' 2> /dev/null

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val prompt = "Write a story about a magic backpack."
val response = generativeModel.generateContent(prompt)
print(response.text)

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

let prompt = "Write a story about a magic backpack."
let response = try await generativeModel.generateContent(prompt)
if let text = response.text {
  print(text)
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'Write a story about a magic backpack.';

final response = await model.generateContent([Content.text(prompt)]);
print(response.text);

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content =
    new Content.Builder().addText("Write a story about a magic backpack.").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);

Image

Python

import PIL.Image

model = genai.GenerativeModel("gemini-1.5-flash")
organ = PIL.Image.open(media / "organ.jpg")
response = model.generate_content(["Tell me about this instrument", organ])
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
// import fs from "fs";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

function fileToGenerativePart(path, mimeType) {
  return {
    inlineData: {
      data: Buffer.from(fs.readFileSync(path)).toString("base64"),
      mimeType,
    },
  };
}

const prompt = "Describe how this product might be manufactured.";
// Note: The only accepted mime types are some image types, image/*.
const imagePart = fileToGenerativePart(
  `${mediaPath}/jetpack.jpg`,
  "image/jpeg",
);

const result = await model.generateContent([prompt, imagePart]);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-flash")

imgData, err := os.ReadFile(filepath.Join(testDataDir, "organ.jpg"))
if err != nil {
	log.Fatal(err)
}

resp, err := model.GenerateContent(ctx,
	genai.Text("Tell me about this instrument"),
	genai.ImageData("jpeg", imgData))
if err != nil {
	log.Fatal(err)
}

printResponse(resp)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
            {"text": "Tell me about this instrument"},
            {
              "inline_data": {
                "mime_type":"image/jpeg",
                "data": "'$(base64 $B64FLAGS $IMG_PATH)'"
              }
            }
        ]
        }]
       }' 2> /dev/null

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val image: Bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.image)
val inputContent = content {
  image(image)
  text("What's in this picture?")
}

val response = generativeModel.generateContent(inputContent)
print(response.text)

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

guard let image = UIImage(systemName: "cloud.sun") else { fatalError() }

let prompt = "What's in this picture?"

let response = try await generativeModel.generateContent(image, prompt)
if let text = response.text {
  print(text)
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);

Future<DataPart> fileToPart(String mimeType, String path) async {
  return DataPart(mimeType, await File(path).readAsBytes());
}

final prompt = 'Describe how this product might be manufactured.';
final image = await fileToPart('image/jpeg', 'resources/jetpack.jpg');

final response = await model.generateContent([
  Content.multi([TextPart(prompt), image])
]);
print(response.text);

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Bitmap image = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);

Content content =
    new Content.Builder()
        .addText("What's different between these pictures?")
        .addImage(image)
        .build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);

Audio

Python

model = genai.GenerativeModel("gemini-1.5-flash")
sample_audio = genai.upload_file(media / "sample.mp3")
response = model.generate_content(["Give me a summary of this audio file.", sample_audio])
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
// import fs from "fs";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

function fileToGenerativePart(path, mimeType) {
  return {
    inlineData: {
      data: Buffer.from(fs.readFileSync(path)).toString("base64"),
      mimeType,
    },
  };
}

const prompt = "Give me a summary of this audio file.";
// Note: Provide an audio MIME type that matches the file, such as audio/mp3.
const audioPart = fileToGenerativePart(
  `${mediaPath}/samplesmall.mp3`,
  "audio/mp3",
);

const result = await model.generateContent([prompt, audioPart]);
console.log(result.response.text());

Shell

# Use the File API to upload audio data for use in an API request.
MIME_TYPE=$(file -b --mime-type "${AUDIO_PATH}")
NUM_BYTES=$(wc -c < "${AUDIO_PATH}")
DISPLAY_NAME=AUDIO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${AUDIO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "audio/mpeg", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

Video

Python

import time

# Video clip (CC BY 3.0) from https://peach.blender.org/download/
myfile = genai.upload_file(media / "Big_Buck_Bunny.mp4")
print(f"{myfile=}")

# Videos need to be processed before you can use them.
while myfile.state.name == "PROCESSING":
    print("processing video...")
    time.sleep(5)
    myfile = genai.get_file(myfile.name)

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content([myfile, "Describe this video clip"])
print(f"{response.text=}")

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
// import { GoogleAIFileManager, FileState } from "@google/generative-ai/server";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const fileManager = new GoogleAIFileManager(process.env.API_KEY);

const uploadResult = await fileManager.uploadFile(
  `${mediaPath}/Big_Buck_Bunny.mp4`,
  { mimeType: "video/mp4" },
);

let file = await fileManager.getFile(uploadResult.file.name);
while (file.state === FileState.PROCESSING) {
  process.stdout.write(".");
  // Sleep for 10 seconds
  await new Promise((resolve) => setTimeout(resolve, 10_000));
  // Fetch the file from the API again
  file = await fileManager.getFile(uploadResult.file.name);
}

if (file.state === FileState.FAILED) {
  throw new Error("Video processing failed.");
}

const prompt = "Describe this video clip";
const videoPart = {
  fileData: {
    fileUri: uploadResult.file.uri,
    mimeType: uploadResult.file.mimeType,
  },
};

const result = await model.generateContent([prompt, videoPart]);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-flash")

file, err := client.UploadFileFromPath(ctx, filepath.Join(testDataDir, "earth.mp4"), nil)
if err != nil {
	log.Fatal(err)
}
defer client.DeleteFile(ctx, file.Name)

// Videos need to be processed before you can use them.
for file.State == genai.FileStateProcessing {
	log.Printf("processing %s", file.Name)
	time.Sleep(5 * time.Second)
	var err error
	if file, err = client.GetFile(ctx, file.Name); err != nil {
		log.Fatal(err)
	}
}
if file.State != genai.FileStateActive {
	log.Fatalf("uploaded file has state %s, not active", file.State)
}

resp, err := model.GenerateContent(ctx,
	genai.Text("Describe this video clip"),
	genai.FileData{URI: file.URI})
if err != nil {
	log.Fatal(err)
}

printResponse(resp)

Shell

# Use the File API to upload video data for use in an API request.
MIME_TYPE=$(file -b --mime-type "${VIDEO_PATH}")
NUM_BYTES=$(wc -c < "${VIDEO_PATH}")
DISPLAY_NAME=VIDEO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${VIDEO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

state=$(jq ".file.state" file_info.json)
echo state=$state

name=$(jq -r ".file.name" file_info.json)
echo name=$name

while [[ "$state" == *"PROCESSING"* ]];
do
  echo "Processing video..."
  sleep 5
  # Get the file of interest to check its state
  curl "https://generativelanguage.googleapis.com/v1beta/${name}?key=${GOOGLE_API_KEY}" > file_info.json 2> /dev/null
  state=$(jq ".state" file_info.json)
done

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "video/mp4", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

PDF

Python

model = genai.GenerativeModel("gemini-1.5-flash")
sample_pdf = genai.upload_file(media / "test.pdf")
response = model.generate_content(["Give me a summary of this document:", sample_pdf])
print(f"{response.text=}")

Shell

MIME_TYPE=$(file -b --mime-type "${PDF_PATH}")
NUM_BYTES=$(wc -c < "${PDF_PATH}")
DISPLAY_NAME=TEXT


echo $MIME_TYPE
tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${PDF_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

# Now generate content using that file
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Can you add a few more lines to this poem?"},
          {"file_data":{"mime_type": "application/pdf", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

Chat

Python

model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat(
    history=[
        {"role": "user", "parts": "Hello"},
        {"role": "model", "parts": "Great to meet you. What would you like to know?"},
    ]
)
response = chat.send_message("I have 2 dogs in my house.")
print(response.text)
response = chat.send_message("How many paws are in my house?")
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const chat = model.startChat({
  history: [
    {
      role: "user",
      parts: [{ text: "Hello" }],
    },
    {
      role: "model",
      parts: [{ text: "Great to meet you. What would you like to know?" }],
    },
  ],
});
let result = await chat.sendMessage("I have 2 dogs in my house.");
console.log(result.response.text());
result = await chat.sendMessage("How many paws are in my house?");
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-flash")
cs := model.StartChat()

cs.History = []*genai.Content{
	{
		Parts: []genai.Part{
			genai.Text("Hello, I have 2 dogs in my house."),
		},
		Role: "user",
	},
	{
		Parts: []genai.Part{
			genai.Text("Great to meet you. What would you like to know?"),
		},
		Role: "model",
	},
}

res, err := cs.SendMessage(ctx, genai.Text("How many paws are in my house?"))
if err != nil {
	log.Fatal(err)
}
printResponse(res)

Shell

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Hello"}]},
        {"role": "model",
         "parts":[{
           "text": "Great to meet you. What would you like to know?"}]},
        {"role":"user",
         "parts":[{
           "text": "I have two dogs in my house. How many paws are in my house?"}]},
      ]
    }' 2> /dev/null | grep "text"

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val chat =
    generativeModel.startChat(
        history =
            listOf(
                content(role = "user") { text("Hello, I have 2 dogs in my house.") },
                content(role = "model") {
                  text("Great to meet you. What would you like to know?")
                }))

val response = chat.sendMessage("How many paws are in my house?")
print(response.text)

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = generativeModel.startChat(history: history)

// To generate text output, call sendMessage and pass in the message
let response = try await chat.sendMessage("How many paws are in my house?")
if let text = response.text {
  print(text)
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final chat = model.startChat(history: [
  Content.text('hello'),
  Content.model([TextPart('Great to meet you. What would you like to know?')])
]);
var response =
    await chat.sendMessage(Content.text('I have 2 dogs in my house.'));
print(response.text);
response =
    await chat.sendMessage(Content.text('How many paws are in my house?'));
print(response.text);

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder userMessageBuilder = new Content.Builder();
userMessageBuilder.setRole("user");
userMessageBuilder.addText("How many paws are in my house?");
Content userMessage = userMessageBuilder.build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(userMessage);

Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);

Cache

Python

document = genai.upload_file(path=media / "a11.txt")
model_name = "gemini-1.5-flash-001"
cache = genai.caching.CachedContent.create(
    model=model_name,
    system_instruction="You are an expert analyzing transcripts.",
    contents=[document],
)
print(cache)

model = genai.GenerativeModel.from_cached_content(cache)
response = model.generate_content("Please summarize this transcript")
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleAICacheManager, GoogleAIFileManager } from "@google/generative-ai/server";
// import { GoogleGenerativeAI } from "@google/generative-ai";
const cacheManager = new GoogleAICacheManager(process.env.API_KEY);
const fileManager = new GoogleAIFileManager(process.env.API_KEY);

const uploadResult = await fileManager.uploadFile(`${mediaPath}/a11.txt`, {
  mimeType: "text/plain",
});

const cacheResult = await cacheManager.create({
  model: "models/gemini-1.5-flash-001",
  contents: [
    {
      role: "user",
      parts: [
        {
          fileData: {
            fileUri: uploadResult.file.uri,
            mimeType: uploadResult.file.mimeType,
          },
        },
      ],
    },
  ],
});

console.log(cacheResult);

const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModelFromCachedContent(cacheResult);
const result = await model.generateContent(
  "Please summarize this transcript.",
);
console.log(result.response.text());

Tuned model

Python

model = genai.GenerativeModel(model_name="tunedModels/my-increment-model")
result = model.generate_content("III")
print(result.text)  # "IV"

JSON mode

Python

import typing_extensions as typing

class Recipe(typing.TypedDict):
    recipe_name: str
    ingredients: list[str]

model = genai.GenerativeModel("gemini-1.5-pro-latest")
result = model.generate_content(
    "List a few popular cookie recipes.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json", response_schema=list[Recipe]
    ),
)
print(result)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI, SchemaType } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

const schema = {
  description: "List of recipes",
  type: SchemaType.ARRAY,
  items: {
    type: SchemaType.OBJECT,
    properties: {
      recipeName: {
        type: SchemaType.STRING,
        description: "Name of the recipe",
        nullable: false,
      },
    },
    required: ["recipeName"],
  },
};

const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  generationConfig: {
    responseMimeType: "application/json",
    responseSchema: schema,
  },
});

const result = await model.generateContent(
  "List a few popular cookie recipes.",
);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-pro-latest")
// Ask the model to respond with JSON.
model.ResponseMIMEType = "application/json"
// Specify the schema.
model.ResponseSchema = &genai.Schema{
	Type:  genai.TypeArray,
	Items: &genai.Schema{Type: genai.TypeString},
}
resp, err := model.GenerateContent(ctx, genai.Text("List a few popular cookie recipes using this JSON schema."))
if err != nil {
	log.Fatal(err)
}
for _, part := range resp.Candidates[0].Content.Parts {
	if txt, ok := part.(genai.Text); ok {
		var recipes []string
		if err := json.Unmarshal([]byte(txt), &recipes); err != nil {
			log.Fatal(err)
		}
		fmt.Println(recipes)
	}
}

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
    "contents": [{
      "parts":[
        {"text": "List 5 popular cookie recipes"}
        ]
    }],
    "generationConfig": {
        "response_mime_type": "application/json",
        "response_schema": {
          "type": "ARRAY",
          "items": {
            "type": "OBJECT",
            "properties": {
              "recipe_name": {"type":"STRING"},
            }
          }
        }
    }
}' 2> /dev/null | head

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-pro",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey,
        generationConfig = generationConfig {
            responseMimeType = "application/json"
            responseSchema = Schema(
                name = "recipes",
                description = "List of recipes",
                type = FunctionType.ARRAY,
                items = Schema(
                    name = "recipe",
                    description = "A recipe",
                    type = FunctionType.OBJECT,
                    properties = mapOf(
                        "recipeName" to Schema(
                            name = "recipeName",
                            description = "Name of the recipe",
                            type = FunctionType.STRING,
                            nullable = false
                        ),
                    ),
                    required = listOf("recipeName")
                ),
            )
        })

val prompt = "List a few popular cookie recipes."
val response = generativeModel.generateContent(prompt)
print(response.text)

Swift

let jsonSchema = Schema(
  type: .array,
  description: "List of recipes",
  items: Schema(
    type: .object,
    properties: [
      "recipeName": Schema(type: .string, description: "Name of the recipe", nullable: false),
    ],
    requiredProperties: ["recipeName"]
  )
)

let generativeModel = GenerativeModel(
  // Specify a model that supports controlled generation like Gemini 1.5 Pro
  name: "gemini-1.5-pro",
  // Access your API key from your on-demand resource .plist file (see "Set up your API key"
  // above)
  apiKey: APIKey.default,
  generationConfig: GenerationConfig(
    responseMIMEType: "application/json",
    responseSchema: jsonSchema
  )
)

let prompt = "List a few popular cookie recipes."
let response = try await generativeModel.generateContent(prompt)
if let text = response.text {
  print(text)
}

Dart

final schema = Schema.array(
    description: 'List of recipes',
    items: Schema.object(properties: {
      'recipeName':
          Schema.string(description: 'Name of the recipe.', nullable: false)
    }, requiredProperties: [
      'recipeName'
    ]));

final model = GenerativeModel(
    model: 'gemini-1.5-pro',
    apiKey: apiKey,
    generationConfig: GenerationConfig(
        responseMimeType: 'application/json', responseSchema: schema));

final prompt = 'List a few popular cookie recipes.';
final response = await model.generateContent([Content.text(prompt)]);
print(response.text);

Java

Schema<List<String>> schema =
    new Schema(
        /* name */ "recipes",
        /* description */ "List of recipes",
        /* format */ null,
        /* nullable */ false,
        /* list */ null,
        /* properties */ null,
        /* required */ null,
        /* items */ new Schema(
            /* name */ "recipe",
            /* description */ "A recipe",
            /* format */ null,
            /* nullable */ false,
            /* list */ null,
            /* properties */ Map.of(
                "recipeName",
                new Schema(
                    /* name */ "recipeName",
                    /* description */ "Name of the recipe",
                    /* format */ null,
                    /* nullable */ false,
                    /* list */ null,
                    /* properties */ null,
                    /* required */ null,
                    /* items */ null,
                    /* type */ FunctionType.STRING)),
            /* required */ null,
            /* items */ null,
            /* type */ FunctionType.OBJECT),
        /* type */ FunctionType.ARRAY);

GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.responseMimeType = "application/json";
configBuilder.responseSchema = schema;

GenerationConfig generationConfig = configBuilder.build();

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-pro",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig */ generationConfig);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content = new Content.Builder().addText("List a few popular cookie recipes.").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    },
    executor);

Code execution

Python

model = genai.GenerativeModel(model_name="gemini-1.5-flash", tools="code_execution")
response = model.generate_content(
    (
        "What is the sum of the first 50 prime numbers? "
        "Generate and run code for the calculation, and make sure you get all 50."
    )
)

# Each `part` either contains `text`, `executable_code` or an `execution_result`
for part in response.candidates[0].content.parts:
    print(part, "\n")

print("-" * 80)
# The `.text` accessor joins the parts into a markdown compatible text representation.
print("\n\n", response.text)

Kotlin


val model = GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    modelName = "gemini-1.5-pro",
    // Access your API key as a Build Configuration variable (see "Set up your API key" above)
    apiKey = BuildConfig.apiKey,
    tools = listOf(Tool.CODE_EXECUTION)
)

val response = model.generateContent("What is the sum of the first 50 prime numbers?")

// Each `part` either contains `text`, `executable_code` or an `execution_result`
println(response.candidates[0].content.parts.joinToString("\n"))

// Alternatively, you can use the `text` accessor which joins the parts into a markdown compatible
// text representation
println(response.text)

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
        new GenerativeModel(
                /* modelName */ "gemini-1.5-pro",
                // Access your API key as a Build Configuration variable (see "Set up your API key"
                // above)
                /* apiKey */ BuildConfig.apiKey,
                /* generationConfig */ null,
                /* safetySettings */ null,
                /* requestOptions */ new RequestOptions(),
                /* tools */ Collections.singletonList(Tool.CODE_EXECUTION));
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content inputContent =
        new Content.Builder().addText("What is the sum of the first 50 prime numbers?").build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

ListenableFuture<GenerateContentResponse> response = model.generateContent(inputContent);
Futures.addCallback(
        response,
        new FutureCallback<GenerateContentResponse>() {
            @Override
            public void onSuccess(GenerateContentResponse result) {
                // Each `part` either contains `text`, `executable_code` or an
                // `execution_result`
                Candidate candidate = result.getCandidates().get(0);
                for (Part part : candidate.getContent().getParts()) {
                    System.out.println(part);
                }

                // Alternatively, you can use the `text` accessor which joins the parts into a
                // markdown compatible text representation
                String resultText = result.getText();
                System.out.println(resultText);
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        },
        executor);

Function calling

Python

def add(a: float, b: float):
    """returns a + b."""
    return a + b

def subtract(a: float, b: float):
    """returns a - b."""
    return a - b

def multiply(a: float, b: float):
    """returns a * b."""
    return a * b

def divide(a: float, b: float):
    """returns a / b."""
    return a / b

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash", tools=[add, subtract, multiply, divide]
)
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message(
    "I have 57 cats, each owns 44 mittens, how many mittens is that in total?"
)
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
async function setLightValues(brightness, colorTemperature) {
  // This mock API returns the requested lighting values
  return {
    brightness,
    colorTemperature,
  };
}

const controlLightFunctionDeclaration = {
  name: "controlLight",
  parameters: {
    type: "OBJECT",
    description: "Set the brightness and color temperature of a room light.",
    properties: {
      brightness: {
        type: "NUMBER",
        description:
          "Light level from 0 to 100. Zero is off and 100 is full brightness.",
      },
      colorTemperature: {
        type: "STRING",
        description:
          "Color temperature of the light fixture which can be `daylight`, `cool` or `warm`.",
      },
    },
    required: ["brightness", "colorTemperature"],
  },
};

// Executable function code. Put it in a map keyed by the function name
// so that you can call it once you get the name string from the model.
const functions = {
  controlLight: ({ brightness, colorTemperature }) => {
    return setLightValues(brightness, colorTemperature);
  },
};

const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  tools: { functionDeclarations: [controlLightFunctionDeclaration] },
});
const chat = model.startChat();
const prompt = "Dim the lights so the room feels cozy and warm.";

// Send the message to the model.
const result = await chat.sendMessage(prompt);

// For simplicity, this uses the first function call found.
const call = result.response.functionCalls()[0];

if (call) {
  // Call the executable function named in the function call
  // with the arguments specified in the function call and
  // let it call the hypothetical API.
  const apiResponse = await functions[call.name](call.args);

  // Send the API response back to the model so it can generate
  // a text response that can be displayed to the user.
  const result2 = await chat.sendMessage([
    {
      functionResponse: {
        name: "controlLight",
        response: apiResponse,
      },
    },
  ]);

  // Log the text response.
  console.log(result2.response.text());
}

Shell


cat > tools.json << EOF
{
  "function_declarations": [
    {
      "name": "enable_lights",
      "description": "Turn on the lighting system.",
      "parameters": { "type": "object" }
    },
    {
      "name": "set_light_color",
      "description": "Set the light color. Lights must be enabled for this to work.",
      "parameters": {
        "type": "object",
        "properties": {
          "rgb_hex": {
            "type": "string",
            "description": "The light color as a 6-digit hex string, e.g. ff0000 for red."
          }
        },
        "required": [
          "rgb_hex"
        ]
      }
    },
    {
      "name": "stop_lights",
      "description": "Turn off the lighting system.",
      "parameters": { "type": "object" }
    }
  ]
} 
EOF

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent?key=$GOOGLE_API_KEY" \
  -H 'Content-Type: application/json' \
  -d @<(echo '
  {
    "system_instruction": {
      "parts": {
        "text": "You are a helpful lighting system bot. You can turn lights on and off, and you can set the color. Do not perform any other tasks."
      }
    },
    "tools": ['$(source "$tools")'],

    "tool_config": {
      "function_calling_config": {"mode": "none"}
    },

    "contents": {
      "role": "user",
      "parts": {
        "text": "What can you do?"
      }
    }
  }
') 2>/dev/null |sed -n '/"content"/,/"finishReason"/p'

Kotlin

fun multiply(a: Double, b: Double) = a * b

val multiplyDefinition = defineFunction(
    name = "multiply",
    description = "returns the product of the provided numbers.",
    parameters = listOf(
    Schema.double("a", "First number"),
    Schema.double("b", "Second number")
    )
)

val usableFunctions = listOf(multiplyDefinition)

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey,
        // List the functions definitions you want to make available to the model
        tools = listOf(Tool(usableFunctions))
    )

val chat = generativeModel.startChat()
val prompt = "I have 57 cats, each owns 44 mittens, how many mittens is that in total?"

// Send the message to the generative model
var response = chat.sendMessage(prompt)

// Check if the model responded with a function call
response.functionCalls.first { it.name == "multiply" }.apply {
    val a: String by args
    val b: String by args

    val result = JSONObject(mapOf("result" to multiply(a.toDouble(), b.toDouble())))
    response = chat.sendMessage(
        content(role = "function") {
            part(FunctionResponsePart("multiply", result))
        }
    )
}

// Whenever the model responds with text, show it in the UI
response.text?.let { modelResponse ->
    println(modelResponse)
}

Swift

// Calls a hypothetical API to control a light bulb and returns the values that were set.
func controlLight(brightness: Double, colorTemperature: String) -> JSONObject {
  return ["brightness": .number(brightness), "colorTemperature": .string(colorTemperature)]
}

let generativeModel =
  GenerativeModel(
    // Use a model that supports function calling, like a Gemini 1.5 model
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    tools: [Tool(functionDeclarations: [
      FunctionDeclaration(
        name: "controlLight",
        description: "Set the brightness and color temperature of a room light.",
        parameters: [
          "brightness": Schema(
            type: .number,
            format: "double",
            description: "Light level from 0 to 100. Zero is off and 100 is full brightness."
          ),
          "colorTemperature": Schema(
            type: .string,
            format: "enum",
            description: "Color temperature of the light fixture.",
            enumValues: ["daylight", "cool", "warm"]
          ),
        ],
        requiredParameters: ["brightness", "colorTemperature"]
      ),
    ])]
  )

let chat = generativeModel.startChat()

let prompt = "Dim the lights so the room feels cozy and warm."

// Send the message to the model.
let response1 = try await chat.sendMessage(prompt)

// Check if the model responded with a function call.
// For simplicity, this sample uses the first function call found.
guard let functionCall = response1.functionCalls.first else {
  fatalError("Model did not respond with a function call.")
}
// Print an error if the returned function was not declared
guard functionCall.name == "controlLight" else {
  fatalError("Unexpected function called: \(functionCall.name)")
}
// Verify that the names and types of the parameters match the declaration
guard case let .number(brightness) = functionCall.args["brightness"] else {
  fatalError("Missing argument: brightness")
}
guard case let .string(colorTemperature) = functionCall.args["colorTemperature"] else {
  fatalError("Missing argument: colorTemperature")
}

// Call the executable function named in the FunctionCall with the arguments specified in the
// FunctionCall and let it call the hypothetical API.
let apiResponse = controlLight(brightness: brightness, colorTemperature: colorTemperature)

// Send the API response back to the model so it can generate a text response that can be
// displayed to the user.
let response2 = try await chat.sendMessage([ModelContent(
  role: "function",
  parts: [.functionResponse(FunctionResponse(name: "controlLight", response: apiResponse))]
)])

if let text = response2.text {
  print(text)
}

Dart

Map<String, Object?> setLightValues(Map<String, Object?> args) {
  return args;
}

final controlLightFunction = FunctionDeclaration(
    'controlLight',
    'Set the brightness and color temperature of a room light.',
    Schema.object(properties: {
      'brightness': Schema.number(
          description:
              'Light level from 0 to 100. Zero is off and 100 is full brightness.',
          nullable: false),
      'colorTemperature': Schema.string(
          description:
              'Color temperature of the light fixture which can be `daylight`, `cool`, or `warm`',
          nullable: false),
    }));

final functions = {controlLightFunction.name: setLightValues};
FunctionResponse dispatchFunctionCall(FunctionCall call) {
  final function = functions[call.name]!;
  final result = function(call.args);
  return FunctionResponse(call.name, result);
}

final model = GenerativeModel(
  model: 'gemini-1.5-pro',
  apiKey: apiKey,
  tools: [
    Tool(functionDeclarations: [controlLightFunction])
  ],
);

final prompt = 'Dim the lights so the room feels cozy and warm.';
final content = [Content.text(prompt)];
var response = await model.generateContent(content);

List<FunctionCall> functionCalls;
while ((functionCalls = response.functionCalls.toList()).isNotEmpty) {
  var responses = <FunctionResponse>[
    for (final functionCall in functionCalls)
      dispatchFunctionCall(functionCall)
  ];
  content
    ..add(response.candidates.first.content)
    ..add(Content.functionResponses(responses));
  response = await model.generateContent(content);
}
print('Response: ${response.text}');

Java

FunctionDeclaration multiplyDefinition =
    defineFunction(
        /* name  */ "multiply",
        /* description */ "returns a * b.",
        /* parameters */ Arrays.asList(
            Schema.numDouble("a", "First parameter"),
            Schema.numDouble("b", "Second parameter")),
        /* required */ Arrays.asList("a", "b"));

Tool tool = new Tool(Arrays.asList(multiplyDefinition), null);

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig (optional) */ null,
        /* safetySettings (optional) */ null,
        /* requestOptions (optional) */ new RequestOptions(),
        /* functionDeclarations (optional) */ Arrays.asList(tool));
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// Create prompt
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText(
    "I have 57 cats, each owns 44 mittens, how many mittens is that in total?");
Content userMessage = userContentBuilder.build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

// Initialize the chat
ChatFutures chat = model.startChat();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(userMessage);

Futures.addCallback(
    response,
    new FutureCallback<GenerateContentResponse>() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
        if (!result.getFunctionCalls().isEmpty()) {
          handleFunctionCall(result);
        }
        if (!result.getText().isEmpty()) {
          System.out.println(result.getText());
        }
      }

      @Override
      public void onFailure(Throwable t) {
        t.printStackTrace();
      }

      private void handleFunctionCall(GenerateContentResponse result) {
        FunctionCallPart multiplyFunctionCallPart =
            result.getFunctionCalls().stream()
                .filter(fun -> fun.getName().equals("multiply"))
                .findFirst()
                .get();
        double a = Double.parseDouble(multiplyFunctionCallPart.getArgs().get("a"));
        double b = Double.parseDouble(multiplyFunctionCallPart.getArgs().get("b"));

        try {
          // `multiply(a, b)` is a regular java function defined in another class
          FunctionResponsePart functionResponsePart =
              new FunctionResponsePart(
                  "multiply", new JSONObject().put("result", multiply(a, b)));

          // Create the function response message
          Content.Builder functionResponseBuilder = new Content.Builder();
          functionResponseBuilder.setRole("function");
          functionResponseBuilder.addPart(functionResponsePart);
          Content functionResponseMessage = functionResponseBuilder.build();

          chat.sendMessage(functionResponseMessage);
        } catch (JSONException e) {
          throw new RuntimeException(e);
        }
      }
    },
    executor);

Generation config

Python

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Tell me a story about a magic backpack.",
    generation_config=genai.types.GenerationConfig(
        # Only one candidate for now.
        candidate_count=1,
        stop_sequences=["x"],
        max_output_tokens=20,
        temperature=1.0,
    ),
)

print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  generationConfig: {
    candidateCount: 1,
    stopSequences: ["x"],
    maxOutputTokens: 20,
    temperature: 1.0,
  },
});

const result = await model.generateContent(
  "Tell me a story about a magic backpack.",
);
console.log(result.response.text());

Go

model := client.GenerativeModel("gemini-1.5-pro-latest")
model.SetTemperature(0.9)
model.SetTopP(0.5)
model.SetTopK(20)
model.SetMaxOutputTokens(100)
model.SystemInstruction = genai.NewUserContent(genai.Text("You are Yoda from Star Wars."))
model.ResponseMIMEType = "application/json"
resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
if err != nil {
	log.Fatal(err)
}
printResponse(resp)

Shell

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
        "contents": [{
            "parts":[
                {"text": "Write a story about a magic backpack."}
            ]
        }],
        "safetySettings": [
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_ONLY_HIGH"
            }
        ],
        "generationConfig": {
            "stopSequences": [
                "Title"
            ],
            "temperature": 1.0,
            "maxOutputTokens": 800,
            "topP": 0.8,
            "topK": 10
        }
    }'  2> /dev/null | grep "text"

Kotlin

val config = generationConfig {
  temperature = 0.9f
  topK = 16
  topP = 0.1f
  maxOutputTokens = 200
  stopSequences = listOf("red")
}

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        generationConfig = config)

Swift

let config = GenerationConfig(
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
  candidateCount: 1,
  maxOutputTokens: 200,
  stopSequences: ["red", "orange"]
)

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    generationConfig: config
  )

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'Tell me a story about a magic backpack.';

final response = await model.generateContent(
  [Content.text(prompt)],
  generationConfig: GenerationConfig(
    candidateCount: 1,
    stopSequences: ['x'],
    maxOutputTokens: 20,
    temperature: 1.0,
  ),
);
print(response.text);

Java

GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.temperature = 0.9f;
configBuilder.topK = 16;
configBuilder.topP = 0.1f;
configBuilder.maxOutputTokens = 200;
configBuilder.stopSequences = Arrays.asList("red");

GenerationConfig generationConfig = configBuilder.build();

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel("gemini-1.5-flash", BuildConfig.apiKey, generationConfig);

GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Safety settings

Python

model = genai.GenerativeModel("gemini-1.5-flash")
unsafe_prompt = "I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them."
response = model.generate_content(
    unsafe_prompt,
    safety_settings={
        "HATE": "MEDIUM",
        "HARASSMENT": "BLOCK_ONLY_HIGH",
    },
)
# If you want to set all the safety_settings to the same value you can just pass that value:
response = model.generate_content(unsafe_prompt, safety_settings="MEDIUM")
try:
    print(response.text)
except:
    print("No information generated by the model.")

print(response.candidates[0].safety_ratings)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI, HarmCategory, HarmBlockThreshold } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
      threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
  ],
});

const unsafePrompt =
  "I support Martians Soccer Club and I think " +
  "Jupiterians Football Club sucks! Write an ironic phrase telling " +
  "them how I feel about them.";

const result = await model.generateContent(unsafePrompt);

try {
  result.response.text();
} catch (e) {
  console.error(e);
  console.log(result.response.candidates[0].safetyRatings);
}

Go

model := client.GenerativeModel("gemini-1.5-flash")
model.SafetySettings = []*genai.SafetySetting{
	{
		Category:  genai.HarmCategoryDangerousContent,
		Threshold: genai.HarmBlockLowAndAbove,
	},
	{
		Category:  genai.HarmCategoryHarassment,
		Threshold: genai.HarmBlockMediumAndAbove,
	},
}
resp, err := model.GenerateContent(ctx, genai.Text("I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them."))
if err != nil {
	log.Fatal(err)
}
printResponse(resp)

Shell

echo '{
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}
    ],
    "contents": [{
        "parts":[{
            "text": "'I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them.'"}]}]}' > request.json

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d @request.json 2> /dev/null

Kotlin

val harassmentSafety = SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.ONLY_HIGH)

val hateSpeechSafety = SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE)

val generativeModel =
    GenerativeModel(
        // The Gemini 1.5 models are versatile and work with most use cases
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        safetySettings = listOf(harassmentSafety, hateSpeechSafety))

Swift

let safetySettings = [
  SafetySetting(harmCategory: .dangerousContent, threshold: .blockLowAndAbove),
  SafetySetting(harmCategory: .harassment, threshold: .blockMediumAndAbove),
  SafetySetting(harmCategory: .hateSpeech, threshold: .blockOnlyHigh),
]

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    safetySettings: safetySettings
  )

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'I support Martians Soccer Club and I think '
    'Jupiterians Football Club sucks! Write an ironic phrase telling '
    'them how I feel about them.';

final response = await model.generateContent(
  [Content.text(prompt)],
  safetySettings: [
    SafetySetting(HarmCategory.harassment, HarmBlockThreshold.medium),
    SafetySetting(HarmCategory.hateSpeech, HarmBlockThreshold.low),
  ],
);
try {
  print(response.text);
} catch (e) {
  print(e);
  for (final SafetyRating(:category, :probability)
      in response.candidates.first.safetyRatings!) {
    print('Safety Rating: $category - $probability');
  }
}

Java

SafetySetting harassmentSafety =
    new SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.ONLY_HIGH);

SafetySetting hateSpeechSafety =
    new SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE);

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        "gemini-1.5-flash",
        BuildConfig.apiKey,
        null, // generation config is optional
        Arrays.asList(harassmentSafety, hateSpeechSafety));

GenerativeModelFutures model = GenerativeModelFutures.from(gm);

System instructions

Python

model = genai.GenerativeModel(
    "models/gemini-1.5-flash",
    system_instruction="You are a cat. Your name is Neko.",
)
response = model.generate_content("Good morning! How are you?")
print(response.text)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  systemInstruction: "You are a cat. Your name is Neko.",
});

const prompt = "Good morning! How are you?";

const result = await model.generateContent(prompt);
const response = result.response;
const text = response.text();
console.log(text);

Go

model := client.GenerativeModel("gemini-1.5-flash")
model.SystemInstruction = genai.NewUserContent(genai.Text("You are a cat. Your name is Neko."))
resp, err := model.GenerateContent(ctx, genai.Text("Good morning! How are you?"))
if err != nil {
	log.Fatal(err)
}
printResponse(resp)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{ "system_instruction": {
    "parts":
      { "text": "You are a cat. Your name is Neko."}},
    "contents": {
      "parts": {
        "text": "Hello there"}}}'

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.apiKey,
        systemInstruction = content { text("You are a cat. Your name is Neko.") },
    )

Swift

let generativeModel =
  GenerativeModel(
    // Specify a model that supports system instructions, like a Gemini 1.5 model
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default,
    systemInstruction: ModelContent(role: "system", parts: "You are a cat. Your name is Neko.")
  )

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
  systemInstruction: Content.system('You are a cat. Your name is Neko.'),
);
final prompt = 'Good morning! How are you?';

final response = await model.generateContent([Content.text(prompt)]);
print(response.text);

Java

GenerativeModel model =
    new GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        /* modelName */ "gemini-1.5-flash",
        /* apiKey */ BuildConfig.apiKey,
        /* generationConfig (optional) */ null,
        /* safetySettings (optional) */ null,
        /* requestOptions (optional) */ new RequestOptions(),
        /* tools (optional) */ null,
        /* toolsConfig (optional) */ null,
        /* systemInstruction (optional) */ new Content.Builder()
            .addText("You are a cat. Your name is Neko.")
            .build());

Response body

If successful, the response body contains an instance of GenerateContentResponse.

Method: models.streamGenerateContent

Generates a streamed response from the model given an input GenerateContentRequest.

Endpoint

POST https://generativelanguage.googleapis.com/v1beta/{model=models/*}:streamGenerateContent

Path parameters

model string

Required. The name of the Model to use for generating the completion.

Format: name=models/{model}

Request body

The request body contains data with the following structure:

Fields
contents[] object (Content)

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries such as chat, this is a repeated field that contains the conversation history and the latest request.

tools[] object (Tool)

Optional. A list of Tools the Model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the Model. Supported Tools are Function and codeExecution. Refer to the function calling and code execution guides to learn more.

toolConfig object (ToolConfig)

Optional. Tool configuration for any Tool specified in the request. Refer to the function calling guide for a usage example.

safetySettings[] object (SafetySetting)

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This is enforced on GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API blocks any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safetySettings. If there is no SafetySetting for a given SafetyCategory in the list, the API uses the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, and HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings, and to the safety guidance to learn how to incorporate safety considerations into your AI applications.

systemInstruction object (Content)

Optional. Developer-set system instructions. Currently, text only.

generationConfig object (GenerationConfig)

Optional. Configuration options for model generation and outputs.

cachedContent string

Optional. The name of the cached content to use as context to serve the prediction. Format: cachedContents/{cachedContent}

Example request

Text

Python

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Write a story about a magic backpack.", stream=True)
for chunk in response:
    print(chunk.text)
    print("_" * 80)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const prompt = "Write a story about a magic backpack.";

const result = await model.generateContentStream(prompt);

// Print text as it comes in.
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  process.stdout.write(chunkText);
}

Go

model := client.GenerativeModel("gemini-1.5-flash")
iter := model.GenerateContentStream(ctx, genai.Text("Write a story about a magic backpack."))
for {
	resp, err := iter.Next()
	if err == iterator.Done {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	printResponse(resp)
}

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:streamGenerateContent?alt=sse&key=${GOOGLE_API_KEY}" \
        -H 'Content-Type: application/json' \
        --no-buffer \
        -d '{ "contents":[{"parts":[{"text": "Write a story about a magic backpack."}]}]}'

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val prompt = "Write a story about a magic backpack."
// Use streaming with text-only input
generativeModel.generateContentStream(prompt).collect { chunk -> print(chunk.text) }

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

let prompt = "Write a story about a magic backpack."
// Use streaming with text-only input
for try await response in generativeModel.generateContentStream(prompt) {
  if let text = response.text {
    print(text)
  }
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final prompt = 'Write a story about a magic backpack.';

final responses = model.generateContentStream([Content.text(prompt)]);
await for (final response in responses) {
  print(response.text);
}

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content =
    new Content.Builder().addText("Write a story about a magic backpack.").build();

Publisher<GenerateContentResponse> streamingResponse = model.generateContentStream(content);

StringBuilder outputContent = new StringBuilder();

streamingResponse.subscribe(
    new Subscriber<GenerateContentResponse>() {
      @Override
      public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        outputContent.append(chunk);
      }

      @Override
      public void onComplete() {
        System.out.println(outputContent);
      }

      @Override
      public void onError(Throwable t) {
        t.printStackTrace();
      }

      @Override
      public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE);
      }
    });

Image

Python

import PIL.Image

model = genai.GenerativeModel("gemini-1.5-flash")
organ = PIL.Image.open(media / "organ.jpg")
response = model.generate_content(["Tell me about this instrument", organ], stream=True)
for chunk in response:
    print(chunk.text)
    print("_" * 80)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

function fileToGenerativePart(path, mimeType) {
  return {
    inlineData: {
      data: Buffer.from(fs.readFileSync(path)).toString("base64"),
      mimeType,
    },
  };
}

const prompt = "Describe how this product might be manufactured.";
// Note: The only accepted MIME types are certain image types (image/*).
const imagePart = fileToGenerativePart(
  `${mediaPath}/jetpack.jpg`,
  "image/jpeg",
);

const result = await model.generateContentStream([prompt, imagePart]);

// Print text as it comes in.
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  process.stdout.write(chunkText);
}

Go

model := client.GenerativeModel("gemini-1.5-flash")

imgData, err := os.ReadFile(filepath.Join(testDataDir, "organ.jpg"))
if err != nil {
	log.Fatal(err)
}
iter := model.GenerateContentStream(ctx,
	genai.Text("Tell me about this instrument"),
	genai.ImageData("jpeg", imgData))
for {
	resp, err := iter.Next()
	if err == iterator.Done {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	printResponse(resp)
}

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:streamGenerateContent?alt=sse&key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
            {"text": "Tell me about this instrument"},
            {
              "inline_data": {
                "mime_type":"image/jpeg",
                "data": "'$(base64 $B64FLAGS $IMG_PATH)'"
              }
            }
        ]
        }]
       }' 2> /dev/null

Kotlin

val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val image: Bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.image)
val inputContent = content {
  image(image)
  text("What's in this picture?")
}

generativeModel.generateContentStream(inputContent).collect { chunk -> print(chunk.text) }

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

guard let image = UIImage(systemName: "cloud.sun") else { fatalError() }

let prompt = "What's in this picture?"

for try await response in generativeModel.generateContentStream(image, prompt) {
  if let text = response.text {
    print(text)
  }
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);

Future<DataPart> fileToPart(String mimeType, String path) async {
  return DataPart(mimeType, await File(path).readAsBytes());
}

final prompt = 'Describe how this product might be manufactured.';
final image = await fileToPart('image/jpeg', 'resources/jetpack.jpg');

final responses = model.generateContentStream([
  Content.multi([TextPart(prompt), image])
]);
await for (final response in responses) {
  print(response.text);
}

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Bitmap image1 = BitmapFactory.decodeResource(context.getResources(), R.drawable.image1);
Bitmap image2 = BitmapFactory.decodeResource(context.getResources(), R.drawable.image2);

Content content =
    new Content.Builder()
        .addText("What's different between these pictures?")
        .addImage(image1)
        .addImage(image2)
        .build();

// For illustrative purposes only. You should use an executor that fits your needs.
Executor executor = Executors.newSingleThreadExecutor();

Publisher<GenerateContentResponse> streamingResponse = model.generateContentStream(content);

StringBuilder outputContent = new StringBuilder();

streamingResponse.subscribe(
    new Subscriber<GenerateContentResponse>() {
      @Override
      public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        outputContent.append(chunk);
      }

      @Override
      public void onComplete() {
        System.out.println(outputContent);
      }

      @Override
      public void onError(Throwable t) {
        t.printStackTrace();
      }

      @Override
      public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE);
      }
    });

Audio

Python

model = genai.GenerativeModel("gemini-1.5-flash")
sample_audio = genai.upload_file(media / "sample.mp3")
response = model.generate_content(["Give me a summary of this audio file.", sample_audio])

for chunk in response:
    print(chunk.text)
    print("_" * 80)

Shell

# Use the File API to upload the audio data, then reference it in the request.
MIME_TYPE=$(file -b --mime-type "${AUDIO_PATH}")
NUM_BYTES=$(wc -c < "${AUDIO_PATH}")
DISPLAY_NAME=AUDIO

tmp_header_file=upload-header.tmp

# Initial resumable request defining the metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${AUDIO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:streamGenerateContent?alt=sse&key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "audio/mpeg", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

Video

Python

import time

# Video clip (CC BY 3.0) from https://peach.blender.org/download/
myfile = genai.upload_file(media / "Big_Buck_Bunny.mp4")
print(f"{myfile=}")

# Videos need to be processed before you can use them.
while myfile.state.name == "PROCESSING":
    print("processing video...")
    time.sleep(5)
    myfile = genai.get_file(myfile.name)

model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content([myfile, "Describe this video clip"])
for chunk in response:
    print(chunk.text)
    print("_" * 80)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
// import { GoogleAIFileManager, FileState } from "@google/generative-ai/server";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const fileManager = new GoogleAIFileManager(process.env.API_KEY);

const uploadResult = await fileManager.uploadFile(
  `${mediaPath}/Big_Buck_Bunny.mp4`,
  { mimeType: "video/mp4" },
);

let file = await fileManager.getFile(uploadResult.file.name);
while (file.state === FileState.PROCESSING) {
  process.stdout.write(".");
  // Sleep for 10 seconds
  await new Promise((resolve) => setTimeout(resolve, 10_000));
  // Fetch the file from the API again
  file = await fileManager.getFile(uploadResult.file.name);
}

if (file.state === FileState.FAILED) {
  throw new Error("Video processing failed.");
}

const prompt = "Describe this video clip";
const videoPart = {
  fileData: {
    fileUri: uploadResult.file.uri,
    mimeType: uploadResult.file.mimeType,
  },
};

const result = await model.generateContentStream([prompt, videoPart]);
// Print text as it comes in.
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  process.stdout.write(chunkText);
}

Go

model := client.GenerativeModel("gemini-1.5-flash")

file, err := client.UploadFileFromPath(ctx, filepath.Join(testDataDir, "earth.mp4"), nil)
if err != nil {
	log.Fatal(err)
}
defer client.DeleteFile(ctx, file.Name)

iter := model.GenerateContentStream(ctx,
	genai.Text("Describe this video clip"),
	genai.FileData{URI: file.URI})
for {
	resp, err := iter.Next()
	if err == iterator.Done {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	printResponse(resp)
}

Shell

# Use the File API to upload the video data, then reference it in the request.
MIME_TYPE=$(file -b --mime-type "${VIDEO_PATH}")
NUM_BYTES=$(wc -c < "${VIDEO_PATH}")
DISPLAY_NAME=VIDEO
tmp_header_file=upload-header.tmp

# Initial resumable request defining the metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${VIDEO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

name=$(jq -r ".file.name" file_info.json)
state=$(jq -r ".file.state" file_info.json)
echo state=$state

# Poll the file's state until the video has finished processing.
while [[ "$state" == *"PROCESSING"* ]];
do
  echo "Processing video..."
  sleep 5
  # Fetch the file metadata again to check its state.
  curl -s "https://generativelanguage.googleapis.com/v1beta/$name?key=$GOOGLE_API_KEY" > file_info.json
  state=$(jq -r ".state" file_info.json)
done

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:streamGenerateContent?alt=sse&key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "video/mp4", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

PDF

Python

model = genai.GenerativeModel("gemini-1.5-flash")
sample_pdf = genai.upload_file(media / "test.pdf")
response = model.generate_content(["Give me a summary of this document:", sample_pdf])

for chunk in response:
    print(chunk.text)
    print("_" * 80)

Shell

MIME_TYPE=$(file -b --mime-type "${PDF_PATH}")
NUM_BYTES=$(wc -c < "${PDF_PATH}")
DISPLAY_NAME=TEXT


echo $MIME_TYPE
tmp_header_file=upload-header.tmp

# Initial resumable request defining the metadata.
# The upload URL is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GOOGLE_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${PDF_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

# Now generate content using that file
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:streamGenerateContent?alt=sse&key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Can you add a few more lines to this poem?"},
          {"file_data":{"mime_type": "application/pdf", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

Chat

Python

model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat(
    history=[
        {"role": "user", "parts": "Hello"},
        {"role": "model", "parts": "Great to meet you. What would you like to know?"},
    ]
)
response = chat.send_message("I have 2 dogs in my house.", stream=True)
for chunk in response:
    print(chunk.text)
    print("_" * 80)
response = chat.send_message("How many paws are in my house?", stream=True)
for chunk in response:
    print(chunk.text)
    print("_" * 80)

print(chat.history)

Node.js

// Make sure to include these imports:
// import { GoogleGenerativeAI } from "@google/generative-ai";
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const chat = model.startChat({
  history: [
    {
      role: "user",
      parts: [{ text: "Hello" }],
    },
    {
      role: "model",
      parts: [{ text: "Great to meet you. What would you like to know?" }],
    },
  ],
});
let result = await chat.sendMessageStream("I have 2 dogs in my house.");
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  process.stdout.write(chunkText);
}
result = await chat.sendMessageStream("How many paws are in my house?");
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  process.stdout.write(chunkText);
}

Go

model := client.GenerativeModel("gemini-1.5-flash")
cs := model.StartChat()

cs.History = []*genai.Content{
	{
		Parts: []genai.Part{
			genai.Text("Hello, I have 2 dogs in my house."),
		},
		Role: "user",
	},
	{
		Parts: []genai.Part{
			genai.Text("Great to meet you. What would you like to know?"),
		},
		Role: "model",
	},
}

iter := cs.SendMessageStream(ctx, genai.Text("How many paws are in my house?"))
for {
	resp, err := iter.Next()
	if err == iterator.Done {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	printResponse(resp)
}

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:streamGenerateContent?alt=sse&key=$GOOGLE_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Hello"}]},
        {"role": "model",
         "parts":[{
           "text": "Great to meet you. What would you like to know?"}]},
        {"role":"user",
         "parts":[{
           "text": "I have two dogs in my house. How many paws are in my house?"}]},
      ]
    }' 2> /dev/null | grep "text"

Kotlin

// Use streaming with multi-turn conversations (like chat)
val generativeModel =
    GenerativeModel(
        // Specify a Gemini model appropriate for your use case
        modelName = "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key" above)
        apiKey = BuildConfig.apiKey)

val chat =
    generativeModel.startChat(
        history =
            listOf(
                content(role = "user") { text("Hello, I have 2 dogs in my house.") },
                content(role = "model") {
                  text("Great to meet you. What would you like to know?")
                }))

chat.sendMessageStream("How many paws are in my house?").collect { chunk -> print(chunk.text) }

Swift

let generativeModel =
  GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    name: "gemini-1.5-flash",
    // Access your API key from your on-demand resource .plist file (see "Set up your API key"
    // above)
    apiKey: APIKey.default
  )

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = generativeModel.startChat(history: history)

// To stream generated text output, call sendMessageStream and pass in the message
let contentStream = chat.sendMessageStream("How many paws are in my house?")
for try await chunk in contentStream {
  if let text = chunk.text {
    print(text)
  }
}

Dart

final model = GenerativeModel(
  model: 'gemini-1.5-flash',
  apiKey: apiKey,
);
final chat = model.startChat(history: [
  Content.text('hello'),
  Content.model([TextPart('Great to meet you. What would you like to know?')])
]);
var responses =
    chat.sendMessageStream(Content.text('I have 2 dogs in my house.'));
await for (final response in responses) {
  print(response.text);
  print('_' * 80);
}
responses =
    chat.sendMessageStream(Content.text('How many paws are in my house?'));
await for (final response in responses) {
  print(response.text);
  print('_' * 80);
}

Java

// Specify a Gemini model appropriate for your use case
GenerativeModel gm =
    new GenerativeModel(
        /* modelName */ "gemini-1.5-flash",
        // Access your API key as a Build Configuration variable (see "Set up your API key"
        // above)
        /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder userMessageBuilder = new Content.Builder();
userMessageBuilder.setRole("user");
userMessageBuilder.addText("How many paws are in my house?");
Content userMessage = userMessageBuilder.build();

// Stream the reply so the chat history above is taken into account
Publisher<GenerateContentResponse> streamingResponse = chat.sendMessageStream(userMessage);

StringBuilder outputContent = new StringBuilder();

streamingResponse.subscribe(
    new Subscriber<GenerateContentResponse>() {
      @Override
      public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        outputContent.append(chunk);
      }

      @Override
      public void onComplete() {
        System.out.println(outputContent);
      }

      @Override
      public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE);
      }

      @Override
      public void onError(Throwable t) {}

    });

Response body

If successful, the response body contains a stream of GenerateContentResponse instances.

GenerateContentResponse

Response from the model supporting multiple candidate responses.

Safety ratings and content filtering are reported both for the prompt in GenerateContentResponse.prompt_feedback and for each candidate in finishReason and safetyRatings. The API: returns either all requested candidates or none of them; returns no candidates at all only if there was something wrong with the prompt (check promptFeedback); and reports feedback on each candidate in finishReason and safetyRatings.

Fields
candidates[] object (Candidate)

Candidate responses from the model.

promptFeedback object (PromptFeedback)

Returns the prompt's feedback related to the content filters.

usageMetadata object (UsageMetadata)

Output only. Metadata on the generation request's token usage.

JSON representation
{
  "candidates": [
    {
      object (Candidate)
    }
  ],
  "promptFeedback": {
    object (PromptFeedback)
  },
  "usageMetadata": {
    object (UsageMetadata)
  }
}
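
To see how these fields map onto the SDKs used in the examples above, here is a minimal Python sketch that inspects a response: it checks promptFeedback for a block reason, then walks each candidate's finishReason and safetyRatings before reading the generated text. Attribute names follow the Python SDK's snake_case form of the fields documented here, and the sketch assumes genai is already imported and configured as in the earlier samples.

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Write a story about a magic backpack.")

# If the prompt itself was blocked, there are no candidates to read.
if response.prompt_feedback.block_reason:
    print(f"Prompt blocked: {response.prompt_feedback.block_reason.name}")
else:
    for candidate in response.candidates:
        # Why the model stopped generating (STOP, MAX_TOKENS, SAFETY, ...).
        print(f"finish_reason: {candidate.finish_reason.name}")
        # At most one rating per harm category.
        for rating in candidate.safety_ratings:
            print(f"  {rating.category.name}: {rating.probability.name}")
        # The generated content itself.
        for part in candidate.content.parts:
            print(part.text)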

PromptFeedback

A set of feedback metadata for the prompt specified in GenerateContentRequest.content.

Fields
blockReason enum (BlockReason)

Optional. If set, the prompt was blocked and no candidates are returned. Rephrase your prompt.

safetyRatings[] object (SafetyRating)

Ratings for the safety of the prompt. There is at most one rating per category.

JSON representation
{
  "blockReason": enum (BlockReason),
  "safetyRatings": [
    {
      object (SafetyRating)
    }
  ]
}

BlockReason

Specifies the reason why the prompt was blocked.

Enums
BLOCK_REASON_UNSPECIFIED Default value. This value is unused.
SAFETY Prompt was blocked due to safety reasons. Inspect safetyRatings to understand which safety category blocked it.
OTHER Prompt was blocked due to unknown reasons.
BLOCKLIST Prompt was blocked due to terms included in the terminology blocklist.
PROHIBITED_CONTENT Prompt was blocked due to prohibited content.

UsageMetadata

Metadata on the generation request's token usage.

Fields
promptTokenCount integer

Number of tokens in the prompt. When cachedContent is set, this is still the total effective prompt size, i.e. it includes the number of tokens in the cached content.

cachedContentTokenCount integer

Number of tokens in the cached part of the prompt (the cached content).

candidatesTokenCount integer

Total number of tokens across all the generated response candidates.

totalTokenCount integer

Total token count for the generation request (prompt + response candidates).

JSON representation
{
  "promptTokenCount": integer,
  "cachedContentTokenCount": integer,
  "candidatesTokenCount": integer,
  "totalTokenCount": integer
}
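
With the Python SDK shown above, these token counts are exposed on the response's usage_metadata attribute. A minimal sketch (assuming genai is already configured as in the earlier samples):

response = genai.GenerativeModel("gemini-1.5-flash").generate_content(
    "Write a story about a magic backpack."
)

usage = response.usage_metadata
print("prompt tokens:   ", usage.prompt_token_count)
print("candidate tokens:", usage.candidates_token_count)
print("total tokens:    ", usage.total_token_count)
# cached_content_token_count is only meaningful when cachedContent is used.
print("cached tokens:   ", usage.cached_content_token_count)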

Candidate

A response candidate generated from the model.

Fields
content object (Content)

Output only. Generated content returned from the model.

finishReason enum (FinishReason)

Optional. Output only. The reason why the model stopped generating tokens.

If empty, the model has not stopped generating tokens.

safetyRatings[] object (SafetyRating)

List of ratings for the safety of a response candidate.

There is at most one rating per category.

citationMetadata object (CitationMetadata)

Output only. Citation information for the model-generated candidate.

This field may be populated with recitation information for any text included in content. These are passages that are "recited" from copyrighted material in the foundational LLM's training data.

tokenCount integer

Output only. Token count for this candidate.

groundingAttributions[] object (GroundingAttribution)

Output only. Attribution information for sources that contributed to a grounded answer.

This field is populated for GenerateAnswer calls.

index integer

Output only. Index of the candidate in the list of response candidates.

JSON representation
{
  "content": {
    object (Content)
  },
  "finishReason": enum (FinishReason),
  "safetyRatings": [
    {
      object (SafetyRating)
    }
  ],
  "citationMetadata": {
    object (CitationMetadata)
  },
  "tokenCount": integer,
  "groundingAttributions": [
    {
      object (GroundingAttribution)
    }
  ],
  "index": integer
}

FinishReason

Defines the reason why the model stopped generating tokens.

Enums
FINISH_REASON_UNSPECIFIED Default value. This value is unused.
STOP Natural stop point of the model or a provided stop sequence.
MAX_TOKENS The maximum number of tokens as specified in the request was reached.
SAFETY The response candidate content was flagged for safety reasons.
RECITATION The response candidate content was flagged for recitation reasons.
LANGUAGE The response candidate content was flagged for using an unsupported language.
OTHER Unknown reason.
BLOCKLIST Token generation stopped because the content contains forbidden terms.
PROHIBITED_CONTENT Token generation stopped for potentially containing prohibited content.
SPII Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).
MALFORMED_FUNCTION_CALL The function call generated by the model is invalid.
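
In practice you usually branch on this value before trusting the candidate's text. A hedged Python sketch (response is assumed to come from a generate_content call as in the examples above; the string names match the enum values listed here):

candidate = response.candidates[0]
reason = candidate.finish_reason.name

if reason == "STOP":
    # Normal completion: safe to read the text.
    print(candidate.content.parts[0].text)
elif reason == "MAX_TOKENS":
    print("Output was truncated; consider raising maxOutputTokens.")
elif reason in ("SAFETY", "RECITATION", "BLOCKLIST", "PROHIBITED_CONTENT", "SPII"):
    print("Candidate was filtered:", candidate.safety_ratings)
else:
    print("Generation stopped for another reason:", reason)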

GroundingAttribution

Attribution for a source that contributed to an answer.

Fields
sourceId object (AttributionSourceId)

Output only. Identifier for the source contributing to this attribution.

content object (Content)

Grounding source content that makes up this attribution.

JSON representation
{
  "sourceId": {
    object (AttributionSourceId)
  },
  "content": {
    object (Content)
  }
}

AttributionSourceId

Identifier for the source contributing to an attribution.

Fields

Union field source.

source can be only one of the following:

groundingPassage object (GroundingPassageId)

Identifier for an inline passage.

semanticRetrieverChunk object (SemanticRetrieverChunk)

Identifier for a Chunk fetched via Semantic Retriever.

JSON representation
{

  // Union field source can be only one of the following:
  "groundingPassage": {
    object (GroundingPassageId)
  },
  "semanticRetrieverChunk": {
    object (SemanticRetrieverChunk)
  }
  // End of list of possible types for union field source.
}

GroundingPassageId

Identifier for a part within a GroundingPassage.

Fields
passageId string

Output only. ID of the passage matching the GenerateAnswerRequest's GroundingPassage.id.

partIndex integer

Output only. Index of the part within the GenerateAnswerRequest's GroundingPassage.content.

JSON representation
{
  "passageId": string,
  "partIndex": integer
}

SemanticRetrieverChunk

Identifier for a Chunk retrieved via the Semantic Retriever specified in the GenerateAnswerRequest using SemanticRetrieverConfig.

Fields
source string

Output only. Name of the source matching the request's SemanticRetrieverConfig.source. Example: corpora/123 or corpora/123/documents/abc

chunk string

Output only. Name of the Chunk containing the attributed text. Example: corpora/123/documents/abc/chunks/xyz

JSON representation
{
  "source": string,
  "chunk": string
}

CitationMetadata

A collection of source attributions for a piece of content.

Fields
citationSources[] object (CitationSource)

Citations to sources for a specific response.

JSON representation
{
  "citationSources": [
    {
      object (CitationSource)
    }
  ]
}

CitationSource

A citation to a source for a portion of a specific response.

Fields
startIndex integer

Optional. Start of the segment of the response that is attributed to this source.

The index indicates the start of the segment, measured in bytes.

endIndex integer

Optional. End of the attributed segment, exclusive.

uri string

Optional. URI that is attributed as a source for a portion of the text.

license string

Optional. License for the GitHub project that is attributed as a source for the segment.

License info is required for code citations.

JSON representation
{
  "startIndex": integer,
  "endIndex": integer,
  "uri": string,
  "license": string
}
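
When citationMetadata is present on a candidate, each CitationSource identifies the byte range of the response it covers. A short Python sketch of reading it (attribute names mirror the fields above in snake_case; response comes from a generate_content call as in the earlier examples):

candidate = response.candidates[0]
for source in candidate.citation_metadata.citation_sources:
    span = f"bytes {source.start_index}-{source.end_index}"
    print(span, source.uri or "(no URI)", source.license or "(no license)")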

GenerationConfig

Configuration options for model generation and outputs. Not all parameters are configurable for every model.

Fields
stopSequences[] string

Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API stops at the first appearance of a stop_sequence. The stop sequence is not included as part of the response.

responseMimeType string

Optional. MIME type of the generated candidate text. Supported MIME types are text/plain (default, text output) and application/json (JSON response in the response candidates). Refer to the docs for a list of all supported text MIME types.

responseSchema object (Schema)

Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives, or arrays.

If set, a compatible responseMimeType must also be set. Compatible MIME type: application/json (schema for a JSON response). Refer to the JSON text generation guide for more details.

candidateCount integer

Optional. Number of generated responses to return.

Currently, this value can only be set to 1. If unset, it defaults to 1.

maxOutputTokens integer

Optional. The maximum number of tokens to include in a response candidate.

Note: The default value varies by model; see the Model.output_token_limit attribute of the Model returned from the getModel function.

temperature number

Optional. Controls the randomness of the output.

Note: The default value varies by model; see the Model.temperature attribute of the Model returned from the getModel function.

Values can range from [0.0, 2.0].

topP number

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined top-k and top-p (nucleus) sampling.

Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on their cumulative probability.

Note: The default value varies by Model and is specified by the Model.top_p attribute returned from the getModel function.

topK integer

Optional. The maximum number of tokens to consider when sampling.

Gemini models use top-p (nucleus) sampling or a combination of top-k and nucleus sampling. Top-k sampling considers the set of topK most probable tokens. Models running with nucleus sampling don't allow a topK setting.

Note: The default value varies by Model and is specified by the Model.top_k attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests.

JSON representation
{
  "stopSequences": [
    string
  ],
  "responseMimeType": string,
  "responseSchema": {
    object (Schema)
  },
  "candidateCount": integer,
  "maxOutputTokens": integer,
  "temperature": number,
  "topP": number,
  "topK": integer
}
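
The Python SDK exposes these fields through genai.GenerationConfig. The sketch below is one illustrative combination, requesting JSON output via responseMimeType alongside the sampling parameters described above (assumes genai is configured as in the earlier samples):

config = genai.GenerationConfig(
    candidate_count=1,
    stop_sequences=["x"],
    max_output_tokens=200,
    temperature=0.9,
    top_p=0.95,
    top_k=40,
    response_mime_type="application/json",  # JSON output instead of plain text
)

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "List three flavors of ice cream as a JSON array of strings.",
    generation_config=config,
)
print(response.text)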

HarmCategory

The category of a rating.

These categories cover various kinds of harms that developers may wish to adjust.

Enums
HARM_CATEGORY_UNSPECIFIED Category is unspecified.
HARM_CATEGORY_DEROGATORY Negative or harmful comments targeting identity and/or protected attributes.
HARM_CATEGORY_TOXICITY Content that is rude, disrespectful, or profane.
HARM_CATEGORY_VIOLENCE Describes scenarios depicting violence against an individual or group, or general descriptions of gore.
HARM_CATEGORY_SEXUAL Contains references to sexual acts or other lewd content.
HARM_CATEGORY_MEDICAL Promotes unchecked medical advice.
HARM_CATEGORY_DANGEROUS Dangerous content that promotes, facilitates, or encourages harmful acts.
HARM_CATEGORY_HARASSMENT Harassment content.
HARM_CATEGORY_HATE_SPEECH Hate speech and content.
HARM_CATEGORY_SEXUALLY_EXPLICIT Sexually explicit content.
HARM_CATEGORY_DANGEROUS_CONTENT Dangerous content.

SafetyRating

Safety rating for a piece of content.

The safety rating contains the harm category and the probability level of harm in that category for a piece of content. Content is classified for safety across a number of harm categories, and the probability of the harm classification is included here.

Fields
category enum (HarmCategory)

Required. The category for this rating.

probability enum (HarmProbability)

Required. The probability of harm for this content.

blocked boolean

Was this content blocked because of this rating?

JSON representation
{
  "category": enum (HarmCategory),
  "probability": enum (HarmProbability),
  "blocked": boolean
}

HarmProbability

The probability that a piece of content is harmful.

The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.

Enums
HARM_PROBABILITY_UNSPECIFIED Probability is unspecified.
NEGLIGIBLE Content has a negligible chance of being unsafe.
LOW Content has a low chance of being unsafe.
MEDIUM Content has a medium chance of being unsafe.
HIGH Content has a high chance of being unsafe.

SafetySetting

Safety setting, affecting the safety-blocking behavior.

Passing a safety setting for a category changes the allowed probability at which content is blocked.

Fields
category enum (HarmCategory)

Required. The category for this setting.

threshold enum (HarmBlockThreshold)

Required. Controls the probability threshold at which harm is blocked.

JSON representation
{
  "category": enum (HarmCategory),
  "threshold": enum (HarmBlockThreshold)
}

HarmBlockThreshold

Block at and beyond a specified harm probability.

Enums
HARM_BLOCK_THRESHOLD_UNSPECIFIED Threshold is unspecified.
BLOCK_LOW_AND_ABOVE Content with NEGLIGIBLE will be allowed.
BLOCK_MEDIUM_AND_ABOVE Content with NEGLIGIBLE and LOW will be allowed.
BLOCK_ONLY_HIGH Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed.
BLOCK_NONE All content will be allowed.
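
As one way to combine SafetySetting, HarmCategory, and HarmBlockThreshold in code, the Python SDK also accepts these enums directly, equivalent to the string shorthand used in the safety settings example earlier (a minimal sketch):

from google.generativeai.types import HarmBlockThreshold, HarmCategory

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
)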