Gemini API overview

The Gemini API gives you access to Google's latest generative models. Once you're familiar with the general capabilities available through the API, try a tutorial for your language of choice to start developing.

Models

Gemini is a family of multimodal generative AI models developed by Google. Gemini models can accept text and images in prompts, depending on the model variant you choose, and output text responses.

For more detailed model information, see the Gemini models page. You can also use the list_models method to list all available models and then use the get_model method to get the metadata for a specific model.
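
For example, a minimal sketch using the Python SDK (assuming the google-generativeai package is installed and an API key is available) might look like this:

import google.generativeai as genai

genai.configure(api_key="API_KEY")

# List every model available to your API key.
for m in genai.list_models():
    print(m.name)

# Get the metadata (including token limits) for a specific model.
info = genai.get_model("models/gemini-1.5-flash")
print(info.input_token_limit, info.output_token_limit)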

Prompt data and design

Specific Gemini models accept both text data and media files as input to a prompt. This capability creates many additional possibilities for generating content, analyzing data, and solving problems. There are some limits and requirements to consider, including the general input token limit for the model you are using. For the token limits of specific models, see Gemini models.
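
For example, you can check a prompt against a model's input token limit before sending it by counting its tokens with the Python SDK; the following is a minimal sketch, with the model name and prompt as placeholders:

model = genai.GenerativeModel('gemini-1.5-flash')

prompt = "Write a story about a magic backpack."
# Returns the number of input tokens this prompt would consume.
print(model.count_tokens(prompt).total_tokens)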

Prompts for the Gemini API cannot exceed 20 MB in size. The Gemini API provides a File API for temporarily storing media files for use in prompting, which lets you provide prompt data beyond the 20 MB limit. For more information on using the Files API and the file formats supported for prompting, see Prompting with media files.
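
For example, in Python you might upload a media file with the Files API and then reference the returned file object in a prompt; the following is a minimal sketch with a placeholder file path:

# Upload the media file; the Files API stores it temporarily for prompting.
sample_file = genai.upload_file(path='cookie.png')

model = genai.GenerativeModel('gemini-1.5-flash')
response = model.generate_content([sample_file, "Do these look store-bought or homemade?"])
print(response.text)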

Prompt design and text input

Creating effective prompts, or prompt engineering, is a combination of art and science. See the introduction to prompting for guidance on how to approach prompting, and the prompt 101 guide to learn about different prompting approaches.

Generate content

The Gemini API lets you use both text and image data for prompting, depending on which model variant you use. For example, you can generate text from text-only prompts or from multimodal prompts using a Gemini 1.5 model. This section gives basic code examples for each of these use cases. For a more detailed example that covers all of the parameters, see the generateContent API reference.

Text and image input

You can send a text prompt with an image to a Gemini 1.5 model to perform a vision-related task, such as captioning an image or identifying what's in an image.

The following code examples demonstrate a basic implementation of a text-and-image prompt for each supported language:

Python

model = genai.GenerativeModel('gemini-1.5-flash')

cookie_picture = {
    'mime_type': 'image/png',
    'data': pathlib.Path('cookie.png').read_bytes()
}
prompt = "Do these look store-bought or homemade?"

response = model.generate_content([prompt, cookie_picture])
print(response.text)

For the complete code snippet, see the Python tutorial.

Go

vmodel := client.GenerativeModel("gemini-1.5-flash")

data, err := os.ReadFile(filepath.Join("path-to-image", imageFile))
if err != nil {
  log.Fatal(err)
}
resp, err := vmodel.GenerateContent(ctx, genai.Text("Do these look store-bought or homemade?"), genai.ImageData("jpeg", data))
if err != nil {
  log.Fatal(err)
}

For the complete example, see the Go tutorial.

Node.js

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const prompt = "Do these look store-bought or homemade?";
const image = {
  inlineData: {
    data: Buffer.from(fs.readFileSync("cookie.png")).toString("base64"),
    mimeType: "image/png",
  },
};

const result = await model.generateContent([prompt, image]);
console.log(result.response.text());

For the complete example, see the Node.js tutorial.

Web

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const prompt = "Do these look store-bought or homemade?";
const image = {
  inlineData: {
    data: base64EncodedImage /* see JavaScript quickstart for details */,
    mimeType: "image/png",
  },
};

const result = await model.generateContent([prompt, image]);
console.log(result.response.text());

For the complete example, see the Web tutorial.

Dart (Flutter)

final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
final prompt = 'Do these look store-bought or homemade?';
final imageBytes = await File('cookie.png').readAsBytes();
final content = [
  Content.multi([
    TextPart(prompt),
    DataPart('image/png', imageBytes),
  ])
];

final response = await model.generateContent(content);
print(response.text);

For the complete example, see the Dart (Flutter) tutorial.

Swift

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "API_KEY")
let cookieImage = UIImage(...)
let prompt = "Do these look store-bought or homemade?"

let response = try await model.generateContent(prompt, cookieImage)

For the complete example, see the Swift tutorial.

Android

val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)

val cookieImage: Bitmap = // ...
val inputContent = content() {
  image(cookieImage)
  text("Do these look store-bought or homemade?")
}

val response = generativeModel.generateContent(inputContent)
print(response.text)

For the complete example, see the Android tutorial.

cURL

curl https://generativelanguage.googleapis.com/v1/models/gemini-1.5-flash:generateContent?key=${API_KEY} \
    -H 'Content-Type: application/json' \
    -X POST \
    -d @<(echo '{
          "contents":[
            { "parts":[
                {"text": "Do these look store-bought or homemade?"},
                { "inlineData": {
                    "mimeType": "image/png",
                    "data": "'$(base64 -w0 cookie.png)'"
                  }
                }
              ]
            }
          ]
         }')

For more details, see the REST API tutorial.

Text-only input

The Gemini API can also handle text-only input. This feature lets you perform natural language processing (NLP) tasks such as text completion and summarization.

The following code examples demonstrate a basic implementation of a text-only prompt for each supported language:

Python

model = genai.GenerativeModel('gemini-1.5-flash')

prompt = "Write a story about a magic backpack."

response = model.generate_content(prompt)

For the complete example, see the Python tutorial.

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, option.WithAPIKey(os.Getenv("API_KEY")))
if err != nil {
  log.Fatal(err)
}
defer client.Close()

model := client.GenerativeModel("gemini-1.5-flash")
resp, err := model.GenerateContent(ctx, genai.Text("Write a story about a magic backpack."))
if err != nil {
  log.Fatal(err)
}

For the complete example, see the Go tutorial.

Node.js

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const prompt = "Write a story about a magic backpack.";

const result = await model.generateContent(prompt);
console.log(result.response.text());

For the complete example, see the Node.js tutorial.

Web

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const prompt = "Write a story about a magic backpack.";

const result = await model.generateContent(prompt);
console.log(result.response.text());

For the complete example, see the Web tutorial.

Dart (Flutter)

final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
final prompt = 'Write a story about a magic backpack.';
final content = [Content.text(prompt)];
final response = await model.generateContent(content);
print(response.text);

For the complete example, see the Dart (Flutter) tutorial.

Swift

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "API_KEY")
let prompt = "Write a story about a magic backpack."

let response = try await model.generateContent(prompt)

For the complete example, see the Swift tutorial.

Android

val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)

val prompt = "Write a story about a magic backpack."
val response = generativeModel.generateContent(prompt)
print(response.text)

For the complete example, see the Android tutorial.

cURL

curl https://generativelanguage.googleapis.com/v1/models/gemini-1.5-flash:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{ "contents":[
      { "parts":[{"text": "Write a story about a magic backpack"}]}
    ]
}'

For more details, see the REST API tutorial.

Multi-turn conversations (chat)

You can use the Gemini API to build interactive chat experiences for your users. The chat feature of the API lets you collect multiple rounds of questions and responses, allowing users to step incrementally toward answers or get help with multi-part problems. This feature is ideal for applications that require ongoing communication, such as chatbots, interactive tutors, or customer support assistants.

The following code examples demonstrate a basic implementation of a chat interaction for each supported language:

Python

model = genai.GenerativeModel('gemini-1.5-flash')
chat = model.start_chat(history=[])

response = chat.send_message(
    "Pretend you're a snowman and stay in character for each response.")
print(response.text)

response = chat.send_message(
    "What's your favorite season of the year?")
print(response.text)

For the complete example, see the chat demo in the Python tutorial.

Go

model := client.GenerativeModel("gemini-1.5-flash")
cs := model.StartChat()
cs.History = []*genai.Content{
  &genai.Content{
    Parts: []genai.Part{
      genai.Text("Pretend you're a snowman and stay in character for each response."),
    },
    Role: "user",
  },
  &genai.Content{
    Parts: []genai.Part{
      genai.Text("Hello! It's cold! Isn't that great?"),
    },
    Role: "model",
  },
}

resp, err := cs.SendMessage(ctx, genai.Text("What's your favorite season of the year?"))
if err != nil {
  log.Fatal(err)
}

For the complete example, see the chat demo in the Go tutorial.

Node.js

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash"});

const chat = model.startChat({
  history: [
    {
      role: "user",
      parts: "Pretend you're a snowman and stay in character for each response.",
    },
    {
      role: "model",
      parts: "Hello! It's cold! Isn't that great?",
    },
  ],
  generationConfig: {
    maxOutputTokens: 100,
  },
});

const msg = "What's your favorite season of the year?";
const result = await chat.sendMessage(msg);
console.log(result.response.text());

For the complete example, see the chat demo in the Node.js tutorial.

Web

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash"});

const chat = model.startChat({
  history: [
    {
      role: "user",
      parts: "Pretend you're a snowman and stay in character for each response.",
    },
    {
      role: "model",
      parts: "Hello! It's so cold! Isn't that great?",
    },
  ],
  generationConfig: {
    maxOutputTokens: 100,
  },
});

const msg = "What's your favorite season of the year?";
const result = await chat.sendMessage(msg);
console.log(result.response.text());

For the complete example, see the chat demo in the Web tutorial.

Dart (Flutter)

final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
final chat = model.startChat(history: [
  Content.text(
      "Pretend you're a snowman and stay in character for each response."),
  Content.model([TextPart("Hello! It's cold! Isn't that great?")]),
]);
final content = Content.text("What's your favorite season of the year?");
final response = await chat.sendMessage(content);
print(response.text);

For the complete example, see the chat demo in the Dart (Flutter) tutorial.

Swift

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "API_KEY")
let chat = model.startChat()

var message = "Pretend you're a snowman and stay in character for each response."
var response = try await chat.sendMessage(message)

message = "What\'s your favorite season of the year?"
response = try await chat.sendMessage(message)

For the complete example, see the chat demo in the Swift tutorial.

Android

val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)

val chat = generativeModel.startChat()
val response = chat.sendMessage("Pretend you're a snowman and stay in character for each response.")
print(response.text)

For the complete example, see the Android tutorial.

cURL

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Pretend you're a snowman and stay in character for each
        {"role": "model",
            response."}]},
         "parts":[{
           "text": "Hello! It's so cold! Isn't that great?"}]},
        {"role": "user",
         "parts":[{
           "text": "What\'s your favorite season of the year?"}]},
       ]
    }' 2> /dev/null | grep "text"
# response example:
"text": "Winter, of course!"

For more details, see the REST API tutorial.

Streamed responses

The Gemini API provides an additional way of receiving responses from generative AI models: as a data stream. A streamed response sends incremental pieces of data back to your application as they are generated by the model. This feature lets you respond quickly to a user request, showing progress and creating a more interactive experience.

Streamed responses are an option for freeform prompting and chats with Gemini models. The following code examples show how to request a streamed response for each supported language:

Python

prompt = "Write a story about a magic backpack."

response = genai.stream_generate_content(
    model="models/gemini-1.5-flash",
    prompt=prompt
)

For the complete code snippet, see the Python tutorial.

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, option.WithAPIKey(os.Getenv("API_KEY")))
if err != nil {
  log.Fatal(err)
}
defer client.Close()

model := client.GenerativeModel("gemini-1.5-flash")

iter := model.GenerateContentStream(ctx, genai.Text("Write a story about a magic backpack."))
for {
  resp, err := iter.Next()
  if err == iterator.Done {
    break
  }
  if err != nil {
    log.Fatal(err)
  }

  // print resp
}

For the complete example, see the Go tutorial.

Node.js

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const prompt = "Write a story about a magic backpack.";

const result = await model.generateContentStream([prompt]);
// print text as it comes in
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  console.log(chunkText);
}

For the complete example, see the Node.js tutorial.

Web

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const prompt = "Write a story about a magic backpack.";

const result = await model.generateContentStream([prompt]);
// print text as it comes in
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  console.log(chunkText);
}

For the complete example, see the Web tutorial.

Dart (Flutter)

final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
final prompt = 'Write a story about a magic backpack.';
final content = [Content.text(prompt)];
final response = model.generateContentStream(content);
await for (final chunk in response) {
  print(chunk.text);
}

For the complete example, see the Dart (Flutter) tutorial.

Swift

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "API_KEY")
let prompt = "Write a story about a magic backpack."

let stream = model.generateContentStream(prompt)
for try await chunk in stream {
  print(chunk.text ?? "No content")
}

For the complete example, see the Swift tutorial.

Android

val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)

val inputContent = content {
  text("Write a story about a magic backpack.")
}

var fullResponse = ""
generativeModel.generateContentStream(inputContent).collect { chunk ->
  print(chunk.text)
  fullResponse += chunk.text
}

For the complete example, see the Android tutorial.

cURL

curl https://generativelanguage.googleapis.com/v1/models/gemini-1.5-flash:streamGenerateContent?key=${API_KEY} \
    -H 'Content-Type: application/json' \
    --no-buffer \
    -d '{ "contents":[
            {"role": "user",
              "parts":[{"text": "Write a story about a magic backpack."}]
            }
          ]
        }' > response.json

For more details, see the REST API tutorial.

JSON-formatted responses

Depending on your application, you may want a response to a prompt returned in a structured data format, particularly if you are using the responses to populate programming interfaces. The Gemini API provides a configuration parameter to request a response in JSON format.

You can get the model to output JSON by setting the response_mime_type configuration option to application/json and describing, in the prompt, the JSON format you want in response:

Python

model = genai.GenerativeModel('gemini-1.5-flash',
                              generation_config={"response_mime_type": "application/json"})

prompt = """
  List 5 popular cookie recipes.

  Using this JSON schema:

    Recipe = {"recipe_name": str}

  Return a `list[Recipe]`
  """

response = model.generate_content(prompt)
print(response.text)

cURL

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {
          "parts": [
            {
              "text": "\nList 5 popular cookie recipes.\n\nUsing this JSON schema:\n\n  Recipe = {\"recipe_name\": str}\n\nReturn a `list[Recipe]`\n      "
            }
          ]
        }
      ],
      "generationConfig": {
            "response_mime_type": "application/json"
      }
    }'

While Gemini 1.5 Flash models only accept a text description of the JSON schema you want returned, Gemini 1.5 Pro models let you pass a schema object (or a Python type equivalent), and the model output will strictly follow that schema. This is also known as controlled generation or constrained decoding.

For example, to get a list of Recipe objects, pass list[Recipe] to the response_schema field of the generation_config argument:

Python

import typing_extensions as typing

class Recipe(typing.TypedDict):
  recipe_name: str

model = genai.GenerativeModel(model_name="models/gemini-1.5-pro")

result = model.generate_content(
    "List 5 popular cookie recipes",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
        response_schema=list[Recipe]))

print(result.text)

cURL

  curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:generateContent?key=$API_KEY \
      -H 'Content-Type: application/json' \
      -X POST \
      -d '{
        "contents": [
          {
            "parts": [
              {
                "text": "List 5 popular cookie recipes"
              }
            ]
          }
        ],
        "generationConfig": {
          "responseMimeType": "application/json",
          "responseSchema": {
            "type": "ARRAY",
            "items": {
              "type": "OBJECT",
              "properties": {
                "recipe_name": {
                  "type": "STRING"
                }
              }
            }
          }
        }
      }'

For more information, see the JSON mode quickstart in the Gemini API cookbook.

Embeddings

The embedding service in the Gemini API generates state-of-the-art embeddings for words, phrases, and sentences. The resulting embeddings can then be used for NLP tasks such as semantic search, text classification, and clustering, among many others. See the embeddings guide to learn what embeddings are and some key use cases for the embedding service to help you get started.
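
For example, the Python SDK exposes an embed_content method; the following is a minimal sketch, and the model name and task type shown are assumptions rather than recommendations:

result = genai.embed_content(
    model="models/text-embedding-004",
    content="What is the meaning of life?",
    task_type="retrieval_document",
)
# The embedding is returned as a list of floats.
print(result["embedding"][:5])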

Next steps