Gemini API overview

With the Gemini API, you can access Google's latest generative models. Once you're familiar with the general capabilities available through the API, try a tutorial in your language of choice to start developing.

Models

Gemini is a family of multimodal generative AI models developed by Google. Gemini models can accept text and images in prompts, depending on the model variant you choose, and output text responses.

For more detailed model information, see the Gemini models page. You can also use the list_models method to list all available models, and the get_model method to get the metadata for a specific model.

Prompt data and design

Specific Gemini models accept both text data and media files as input. This capability opens up many additional possibilities for generating content, analyzing data, and solving problems. There are some limits and requirements to consider, including the general input token limit of the model you're using. For the token limits of specific models, see Gemini models.

Prompts for the Gemini API cannot exceed 20 MB in size. The Gemini API provides a File API for temporarily storing media files for use in prompting, which lets you provide prompt data beyond the 20 MB limit. For details on using the Files API and the file formats supported for prompting, see Prompting with media files.
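As a rough sketch of how an application might decide between inlining media with the prompt and uploading it through the File API first, consider the size check below. The 20 MB figure is the request limit described above; the helper function is illustrative and not part of any SDK.

```python
import os

# Total request size limit for inline prompt data, per the docs above.
INLINE_LIMIT_BYTES = 20 * 1024 * 1024  # 20 MB

def should_use_file_api(media_paths):
    """Return True if the combined size of the media files exceeds the
    inline limit, meaning they should be uploaded via the File API
    rather than sent inline with the prompt."""
    total = sum(os.path.getsize(p) for p in media_paths)
    return total > INLINE_LIMIT_BYTES
```

In practice you would also leave headroom for the text parts and JSON overhead of the request itself, since the limit applies to the whole request, not just the media bytes.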

Prompt design and text input

Creating effective prompts, also known as prompt engineering, is a combination of art and science. See the introduction to prompting for guidance on how to approach prompting, and the prompt basics guide to learn about different prompting approaches.

Generate content

The Gemini API lets you prompt with both text and image data, depending on the model variant you use. For example, you can generate text from either text-only prompts or multimodal prompts using a Gemini 1.5 model. This section gives basic code examples for each approach. See the generateContent API reference for a more detailed example that covers all of the parameters.

Text and image input

You can send a text prompt with an image to a Gemini 1.5 model to perform a vision-related task, such as captioning an image or identifying what's in an image.

The following code examples demonstrate a basic implementation of a text-and-image prompt for each supported language:

Python

import pathlib

import google.generativeai as genai

model = genai.GenerativeModel('gemini-1.5-flash')

cookie_picture = {
    'mime_type': 'image/png',
    'data': pathlib.Path('cookie.png').read_bytes()
}
prompt = "Do these look store-bought or homemade?"

response = model.generate_content([prompt, cookie_picture])
print(response.text)

See the Python tutorial for the complete code snippet.

Go

vmodel := client.GenerativeModel("gemini-1.5-flash")

data, err := os.ReadFile(filepath.Join("path-to-image", imageFile))
if err != nil {
  log.Fatal(err)
}
resp, err := vmodel.GenerateContent(ctx, genai.Text("Do these look store-bought or homemade?"), genai.ImageData("jpeg", data))
if err != nil {
  log.Fatal(err)
}

See the Go tutorial for a complete example.

Node.js

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const prompt = "Do these look store-bought or homemade?";
const image = {
  inlineData: {
    data: Buffer.from(fs.readFileSync("cookie.png")).toString("base64"),
    mimeType: "image/png",
  },
};

const result = await model.generateContent([prompt, image]);
console.log(result.response.text());

See the Node.js tutorial for a complete example.

Web

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const prompt = "Do these look store-bought or homemade?";
const image = {
  inlineData: {
    data: base64EncodedImage /* see JavaScript quickstart for details */,
    mimeType: "image/png",
  },
};

const result = await model.generateContent([prompt, image]);
console.log(result.response.text());

See the web tutorial for a complete example.

Dart (Flutter)

final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
final prompt = 'Do these look store-bought or homemade?';
final imageBytes = await File('cookie.png').readAsBytes();
final content = [
  Content.multi([
    TextPart(prompt),
    DataPart('image/png', imageBytes),
  ])
];

final response = await model.generateContent(content);
print(response.text);

See the Dart (Flutter) tutorial for a complete example.

Swift

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "API_KEY")
let cookieImage = UIImage(...)
let prompt = "Do these look store-bought or homemade?"

let response = try await model.generateContent(prompt, cookieImage)

See the Swift tutorial for a complete example.

Android

val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)

val cookieImage: Bitmap = // ...
val inputContent = content() {
  image(cookieImage)
  text("Do these look store-bought or homemade?")
}

val response = generativeModel.generateContent(inputContent)
print(response.text)

See the Android tutorial for a complete example.

cURL

curl https://generativelanguage.googleapis.com/v1/models/gemini-1.5-flash:generateContent?key=${API_KEY} \
    -H 'Content-Type: application/json' \
    -X POST \
    -d @<(echo '{
          "contents":[
            { "parts":[
                {"text": "Do these look store-bought or homemade?"},
                { "inlineData": {
                    "mimeType": "image/png",
                    "data": "'$(base64 -w0 cookie.png)'"
                  }
                }
              ]
            }
          ]
         }')

See the REST API tutorial for more details.

Text-only input

The Gemini API can also handle text-only input. This capability lets you perform natural language processing (NLP) tasks such as text completion and summarization.

The following code examples demonstrate a basic implementation of a text-only prompt for each supported language:

Python

model = genai.GenerativeModel('gemini-1.5-flash')

prompt = "Write a story about a magic backpack."

response = model.generate_content(prompt)

See the Python tutorial for a complete example.

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, option.WithAPIKey(os.Getenv("API_KEY")))
if err != nil {
  log.Fatal(err)
}
defer client.Close()

model := client.GenerativeModel("gemini-1.5-flash")
resp, err := model.GenerateContent(ctx, genai.Text("Write a story about a magic backpack."))
if err != nil {
  log.Fatal(err)
}

See the Go tutorial for a complete example.

Node.js

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const prompt = "Write a story about a magic backpack.";

const result = await model.generateContent(prompt);
console.log(result.response.text());

See the Node.js tutorial for a complete example.

Web

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const prompt = "Write a story about a magic backpack.";

const result = await model.generateContent(prompt);
console.log(result.response.text());

See the web tutorial for a complete example.

Dart (Flutter)

final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
final prompt = 'Write a story about a magic backpack.';
final content = [Content.text(prompt)];
final response = await model.generateContent(content);
print(response.text);

See the Dart (Flutter) tutorial for a complete example.

Swift

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "API_KEY")
let prompt = "Write a story about a magic backpack."

let response = try await model.generateContent(prompt)

See the Swift tutorial for a complete example.

Android

val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)

val prompt = "Write a story about a magic backpack."
val response = generativeModel.generateContent(prompt)
print(response.text)

See the Android tutorial for a complete example.

cURL

curl https://generativelanguage.googleapis.com/v1/models/gemini-1.5-flash:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{ "contents":[
      { "parts":[{"text": "Write a story about a magic backpack"}]}
    ]
}'

See the REST API tutorial for more details.

Multi-turn conversations (chat)

You can use the Gemini API to build interactive chat experiences for your users. The API's chat feature lets you collect multiple rounds of questions and responses, allowing users to work incrementally toward answers or get help with multi-part problems. This capability is ideal for applications that require ongoing communication, such as chatbots, interactive tutors, or customer support assistants.
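Conceptually, every SDK's chat object maintains a history of alternating user and model turns, which maps directly onto the `contents` structure used by the REST API. The stdlib-only sketch below shows that bookkeeping; the `append_turn` helper is illustrative, not an SDK API.

```python
def append_turn(history, role, text):
    """Append one turn to a chat history in the REST `contents` shape.

    Roles must alternate between "user" and "model"; consecutive turns
    from the same role are rejected.
    """
    if history and history[-1]["role"] == role:
        raise ValueError("consecutive turns must alternate roles")
    history.append({"role": role, "parts": [{"text": text}]})
    return history

history = []
append_turn(history, "user", "Pretend you're a snowman and stay in character.")
append_turn(history, "model", "Hello! It's cold! Isn't that great?")
```

When you call a send-message method in any of the SDKs below, the new user turn plus the accumulated history is what actually gets sent to the model.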

The following code examples demonstrate a basic implementation of a chat interaction for each supported language:

Python

model = genai.GenerativeModel('gemini-1.5-flash')
chat = model.start_chat(history=[])

response = chat.send_message(
    "Pretend you're a snowman and stay in character for each response.")
print(response.text)

response = chat.send_message(
    "What's your favorite season of the year?")
print(response.text)

See the chat demo in the Python tutorial for a complete example.

Go

model := client.GenerativeModel("gemini-1.5-flash")
cs := model.StartChat()
cs.History = []*genai.Content{
  &genai.Content{
    Parts: []genai.Part{
      genai.Text("Pretend you're a snowman and stay in character for each response."),
    },
    Role: "user",
  },
  &genai.Content{
    Parts: []genai.Part{
      genai.Text("Hello! It's cold! Isn't that great?"),
    },
    Role: "model",
  },
}

resp, err := cs.SendMessage(ctx, genai.Text("What's your favorite season of the year?"))
if err != nil {
  log.Fatal(err)
}

See the chat demo in the Go tutorial for a complete example.

Node.js

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash"});

const chat = model.startChat({
  history: [
    {
      role: "user",
      parts: "Pretend you're a snowman and stay in character for each response.",
    },
    {
      role: "model",
      parts: "Hello! It's cold! Isn't that great?",
    },
  ],
  generationConfig: {
    maxOutputTokens: 100,
  },
});

const msg = "What's your favorite season of the year?";
const result = await chat.sendMessage(msg);
console.log(result.response.text());

See the chat demo in the Node.js tutorial for a complete example.

Web

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash"});

const chat = model.startChat({
  history: [
    {
      role: "user",
      parts: "Pretend you're a snowman and stay in character for each response.",
    },
    {
      role: "model",
      parts: "Hello! It's so cold! Isn't that great?",
    },
  ],
  generationConfig: {
    maxOutputTokens: 100,
  },
});

const msg = "What's your favorite season of the year?";
const result = await chat.sendMessage(msg);
console.log(result.response.text());

See the chat demo in the web tutorial for a complete example.

Dart (Flutter)

final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
final chat = model.startChat(history: [
  Content.text(
      "Pretend you're a snowman and stay in character for each response."),
  Content.model([TextPart("Hello! It's cold! Isn't that great?")]),
]);
final content = Content.text("What's your favorite season of the year?");
final response = await chat.sendMessage(content);
print(response.text);

See the chat demo in the Dart (Flutter) tutorial for a complete example.

Swift

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "API_KEY")
let chat = model.startChat()

var message = "Pretend you're a snowman and stay in character for each response."
var response = try await chat.sendMessage(message)

message = "What\'s your favorite season of the year?"
response = try await chat.sendMessage(message)

See the chat demo in the Swift tutorial for a complete example.

Android

val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)

val chat = generativeModel.startChat()
val response = chat.sendMessage("Pretend you're a snowman and stay in character for each response.")
print(response.text)

See the Android tutorial for a complete example.

cURL

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role": "user",
         "parts":[{
           "text": "Pretend you'\''re a snowman and stay in character for each response."}]},
        {"role": "model",
         "parts":[{
           "text": "Hello! It'\''s so cold! Isn'\''t that great?"}]},
        {"role": "user",
         "parts":[{
           "text": "What'\''s your favorite season of the year?"}]}
      ]
    }' 2> /dev/null | grep "text"
# response example:
"text": "Winter, of course!"

See the REST API tutorial for more details.

Streaming responses

The Gemini API provides an additional way of receiving responses from generative AI models: as a data stream. A streamed response sends incremental pieces of data back to your application as the model generates them. This feature lets you respond quickly to a user request, show progress, and create a more interactive experience.

Streamed responses are available for freeform prompting and chats with Gemini models. The following code examples show how to request a streamed response for a prompt in each supported language:

Python

model = genai.GenerativeModel('gemini-1.5-flash')
prompt = "Write a story about a magic backpack."

response = model.generate_content(prompt, stream=True)
for chunk in response:
    print(chunk.text)

See the Python tutorial for the complete code snippet.

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, option.WithAPIKey(os.Getenv("API_KEY")))
if err != nil {
  log.Fatal(err)
}
defer client.Close()

model := client.GenerativeModel("gemini-1.5-flash")

iter := model.GenerateContentStream(ctx, genai.Text("Write a story about a magic backpack."))
for {
  resp, err := iter.Next()
  if err == iterator.Done {
    break
  }
  if err != nil {
    log.Fatal(err)
  }

  // print resp
}

See the Go tutorial for a complete example.

Node.js

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const prompt = "Write a story about a magic backpack.";

const result = await model.generateContentStream([prompt]);
// print text as it comes in
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  console.log(chunkText);
}

See the Node.js tutorial for a complete example.

Web

const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
const prompt = "Write a story about a magic backpack.";

const result = await model.generateContentStream([prompt]);
// print text as it comes in
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  console.log(chunkText);
}

See the web tutorial for a complete example.

Dart (Flutter)

final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
final prompt = 'Write a story about a magic backpack.';
final content = [Content.text(prompt)];
final response = model.generateContentStream(content);
await for (final chunk in response) {
  print(chunk.text);
}

See the Dart (Flutter) tutorial for a complete example.

Swift

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "API_KEY")
let prompt = "Write a story about a magic backpack."

let stream = model.generateContentStream(prompt)
for try await chunk in stream {
  print(chunk.text ?? "No content")
}

See the Swift tutorial for a complete example.

Android

val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)

val inputContent = content {
  text("Write a story about a magic backpack.")
}

var fullResponse = ""
generativeModel.generateContentStream(inputContent).collect { chunk ->
  print(chunk.text)
  fullResponse += chunk.text
}

See the Android tutorial for a complete example.

cURL

curl https://generativelanguage.googleapis.com/v1/models/gemini-1.5-flash:streamGenerateContent?key=${API_KEY} \
    -H 'Content-Type: application/json' \
    --no-buffer \
    -d '{ "contents":[
            {"role": "user",
              "parts":[{"text": "Write a story about a magic backpack."}]
            }
          ]
        }' > response.json

See the REST API tutorial for more details.

JSON-formatted responses

Depending on your application, you may want the response to a prompt returned in a structured data format, particularly if you're using the responses to populate programming interfaces. The Gemini API provides a configuration parameter for requesting responses in JSON format.

To use this output feature, set the response_mime_type configuration option to application/json and include a JSON format specification in the body of your request. The following code example shows how to request a JSON response for a prompt:

cURL

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{ "contents":[{
            "parts":[{"text": "List 5 popular cookie recipes using this JSON schema: { \"type\": \"object\", \"properties\": { \"recipe_name\": { \"type\": \"string\" } } }"}] }],
          "generationConfig": {
            "response_mime_type": "application/json"
          } }'
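The same request body can be assembled with any JSON library before sending it to the endpoint. The sketch below mirrors the cURL payload above using only the Python standard library; it constructs the body but doesn't send the request.

```python
import json

# The desired output shape, embedded in the prompt text as the docs describe.
schema = {
    "type": "object",
    "properties": {"recipe_name": {"type": "string"}},
}

body = {
    "contents": [{
        "parts": [{
            "text": "List 5 popular cookie recipes using this JSON schema: "
                    + json.dumps(schema)
        }]
    }],
    "generationConfig": {
        # Ask the model to return JSON instead of free-form text.
        "response_mime_type": "application/json",
    },
}

payload = json.dumps(body)
```

Building the body programmatically avoids the quoting and escaping pitfalls of embedding a JSON schema inside a shell-quoted string.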

Embeddings

The embedding service in the Gemini API generates state-of-the-art embeddings for words, phrases, and sentences. The resulting embeddings can then be used for NLP tasks such as semantic search, text classification, and clustering, among many others. See the embeddings guide to learn what embeddings are and some key use cases for the embedding service, to help you get started.
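For example, a common downstream use of embeddings is ranking documents by cosine similarity to a query embedding. The stdlib-only sketch below shows that ranking step; the vectors here are made-up toy values, whereas real embeddings come from the embedding service and have many more dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" standing in for real service output.
query = [0.1, 0.9, 0.2]
docs = {
    "doc_a": [0.1, 0.8, 0.3],
    "doc_b": [0.9, 0.1, 0.0],
}

# The document whose embedding points in the most similar direction wins.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

Because cosine similarity compares direction rather than magnitude, it works well for embeddings regardless of how individual vectors are scaled.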

Next steps