This tutorial demonstrates how to access the Gemini API directly from your Swift app using the Google AI Swift SDK. You can use this SDK if you don't want to work directly with REST APIs or server-side code (like Python) to access Gemini models in your Swift app.
In this tutorial, you'll learn how to do the following:
- Set up your project, including your API key
- Generate text from text-only input
- Generate text from text-and-image input (multimodal)
- Build multi-turn conversations (chat)
- Use streaming for faster interactions
In addition, this tutorial contains sections about advanced use cases (like counting tokens) as well as options for controlling content generation.
Prerequisites
This tutorial assumes that you're familiar with using Xcode to develop Swift apps.
To complete this tutorial, make sure that your development environment and Swift app meet the following requirements:
- Xcode 15.0 or higher
- Your Swift app must target iOS 15 or higher, or macOS 12 or higher.
Set up your project
Before calling the Gemini API, you need to set up your Xcode project, which includes setting up your API key, adding the SDK package to your Xcode project, and initializing the model.
Set up your API key
To use the Gemini API, you'll need an API key. If you don't already have one, create a key in Google AI Studio.
Secure your API key
It's strongly recommended that you do not check an API key into your version control system. One alternative option is to store it in a `GenerativeAI-Info.plist` file, and then read the API key from the `.plist` file. Make sure to put this `.plist` file in your app's root folder and exclude it from version control.
```swift
import Foundation

enum APIKey {
  // Fetch the API key from `GenerativeAI-Info.plist`
  static var `default`: String {
    guard let filePath = Bundle.main.path(forResource: "GenerativeAI-Info", ofType: "plist")
    else {
      fatalError("Couldn't find file 'GenerativeAI-Info.plist'.")
    }
    let plist = NSDictionary(contentsOfFile: filePath)
    guard let value = plist?.object(forKey: "API_KEY") as? String else {
      fatalError("Couldn't find key 'API_KEY' in 'GenerativeAI-Info.plist'.")
    }
    if value.starts(with: "_") {
      fatalError(
        "Follow the instructions at https://ai.google.dev/tutorials/setup to get an API key."
      )
    }
    return value
  }
}
```
You can also look at the sample app to learn how to store your API key in a `.plist` file.
All of the snippets in this tutorial assume that you're accessing your API key from this on-demand resource `.plist` file.
Add the SDK package to your project
To use the Gemini API in your own Swift app, add the `GoogleGenerativeAI` package to your app:
1. In Xcode, right-click on your project in the project navigator.
2. Select Add Packages from the context menu.
3. In the Add Packages dialog, paste the package URL in the search bar:
   https://github.com/google/generative-ai-swift
4. Click Add Package. Xcode will now add the `GoogleGenerativeAI` package to your project.
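If you manage dependencies through a `Package.swift` manifest instead of the Xcode UI, the same dependency can be declared there. The following is a minimal sketch; the package name is hypothetical and the version requirement is an assumption — pin whichever release of the SDK you actually use:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
  name: "MyGeminiApp",  // hypothetical package name
  platforms: [.iOS(.v15), .macOS(.v12)],
  dependencies: [
    // The Google AI Swift SDK; the version shown is an assumption
    .package(url: "https://github.com/google/generative-ai-swift", from: "0.5.0")
  ],
  targets: [
    .target(
      name: "MyGeminiApp",
      dependencies: [
        .product(name: "GoogleGenerativeAI", package: "generative-ai-swift")
      ]
    )
  ]
)
```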
Initialize the generative model
Before you can make any API calls, you need to initialize the generative model.
Import the `GoogleGenerativeAI` module:

```swift
import GoogleGenerativeAI
```

Initialize the generative model:

```swift
// Access your API key from your on-demand resource .plist file
// (see "Set up your API key" above)
// The Gemini 1.5 models are versatile and work with most use cases
let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: APIKey.default)
```
When specifying a model, note the following:
- Use a model that's specific to your use case (for example, `gemini-1.5-flash` is for multimodal input). Within this guide, the instructions for each implementation list the recommended model for each use case.
Implement common use cases
Now that your project is set up, you can explore using the Gemini API to implement different use cases:
- Generate text from text-only input
- Generate text from text-and-image input (multimodal)
- Build multi-turn conversations (chat)
- Use streaming for faster interactions
Generate text from text-only input
When the prompt input includes only text, use a Gemini 1.5 model or the Gemini 1.0 Pro model with `generateContent` to generate text output:
```swift
import GoogleGenerativeAI

// The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
// Access your API key from your on-demand resource .plist file (see "Set up your API key" above)
let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: APIKey.default)

let prompt = "Write a story about a magic backpack."
let response = try await model.generateContent(prompt)
if let text = response.text {
  print(text)
}
```
Generate text from text-and-image input (multimodal)
Gemini provides various models that can handle multimodal input (Gemini 1.5 models) so that you can input both text and images. Make sure to review the image requirements for prompts.
When the prompt input includes both text and images, use a Gemini 1.5 model with the `generateContent` method to generate text output:
```swift
import GoogleGenerativeAI

// The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
// Access your API key from your on-demand resource .plist file (see "Set up your API key" above)
let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: APIKey.default)

let image1 = UIImage(...)
let image2 = UIImage(...)
let prompt = "What's different between these pictures?"

let response = try await model.generateContent(prompt, image1, image2)
if let text = response.text {
  print(text)
}
```
Build multi-turn conversations (chat)
Using Gemini, you can build freeform conversations across multiple turns. The SDK simplifies the process by managing the state of the conversation, so unlike with `generateContent`, you don't have to store the conversation history yourself.
To build a multi-turn conversation (like chat), use a Gemini 1.5 model or the Gemini 1.0 Pro model, and initialize the chat by calling `startChat()`. Then use `sendMessage()` to send a new user message, which will also append the message and the response to the chat history.
There are two possible options for the `role` associated with the content in a conversation:
- `user`: the role which provides the prompts. This value is the default for `sendMessage` calls.
- `model`: the role which provides the responses. This role can be used when calling `startChat()` with existing `history`.
```swift
import GoogleGenerativeAI

let config = GenerationConfig(
  maxOutputTokens: 100
)

// The Gemini 1.5 models are versatile and work with multi-turn conversations (like chat)
// Access your API key from your on-demand resource .plist file (see "Set up your API key" above)
let model = GenerativeModel(
  name: "gemini-1.5-flash",
  apiKey: APIKey.default,
  generationConfig: config
)

let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat
let chat = model.startChat(history: history)
let response = try await chat.sendMessage("How many paws are in my house?")
if let text = response.text {
  print(text)
}
```
Use streaming for faster interactions
By default, the model returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result, and instead use streaming to handle partial results.
The following example shows how to implement streaming with the `generateContentStream` method to generate text from a text-and-image input prompt:
```swift
import GoogleGenerativeAI

// The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
// Access your API key from your on-demand resource .plist file (see "Set up your API key" above)
let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: APIKey.default)

let image1 = UIImage(named: "")!
let image2 = UIImage(named: "")!
let prompt = "What's different between these pictures?"

var fullResponse = ""
let contentStream = model.generateContentStream(prompt, image1, image2)
for try await chunk in contentStream {
  if let text = chunk.text {
    print(text)
    fullResponse += text
  }
}
print(fullResponse)
```
You can use a similar approach for text-only input and chat use cases.
```swift
// Use streaming with text-only input
let contentStream = model.generateContentStream(prompt)

// Use streaming with multi-turn conversations (like chat)
let responseStream = chat.sendMessageStream(message)
```
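Putting the chat snippet together, a streamed multi-turn exchange looks much like the image example above: iterate over the chunks returned by `sendMessageStream` and accumulate the text. A minimal sketch:

```swift
import GoogleGenerativeAI

// Access your API key from your on-demand resource .plist file (see "Set up your API key" above)
let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: APIKey.default)
let chat = model.startChat()

var fullResponse = ""
let responseStream = chat.sendMessageStream("Tell me a short story about a magic backpack.")
for try await chunk in responseStream {
  if let text = chunk.text {
    print(text)           // show each partial result as it arrives
    fullResponse += text  // accumulate the complete reply
  }
}
print(fullResponse)
```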
Implement advanced use cases
The common use cases described in the previous section of this tutorial help you become comfortable with using the Gemini API. This section describes some use cases that might be considered more advanced.
Function calling
Function calling makes it easier for you to get structured data outputs from generative models. You can then use these outputs to call other APIs and return the relevant response data to the model. In other words, function calling helps you connect generative models to external systems so that the generated content includes the most up-to-date and accurate information. Learn more in the function calling tutorial.
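In the Swift SDK, function declarations are passed to the model as tools. The following is a hedged sketch of the flow: the `FunctionDeclaration` and `Schema` initializer shapes shown here are assumptions based on the SDK's tool-calling surface, so check the function calling tutorial for the authoritative API:

```swift
import GoogleGenerativeAI

// Declare a function the model may ask your app to call.
// Note: the exact initializer signatures below are assumptions;
// consult the function calling tutorial for the SDK's real API.
let getExchangeRate = FunctionDeclaration(
  name: "getExchangeRate",
  description: "Get the exchange rate for currencies between countries",
  parameters: [
    "currencyFrom": Schema(type: .string, description: "The currency to convert from"),
    "currencyTo": Schema(type: .string, description: "The currency to convert to"),
  ],
  requiredParameters: ["currencyFrom", "currencyTo"]
)

// Pass the declaration to the model as a tool
let model = GenerativeModel(
  name: "gemini-1.5-flash",
  apiKey: APIKey.default,
  tools: [Tool(functionDeclarations: [getExchangeRate])]
)

// If the model decides a function should be called, the response
// contains function calls instead of (or alongside) text
let response = try await model.generateContent("How much is 50 US dollars in Swedish krona?")
for functionCall in response.functionCalls {
  print(functionCall.name, functionCall.args)
}
```

Your app would then execute the requested function itself and send its result back to the model in a follow-up turn.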
Count tokens
When using long prompts, it might be useful to count tokens before sending any content to the model. The following examples show how to use `countTokens()` for various use cases:
```swift
// For text-only input
let response = try await model.countTokens("Why is the sky blue?")
print(response.totalTokens)
```

```swift
// For text-and-image input (multimodal)
let response = try await model.countTokens(prompt, image1, image2)
print(response.totalTokens)
```

```swift
// For multi-turn conversations (like chat)
let chat = model.startChat()
let history = chat.history
let message = try ModelContent(role: "user", "Why is the sky blue?")
let contents = history + [message]
let response = try await model.countTokens(contents)
print(response.totalTokens)
```
Options to control content generation
You can control content generation by configuring model parameters and by using safety settings.
Configure model parameters
Every prompt you send to the model includes parameter values that control how the model generates a response. The model can generate different results for different parameter values. Learn more about Model parameters. The configuration is maintained for the lifetime of your model instance.
```swift
let config = GenerationConfig(
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
  maxOutputTokens: 200,
  stopSequences: ["red"]
)

// Access your API key from your on-demand resource .plist file (see "Set up your API key" above)
let model = GenerativeModel(
  // The Gemini 1.5 models are versatile and work with most use cases
  name: "gemini-1.5-flash",
  apiKey: APIKey.default,
  generationConfig: config
)
```
Use safety settings
You can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, safety settings block content with medium and/or high probability of being unsafe content across all dimensions. Learn more about Safety settings.
Here's how to set one safety setting:
```swift
// Access your API key from your on-demand resource .plist file (see "Set up your API key" above)
let model = GenerativeModel(
  // The Gemini 1.5 models are versatile and work with most use cases
  name: "gemini-1.5-flash",
  apiKey: APIKey.default,
  safetySettings: [
    SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
  ]
)
```
You can also set more than one safety setting:
```swift
let harassmentSafety = SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
let hateSpeechSafety = SafetySetting(harmCategory: .hateSpeech, threshold: .blockMediumAndAbove)

// Access your API key from your on-demand resource .plist file (see "Set up your API key" above)
let model = GenerativeModel(
  // The Gemini 1.5 models are versatile and work with most use cases
  name: "gemini-1.5-flash",
  apiKey: APIKey.default,
  safetySettings: [harassmentSafety, hateSpeechSafety]
)
```
What's next
Prompt design is the process of creating prompts that elicit the desired response from language models. Writing well-structured prompts is an essential part of ensuring accurate, high-quality responses from a language model. Learn about best practices for prompt writing.
Gemini offers several model variations to meet the needs of different use cases, such as input types and complexity, implementations for chat or other dialog language tasks, and size constraints. Learn about the available Gemini models .