This tutorial will help you get started with the Gemini API tuning service using either curl commands or the Python requests library. The examples show how to tune the text model behind the Gemini API text generation service.
Set up authentication
The Gemini API lets you tune models on your own data. Since it's your data and your tuned models, tuning uses stricter access controls than API keys can provide.
Before you can run this tutorial, you'll need to set up OAuth for your project, and then download the "OAuth Client ID" as "client_secret.json".
This gcloud command turns the client_secret.json file into credentials that can be used to authenticate with the service.
gcloud auth application-default login \
--client-id-file client_secret.json \
--scopes='https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/generative-language.tuning'
Set variables
CURL
Set variables for recurring values to use for the rest of the REST API calls. The code uses the Python os library to set environment variables, which are accessible in all the code cells.
This is specific to the Colab notebook environment. The code in the next code cell is equivalent to running the following commands in a bash terminal.
export access_token=$(gcloud auth application-default print-access-token)
export project_id=my-project-id
export base_url=https://generativelanguage.googleapis.com
import os
access_token = !gcloud auth application-default print-access-token
access_token = '\n'.join(access_token)
os.environ['access_token'] = access_token
os.environ['project_id'] = "[Enter your project-id here]"
os.environ['base_url'] = "https://generativelanguage.googleapis.com"
Python
access_token = !gcloud auth application-default print-access-token
access_token = '\n'.join(access_token)
project = '[Enter your project-id here]'
base_url = "https://generativelanguage.googleapis.com"
Import the requests and json libraries.
import requests
import json
List tuned models
Verify your authentication setup by listing the available tuned models.
CURL
curl -X GET ${base_url}/v1beta/tunedModels \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${access_token}" \
-H "x-goog-user-project: ${project_id}"
Python
headers = {
    'Authorization': 'Bearer ' + access_token,
    'Content-Type': 'application/json',
    'x-goog-user-project': project
}
result = requests.get(
    url=f'{base_url}/v1beta/tunedModels',
    headers=headers,
)
result.json()
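If you've created many tuned models, note that the list endpoint is paginated. Here is a minimal sketch of collecting every page; it assumes the standard pageSize and pageToken query parameters and a nextPageToken field in the response:

# Collect tuned models across pages (assumes standard List-method pagination).
models = []
params = {'pageSize': 10}
while True:
    page = requests.get(
        url=f'{base_url}/v1beta/tunedModels',
        headers=headers,
        params=params,
    ).json()
    models.extend(page.get('tunedModels', []))
    if 'nextPageToken' not in page:
        break
    params['pageToken'] = page['nextPageToken']
print(f'Found {len(models)} tuned models')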
Create a tuned model
To create a tuned model, you need to pass your dataset to the model in the training_data field.
For this example, you will tune a model to generate the next number in the sequence. For example, if the input is 1, the model should output 2. If the input is one hundred, the output should be one hundred one.
CURL
curl -X POST $base_url/v1beta/tunedModels \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer ${access_token}" \
    -H "x-goog-user-project: ${project_id}" \
    -d '
{
  "display_name": "number generator model",
  "base_model": "models/gemini-1.0-pro-001",
  "tuning_task": {
    "hyperparameters": {
      "batch_size": 2,
      "learning_rate": 0.001,
      "epoch_count": 5
    },
    "training_data": {
      "examples": {
        "examples": [
          {"text_input": "1", "output": "2"},
          {"text_input": "3", "output": "4"},
          {"text_input": "-3", "output": "-2"},
          {"text_input": "twenty two", "output": "twenty three"},
          {"text_input": "two hundred", "output": "two hundred one"},
          {"text_input": "ninety nine", "output": "one hundred"},
          {"text_input": "8", "output": "9"},
          {"text_input": "-98", "output": "-97"},
          {"text_input": "1,000", "output": "1,001"},
          {"text_input": "10,100,000", "output": "10,100,001"},
          {"text_input": "thirteen", "output": "fourteen"},
          {"text_input": "eighty", "output": "eighty one"},
          {"text_input": "one", "output": "two"},
          {"text_input": "three", "output": "four"},
          {"text_input": "seven", "output": "eight"}
        ]
      }
    }
  }
}' | tee tunemodel.json
{
  "name": "tunedModels/number-generator-model-dzlmi0gswwqb/operations/bvl8dymw0fhw",
  "metadata": {
    "@type": "type.googleapis.com/google.ai.generativelanguage.v1beta.CreateTunedModelMetadata",
    "totalSteps": 38,
    "tunedModel": "tunedModels/number-generator-model-dzlmi0gswwqb"
  }
}
Python
operation = requests.post(
    url=f'{base_url}/v1beta/tunedModels',
    headers=headers,
    json={
        "display_name": "number generator",
        "base_model": "models/gemini-1.0-pro-001",
        "tuning_task": {
            "hyperparameters": {
                "batch_size": 4,
                "learning_rate": 0.001,
                "epoch_count": 5,
            },
            "training_data": {
                "examples": {
                    "examples": [
                        {'text_input': '1', 'output': '2'},
                        {'text_input': '3', 'output': '4'},
                        {'text_input': '-3', 'output': '-2'},
                        {'text_input': 'twenty two', 'output': 'twenty three'},
                        {'text_input': 'two hundred', 'output': 'two hundred one'},
                        {'text_input': 'ninety nine', 'output': 'one hundred'},
                        {'text_input': '8', 'output': '9'},
                        {'text_input': '-98', 'output': '-97'},
                        {'text_input': '1,000', 'output': '1,001'},
                        {'text_input': '10,100,000', 'output': '10,100,001'},
                        {'text_input': 'thirteen', 'output': 'fourteen'},
                        {'text_input': 'eighty', 'output': 'eighty one'},
                        {'text_input': 'one', 'output': 'two'},
                        {'text_input': 'three', 'output': 'four'},
                        {'text_input': 'seven', 'output': 'eight'},
                    ]
                }
            }
        }
    }
)
operation
<Response [200]>
operation.json()
{'name': 'tunedModels/number-generator-wl1qr34x2py/operations/41vni3zk0a47',
 'metadata': {'@type': 'type.googleapis.com/google.ai.generativelanguage.v1beta.CreateTunedModelMetadata',
              'totalSteps': 19,
              'tunedModel': 'tunedModels/number-generator-wl1qr34x2py'}}
Set a variable with the name of your tuned model to use for the rest of the calls.
name=operation.json()["metadata"]["tunedModel"]
name
'tunedModels/number-generator-wl1qr34x2py'
The optimal values for epoch count, batch size, and learning rate depend on your dataset and the other constraints of your use case. To learn more about these values, see Advanced tuning settings and Hyperparameters.
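As a rough sanity check on these values, the totalSteps reported in the create responses above is consistent with epochs × examples ÷ batch size, rounded up:

import math

# 15 training examples, as in the dataset above.
num_examples = 15

# ceil(5 * 15 / 2) = 38 steps, matching the curl response above;
# ceil(5 * 15 / 4) = 19 steps, matching the Python response above.
print(math.ceil(5 * num_examples / 2))  # 38
print(math.ceil(5 * num_examples / 4))  # 19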
Get tuned model state
The state of the model is set to CREATING during training and changes to ACTIVE once training is complete.
CURL
Below is a bit of Python code to parse the generated model name out of the response JSON. If you're running this in a terminal, you can try using a command-line JSON parser such as jq to parse the response instead.
import json
first_page = json.load(open('tunemodel.json'))
os.environ['modelname'] = first_page['metadata']['tunedModel']
print(os.environ['modelname'])
tunedModels/number-generator-model-dzlmi0gswwqb
Do another GET request with the model name to get the model metadata, which includes the state field.
curl -X GET ${base_url}/v1beta/${modelname} \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer ${access_token}" \
-H "x-goog-user-project: ${project_id}" | grep state
"state": "ACTIVE",
Python
tuned_model = requests.get(
    url=f'{base_url}/v1beta/{name}',
    headers=headers,
)
tuned_model.json()
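The JSON includes a state field alongside the rest of the model metadata, so you can check it directly:

state = tuned_model.json()['state']
print(state)  # 'CREATING' while training, 'ACTIVE' when done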
The code below checks the state field every 5 seconds until it is no longer in the CREATING state.
import time
import pprint

op_json = operation.json()
response = op_json.get('response')
error = op_json.get('error')

while response is None and error is None:
    time.sleep(5)
    operation = requests.get(
        url=f'{base_url}/v1/{op_json["name"]}',
        headers=headers,
    )
    op_json = operation.json()
    response = op_json.get('response')
    error = op_json.get('error')
    percent = op_json['metadata'].get('completedPercent')
    if percent is not None:
        print(f"{percent:.2f}% - {op_json['metadata']['snapshots'][-1]}")
        print()

if error is not None:
    raise Exception(error)
100.00% - {'step': 19, 'epoch': 5, 'meanLoss': 1.402067, 'computeTime': '2024-03-14T15:11:23.766989274Z'}
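Each snapshot printed above also carries the mean loss at that step. Once the loop finishes, you can pull the loss curve out of the final operation metadata; this sketch assumes the metadata retains the full snapshots list, as suggested by the output above:

# Inspect how the training loss progressed (assumes 'snapshots' holds
# one entry per recorded step).
snapshots = op_json['metadata'].get('snapshots', [])
losses = [s['meanLoss'] for s in snapshots]
if losses:
    print(f'first loss: {losses[0]:.3f}, final loss: {losses[-1]:.3f}')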
Run inference
Once your tuning job has finished, you can use the tuned model to generate text with the text service.
CURL
Try inputting a Roman numeral, say 63 (LXIII):
curl -X POST $base_url/v1beta/$modelname:generateContent \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer ${access_token}" \
-H "x-goog-user-project: ${project_id}" \
-d '{
"contents": [{
"parts": [{
"text": "LXIII"
}]
}]
}' 2> /dev/null
{
  "candidates": [
    {
      "content": {
        "parts": [
          { "text": "LXIV" }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "index": 0,
      "safetyRatings": [
        { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_HATE_SPEECH", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "probability": "NEGLIGIBLE" }
      ]
    }
  ],
  "promptFeedback": {
    "safetyRatings": [
      { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_HATE_SPEECH", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "probability": "NEGLIGIBLE" }
    ]
  }
}
The output from the model may or may not be correct. If the tuned model isn't performing up to your required standards, you can try adding more high-quality examples, tweaking the hyperparameters, or adding a preamble to your examples. You can even create another tuned model based on the first one you created.
See the tuning guide for more guidance on improving performance.
Python
Try inputting a Japanese numeral, say 6 (六):
m = requests.post(
    url=f'{base_url}/v1beta/{name}:generateContent',
    headers=headers,
    json={
        "contents": [{
            "parts": [{
                "text": "六"
            }]
        }]
    })
import pprint
pprint.pprint(m.json())
{'candidates': [{'content': {'parts': [{'text': '七'}], 'role': 'model'},
                 'finishReason': 'STOP',
                 'index': 0,
                 'safetyRatings': [{'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability': 'NEGLIGIBLE'},
                                   {'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability': 'NEGLIGIBLE'},
                                   {'category': 'HARM_CATEGORY_HARASSMENT', 'probability': 'LOW'},
                                   {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability': 'NEGLIGIBLE'}]}],
 'promptFeedback': {'safetyRatings': [{'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability': 'NEGLIGIBLE'},
                                      {'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability': 'NEGLIGIBLE'},
                                      {'category': 'HARM_CATEGORY_HARASSMENT', 'probability': 'NEGLIGIBLE'},
                                      {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability': 'NEGLIGIBLE'}]}}
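If you only want the generated text, it's nested a few levels deep in the response:

# Index into the first candidate's first part, per the structure above.
text = m.json()['candidates'][0]['content']['parts'][0]['text']
print(text)  # 七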
The output from the model may or may not be correct. If the tuned model isn't performing up to your required standards, you can try adding more high-quality examples, tweaking the hyperparameters, or adding a preamble to your examples.
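A preamble just means prefixing each training input with a short instruction. Purely as an illustration (the instruction text here is made up), the reshaped examples might look like:

# Hypothetical preamble, prepended to every training input before
# building the tuning request.
preamble = 'Given a number, reply with the number that comes next: '
examples = [
    {'text_input': '1', 'output': '2'},
    {'text_input': 'twenty two', 'output': 'twenty three'},
]
examples = [
    {'text_input': preamble + ex['text_input'], 'output': ex['output']}
    for ex in examples
]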
Conclusion
Even though the training data did not contain any reference to Roman or Japanese numerals, the model was able to generalize well after fine-tuning. In this way, you can fine-tune models to cater to your use cases.
Next steps
To learn how to use the tuning service with the Python SDK for the Gemini API, visit the tuning quickstart with Python. To learn how to use other services in the Gemini API, visit the REST getting started tutorial.