Fine-tune Gemma using Hugging Face Transformers and QLoRA

This guide walks you through how to fine-tune Gemma on a custom text-to-SQL dataset using Hugging Face Transformers and TRL. You will learn:

  • What Quantized Low-Rank Adaptation (QLoRA) is
  • How to set up a development environment
  • How to create and prepare the fine-tuning dataset
  • How to fine-tune Gemma using TRL and the SFTTrainer
  • How to test model inference and generate SQL queries

What is Quantized Low-Rank Adaptation (QLoRA)

This guide demonstrates the use of Quantized Low-Rank Adaptation (QLoRA). QLoRA has become a popular method for fine-tuning LLMs efficiently because it reduces computational resource requirements while maintaining high performance. In QLoRA, the pretrained model is quantized to 4 bits and its weights are frozen. Trainable adapter layers (LoRA) are then attached, and only those adapter layers are trained. Afterwards, the adapter weights can either be merged with the base model or kept as a separate adapter.
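To make the recipe concrete before diving in, here is a minimal sketch of the QLoRA idea using the transformers and peft libraries; the model id google/gemma-2-2b and the hyperparameter values are illustrative placeholders, and the full setup used in this guide follows below:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 1. Load the base model quantized to 4-bit (NF4); these weights stay frozen
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b",  # illustrative placeholder model id
    quantization_config=bnb_config,
)

# 2. Attach small trainable LoRA adapter layers; only these get gradient updates
lora_config = LoraConfig(r=16, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically only a small fraction of all parameters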

Set up the development environment

The first step is to install the Hugging Face libraries, including TRL and Datasets, to fine-tune the open model, including with different RLHF and alignment techniques.

# Install Pytorch & other libraries
%pip install torch tensorboard

# Install Transformers
%pip install transformers

# Install Hugging Face libraries
%pip install datasets accelerate evaluate bitsandbytes trl peft protobuf sentencepiece

# COMMENT IN: if you are running on a GPU that supports BF16 data type and flash attn, such as NVIDIA L4 or NVIDIA A100
#%pip install flash-attn

Note: If you are using a GPU with the Ampere architecture (such as an NVIDIA L4) or newer, you can use Flash Attention. Flash Attention is a method that significantly speeds up computation and reduces memory usage from quadratic to linear in the sequence length, accelerating training by up to 3x. Learn more at FlashAttention.
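If you installed flash-attn, you can opt in when loading the model. A small sketch (attn_implementation is a standard transformers argument, and flash-attn requires fp16 or bf16 weights):

import torch
from transformers import AutoModelForImageTextToText

# Opt in to FlashAttention-2 kernels when loading the model (requires flash-attn)
model = AutoModelForImageTextToText.from_pretrained(
    "google/gemma-4-E2B",                      # the model id used later in this guide
    attn_implementation="flash_attention_2",
    dtype=torch.bfloat16,                      # flash-attn requires fp16 or bf16
)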

You need a valid Hugging Face token to publish your model. If you are running inside Google Colab, you can use Colab secrets to handle the Hugging Face token securely; otherwise you can set the token directly in the login method. Make sure your token also has write access, since you push your model to the Hub during training.

# Login into Hugging Face Hub
from huggingface_hub import login
login()
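If you are running in Colab, here is a minimal sketch of reading the token from Colab secrets instead of entering it interactively (this assumes you stored the token under the secret name HF_TOKEN):

# Read the Hugging Face token from Colab secrets
from google.colab import userdata
from huggingface_hub import login

login(token=userdata.get("HF_TOKEN"))  # HF_TOKEN is an assumed secret name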

Create and prepare the fine-tuning dataset

When fine-tuning an LLM, it is important to know your use case and the task you want to solve. This helps you create a dataset to fine-tune your model. If you haven't defined your use case yet, you might want to go back to the drawing board.

As an example, this guide focuses on the following use case:

  • Fine-tune a natural-language-to-SQL model to be seamlessly integrated into a data analysis tool. The objective is to significantly reduce the time and expertise required to generate SQL queries, enabling even non-technical users to extract meaningful insights from data.

Text-to-SQL is a great use case for fine-tuning LLMs, as it is a complex task that requires a lot of (internal) knowledge about the data and the SQL language.

Once you have determined that fine-tuning is the right solution, you need a dataset to fine-tune on. The dataset should contain a diverse set of demonstrations of the task you want to solve. There are several ways to create such a dataset, including:

  • Using existing open-source datasets, such as Spider
  • Using synthetic datasets created by LLMs, such as Alpaca
  • Using datasets created by humans, such as Dolly
  • Using a combination of the methods, such as Orca

Each of these methods has its own advantages and disadvantages, depending on your budget, time, and quality requirements. For example, using an existing dataset is the easiest but might not be tailored to your specific use case, while using domain experts might be the most accurate but time-consuming and expensive. It is also possible to combine several methods to create an instruction dataset, as shown in Orca: Progressive Learning from Complex Explanation Traces of GPT-4.

This guide uses an existing dataset (philschmid/gretel-synthetic-text-to-sql), a high-quality synthetic text-to-SQL dataset including natural language instructions, schema definitions, reasoning, and the corresponding SQL query.

Hugging Face TRL supports automatic templating of conversational dataset formats. This means you only need to convert your dataset into the right JSON objects, and trl takes care of templating and putting it into the right format.

{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}

philschmid/gretel-synthetic-text-to-sql contains more than 100,000 samples. To keep the guide small, it is downsampled to 12,500 samples, which are later split into 10,000 training and 2,500 test samples.

You can now use the Hugging Face Datasets library to load the dataset and create a prompt template that combines the natural language instruction with the schema definition, and adds a system message for your assistant.

from datasets import load_dataset

# System message for the assistant
system_message = """You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA."""

# User prompt that combines the user query and the schema
user_prompt = """Given the <USER_QUERY> and the <SCHEMA>, generate the corresponding SQL command to retrieve the desired data, considering the query's syntax, semantics, and schema constraints.

<SCHEMA>
{context}
</SCHEMA>

<USER_QUERY>
{question}
</USER_QUERY>
"""
def create_conversation(sample):
  return {
    "messages": [
      {"role": "system", "content": system_message},
      {"role": "user", "content": user_prompt.format(question=sample["sql_prompt"], context=sample["sql_context"])},
      {"role": "assistant", "content": sample["sql"]}
    ]
  }

# Load dataset from the hub
dataset = load_dataset("philschmid/gretel-synthetic-text-to-sql", split="train")
dataset = dataset.shuffle().select(range(12500))

# Convert dataset to OAI messages
dataset = dataset.map(create_conversation, remove_columns=dataset.features, batched=False)
# split dataset into 80% training samples and 20% test samples
dataset = dataset.train_test_split(test_size=0.2)

# Print formatted user prompt
for item in dataset["train"][0]["messages"]:
  print(item)
{'content': 'You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.', 'role': 'system'}
{'content': "Given the <USER_QUERY> and the <SCHEMA>, generate the corresponding SQL command to retrieve the desired data, considering the query's syntax, semantics, and schema constraints.\n\n<SCHEMA>\nCREATE TABLE Menu (id INT PRIMARY KEY, name VARCHAR(255), category VARCHAR(255), price DECIMAL(5,2));\n</SCHEMA>\n\n<USER_QUERY>\nCalculate the average price of all menu items in the Vegan category\n</USER_QUERY>\n", 'role': 'user'}
{'content': "SELECT AVG(price) FROM Menu WHERE category = 'Vegan';", 'role': 'assistant'}

Fine-tune Gemma using TRL and the SFTTrainer

You are now ready to fine-tune your model. The Hugging Face TRL SFTTrainer makes it straightforward to supervise the fine-tuning of open LLMs. The SFTTrainer is a subclass of the Trainer from the transformers library and supports all the same features, including logging, evaluation, and checkpointing, but adds additional quality-of-life features, including:

  • Dataset formatting, including conversational and instruction formats
  • Training on completions only, ignoring prompts
  • Packing datasets for more efficient training (see the sketch after this list)
  • Parameter-efficient fine-tuning (PEFT) support, including QLoRA
  • Preparing the model and tokenizer for conversational fine-tuning (such as adding special tokens)
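To illustrate the packing feature mentioned above, here is a minimal sketch; packing is a real SFTConfig flag, while the other values are placeholders (the actual configuration used in this guide comes later):

from trl import SFTConfig

# Packing concatenates several short examples into a single max_length
# sequence, so less compute is wasted on padding tokens
packed_args = SFTConfig(
    output_dir="gemma-text-to-sql-packed",  # placeholder output directory
    max_length=512,
    packing=True,
)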

The following code loads the Gemma model and tokenizer from Hugging Face and initializes the quantization configuration.

import torch
from transformers import AutoTokenizer, AutoModelForImageTextToText, BitsAndBytesConfig

# Hugging Face model id
model_id = "google/gemma-4-E2B" # @param ["google/gemma-4-E2B","google/gemma-4-E4B"] {"allow-input":true}

# Check if GPU benefits from bfloat16
if torch.cuda.get_device_capability()[0] >= 8:
    torch_dtype = torch.bfloat16
else:
    torch_dtype = torch.float16

# Define model init arguments
model_kwargs = dict(
    dtype=torch_dtype,
    device_map="auto", # Let torch decide how to load the model
)

# BitsAndBytesConfig: Enables 4-bit quantization to reduce model size/memory usage
model_kwargs["quantization_config"] = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=model_kwargs['dtype'],
    bnb_4bit_quant_storage=model_kwargs['dtype'],
)

# Load model and tokenizer
model = AutoModelForImageTextToText.from_pretrained(model_id, **model_kwargs)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-4-E2B-it") # Load the Instruction Tokenizer to use the official Gemma template

The SFTTrainer supports a built-in integration with peft, which makes it straightforward to efficiently fine-tune LLMs using QLoRA. You only need to create a LoraConfig and provide it to the trainer.

from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.05,
    r=16,
    bias="none",
    target_modules="all-linear",
    task_type="CAUSAL_LM",
    modules_to_save=["lm_head", "embed_tokens"], # make sure to save the lm_head and embed_tokens as you train the special tokens
    ensure_weight_tying=True,
)

Before you can start your training, you need to define the hyperparameters you want to use in an SFTConfig instance.

import torch
from trl import SFTConfig

args = SFTConfig(
    output_dir="gemma-text-to-sql",         # directory to save and repository id
    max_length=512,                         # max length for model and packing of the dataset
    num_train_epochs=3,                     # number of training epochs
    per_device_train_batch_size=1,          # batch size per device during training
    optim="adamw_torch_fused",              # use fused adamw optimizer
    logging_steps=10,                       # log every 10 steps
    save_strategy="epoch",                  # save checkpoint every epoch
    eval_strategy="epoch",                  # evaluate checkpoint every epoch
    learning_rate=5e-5,                     # learning rate
    fp16=model.dtype == torch.float16,      # use float16 precision if the model was loaded in fp16
    bf16=model.dtype == torch.bfloat16,     # use bfloat16 precision if the model was loaded in bf16
    max_grad_norm=0.3,                      # max gradient norm based on QLoRA paper
    lr_scheduler_type="constant",           # use constant learning rate scheduler
    push_to_hub=True,                       # push model to hub
    report_to="tensorboard",                # report metrics to tensorboard
    dataset_kwargs={
        "add_special_tokens": False, # Template with special tokens
        "append_concat_token": True, # Add EOS token as separator token between examples
    }
)

You now have every building block you need to create your SFTTrainer and start training your model.

from trl import SFTTrainer

# Create Trainer object
trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    peft_config=peft_config,
    processing_class=tokenizer,
)

Start training by calling the train() method.

# Start training, the model will be automatically saved to the Hub and the output directory
trainer.train()

# Save the final model again to the Hugging Face Hub
trainer.save_model()
The tokenizer has new PAD/BOS/EOS tokens that differ from the model config and generation config. The model config and generation config were aligned accordingly, being updated with the tokenizer's values. Updated tokens: {'eos_token_id': 1, 'bos_token_id': 2, 'pad_token_id': 0}.

Before you can test your model, make sure to free up the memory.

# free the memory again
del model
del trainer
torch.cuda.empty_cache()

When using QLoRA, you only train adapters and not the full model. This means when saving the model during training, you only save the adapter weights and not the full model. If you want to save the full model, which makes it easier to use with serving stacks like vLLM or TGI, you can merge the adapter weights into the model weights using the merge_and_unload method, and then save the model with the save_pretrained method. This saves a default model that can be used for inference.

from peft import PeftModel

# Load the base model
model = AutoModelForImageTextToText.from_pretrained(model_id, low_cpu_mem_usage=True)

# Merge LoRA and base model and save
peft_model = PeftModel.from_pretrained(model, args.output_dir)
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("merged_model", safe_serialization=True, max_shard_size="2GB")

processor = AutoTokenizer.from_pretrained(args.output_dir)
processor.save_pretrained("merged_model")
('merged_model/tokenizer_config.json',
 'merged_model/chat_template.jinja',
 'merged_model/tokenizer.json')
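Once merged, the directory can be served like a regular checkpoint. As a rough sketch of serving it with vLLM (whether this particular Gemma variant is supported depends on your vLLM version, so treat this as an assumption):

# Serve the merged checkpoint with vLLM (support for this Gemma variant is assumed)
from vllm import LLM, SamplingParams

llm = LLM(model="merged_model")
sampling = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["<your rendered chat prompt here>"], sampling)
print(outputs[0].outputs[0].text)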

Test model inference and generate SQL queries

After the training is done, you will want to evaluate and test your model. You can load different samples from the test dataset and evaluate the model on those samples.

import torch
from transformers import pipeline

model_id = "merged_model"

# Load the merged model
model = AutoModelForImageTextToText.from_pretrained(
  model_id,
  device_map="auto",
  dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
The tied weights mapping and config for this model specifies to tie model.language_model.embed_tokens.weight to lm_head.weight, but both are present in the checkpoints with different values, so we will NOT tie them. You should update the config with `tie_word_embeddings=False` to silence this warning.

Let's load a random sample from the test dataset and generate a SQL command.

from random import randint
import re
from transformers import pipeline, GenerationConfig

config = GenerationConfig.from_pretrained(model_id)
config.max_new_tokens = 256

# Load the model and tokenizer into the pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Load a random sample from the test dataset
rand_idx = randint(0, len(dataset["test"]) - 1)
test_sample = dataset["test"][rand_idx]

# Convert the test example into a prompt with the Gemma template
prompt = pipe.tokenizer.apply_chat_template(test_sample["messages"][:2], tokenize=False, add_generation_prompt=True)
print(prompt)

# Generate our SQL query.
outputs = pipe(prompt, generation_config=config)

# Extract the user query and original answer
print(f"Context:\n", re.search(r'<SCHEMA>\n(.*?)\n</SCHEMA>', test_sample['messages'][1]['content'], re.DOTALL).group(1).strip())
print(f"Query:\n", re.search(r'<USER_QUERY>\n(.*?)\n</USER_QUERY>', test_sample['messages'][1]['content'], re.DOTALL).group(1).strip())
print(f"Original Answer:\n{test_sample['messages'][2]['content']}")
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")
<bos><start_of_turn>system
You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.<end_of_turn>
<start_of_turn>user
Given the <USER_QUERY> and the <SCHEMA>, generate the corresponding SQL command to retrieve the desired data, considering the query's syntax, semantics, and schema constraints.

<SCHEMA>
CREATE TABLE broadband_plans (plan_id INT, plan_name VARCHAR(255), download_speed INT, upload_speed INT, price DECIMAL(5,2));
</SCHEMA>

<USER_QUERY>
Delete a broadband plan from the 'broadband_plans' table
</USER_QUERY><end_of_turn>
<start_of_turn>model

Context:
 CREATE TABLE broadband_plans (plan_id INT, plan_name VARCHAR(255), download_speed INT, upload_speed INT, price DECIMAL(5,2));
Query:
 Delete a broadband plan from the 'broadband_plans' table
Original Answer:
DELETE FROM broadband_plans WHERE plan_id = 3001;
Generated Answer:
DELETE FROM broadband_plans
WHERE plan_name = 'Basic';
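To go beyond a single spot check, you can loop over a slice of the test set and compute a rough exact-match score. Here is a simple sketch reusing the pipeline from above (string equality is a crude proxy, since semantically equivalent SQL queries can differ textually):

# Rough exact-match evaluation over a small slice of the test set
def generate_sql(sample):
    prompt = pipe.tokenizer.apply_chat_template(
        sample["messages"][:2], tokenize=False, add_generation_prompt=True
    )
    out = pipe(prompt, generation_config=config)
    return out[0]["generated_text"][len(prompt):].strip()

subset = dataset["test"].select(range(50))  # small slice to keep this cheap
matches = sum(generate_sql(s) == s["messages"][2]["content"].strip() for s in subset)
print(f"Exact match: {matches}/{len(subset)}")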

Summary and next steps

This tutorial covered how to fine-tune a Gemma model using TRL and QLoRA. As a next step, check out the following docs: