Fine-tune Gemma for vision tasks using Hugging Face Transformers and QLoRA


This guide walks you through fine-tuning Gemma on a custom image-and-text dataset for a vision task (generating product descriptions) using Hugging Face Transformers and TRL. You will learn:

  • What is Quantized Low-Rank Adaptation (QLoRA)
  • Set up the development environment
  • Create and prepare the fine-tuning dataset
  • Fine-tune Gemma using TRL and the SFTTrainer
  • Test model inference and generate product descriptions from images and text

What is Quantized Low-Rank Adaptation (QLoRA)

This guide demonstrates the use of Quantized Low-Rank Adaptation (QLoRA), which became a popular method to efficiently fine-tune LLMs because it reduces computational resource requirements while maintaining high performance. In QLoRA, the pretrained model is quantized to 4-bit and its weights are frozen. Then trainable adapter layers (LoRA) are attached and only those adapter layers are trained. Afterwards, the adapter weights can be merged with the base model or kept as a separate adapter.
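To make the idea concrete, here is a minimal, illustrative sketch of the LoRA update (not the peft implementation): the base weight W stays frozen (and is 4-bit quantized in QLoRA), while only the low-rank factors A and B are trained, giving h = Wx + (alpha/r)BAx.

import torch

# Illustrative LoRA sketch (not the peft implementation):
# W is the frozen base weight (4-bit quantized in QLoRA),
# A and B are the only trainable parameters.
d, k, r, alpha = 768, 768, 16, 16
W = torch.randn(d, k)            # frozen base weight
A = torch.randn(r, k) * 0.01     # trainable down-projection
B = torch.zeros(d, r)            # trainable up-projection, zero-initialized
x = torch.randn(k)
h = W @ x + (alpha / r) * (B @ (A @ x))  # base output plus scaled low-rank update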

Set up the development environment

The first step is to install the Hugging Face libraries, including TRL and Datasets, to fine-tune the open model.

# Install Pytorch & other libraries
%pip install torch tensorboard torchvision

# Install Transformers
%pip install transformers

# Install Hugging Face libraries
%pip install datasets accelerate evaluate bitsandbytes trl peft protobuf pillow sentencepiece

# COMMENT IN: if you are running on a GPU that supports BF16 data type and flash attn, such as NVIDIA L4 or NVIDIA A100
#%pip install flash-attn

Note: If you are using a GPU with Ampere architecture (such as an NVIDIA L4) or newer, you can use Flash Attention. Flash Attention is a method that significantly speeds up computations and reduces memory usage from quadratic to linear in sequence length, leading to training accelerations of up to 3x. Learn more at FlashAttention.
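If your GPU qualifies, you can opt in when you later load the model by requesting the FlashAttention-2 kernels. A minimal sketch, assuming the flash-attn package from above is installed:

import torch
from transformers import AutoModelForImageTextToText

# Sketch: request FlashAttention-2 kernels at load time
# (requires flash-attn and an Ampere or newer GPU; needs fp16/bf16 weights)
model = AutoModelForImageTextToText.from_pretrained(
    "google/gemma-3n-E2B",
    dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)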

Before you can publish your model, you must provide a valid Hugging Face token. If you are running this in Google Colab, you can use Colab secrets to store your Hugging Face token securely; otherwise you can set the token directly in the login method. Make sure your token also has write access, as you push your model to the Hub during training.

# Login into Hugging Face Hub
from huggingface_hub import login
login()
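If you are on Colab, here is a sketch that reads the token from a Colab secret instead (it assumes you stored the token under the name HF_TOKEN):

# Alternative for Google Colab: read the token from a Colab secret
from google.colab import userdata
from huggingface_hub import login

login(token=userdata.get("HF_TOKEN"))  # assumes a secret named HF_TOKEN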

Create and prepare the fine-tuning dataset

When fine-tuning LLMs, it is important to know your use case and the task you want to solve. This helps you create a dataset to fine-tune your model. If you haven't defined your use case yet, you might want to go back to the drawing board.

As an example, this guide focuses on the following use case:

  • Fine-tune a Gemma model to generate concise, SEO-optimized product descriptions for an e-commerce platform, specifically tailored for mobile search.

This guide uses the philschmid/amazon-product-descriptions-vlm dataset, a dataset of Amazon product descriptions that includes product images and categories.

Hugging Face TRL supports multimodal conversations. The important part is the "image" type, which tells the processing class that it should load the image. The structure should follow:

{"messages": [{"role": "system", "content": [{"type": "text", "text":"You are..."}]}, {"role": "user", "content": [{"type": "text", "text": "..."}, {"type": "image"}]}, {"role": "assistant", "content": [{"type": "text", "text": "..."}]}]}
{"messages": [{"role": "system", "content": [{"type": "text", "text":"You are..."}]}, {"role": "user", "content": [{"type": "text", "text": "..."}, {"type": "image"}]}, {"role": "assistant", "content": [{"type": "text", "text": "..."}]}]}
{"messages": [{"role": "system", "content": [{"type": "text", "text":"You are..."}]}, {"role": "user", "content": [{"type": "text", "text": "..."}, {"type": "image"}]}, {"role": "assistant", "content": [{"type": "text", "text": "..."}]}]}

You can now use the Hugging Face Datasets library to load the dataset and create a prompt template that combines the image, product name, and category, adding a system message. The dataset includes the images as PIL.Image objects.

from datasets import load_dataset
from PIL import Image

# System message for the assistant
system_message = "You are an expert product description writer for Amazon."

# User prompt that combines the user query and the schema
user_prompt = """Create a Short Product description based on the provided <PRODUCT> and <CATEGORY> and image.
Only return description. The description should be SEO optimized and for a better mobile search experience.

<PRODUCT>
{product}
</PRODUCT>

<CATEGORY>
{category}
</CATEGORY>
"""

# Convert dataset to OAI messages
def format_data(sample):
    return {
        "messages": [
            {
                "role": "system",
                #"content": [{"type": "text", "text": system_message}],
                "content": system_message,
            },
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": user_prompt.format(
                            product=sample["Product Name"],
                            category=sample["Category"],
                        ),
                    },
                    {
                        "type": "image",
                        "image": sample["image"],
                    },
                ],
            },
            {
                "role": "assistant",
                "content": [{"type": "text", "text": sample["description"]}],
            },
        ],
    }

def process_vision_info(messages: list[dict]) -> list[Image.Image]:
    image_inputs = []
    # Iterate through each conversation
    for msg in messages:
        # Get content (ensure it's a list)
        content = msg.get("content", [])
        if not isinstance(content, list):
            content = [content]

        # Check each content element for images
        for element in content:
            if isinstance(element, dict) and (
                "image" in element or element.get("type") == "image"
            ):
                # Get the image and convert to RGB
                if "image" in element:
                    image = element["image"]
                else:
                    image = element
                image_inputs.append(image.convert("RGB"))
    return image_inputs

# Load dataset from the hub
dataset = load_dataset("philschmid/amazon-product-descriptions-vlm", split="train")
dataset = dataset.train_test_split(test_size=0.1)

# Convert dataset to OAI messages
# need to use list comprehension to keep PIL.Image type; .map converts images to bytes
dataset_train = [format_data(sample) for sample in dataset["train"]]
dataset_test = [format_data(sample) for sample in dataset["test"]]

print(dataset_train[345]["messages"])
[{'role': 'system', 'content': 'You are an expert product description writer for Amazon.'}, {'role': 'user', 'content': [{'type': 'text', 'text': "Create a Short Product description based on the provided <PRODUCT> and <CATEGORY> and image.\nOnly return description. The description should be SEO optimized and for a better mobile search experience.\n\n<PRODUCT>\nRazor Agitator BMX/Freestyle Bike, 20-Inch\n</PRODUCT>\n\n<CATEGORY>\nSports & Outdoors | Outdoor Recreation | Cycling | Kids' Bikes & Accessories | Kids' Bikes\n</CATEGORY>\n"}, {'type': 'image', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x413 at 0x7B7250181790>}]}, {'role': 'assistant', 'content': [{'type': 'text', 'text': 'Conquer the streets with the Razor Agitator BMX Bike! This 20-inch freestyle bike is built for young riders ready to take on any challenge. Durable frame, responsive handling – perfect for tricks and cruising.  Get yours today!'}]}]
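As a quick sanity check that the helper finds the PIL.Image objects, you can run it on the same sample:

# Sanity check: expect exactly one RGB image for this sample
imgs = process_vision_info(dataset_train[345]["messages"])
print(len(imgs), imgs[0].size)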

Fine-tune Gemma using TRL and the SFTTrainer

You are now ready to fine-tune your model. The Hugging Face TRL SFTTrainer makes supervised fine-tuning of open LLMs straightforward. The SFTTrainer is a subclass of the Trainer from the transformers library and supports all the same features, including logging, evaluation, and checkpointing, but adds additional quality-of-life features, including:

  • Dataset formatting, including conversational and instruction formats
  • Training on completions only, ignoring prompts
  • Packing datasets for more efficient training
  • Parameter-efficient fine-tuning (PEFT) support, including QLoRA
  • Preparing the model and tokenizer for conversational fine-tuning (such as adding special tokens)

The following code loads the Gemma model and the tokenizer from Hugging Face and initializes the quantization configuration.

import torch
from transformers import AutoProcessor, AutoModelForImageTextToText, BitsAndBytesConfig

# Hugging Face model id
model_id = "google/gemma-4-E2B" # @param ["google/gemma-4-E2B","google/gemma-4-E4B"] {"allow-input":true}

# Check if GPU benefits from bfloat16
if torch.cuda.get_device_capability()[0] < 8:
    raise ValueError("GPU does not support bfloat16, please use a GPU that supports bfloat16.")

# Define model init arguments
model_kwargs = dict(
    dtype=torch.bfloat16, # What torch dtype to use, defaults to auto
    device_map="auto", # Let torch decide how to load the model
)

# BitsAndBytesConfig int-4 config
model_kwargs["quantization_config"] = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=model_kwargs["dtype"],
    bnb_4bit_quant_storage=model_kwargs["dtype"],
)

# Load model and tokenizer
model = AutoModelForImageTextToText.from_pretrained(model_id, **model_kwargs)
processor = AutoProcessor.from_pretrained("google/gemma-4-E2B-it") # Load the Instruction Tokenizer to use the official Gemma template

The SFTTrainer supports a built-in integration with peft, which makes it straightforward to efficiently fine-tune LLMs using QLoRA. You only need to create a LoraConfig and provide it to the trainer.

from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.05,
    r=16,
    bias="none",
    target_modules="all-linear",
    task_type="CAUSAL_LM",
    modules_to_save=["lm_head", "embed_tokens"], # make sure to save the lm_head and embed_tokens as you train the special tokens
    ensure_weight_tying=True,
)

Before you can start your training, you need to define the hyperparameters you want to use in an SFTConfig and a custom collate_fn to handle the vision processing. The collate_fn converts the messages with text and images into a format the model can understand.

from trl import SFTConfig

args = SFTConfig(
    output_dir="gemma-product-description",     # directory to save and repository id
    num_train_epochs=3,                         # number of training epochs
    per_device_train_batch_size=1,              # batch size per device during training
    optim="adamw_torch_fused",                  # use fused adamw optimizer
    logging_steps=5,                            # log every 5 steps
    save_strategy="epoch",                      # save checkpoint every epoch
    eval_strategy="epoch",                      # evaluate checkpoint every epoch
    learning_rate=2e-4,                         # learning rate, based on QLoRA paper
    bf16=True,                                  # use bfloat16 precision
    max_grad_norm=0.3,                          # max gradient norm based on QLoRA paper
    lr_scheduler_type="constant",               # use constant learning rate scheduler
    push_to_hub=True,                           # push model to hub
    report_to="tensorboard",                    # report metrics to tensorboard
    dataset_text_field="",                      # need a dummy field for collator
    dataset_kwargs={"skip_prepare_dataset": True}, # important for collator
    remove_unused_columns = False               # important for collator
)

# Create a data collator to encode text and image pairs
def collate_fn(examples):
    texts = []
    images = []
    for example in examples:
        image_inputs = process_vision_info(example["messages"])
        text = processor.apply_chat_template(
            example["messages"], add_generation_prompt=False, tokenize=False
        )
        texts.append(text.strip())
        images.append(image_inputs)

    # Tokenize the texts and process the images
    batch = processor(text=texts, images=images, return_tensors="pt", padding=True)

    # The labels are the input_ids, and we mask the padding tokens and image tokens in the loss computation
    labels = batch["input_ids"].clone()

    # Mask tokens for not being used in the loss computation
    labels[labels == processor.tokenizer.pad_token_id] = -100
    labels[labels == processor.tokenizer.boi_token_id] = -100
    labels[labels == processor.tokenizer.image_token_id] = -100
    labels[labels == processor.tokenizer.eoi_token_id] = -100

    batch["labels"] = labels
    return batch

You now have every building block you need to create your SFTTrainer and start training your model.

from trl import SFTTrainer

# Create Trainer object
trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset_train,
    eval_dataset=dataset_test,
    peft_config=peft_config,
    processing_class=processor,
    data_collator=collate_fn,
)

Start training by calling the train() method.

# Start training, the model will be automatically saved to the Hub and the output directory
trainer.train()

# Save the final model again to the Hugging Face Hub
trainer.save_model()
The tokenizer has new PAD/BOS/EOS tokens that differ from the model config and generation config. The model config and generation config were aligned accordingly, being updated with the tokenizer's values. Updated tokens: {'eos_token_id': 1, 'bos_token_id': 2, 'pad_token_id': 0}.

Make sure to free the memory before you test your model.

# free the memory again
del model
del trainer
torch.cuda.empty_cache()

When using QLoRA, you only train adapters and not the full model. This means that when you save the model during training, you only save the adapter weights and not the full model. If you want to save the full model, which makes it easier to use with serving stacks like vLLM or TGI, you can merge the adapter weights into the model weights using the merge_and_unload method and then save the model with the save_pretrained method. This saves a default model that can be used for inference.

from peft import PeftModel

# Load the base model
model = AutoModelForImageTextToText.from_pretrained(model_id, low_cpu_mem_usage=True)

# Merge LoRA and base model and save
peft_model = PeftModel.from_pretrained(model, args.output_dir)
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("merged_model", safe_serialization=True, max_shard_size="2GB")

processor = AutoProcessor.from_pretrained(args.output_dir)
processor.save_pretrained("merged_model")
['merged_model/processor_config.json']
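If you also want the merged model on the Hub, you could push both artifacts as well. A sketch, using a placeholder repository name:

# Optional: upload the merged model and processor to your own Hub repo
# ("your-username/gemma-product-description-merged" is a placeholder)
merged_model.push_to_hub("your-username/gemma-product-description-merged")
processor.push_to_hub("your-username/gemma-product-description-merged")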

Test model inference and generate product descriptions

After the training is done, you want to evaluate and test your model. You can load different samples from the test dataset and evaluate the model on those samples.

model_id = "merged_model"

# Load the merged model
model = AutoModelForImageTextToText.from_pretrained(
  model_id,
  device_map="auto",
  dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
The tied weights mapping and config for this model specifies to tie model.language_model.embed_tokens.weight to lm_head.weight, but both are present in the checkpoints with different values, so we will NOT tie them. You should update the config with `tie_word_embeddings=False` to silence this warning.

You can test inference by providing a product name, category, and image. The sample includes a Marvel action figure.

import requests
from PIL import Image

# Test sample with Product Name, Category and Image
sample = {
  "product_name": "Hasbro Marvel Avengers-Serie Marvel Assemble Titan-Held, Iron Man, 30,5 cm Actionfigur",
  "category": "Toys & Games | Toy Figures & Playsets | Action Figures",
  "image": Image.open(requests.get("https://m.media-amazon.com/images/I/81+7Up7IWyL._AC_SY300_SX300_.jpg", stream=True).raw).convert("RGB")
}

def generate_description(sample, model, processor):
    # Convert sample into messages and then apply the chat template
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": [
            {"type": "image","image": sample["image"]},
            {"type": "text", "text": user_prompt.format(product=sample["product_name"], category=sample["category"])},
        ]},
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(text)
    # Process the image and text
    image_inputs = process_vision_info(messages)
    # Tokenize the text and process the images
    inputs = processor(
        text=[text],
        images=image_inputs,
        padding=True,
        return_tensors="pt",
    )
    # Move the inputs to the device
    inputs = inputs.to(model.device)

    # Generate the output
    stop_token_ids = [processor.tokenizer.eos_token_id, processor.tokenizer.convert_tokens_to_ids("<end_of_turn>")]
    generated_ids = model.generate(**inputs, max_new_tokens=256, top_p=1.0, do_sample=True, temperature=0.8, eos_token_id=stop_token_ids, disable_compile=True)
    # Trim the generation and decode the output to text
    generated_ids_trimmed = [out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
    output_text = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )
    return output_text[0]

# generate the description
description = generate_description(sample, model, processor)
print("MODEL OUTPUT>> \n")
print(description)
<bos><start_of_turn>system
You are an expert product description writer for Amazon.<end_of_turn>
<start_of_turn>user


<start_of_image>

Create a Short Product description based on the provided <PRODUCT> and <CATEGORY> and image.
Only return description. The description should be SEO optimized and for a better mobile search experience.

<PRODUCT>
Hasbro Marvel Avengers-Serie Marvel Assemble Titan-Held, Iron Man, 30,5 cm Actionfigur
</PRODUCT>

<CATEGORY>
Toys & Games | Toy Figures & Playsets | Action Figures
</CATEGORY><end_of_turn>
<start_of_turn>model

MODEL OUTPUT>> 

Enhance your collection with the Marvel Avengers - Avengers Assemble Ultron-Comforter Set! This soft and cuddly blanket and pillowcase feature everyone's favorite Avengers, Iron Man, and his loyal companion War Machine. Officially licensed by Marvel.  Bring home the heroic team!

Summary and next steps

This tutorial covered how to fine-tune a Gemma model for vision tasks using TRL and QLoRA, specifically to generate product descriptions. Check out the following docs next: