EmbeddingGemma is a 308M parameter multilingual text embedding model based on
Gemma 3. It is optimized for use in everyday devices, such as phones, laptops,
and tablets. The model produces numerical representations of text for
downstream tasks such as information retrieval, semantic similarity search,
classification, and clustering.

EmbeddingGemma includes the following key features:

- **Multilingual support**: Broad linguistic understanding, trained on data in over 100 languages.
- **Flexible output dimensions**: Customize output dimensions from 768 down to 128 for speed and storage tradeoffs, using Matryoshka Representation Learning (MRL).
- **2K token context**: Substantial input context for processing text data and documents directly on your hardware.
- **Storage efficient**: Runs in less than 200MB of RAM with quantization.
- **Low latency**: Generates embeddings in less than 22ms on EdgeTPU for fast and fluid applications.
- **Offline and secure**: Generates embeddings of documents directly on your hardware and works without an internet connection, keeping sensitive data secure.

| **Tip:** Deploy EmbeddingGemma with Gemma 3n to build contextually relevant mobile-first Retrieval Augmented Generation (RAG) pipelines and chatbots. See our [quickstart RAG notebook](https://github.com/google-gemini/gemma-cookbook/blob/main/Gemma/%5BGemma_3%5DRAG_with_EmbeddingGemma.ipynb) to get started.

[Get it on Hugging Face](https://huggingface.co/collections/google/embeddinggemma-68b9ae3a72a82f0562a80dc4)
[Get it on Kaggle](https://www.kaggle.com/models/google/embeddinggemma)
[Access it on Vertex](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/embeddinggemma)

As with other Gemma models, EmbeddingGemma is provided with open weights and
licensed for responsible [commercial use](/gemma/terms), allowing you to
fine-tune and deploy it in your own projects and applications.

[Try EmbeddingGemma](/gemma/docs/embeddinggemma/inference-embeddinggemma-with-sentence-transformers)
[Fine-tune EmbeddingGemma](/gemma/docs/embeddinggemma/fine-tuning-embeddinggemma-with-sentence-transformers)

Last updated 2025-09-04 UTC.
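The MRL output-dimension tradeoff works by truncating an embedding to its leading components and re-normalizing; shorter vectors cost less storage and compute at some loss in quality. The sketch below illustrates this with NumPy using random placeholder vectors rather than real EmbeddingGemma outputs — in practice you would obtain the 768-dimensional embeddings by encoding text with the model (for example via Sentence Transformers).

```python
import numpy as np

def truncate_embedding(embedding: np.ndarray, dim: int) -> np.ndarray:
    """MRL-style truncation: keep the first `dim` components, then re-normalize
    so dot products remain valid cosine similarities."""
    v = embedding[:dim]
    return v / np.linalg.norm(v)

# Placeholder 768-dim unit vectors standing in for EmbeddingGemma outputs.
rng = np.random.default_rng(0)
query = truncate_embedding(rng.normal(size=768), 768)
docs = np.stack([truncate_embedding(rng.normal(size=768), 768) for _ in range(4)])

# Semantic search at decreasing dimensions: score every document against the
# query and report the best match for each truncation level.
for dim in (768, 512, 256, 128):
    q = truncate_embedding(query, dim)
    d = np.stack([truncate_embedding(doc, dim) for doc in docs])
    scores = d @ q  # cosine similarity, since all vectors are unit-length
    print(f"dim={dim}: best match is doc {int(np.argmax(scores))}")
```

With real embeddings, rankings at 256 or 128 dimensions typically track the full 768-dimensional rankings closely, which is the storage/speed tradeoff MRL is designed for.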