Last updated (UTC): 2024-04-18.

# PaLM 2 models

[PaLM 2](https://ai.google/discover/palm2/)
is a family of language models optimized for ease of use on key developer use
cases. The PaLM family of models includes variations trained for text and chat
generation as well as text embeddings. This guide provides information about
each variation to help you decide which is the best fit for your use case.

Model sizes
-----------

The model sizes are named after animals. The following table shows the
available sizes and what they mean relative to each other.

| Model size | Description | Services |
|------------|---------------------------------------------|-------------|
| Bison | Most capable PaLM 2 model size. | text, chat |
| Gecko | Smallest, most efficient PaLM 2 model size. | embeddings |

Model variations
----------------

Different PaLM models are available and optimized for specific use cases.
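As a quick reference, the service-to-model mapping from the tables can be expressed as a small lookup; this is an illustrative sketch (the `MODEL_FOR_SERVICE` dict and `model_for` helper are hypothetical names, but the model codes are the ones listed in the variations table):

```python
# Model codes for each PaLM 2 service, as listed in the variations table.
MODEL_FOR_SERVICE = {
    "text": "models/text-bison-001",
    "chat": "models/chat-bison-001",
    "embeddings": "models/embedding-gecko-001",
}

def model_for(service: str) -> str:
    """Return the PaLM 2 model code for a given service."""
    try:
        return MODEL_FOR_SERVICE[service]
    except KeyError:
        raise ValueError(f"unknown service: {service!r}") from None

print(model_for("chat"))  # models/chat-bison-001
```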
The following table describes the attributes of each.

| Variation | Attribute | Description |
|---------------------|--------------------|-------------|
| **Bison Text** | Model last updated | May 2023 |
| **Bison Text** | Model code | `text-bison-001` |
| **Bison Text** | Model capabilities | Input: text. Output: text. Optimized for language tasks such as code generation, text generation, text editing, problem solving, recommendations generation, information extraction, and data extraction or generation, as well as AI agents. Can handle zero-, one-, and few-shot tasks. |
| **Bison Text** | Model safety | Adjustable safety settings for 6 dimensions of harm are available to developers. See the [safety settings](../palm_docs/safety_setting_palm) topic for details. |
| **Bison Text** | Rate limit | 90 requests per minute |
| **Bison Chat** | Model last updated | May 2023 |
| **Bison Chat** | Model code | `chat-bison-001` |
| **Bison Chat** | Model capabilities | Input: text. Output: text. Generates text in a conversational format. Optimized for dialog language tasks such as implementing chatbots or AI agents. Can handle zero-, one-, and few-shot tasks. |
| **Bison Chat** | Model safety | No adjustable safety settings. |
| **Bison Chat** | Rate limit | 90 requests per minute |
| **Gecko Embedding** | Model last updated | May 2023 |
| **Gecko Embedding** | Model code | `embedding-gecko-001` |
| **Gecko Embedding** | Model capabilities | Input: text. Output: text embeddings. Generates embeddings for the input text. Optimized for creating embeddings for text of up to 1024 tokens. |
| **Gecko Embedding** | Model safety | No adjustable safety settings. |
| **Gecko Embedding** | Rate limit | 1500 requests per minute |

Model metadata
--------------

Use the `ModelService` API to get additional metadata about the latest models,
such as input and output token limits. The following table displays the
metadata for the `text-bison-001` model variant.

**Note:** For the PaLM 2 models, a token is equivalent to about 4 characters.
100 tokens correspond to roughly 60-80 English words.

| Attribute | Value |
|------------------------------|------------------------------------|
| Display name | Text Bison |
| Model code | `models/text-bison-001` |
| Description | Model targeted for text generation |
| Input token limit | 8196 |
| Output token limit | 1024 |
| Supported generation methods | `generateText` |
| Temperature | 0.7 |
| top_p | 0.95 |
| top_k | 40 |

Model attributes
----------------

The following table describes the attributes of the PaLM 2 models that are
common to all the model variations.

**Note:** The configurable parameters apply only to the text and chat model
variations, not to embeddings.

| Attribute | Description |
|-------------------------------|-------------|
| Training data | PaLM 2's knowledge cutoff is mid-2021. Knowledge about events after that time is limited. |
| Supported language | English |
| Configurable model parameters | Top p, Top k, Temperature, Stop sequence, Max output length, Number of response candidates |

See the [model parameters](../docs/concepts#model_parameters) section of the
Intro to LLMs guide for information about each of these parameters.
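The configurable parameters above map onto the text-generation request; a minimal sketch, assuming the legacy `google.generativeai` Python client and an API key in a `PALM_API_KEY` environment variable (both assumptions, not part of this page). The parameter values mirror the defaults and limits from the metadata table:

```python
import os

# Request parameters for text-bison-001; the values mirror the defaults
# and limits from the metadata table above.
params = {
    "model": "models/text-bison-001",
    "prompt": "Write a haiku about mountains.",
    "temperature": 0.7,        # Temperature
    "top_p": 0.95,             # Top p
    "top_k": 40,               # Top k
    "max_output_tokens": 1024, # Max output length
    "candidate_count": 1,      # Number of response candidates
    "stop_sequences": ["\n\n"],  # Stop sequence (illustrative choice)
}

# Only call the API when a key is configured (hypothetical env var name).
if os.environ.get("PALM_API_KEY"):
    import google.generativeai as palm

    palm.configure(api_key=os.environ["PALM_API_KEY"])
    completion = palm.generate_text(**params)
    print(completion.result)
```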
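The 4-characters-per-token rule of thumb from the metadata note can be used to pre-check prompt length against the input token limit; a rough sketch (the helper names are hypothetical, and this approximates rather than replicates the real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for PaLM 2 models."""
    return max(1, len(text) // 4)

# Input token limit for text-bison-001, from the metadata table.
INPUT_TOKEN_LIMIT = 8196

def fits_input_limit(text: str) -> bool:
    """Check whether a prompt is likely to fit within the input limit."""
    return estimate_tokens(text) <= INPUT_TOKEN_LIMIT

print(estimate_tokens("Hello, PaLM 2!"))  # 14 characters -> 3 tokens
```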