[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["必要な情報がない","missingTheInformationINeed","thumb-down"],["複雑すぎる / 手順が多すぎる","tooComplicatedTooManySteps","thumb-down"],["最新ではない","outOfDate","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["サンプル / コードに問題がある","samplesCodeIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-07-24 UTC。"],[],[],null,["# Pre-trained models for LiteRT\n\nThere are a variety of already trained, open source models you can use\nimmediately with LiteRT to accomplish many machine learning tasks.\nUsing pre-trained LiteRT models lets you add machine learning\nfunctionality to your mobile and edge device application quickly, without having\nto build and train a model. This guide helps you find and decide on trained\nmodels for use with LiteRT.\n\nYou can start browsing a large set of models on [Kaggle\nModels](https://www.kaggle.com/models?framework=tfLite).\n| **Important:** Kaggle Models lists both regular TensorFlow models and TensorFlow Lite format models. These model formats are not interchangeable. TensorFlow models can be converted into LiteRT models, but that process is not reversible.\n\nFind a model for your application\n---------------------------------\n\nFinding an existing LiteRT model for your use case can be tricky\ndepending on what you are trying to accomplish. Here are a few recommended ways\nto discover models for use with LiteRT:\n\n**By example:** The fastest way to find and start using models with TensorFlow\nLite is to browse the [LiteRT\nExamples](https://github.com/tensorflow/examples/tree/master/lite/examples)\nsection to find models that perform a task which is similar to your use case.\nThis short catalog of examples provides models for common use cases with\nexplanations of the models and sample code to get you started running and using\nthem.\n\n**By data input type:** Aside from looking at examples similar to your use case,\nanother way to discover models for your own use is to consider the type of data\nyou want to process, such as audio, text, images, or video data. Machine\nlearning models are frequently designed for use with one of these types of data,\nso looking for models that handle the data type you want to use can help you\nnarrow down what models to consider.\n| **Note:** Processing video with machine learning models can frequently be accomplished with models that are designed for processing single images, depending on how fast and how many inferences you need to perform for your use case. 
The following list links to LiteRT models on [Kaggle
Models](https://www.kaggle.com/models?framework=tfLite) for common use cases:

- [Image classification](https://www.kaggle.com/models?framework=tfLite&task=16686) models
- [Object detection](https://www.kaggle.com/models?query=image+object+detection&framework=tfLite) models
- [Text classification](https://www.kaggle.com/models?framework=tfLite&task=16691) models
- [Text embedding](https://www.kaggle.com/models?framework=tfLite&query=text-embedding) models
- [Audio speech synthesis](https://www.kaggle.com/models?framework=tfLite&query=audio-speech-synthesis) models
- [Audio embedding](https://www.kaggle.com/models?framework=tfLite&query=audio-embedding) models

Choose between similar models
-----------------------------

If your application follows a common use case such as image classification or
object detection, you may find yourself deciding between multiple LiteRT
models with varying binary size, data input size, inference speed, and
prediction accuracy ratings. When deciding between a number of models, you
should first narrow your options based on your most limiting constraint:
model size, data size, inference speed, or accuracy.
| **Key Point:** Generally, when choosing between similar models, pick the smallest model to allow for the broadest device compatibility and fast inference times.

If you are not sure what your most limiting constraint is, assume it is the
size of the model and pick the smallest model available. Picking a small
model gives you the most flexibility in terms of the devices where you can
successfully deploy and run the model. Smaller models also typically run
inference faster, and speedier predictions generally create better end-user
experiences. However, smaller models typically have lower accuracy rates, so
you may need to pick a larger model if prediction accuracy is your primary
concern.

Sources for models
------------------

Use the [LiteRT
Examples](https://github.com/tensorflow/examples/tree/master/lite/examples)
section and [Kaggle Models](https://www.kaggle.com/models?framework=tfLite) as
your first destinations for finding and selecting models for use with
LiteRT. These sources generally have up-to-date, curated models for use
with LiteRT, and frequently include sample code to accelerate your
development process.

### TensorFlow models

It is possible to [convert](./convert) regular TensorFlow models to LiteRT
format. For more information about converting models, see the [LiteRT
Converter](./convert) documentation. You can find TensorFlow models on
[Kaggle Models](https://www.kaggle.com/models) and in the [TensorFlow Model
Garden](https://github.com/tensorflow/models).
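As a minimal sketch of that conversion, assuming a TensorFlow SavedModel at a hypothetical path:

```python
import tensorflow as tf

# Path to an existing TensorFlow SavedModel directory (hypothetical).
SAVED_MODEL_DIR = "path/to/saved_model"

# Convert the SavedModel to the LiteRT flatbuffer format.
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
tflite_model = converter.convert()

# Write the converted model to disk for use with the LiteRT interpreter.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file can then be loaded with the `Interpreter` API, as in the earlier video sampling sketch.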