[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["缺少我需要的資訊","missingTheInformationINeed","thumb-down"],["過於複雜/步驟過多","tooComplicatedTooManySteps","thumb-down"],["過時","outOfDate","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["示例/程式碼問題","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-07-24 (世界標準時間)。"],[],[],null,["# Build LiteRT models\n\nThis page provides guidance for building your TensorFlow models with the\nintention of converting to the LiteRT model format. The machine\nlearning (ML) models you use with LiteRT are originally built and\ntrained using TensorFlow core libraries and tools. Once you've built a model\nwith TensorFlow core, you can convert it to a smaller, more efficient ML model\nformat called a LiteRT model.\n\nIf you have a model to convert already, see the [Convert models\noverview](./convert/) page for guidance on converting your model.\n\nBuilding your model\n-------------------\n\nIf you are building a custom model for your specific use case, you should start\nwith developing and training a TensorFlow model or extending an existing one.\n\n### Model design constraints\n\nBefore you start your model development process, you should be aware of the\nconstraints for LiteRT models and build your model with these\nconstraints in mind:\n\n- **Limited compute capabilities** - Compared to fully equipped servers with multiple CPUs, high memory capacity, and specialized processors such as GPUs and TPUs, mobile and edge devices are much more limited. While they are growing in compute power and specialized hardware compatibility, the models and data you can effectively process with them are still comparably limited.\n- **Size of models** - The overall complexity of a model, including data pre-processing logic and the number of layers in the model, increases the in-memory size of a model. A large model may run unacceptably slow or simply may not fit in the available memory of a mobile or edge device.\n- **Size of data** - The size of input data that can be effectively processed with a machine learning model is limited on a mobile or edge device. Models that use large data libraries such as language libraries, image libraries, or video clip libraries may not fit on these devices, and may require off-device storage and access solutions.\n- **Supported TensorFlow operations** - LiteRT runtime environments support a subset of machine learning model operations compared to regular TensorFlow models. As you develop a model for use with LiteRT, you should track the compatibility of your model against the capabilities of LiteRT runtime environments.\n\nFor more information building effective, compatible, high performance models for\nLiteRT, see [Performance best practices](./best_practices).\n\n### Model development\n\nTo build a LiteRT model, you first need to build a model using the\nTensorFlow core libraries. TensorFlow core libraries are the lower-level\nlibraries that provide APIs to build, train and deploy ML models.\n\nTensorFlow provides two paths for doing this. 
### Model evaluation

Once you've developed your model, you should evaluate its performance and test
it on end-user devices. TensorFlow provides a few ways to do this.

- [TensorBoard](https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras)
  is a tool for providing the measurements and visualizations needed during the
  machine learning workflow. It enables tracking experiment metrics like loss
  and accuracy, visualizing the model graph, projecting embeddings to a lower
  dimensional space, and much more.
- [Benchmarking tools](./measurement) are available for each supported
  platform, such as the Android benchmark app and the iOS benchmark app. Use
  these tools to measure and calculate statistics for important performance
  metrics.

### Model optimization

With the [constraints](#model_constraints) on resources specific to LiteRT
models, model optimization can help ensure that your model performs well and
uses fewer compute resources. Machine learning model performance is usually a
trade-off between the size and speed of inference and the accuracy of
predictions. LiteRT currently supports optimization via quantization, pruning,
and clustering. See the [model optimization](./model_optimization) topic for
more details on these techniques. TensorFlow also provides a [Model
optimization toolkit](./model_optimization), which provides an API that
implements these techniques. For a short worked example of quantization, see
the sketch at the end of this page.

Next steps
----------

- To start building your custom model, see the [quick start for
  beginners](https://www.tensorflow.org/tutorials/quickstart/beginner) tutorial
  in the TensorFlow core documentation.
- To convert your custom TensorFlow model, see the [Convert models
  overview](./convert).
- See the [operator compatibility](./ops_compatibility) guide to determine if
  your model is compatible with LiteRT or if you'll need to take additional
  steps to make it compatible.
- See the [performance best practices guide](./best_practices) for guidance on
  making your LiteRT models efficient and performant.
- See the [performance metrics guide](./measurement) to learn how to measure
  the performance of your model using benchmarking tools.
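As a concrete follow-up to the Model optimization section above, here is a
minimal sketch (not part of the original page) of post-training dynamic-range
quantization, one of the quantization modes the converter supports. The
`my_model.keras` path is a placeholder for a Keras model you have already
saved.

```python
import tensorflow as tf

# Load a previously trained Keras model; "my_model.keras" is a
# placeholder path for your own saved model.
model = tf.keras.models.load_model("my_model.keras")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optimize.DEFAULT applies post-training dynamic-range quantization,
# storing weights as 8-bit integers instead of 32-bit floats.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quantized_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_quantized_model)
```

Because weights drop from 32-bit floats to 8-bit integers, this typically
reduces model size by roughly 4x. See the [model
optimization](./model_optimization) topic for full-integer quantization,
pruning, and clustering options.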