[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["缺少我需要的資訊","missingTheInformationINeed","thumb-down"],["過於複雜/步驟過多","tooComplicatedTooManySteps","thumb-down"],["過時","outOfDate","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["示例/程式碼問題","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-07-24 (世界標準時間)。"],[],[],null,["# Model conversion overview\n\nThe machine learning (ML) models you use with LiteRT are originally\nbuilt and trained using TensorFlow core libraries and tools. Once you've built\na model with TensorFlow core, you can convert it to a smaller, more\nefficient ML model format called a LiteRT model.\nThis section provides guidance for converting\nyour TensorFlow models to the LiteRT model format.\n| **Note:** If you don't have a model to convert yet, see the [Models overview](./trained) page for guidance on choosing or building models.\n\nConversion workflow\n-------------------\n\nConverting TensorFlow models to LiteRT format can take a few paths\ndepending on the content of your ML model. As the first step of that process,\nyou should evaluate your model to determine if it can be directly converted.\nThis evaluation determines if the content of the model is supported by the\nstandard LiteRT runtime environments based on the TensorFlow operations\nit uses. If your model uses operations outside of the supported set, you have\nthe option to refactor your model or use advanced conversion techniques.\n\nThe diagram below shows the high level steps in converting a model.\n\n**Figure 1.** LiteRT conversion workflow.\n\nThe following sections outline the process of evaluating and converting models\nfor use with LiteRT.\n\n### Input model formats\n\nYou can use the converter with the following input model formats:\n\n- [SavedModel](https://www.tensorflow.org/guide/saved_model) (***recommended***): A TensorFlow model saved as a set of files on disk.\n- [Keras model](https://www.tensorflow.org/guide/keras/overview): A model created using the high level Keras API.\n- [Keras H5 format](https://www.tensorflow.org/guide/keras/save_and_serialize#keras_h5_format): A light-weight alternative to SavedModel format supported by Keras API.\n- [Models built from concrete functions](https://www.tensorflow.org/guide/intro_to_graphs): A model created using the low level TensorFlow API.\n\nYou can save both the Keras and concrete function models as a SavedModel\nand convert using the recommeded path.\n| **Note:** To avoid errors during inference, include signatures when exporting to the SavedModel format. The TensorFlow converter supports converting TensorFlow model's input/output specifications to LiteRT models. See the topic on [adding signatures](./signatures).\n\nIf you have a Jax model, you can use the `TFLiteConverter.experimental_from_jax`\nAPI to convert it to the LiteRT format. Note that this API is subject\nto change while in experimental mode.\n\n### Conversion evaluation\n\nEvaluating your model is an important step before attempting to convert it.\nWhen evaluating,\nyou want to determine if the contents of your model is compatible with the\nLiteRT format. You should also determine if your model is a good fit\nfor use on mobile and edge devices in terms of the size of data the model uses,\nits hardware processing requirements, and the model's overall size and\ncomplexity.\n\nFor many models, the converter should work out of the box. 
### Conversion evaluation

Evaluating your model is an important step before attempting to convert it.
When evaluating,
you want to determine if the contents of your model are compatible with the
LiteRT format. You should also determine if your model is a good fit
for use on mobile and edge devices in terms of the size of data the model uses,
its hardware processing requirements, and the model's overall size and
complexity.

For many models, the converter should work out of the box. However, the
LiteRT built-in operator library supports only a subset of
TensorFlow core operators, which means some models may need additional
steps before converting to LiteRT.
Additionally, some operations that are supported by LiteRT have
restricted usage requirements for performance reasons. See the
[operator compatibility](./ops_compatibility) guide
to determine if your model needs to be refactored for conversion.
| **Key Point:** Most models can be directly converted to LiteRT format. Some models may require refactoring or use of advanced conversion techniques to make them compatible.

### Model conversion

The LiteRT converter takes a TensorFlow model and generates a
LiteRT model (an optimized
[FlatBuffer](https://google.github.io/flatbuffers/) format identified by the
`.tflite` file extension). You can load
a SavedModel or directly convert a model you create in code.

The converter takes three main flags (or options) that customize the conversion
for your model, as shown in the sketch after this list:

1. [Compatibility flags](./ops_compatibility) allow you to specify whether the conversion should allow custom operators.
2. [Optimization flags](./model_optimization) allow you to specify the type of optimization to apply during conversion. The most commonly used optimization technique is [post-training quantization](./post_training_quant).
3. [Metadata flags](./metadata) allow you to add metadata to the converted model, which makes it easier to create platform-specific wrapper code when deploying models on devices.
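As a rough illustration, a sketch like the following sets the compatibility
and optimization flags on the Python converter before converting; the
SavedModel path is a placeholder, and the linked guides cover the full set
of options:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Optimization flag: apply default post-training quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Compatibility flag: permit custom operators if the model uses
# operations outside the LiteRT built-in set.
converter.allow_custom_ops = True

tflite_model = converter.convert()
```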
You can convert your model using the [Python API](./convert_tf#python_api) or
the [Command line](./convert_tf#cmdline) tool. See the
[Convert TF model](./convert_tf) guide for step-by-step
instructions on running the converter on your model.

Typically you would convert your model for the standard LiteRT
[runtime environment](../android/index#runtime) or the
[Google Play services runtime environment](../android/play_services)
for LiteRT (Beta). Some advanced use cases require
customization of the model runtime environment, which requires additional
steps in the conversion process. See the
[advanced runtime environment](../android#adv_runtime) section of the Android
overview for more guidance.

Advanced conversion
-------------------

If you run into [errors](./convert_tf#conversion_errors)
while running the converter on your model, it's most likely that you have an
operator compatibility issue. Not all TensorFlow operations are
supported by LiteRT. You can work around these issues by refactoring your
model, or by using
advanced conversion options that allow you to create a modified LiteRT
format model and a custom runtime environment for that model.

- See the [Model compatibility overview](./ops_compatibility) for more information on TensorFlow and LiteRT model compatibility considerations.
- Topics under the Model compatibility overview cover advanced techniques for refactoring your model, such as the [Select operators](./ops_select) guide.
- For a full list of operations and limitations, see the [LiteRT Ops page](https://www.tensorflow.org/mlir/tfl_ops).

Next steps
----------

- See the [convert TF models](./convert_tf) guide to quickly get started on converting your model.
- See the [optimization overview](./model_optimization) for guidance on how to optimize your converted model using techniques like [post-training quantization](./post_training_quantization).
- See the [Adding metadata overview](./metadata) to learn how to add metadata to your models; a short sketch follows this list. Metadata provides other users with a description of your model, as well as information that can be leveraged by code generators.
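As a minimal sketch of that last step, the following assumes the TFLite
Support metadata writer library and an image classification model; the file
paths and normalization values are placeholders:

```python
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils

# Placeholder paths and normalization parameters for illustration.
MODEL_PATH = "model.tflite"
LABEL_FILE = "labels.txt"
EXPORT_PATH = "model_with_metadata.tflite"

# Create a metadata writer for an image classifier, supplying the
# input normalization mean/std and the label file.
writer = image_classifier.MetadataWriter.create_for_inference(
    writer_utils.load_file(MODEL_PATH), [127.5], [127.5], [LABEL_FILE])

# Write out a copy of the model with the metadata populated.
writer_utils.save_file(writer.populate(), EXPORT_PATH)
```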