[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["缺少我需要的資訊","missingTheInformationINeed","thumb-down"],["過於複雜/步驟過多","tooComplicatedTooManySteps","thumb-down"],["過時","outOfDate","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["示例/程式碼問題","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-07-24 (世界標準時間)。"],[],[],null,["# LiteRT and TensorFlow operator compatibility\n\nThe machine learning (ML) operators you use in your model can impact the\nprocess of converting a\nTensorFlow model to LiteRT format. The LiteRT converter\nsupports a limited number of TensorFlow operations used in common\ninference models, which means that not every model is directly convertible.\nThe converter tool allows you to include additional operators, but converting\na model this way also requires you to modify the LiteRT runtime\nenvironment you use to execute your model, which can limit your ability\nuse standard runtime deployment options, such as\n[Google Play services](../android/play_services).\n\nThe LiteRT Converter is designed to analyze model\nstructure and apply optimizations in order to make it compatible with the\ndirectly supported operators. For example, depending on the ML operators in\nyour model, the converter may\n[elide or fuse](./operation_fusion) those\noperators in order to map them to their LiteRT counterparts.\n\nEven for supported operations, specific usage patterns are sometimes expected,\nfor performance reasons. The best way to understand how to build a TensorFlow\nmodel that can be used with\nLiteRT is to carefully consider how operations are converted and\noptimized, along with the limitations imposed by this process.\n\nSupported operators\n-------------------\n\nLiteRT built-in operators are a subset of the operators\nthat are part of the TensorFlow core library. 
Your TensorFlow model may also include custom operators in the form of
composite operators or new operators defined by you. The diagram below
shows the relationships between these operators.

From this range of ML model operators, there are 3 types of models
supported by the conversion process:

1. Models with only LiteRT built-in operators. (**Recommended**)
2. Models with the built-in operators and select TensorFlow core operators.
3. Models with the built-in operators, TensorFlow core operators and/or
   custom operators.

If your model only contains operations that are natively supported by
LiteRT, you do not need any additional flags to convert it. This is the
recommended path because this type of model converts smoothly and is
simpler to optimize and run using the default LiteRT runtime. You also
have more deployment options for your model, such as
[Google Play services](../android/play_services). You can get started with
the [LiteRT converter guide](./convert). See the
[LiteRT Ops page](https://www.tensorflow.org/mlir/tfl_ops) for a list of
built-in operators.

If you need to include select TensorFlow operations from the core library,
you must specify that at conversion and ensure your runtime includes those
operations. See the [Select TensorFlow operators](./ops_select.md) topic
for detailed steps.

Whenever possible, avoid the last option of including custom operators in
your converted model.
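For the second model type, the select-ops fallback is enabled with conversion
flags. The sketch below shows the documented `tf.lite.OpsSet` flags; the tiny
Keras model is a hypothetical stand-in for your own model:

```python
import tensorflow as tf

# Hypothetical stand-in model; substitute your own Keras model or SavedModel.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Prefer built-in LiteRT operators, and fall back to select
# TensorFlow core operators for anything the built-in set lacks.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # built-in LiteRT operators
    tf.lite.OpsSet.SELECT_TF_OPS,    # select TensorFlow core operators
]
tflite_model = converter.convert()  # serialized model as bytes
```

Remember that converting with `SELECT_TF_OPS` is only half the work: your
runtime must also include the select TensorFlow ops support, as described in
the [Select TensorFlow operators](./ops_select.md) topic.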
[Custom operators](https://www.tensorflow.org/guide/create_op) are operators
created either by combining multiple primitive TensorFlow core operators or
by defining a completely new one. When custom operators are converted, they
can increase the size of the overall model by incurring dependencies outside
of the built-in LiteRT library. Custom ops, if not specifically created for
mobile or device deployment, can result in worse performance when deployed
to resource-constrained devices compared to a server environment. Finally,
just like including select TensorFlow core operators, custom operators
require you to
[modify the model runtime environment](./ops_custom#create_and_register_the_operator),
which limits you from taking advantage of standard runtime services such as
[Google Play services](../android/play_services).

Supported types
---------------

Most LiteRT operations target both floating-point (`float32`) and
quantized (`uint8`, `int8`) inference, but many ops do not yet support
other types such as `tf.float16` and strings.

Apart from using different versions of the operations, the other difference
between floating-point and quantized models is the way they are converted.
Quantized conversion requires dynamic range information for tensors. This
requires "fake-quantization" during model training, getting range
information via a calibration data set, or doing "on-the-fly" range
estimation. See [quantization](./model_optimization.md) for more details.

Straight-forward conversions, constant-folding and fusing
---------------------------------------------------------

A number of TensorFlow operations can be processed by LiteRT even though
they have no direct equivalent.
This is the case for operations that can be simply removed from the graph
(`tf.identity`), replaced by tensors (`tf.placeholder`), or fused into more
complex operations (`tf.nn.bias_add`). Even some supported operations may
sometimes be removed through one of these processes.

Here is a non-exhaustive list of TensorFlow operations that are usually
removed from the graph:

- `tf.add`
- `tf.debugging.check_numerics`
- `tf.constant`
- `tf.div`
- `tf.divide`
- `tf.fake_quant_with_min_max_args`
- `tf.fake_quant_with_min_max_vars`
- `tf.identity`
- `tf.maximum`
- `tf.minimum`
- `tf.multiply`
- `tf.no_op`
- `tf.placeholder`
- `tf.placeholder_with_default`
- `tf.realdiv`
- `tf.reduce_max`
- `tf.reduce_min`
- `tf.reduce_sum`
- `tf.rsqrt`
- `tf.shape`
- `tf.sqrt`
- `tf.square`
- `tf.subtract`
- `tf.tile`
- `tf.nn.batch_norm_with_global_normalization`
- `tf.nn.bias_add`
- `tf.nn.fused_batch_norm`
- `tf.nn.relu`
- `tf.nn.relu6`

| **Note:** Many of these operations don't have LiteRT equivalents, and the corresponding model will not be convertible if they can't be elided or fused.

Experimental Operations
-----------------------

The following LiteRT operations are present, but are not ready for custom
models:

- `CALL`
- `CONCAT_EMBEDDINGS`
- `CUSTOM`
- `EMBEDDING_LOOKUP_SPARSE`
- `HASHTABLE_LOOKUP`
- `LSH_PROJECTION`
- `SKIP_GRAM`
- `SVDF`
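The elision and fusing behavior described above can be observed directly. In
this minimal sketch (assuming a standard TensorFlow install; the model and
shapes are arbitrary), the function uses `tf.identity` and `tf.nn.bias_add`,
neither of which has a one-to-one LiteRT counterpart, yet it converts with
only built-in operators and produces the same results after conversion:

```python
import numpy as np
import tensorflow as tf

class TinyModel(tf.Module):
    """Uses ops that the converter elides (tf.identity) or fuses (tf.nn.bias_add, tf.nn.relu)."""

    def __init__(self):
        super().__init__()
        self.w = tf.constant(np.random.rand(8, 4).astype(np.float32))
        self.b = tf.constant(np.zeros(4, dtype=np.float32))

    @tf.function(input_signature=[tf.TensorSpec([1, 8], tf.float32)])
    def __call__(self, x):
        x = tf.identity(x)                                 # removed from the graph
        y = tf.nn.bias_add(tf.matmul(x, self.w), self.b)   # fused into one op
        return tf.nn.relu(y)                               # fused as the activation

model = TinyModel()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.__call__.get_concrete_function()], model)
tflite_model = converter.convert()

# Run the converted model and check it matches the TensorFlow result.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
x = np.random.rand(1, 8).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
np.testing.assert_allclose(
    interpreter.get_tensor(out["index"]), model(x).numpy(), atol=1e-5)
```

Because the removed and fused ops disappear during conversion, they impose no
runtime cost and require no extra flags, which is why they appear in the list
above even though they have no direct LiteRT equivalent.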