# TensorFlow RNN conversion to LiteRT

Overview
--------

LiteRT supports converting TensorFlow RNN models to LiteRT's fused LSTM
operations. Fused operations exist to maximize the performance of their
underlying kernel implementations, as well as to provide a higher level
interface for defining complex transformations like quantization.

Since there are many variants of RNN APIs in TensorFlow, our approach has been
two-fold:

1. Provide **native support for standard TensorFlow RNN APIs** like Keras LSTM. This is the recommended option.
2. Provide an **interface into the conversion infrastructure for user-defined RNN implementations** to plug in and get converted to LiteRT. We provide a couple of out-of-the-box examples of such conversion using lingvo's [LSTMCellSimple](https://github.com/tensorflow/tensorflow/blob/82abf0dbf316526cd718ae8cd7b11cfcb805805e/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L130) and [LayerNormalizedLSTMCellSimple](https://github.com/tensorflow/tensorflow/blob/c11d5d8881fd927165eeb09fd524a80ebaf009f2/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L137) RNN interfaces.

Converter API
-------------

The feature is part of the TensorFlow 2.3 release. It is also available through
the [tf-nightly](https://pypi.org/project/tf-nightly/) pip package or from head.
This conversion functionality is available when converting to LiteRT via a
SavedModel or from the Keras model directly. See the example usages below.
### From saved model
    import tensorflow as tf

    # Build a saved model. Here concrete_func is the exported function
    # corresponding to the TensorFlow model containing one or more
    # Keras LSTM layers.
    saved_model, saved_model_dir = build_saved_model_lstm(...)
    saved_model.save(saved_model_dir, save_format="tf", signatures=concrete_func)

    # Convert the model.
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    tflite_model = converter.convert()
### From Keras model
    import tensorflow as tf

    # Build a Keras model.
    keras_model = build_keras_lstm(...)

    # Convert the model.
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    tflite_model = converter.convert()
Example
-------
The Keras LSTM to LiteRT
[Colab](https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb)
illustrates the end-to-end usage with the LiteRT interpreter.
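For a quick reference, the sketch below shows one way to run a converted model
with the LiteRT interpreter in Python; the `tflite_model` bytes from the
snippets above and the all-zeros input are assumptions for illustration.

    import numpy as np
    import tensorflow as tf

    # Load the converted flatbuffer into the interpreter.
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy batch shaped like the model's input,
    # e.g. (batch, time, features) for a sequence model.
    dummy_input = np.zeros(input_details[0]["shape"], dtype=np.float32)
    interpreter.set_tensor(input_details[0]["index"], dummy_input)
    interpreter.invoke()

    output = interpreter.get_tensor(output_details[0]["index"])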
TensorFlow RNN APIs supported
-----------------------------
### Keras LSTM conversion (recommended)
We support out-of-the-box conversion of Keras LSTM to LiteRT. For details on
how this works, please refer to the
[Keras LSTM interface](https://github.com/tensorflow/tensorflow/blob/35a3ab91b42503776f428bda574b74b9a99cd110/tensorflow/python/keras/layers/recurrent_v2.py#L1238)
and to the conversion logic
[here](https://github.com/tensorflow/tensorflow/blob/35a3ab91b42503776f428bda574b74b9a99cd110/tensorflow/compiler/mlir/lite/utils/lstm_utils.cc#L627).

It is also important to highlight LiteRT's LSTM contract with respect to the
Keras operation definition:

1. The dimension 0 of the **input** tensor is the batch size.
2. The dimension 0 of the **recurrent_weight** tensor is the number of outputs.
3. The **weight** and **recurrent_kernel** tensors are transposed.
4. The transposed weight, transposed recurrent_kernel, and **bias** tensors are split into 4 equal-sized tensors along dimension 0. These correspond to the **input gate, forget gate, cell, and output gate**.
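To illustrate this contract, the following sketch (illustrative only, not
converter code; `units` and `input_dim` are arbitrary) shows how a Keras LSTM
kernel maps onto that layout:

    import numpy as np
    import tensorflow as tf

    units, input_dim = 8, 4
    layer = tf.keras.layers.LSTM(units)
    layer.build((None, None, input_dim))
    kernel, recurrent_kernel, bias = layer.get_weights()

    # Keras stores the four gates concatenated along the last axis,
    # in [input | forget | cell | output] order.
    assert kernel.shape == (input_dim, 4 * units)
    assert recurrent_kernel.shape == (units, 4 * units)

    # Per the contract, the fused op sees these tensors transposed and then
    # split into four equal parts along dimension 0.
    w_i, w_f, w_c, w_o = np.split(kernel.T, 4, axis=0)
    rw_i, rw_f, rw_c, rw_o = np.split(recurrent_kernel.T, 4, axis=0)
    assert w_i.shape == (units, input_dim)   # dimension 0 = number of outputs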
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["必要な情報がない","missingTheInformationINeed","thumb-down"],["複雑すぎる / 手順が多すぎる","tooComplicatedTooManySteps","thumb-down"],["最新ではない","outOfDate","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["サンプル / コードに問題がある","samplesCodeIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-07-24 UTC。"],[],[],null,["# TensorFlow RNN conversion to LiteRT\n\nOverview\n--------\n\nLiteRT supports converting TensorFlow RNN models to LiteRT's\nfused LSTM operations. Fused operations exist to maximize the performance of\ntheir underlying kernel implementations, as well as provide a higher level\ninterface to define complex transformations like quantizatization.\n\nSince there are many variants of RNN APIs in TensorFlow, our approach has been\ntwo fold:\n\n1. Provide **native support for standard TensorFlow RNN APIs** like Keras LSTM. This is the recommended option.\n2. Provide an **interface** **into the conversion infrastructure for** **user-defined** **RNN implementations** to plug in and get converted to LiteRT. We provide a couple of out of box examples of such conversion using lingvo's [LSTMCellSimple](https://github.com/tensorflow/tensorflow/blob/82abf0dbf316526cd718ae8cd7b11cfcb805805e/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L130) and [LayerNormalizedLSTMCellSimple](https://github.com/tensorflow/tensorflow/blob/c11d5d8881fd927165eeb09fd524a80ebaf009f2/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L137) RNN interfaces.\n\nConverter API\n-------------\n\nThe feature is part of TensorFlow 2.3 release. It is also available through the\n[tf-nightly](https://pypi.org/project/tf-nightly/) pip or from head.\n\nThis conversion functionality is available when converting to LiteRT\nvia a SavedModel or from the Keras model directly. See example usages.\n\n### From saved model\n\n # build a saved model. Here concrete_function is the exported function\n # corresponding to the TensorFlow model containing one or more\n # Keras LSTM layers.\n saved_model, saved_model_dir = build_saved_model_lstm(...)\n saved_model.save(saved_model_dir, save_format=\"tf\", signatures=concrete_func)\n\n # Convert the model.\n converter = TFLiteConverter.from_saved_model(saved_model_dir)\n tflite_model = converter.convert()\n\n### From Keras model\n\n # build a Keras model\n keras_model = build_keras_lstm(...)\n\n # Convert the model.\n converter = TFLiteConverter.from_keras_model(keras_model)\n tflite_model = converter.convert()\n\nExample\n-------\n\nKeras LSTM to LiteRT\n[Colab](https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb)\nillustrates the end to end usage with the LiteRT interpreter.\n\nTensorFlow RNNs APIs supported\n------------------------------\n\n### Keras LSTM conversion (recommended)\n\nWe support out-of-the-box conversion of Keras LSTM to LiteRT. 
##### BiDirectional LSTM

Bidirectional LSTM can be implemented with two Keras LSTM layers, one for
forward and one for backward; see examples
[here](https://github.com/tensorflow/tensorflow/blob/35a3ab91b42503776f428bda574b74b9a99cd110/tensorflow/python/keras/layers/wrappers.py#L382).
Once we see the go_backwards attribute, we recognize it as a backward LSTM and
can then group the forward & backward LSTMs together. **This is future work.**
Currently, this creates two UnidirectionalSequenceLSTM operations in the
LiteRT model.
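A sketch of this two-layer formulation (illustrative; it mirrors what the
`tf.keras.layers.Bidirectional` wrapper does internally, with assumed shapes
and units):

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(None, 16))  # (batch, time, features)

    fwd = tf.keras.layers.LSTM(32, return_sequences=True)(inputs)
    bwd = tf.keras.layers.LSTM(32, return_sequences=True,
                               go_backwards=True)(inputs)
    # The backward layer emits outputs in reverse time order; flip them back
    # before concatenating with the forward outputs.
    bwd = tf.keras.layers.Lambda(lambda t: tf.reverse(t, axis=[1]))(bwd)

    outputs = tf.keras.layers.Concatenate()([fwd, bwd])
    model = tf.keras.Model(inputs, outputs)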
### User-defined LSTM conversion examples

LiteRT also provides a way to convert user-defined LSTM implementations. Here
we use Lingvo's LSTM as an example of how that can be implemented. For details,
please refer to the
[lingvo.LSTMCellSimple interface](https://github.com/tensorflow/lingvo/blob/91a4609dbc2579748a95110eda59c66d17c594c5/lingvo/core/rnn_cell.py#L228)
and the conversion logic
[here](https://github.com/tensorflow/tensorflow/blob/82abf0dbf316526cd718ae8cd7b11cfcb805805e/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L130).
We also provide an example for another of Lingvo's LSTM definitions in the
[lingvo.LayerNormalizedLSTMCellSimple interface](https://github.com/tensorflow/lingvo/blob/91a4609dbc2579748a95110eda59c66d17c594c5/lingvo/core/rnn_cell.py#L1173)
and its conversion logic
[here](https://github.com/tensorflow/tensorflow/blob/c11d5d8881fd927165eeb09fd524a80ebaf009f2/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L137).

"Bring your own TensorFlow RNN" to LiteRT
-----------------------------------------

If a user's RNN interface is different from the standard supported ones, there
are a couple of options:

**Option 1:** Write adapter code in TensorFlow Python to adapt the RNN
interface to the Keras RNN interface. This means a tf.function with a
[tf_implements annotation](https://github.com/tensorflow/community/pull/113) on
the generated RNN interface's function that is identical to the one generated
by the Keras LSTM layer. After this, the same conversion API used for Keras
LSTM will work.
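As a sketch of Option 1, the wrapper below uses the `experimental_implements`
argument of `tf.function` to attach such an annotation; the annotation name,
function signature, and `my_rnn_cell_loop` helper are hypothetical:

    import tensorflow as tf

    # Hypothetical adapter: wrap a custom RNN so its traced function carries
    # an "implements" annotation the converter can pattern-match on. The
    # signature must match what the conversion code expects.
    @tf.function(experimental_implements="my_project.MyCustomLSTM")
    def my_custom_lstm(inputs, initial_state):
      # ... delegate to the user-defined RNN implementation here ...
      return my_rnn_cell_loop(inputs, initial_state)  # hypothetical helper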
**Option 2:** If the above is not possible (e.g. the Keras LSTM is missing some
functionality that is currently exposed by LiteRT's fused LSTM op, like layer
normalization), then extend the LiteRT converter by writing custom conversion
code and plugging it into the prepare-composite-functions MLIR pass
[here](https://github.com/tensorflow/tensorflow/blob/c11d5d8881fd927165eeb09fd524a80ebaf009f2/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L115).
The function's interface should be treated like an API contract and should
contain the arguments needed to convert to fused LiteRT LSTM operations, i.e.
input, bias, weights, projection, layer normalization, etc. It is preferable
for the tensors passed as arguments to this function to have known rank (i.e.
RankedTensorType in MLIR). This makes it much easier to write conversion code
that can assume these tensors as RankedTensorType, and helps transform them to
ranked tensors corresponding to the fused LiteRT operator's operands.

A complete example of such a conversion flow is Lingvo's LSTMCellSimple to
LiteRT conversion.

The LSTMCellSimple in Lingvo is defined
[here](https://github.com/tensorflow/lingvo/blob/91a4609dbc2579748a95110eda59c66d17c594c5/lingvo/core/rnn_cell.py#L228).
Models trained with this LSTM cell can be converted to LiteRT as follows:

1. Wrap all uses of LSTMCellSimple in a tf.function with a tf_implements
   annotation that is labelled as such (e.g. lingvo.LSTMCellSimple would be a
   good annotation name here). Make sure the tf.function that is generated
   matches the interface of the function expected in the conversion code. This
   is a contract between the model author adding the annotation and the
   conversion code.
2. Extend the prepare-composite-functions pass to plug in a custom composite op
   to LiteRT fused LSTM op conversion. See the
   [LSTMCellSimple](https://github.com/tensorflow/tensorflow/blob/82abf0dbf316526cd718ae8cd7b11cfcb805805e/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L130)
   conversion code.

The conversion contract:

1. **Weight** and **projection** tensors are transposed.
2. The **{input, recurrent}** to **{cell, input gate, forget gate, output
   gate}** are extracted by slicing the transposed weight tensor.
3. The **{bias}** to **{cell, input gate, forget gate, output gate}** are
   extracted by slicing the bias tensor.
4. The **projection** is extracted by slicing the transposed projection tensor.

A similar conversion is written for
[LayerNormalizedLSTMCellSimple](https://github.com/tensorflow/tensorflow/blob/c11d5d8881fd927165eeb09fd524a80ebaf009f2/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc#L137).

The rest of the LiteRT conversion infrastructure, including all the
[MLIR passes](https://github.com/tensorflow/tensorflow/blob/35a3ab91b42503776f428bda574b74b9a99cd110/tensorflow/compiler/mlir/lite/tf_tfl_passes.cc#L57)
defined, as well as the final export to the LiteRT flatbuffer, can be reused.

Known issues/limitations
------------------------

1. Currently there is support only for converting stateless Keras LSTM (the
   default behavior in Keras). Stateful Keras LSTM conversion is future work.
2. It is still possible to model a stateful Keras LSTM layer using the
   underlying stateless Keras LSTM layer and managing the state explicitly in
   the user program (see the sketch after this list). Such a TensorFlow
   program can still be converted to LiteRT using the feature described here.
3. Bidirectional LSTM is currently modelled as two UnidirectionalSequenceLSTM
   operations in LiteRT. This will be replaced with a single
   BidirectionalSequenceLSTM op.
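A minimal sketch of the state-management pattern from item 2 (the shapes and
the explicit state plumbing are illustrative assumptions):

    import tensorflow as tf

    units, features = 32, 16
    # A stateless LSTM layer that also returns its final states.
    lstm = tf.keras.layers.LSTM(units, return_sequences=True, return_state=True)

    inputs = tf.keras.Input(shape=(None, features))
    state_h = tf.keras.Input(shape=(units,))
    state_c = tf.keras.Input(shape=(units,))

    # The caller threads the state through explicitly instead of relying on
    # the layer's stateful=True mode.
    outputs, next_h, next_c = lstm(inputs, initial_state=[state_h, state_c])
    model = tf.keras.Model([inputs, state_h, state_c],
                           [outputs, next_h, next_c])

Between invocations, the user program feeds `next_h` and `next_c` back in as
`state_h` and `state_c`, which keeps the layer itself stateless and therefore
convertible with the feature described here.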