[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["必要な情報がない","missingTheInformationINeed","thumb-down"],["複雑すぎる / 手順が多すぎる","tooComplicatedTooManySteps","thumb-down"],["最新ではない","outOfDate","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["サンプル / コードに問題がある","samplesCodeIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-07-24 UTC。"],[],[],null,["# Quickstart for Linux-based devices with Python\n\nUsing LiteRT with Python is great for embedded devices based on Linux,\nsuch as [Raspberry Pi](https://www.raspberrypi.org/) and\n[Coral devices with Edge TPU](https://coral.withgoogle.com/),\namong many others.\n\nThis page shows how you can start running LiteRT models with Python in\njust a few minutes. All you need is a TensorFlow model [converted to TensorFlow\nLite](../models/convert). (If you don't have a model converted yet, you can\nexperiment using the model provided with the example linked below.)\n\nAbout the LiteRT runtime package\n--------------------------------\n\nTo quickly start executing LiteRT models with Python, you can install\njust the LiteRT interpreter, instead of all TensorFlow packages. We\ncall this simplified Python package `tflite_runtime`.\n\nThe `tflite_runtime` package is a fraction the size of the full `tensorflow`\npackage and includes the bare minimum code required to run inferences with\nLiteRT---primarily the\n[`Interpreter`](../../api/tflite/python/tf/lite/Interpreter)\nPython class. This small package is ideal when all you want to do is execute\n`.tflite` models and avoid wasting disk space with the large TensorFlow library.\n| **Note:** If you need access to other Python APIs, such as the [LiteRT Converter](../models/convert), you must install the [full TensorFlow package](https://www.tensorflow.org/install/). For example, the [Select TF ops](../models/ops_select) are not included in the `tflite_runtime` package. If your models have any dependencies to the Select TF ops, you need to use the full TensorFlow package instead.\n\nInstall LiteRT for Python\n-------------------------\n\nYou can install on Linux with pip: \n\n```\npython3 -m pip install tflite-runtime\n```\n\nSupported platforms\n-------------------\n\nThe `tflite-runtime` Python wheels are pre-built and provided for these\nplatforms:\n\n- Linux armv7l (e.g. Raspberry Pi 2, 3, 4 and Zero 2 running Raspberry Pi OS 32-bit)\n- Linux aarch64 (e.g. Raspberry Pi 3, 4 running Debian ARM64)\n- Linux x86_64\n\nIf you want to run LiteRT models on other platforms, you should either\nuse the [full TensorFlow package](https://www.tensorflow.org/install/), or\n[build the tflite-runtime package from source](../build/cmake_pip).\n\nIf you're using TensorFlow with the Coral Edge TPU, you should\ninstead follow the appropriate [Coral setup documentation](https://coral.ai/docs/setup).\n| **Note:** We no longer update the Debian package `python3-tflite-runtime`. The latest Debian package is for TF version 2.5, which you can install by following [these older instructions](https://github.com/tensorflow/tensorflow/blob/v2.5.0/tensorflow/lite/g3doc/guide/python.md#install-tensorflow-lite-for-python).\n| **Note:** We no longer release pre-built `tflite-runtime` wheels for Windows and macOS. 
Run an inference using tflite_runtime
-------------------------------------

Instead of importing `Interpreter` from the `tensorflow` module, you now
import it from `tflite_runtime`.

For example, after you install the package above, copy and run the
[`label_image.py`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/python/)
file. It will fail at first because the script imports the `tensorflow`
library, which you haven't installed. To fix it, edit this line of the file:

    import tensorflow as tf

So it instead reads:

    import tflite_runtime.interpreter as tflite

Then change this line:

    interpreter = tf.lite.Interpreter(model_path=args.model_file)

So it reads:

    interpreter = tflite.Interpreter(model_path=args.model_file)

Now run `label_image.py` again. That's it! You're now executing LiteRT
models. (A self-contained version of this flow appears at the end of this
page.)

Learn more
----------

- For more details about the `Interpreter` API, read
  [Load and run a model in Python](../inference#load_and_run_a_model_in_python).

- If you have a Raspberry Pi, check out a [video series](https://www.youtube.com/watch?v=mNjXEybFn98&list=PLQY2H8rRoyvz_anznBg6y3VhuSMcpN9oe)
  about how to run object detection on Raspberry Pi using LiteRT.

- If you're using a Coral ML accelerator, check out the
  [Coral examples on GitHub](https://github.com/google-coral/tflite/tree/master/python/examples).

- To convert other TensorFlow models to LiteRT, read about the
  [LiteRT Converter](../models/convert).

- If you want to build the `tflite_runtime` wheel, read
  [Build LiteRT Python Wheel Package](../build/cmake_pip).
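To tie the steps above together, here is a self-contained sketch of the same
inference flow, in the spirit of the `Interpreter` API guide linked above. The
model path `model.tflite` is a placeholder and a float input model is assumed;
adjust the path and input data for your own model:

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load the model (placeholder path) and allocate its tensors.
    interpreter = tflite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    # Look up the model's input and output tensor metadata.
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Build a random input matching the expected shape and dtype.
    input_shape = input_details[0]["shape"]
    input_data = np.random.random_sample(input_shape).astype(
        input_details[0]["dtype"])

    # Run inference and read back the result.
    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]["index"])
    print(output_data)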