[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["没有我需要的信息","missingTheInformationINeed","thumb-down"],["太复杂/步骤太多","tooComplicatedTooManySteps","thumb-down"],["内容需要更新","outOfDate","thumb-down"],["翻译问题","translationIssue","thumb-down"],["示例/代码问题","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-07-24。"],[],[],null,["# Build and convert models\n\nMicrocontrollers have limited RAM and storage, which places constraints on the\nsizes of machine learning models. In addition, LiteRT for\nMicrocontrollers currently supports a limited subset of operations, so not all\nmodel architectures are possible.\n\nThis document explains the process of converting a TensorFlow model to run on\nmicrocontrollers. It also outlines the supported operations and gives some\nguidance on designing and training a model to fit in limited memory.\n\nFor an end-to-end, runnable example of building and converting a model, see the\n[Hello World](https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples/hello_world#hello-world-example)\nexample.\n\nModel conversion\n----------------\n\nTo convert a trained TensorFlow model to run on microcontrollers, you should use\nthe\n[LiteRT converter Python API](../models/convert).\nThis will convert the model into a\n[`FlatBuffer`](https://google.github.io/flatbuffers/), reducing the model size,\nand modify it to use LiteRT operations.\n\nTo obtain the smallest possible model size, you should consider using\n[post-training quantization](../models/post_training_quantization).\n\n### Convert to a C array\n\nMany microcontroller platforms do not have native filesystem support. The\neasiest way to use a model from your program is to include it as a C array and\ncompile it into your program.\n\nThe following unix command will generate a C source file that contains the\nLiteRT model as a `char` array: \n\n xxd -i converted_model.tflite \u003e model_data.cc\n\nThe output will look similar to the following: \n\n unsigned char converted_model_tflite[] = {\n 0x18, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x0e, 0x00,\n // \u003cLines omitted\u003e\n };\n unsigned int converted_model_tflite_len = 18200;\n\nOnce you have generated the file, you can include it in your program. It is\nimportant to change the array declaration to `const` for better memory\nefficiency on embedded platforms.\n\nFor an example of how to include and use a model in your program, see\n[`hello_world_test.cc`](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/hello_world/hello_world_test.cc)\nin the *Hello World* example.\n\nModel architecture and training\n-------------------------------\n\nWhen designing a model for use on microcontrollers, it is important to consider\nthe model size, workload, and the operations that are used.\n\n### Model size\n\nA model must be small enough to fit within your target device's memory alongside\nthe rest of your program, both as a binary and at runtime.\n\nTo create a smaller model, you can use fewer and smaller layers in your\narchitecture. However, small models are more likely to suffer from underfitting.\nThis means for many problems, it makes sense to try and use the largest model\nthat will fit in memory. 
Model architecture and training
-------------------------------

When designing a model for use on microcontrollers, it is important to consider
the model size, the workload, and the operations that are used.

### Model size

A model must be small enough to fit within your target device's memory
alongside the rest of your program, both as a binary and at runtime.

To create a smaller model, you can use fewer and smaller layers in your
architecture. However, small models are more likely to suffer from
underfitting. This means that for many problems, it makes sense to try to use
the largest model that will fit in memory. However, using larger models will
also lead to increased processor workload.

| **Note:** The core runtime for LiteRT for Microcontrollers fits in 16 KB on an Arm Cortex-M3.

### Workload

The size and complexity of the model affect the workload. Large, complex
models might result in a higher duty cycle, which means your device's processor
spends more time working and less time idle. This increases power consumption
and heat output, which might be an issue depending on your application.

### Operation support

LiteRT for Microcontrollers currently supports a limited subset of
TensorFlow operations, which restricts the model architectures that it is
possible to run. We are working on expanding operation support, both in terms
of reference implementations and optimizations for specific architectures.

The supported operations can be seen in the file
[`micro_mutable_op_resolver.h`](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_mutable_op_resolver.h).
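In code, you declare which of those operations your model uses with a
`MicroMutableOpResolver` and pass it to the interpreter, so a missing operation
surfaces as an error when the interpreter is set up rather than deep into
inference. The sketch below assumes a recent version of the C++ API, a
hypothetical model that uses only a fully connected layer, and a placeholder
arena size you would tune for your own model:

    #include <cstdint>

    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

    // The arena holds the model's input, output, and intermediate tensors.
    // Its size is model-specific; 2 * 1024 is an arbitrary placeholder.
    constexpr int kTensorArenaSize = 2 * 1024;
    alignas(16) uint8_t tensor_arena[kTensorArenaSize];

    // `model` would come from tflite::GetModel(), as sketched earlier.
    bool SetUpAndAllocate(const tflite::Model* model) {
      // The template argument is the maximum number of registrations the
      // resolver can hold; keeping it exact avoids wasted static memory.
      tflite::MicroMutableOpResolver<1> op_resolver;
      op_resolver.AddFullyConnected();

      tflite::MicroInterpreter interpreter(model, op_resolver, tensor_arena,
                                           kTensorArenaSize);

      // AllocateTensors() plans memory within the arena; it fails if the
      // arena is too small or the model uses an unregistered operation.
      return interpreter.AllocateTensors() == kTfLiteOk;
    }

Registering only the operations your model actually uses, rather than a
catch-all resolver, also keeps unused kernel code out of your binary, which
matters on devices where flash is measured in kilobytes.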