// Load the modelvalmodelBuffer:MappedByteBuffer=FileUtil.loadMappedFile(appContext,"model.tflite")// Initialize runtimevaloptions=Interpreter.Options()valinterpreter=Interpreter(modelBuffer,options)interpreter.allocateTensors()// Use acceleratorsavalgpuDelegate=GpuDelegate()options.addDelegate(gpuDelegate)
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["没有我需要的信息","missingTheInformationINeed","thumb-down"],["太复杂/步骤太多","tooComplicatedTooManySteps","thumb-down"],["内容需要更新","outOfDate","thumb-down"],["翻译问题","translationIssue","thumb-down"],["示例/代码问题","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-07-24。"],[],[],null,["# Get Started with LiteRT Next\n\n| **Experimental:** LiteRT Next is an alpha release and under active development.\n\nThe LiteRT Next APIs are not compatible with the LiteRT APIs, so\napplications using LiteRT must completely migrate to LiteRT Next in order\nto make use of the features and capabilities provided by the new APIs.\nApplications cannot use the TF Lite Interpreter APIs and Compiled Model APIs\ninterchangeably.\n\nLiteRT Next provides APIs for Kotlin and C++. Applications using a LiteRT\nSDK in other languages should continue using LiteRT.\n\nAndroid dependencies\n--------------------\n\nTo migrate an Android application using LiteRT, replace the dependency from\n`com.google.ai.edge.litert` to `com.google.ai.edge.litert:litert:2.0.0-alpha`.\n\nWith LiteRT, the GPU accelerator is available as a delegate in a separate\nlibrary (`com.google.ai.edge.litert:litert-gpu`). With LiteRT Next, the\nGPU accelerator is included in the LiteRT Next package. For more\ninformation, see [GPU with LiteRT Next](./gpu).\n\nYou can add the LiteRT Next package to your `build.gradle` dependencies: \n\n dependencies {\n ...\n implementation `com.google.ai.edge.litert:litert:2.0.0-alpha`\n }\n\nCode changes\n------------\n\nApplications using LiteRT will have to substitute code that uses the TFLite\nInterpreter API for the code using the Compiled Model API. The following shows\nthe major changes required to migrate to LiteRT Next. For more details,\nsee the [LiteRT Next API reference](../../api/litert/c).\n\n### Code changes in C++\n\nTo migrate an application using C++, substitute the following key snippets:\n\n| | **LiteRT (TFLite Interpreter)** | **LiteRT Next (`CompiledModel`)** |\n|------------------------|------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|\n| **Load a Model** | `FlatBufferModel::BuildFromFile() InterpreterBuilder(...)` | `Model::CreateFromFile(\"mymodel.tflite\")` *Note: No separate builder step* |\n| **Initialize Runtime** | `builder(&interpreter), interpreter-\u003eAllocateTensors()` | `CompiledModel::Create(env, model, kLiteRtHwAcceleratorCpu)` *Note: No manual memory allocation step* |\n| **Use Accelerators** | `interpreter-\u003eModifyGraphWithDelegate(...)` | `CompiledModel::Create(env, model, kLiteRtHwAcceleratorGpu)` |\n| **Run a Model** | `interpreter-\u003eInvoke()` | `compiled_model-\u003eRun(inputs, outputs)` |\n\n### Code changes in Kotlin\n\nTo migrate an application using Kotlin, follow the following key steps:\n\n#### Set up model and runtime\n\nWith LiteRT, you load a model, set up acceleration, and initialize the runtime\nin different steps: \n\n // Load the model\n val modelBuffer: MappedByteBuffer =\n FileUtil.loadMappedFile(appContext, \"model.tflite\")\n\n // Initialize runtime\n val options = Interpreter.Options()\n val interpreter = Interpreter(modelBuffer, options)\n interpreter.allocateTensors()\n\n // Use accelerators\n aval gpuDelegate = GpuDelegate()\n options.addDelegate(gpuDelegate)\n\nWith LiteRT Next, you load the model, specify the acceleration, and\ninitialize the runtime at the same time: \n\n val model =\n CompiledModel.create(\n context.assets,\n \"model.tflite\",\n CompiledModel.Options(Accelerator.GPU)\n )\n\n#### Run inference\n\nTo run the model with LiteRT: \n\n val input = FloatBuffer.allocate(data_size)\n val output = FloatBuffer.allocate(data_size)\n interpreter.run(input, output)\n\nTo run the model with LiteRT Next: \n\n val inputBuffers = model.createInputBuffers()\n val outputBuffers = model.createOutputBuffers()\n model.run(inputBuffers, outputBuffers)\n\nOther libraries\n---------------\n\nThe LiteRT Next APIs are only available in Kotlin and C++. Applications\nusing the LiteRT SDKs in other languages cannot migrate to LiteRT Next.\n\nApplications using the LiteRT in the Play Services runtime cannot migrate to\nLiteRT Next, and should continue using the `play-services-tflite`\nruntime. The Task Library and Model Maker libraries cannot migrate to LiteRT\nNext, and should continue using the TensorFlow Lite APIs."]]