The LiteRT Core ML delegate enables running LiteRT models on the
Core ML framework, which
results in faster model inference on iOS devices.
Supported iOS versions and devices:
iOS 12 and later. On older iOS versions, the Core ML delegate will
automatically fall back to the CPU.
By default, the Core ML delegate will only be enabled on devices with the A12 SoC
and later (iPhone Xs and later), so that the Neural Engine can be used for faster inference.
If you want to use the Core ML delegate on older devices as well, see the
best practices below.
Supported models
The Core ML delegate currently supports float (FP32 and FP16) models.
Trying the Core ML delegate on your own model
The Core ML delegate is already included in the nightly release of the LiteRT
CocoaPods. To use the Core ML delegate, change your LiteRT pod to include the
CoreML subspec in your Podfile.
target 'YourProjectName'
pod 'TensorFlowLiteSwift/CoreML', '~> 2.4.0' # Or TensorFlowLiteObjC/CoreML
OR
# Particularly useful when you also want to include the 'Metal' subspec.
target 'YourProjectName'
pod 'TensorFlowLiteSwift', '~> 2.4.0', :subspecs => ['CoreML']
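After adding the pod, you create the delegate and pass it to the interpreter when it is available. The following is a minimal Swift sketch, assuming the TensorFlowLiteSwift API (CoreMLDelegate, Interpreter) and a hypothetical modelPath variable pointing at your .tflite file:

import TensorFlowLite

// CoreMLDelegate() has a failable initializer: by default it returns nil on
// devices without a Neural Engine.
let coreMLDelegate = CoreMLDelegate()
var interpreter: Interpreter

if let coreMLDelegate = coreMLDelegate {
  // Supported parts of the graph are delegated to Core ML.
  interpreter = try Interpreter(modelPath: modelPath,
                                delegates: [coreMLDelegate])
} else {
  // Fall back to the default CPU interpreter.
  interpreter = try Interpreter(modelPath: modelPath)
}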
Best practices
Using the Core ML delegate on devices without a Neural Engine
By default, the Core ML delegate will only be created if the device has a Neural
Engine, and it will return null if the delegate is not created. If you want to
run the Core ML delegate in other environments (for example, the simulator), pass .all
as an option when creating the delegate in Swift. In C++ (and Objective-C), you can
pass TfLiteCoreMlDelegateAllDevices. The following example shows how to do this:
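A minimal Swift sketch, assuming the CoreMLDelegate.Options type from TensorFlowLiteSwift and a hypothetical modelPath variable:

import TensorFlowLite

var options = CoreMLDelegate.Options()
// Allow the delegate to be created on all devices, not only those with a
// Neural Engine (useful on the simulator, for example).
options.enabledDevices = .all

let coreMLDelegate = CoreMLDelegate(options: options)!
let interpreter = try Interpreter(modelPath: modelPath,
                                  delegates: [coreMLDelegate])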
Using the Metal (GPU) delegate as a fallback
If the Core ML delegate is not created, you can still use the
Metal delegate to get
performance benefits. The following example shows how to do this:
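A minimal Swift sketch of the fallback, assuming the MetalDelegate type from TensorFlowLiteSwift and a hypothetical modelPath variable:

import TensorFlowLite

// Prefer Core ML; fall back to Metal (GPU) if the Core ML delegate cannot be
// created on this device.
var delegate: Delegate? = CoreMLDelegate()
if delegate == nil {
  delegate = MetalDelegate()  // Add Metal delegate options here if necessary.
}

let interpreter = try Interpreter(modelPath: modelPath,
                                  delegates: [delegate!])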
The delegate creation logic reads the device's machine identifier (e.g. iPhone11,1) to
determine Neural Engine availability. See the
code
for more detail. Alternatively, you can implement your own denylist of devices
using other libraries such as
DeviceKit.
Using an older Core ML version
Although iOS 13 supports Core ML 3, the model might work better when it is
converted with the Core ML 2 model specification. The target conversion version is
set to the latest version by default, but
you can change this by setting
coreMLVersion (in Swift; coreml_version in the C API) in the delegate options to an
older version.
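A minimal Swift sketch, assuming CoreMLDelegate.Options exposes the coreMLVersion field mentioned above:

import TensorFlowLite

// Target the Core ML 2 model specification instead of the latest version.
var options = CoreMLDelegate.Options()
options.coreMLVersion = 2
let coreMLDelegate = CoreMLDelegate(options: options)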
Supported ops
The following ops are supported by the Core ML delegate.
Add
Only certain shapes are broadcastable. In the Core ML tensor layout, the
following tensor shapes are broadcastable: [B, C, H, W], [B, C, 1, 1],
[B, 1, H, W], [B, 1, 1, 1].
AveragePool2D
Concat
Concatenation should be done along the channel axis.
Conv2D
Weights and bias should be constant.
DepthwiseConv2D
Weights and bias should be constant.
FullyConnected (aka Dense or InnerProduct)
Weights and bias (if present) should be constant.
Only the single-batch case is supported. Input dimensions should be 1, except
the last dimension.
Hardswish
Logistic (aka Sigmoid)
MaxPool2D
MirrorPad
Only 4D input with REFLECT mode is supported. Padding should be
constant, and is only allowed for the H and W dimensions.
Mul
Only certain shapes are broadcastable. In the Core ML tensor layout, the
following tensor shapes are broadcastable: [B, C, H, W], [B, C, 1, 1],
[B, 1, H, W], [B, 1, 1, 1].
Pad and PadV2
Only 4D input is supported. Padding should be constant, and is only
allowed for the H and W dimensions.
Relu
ReluN1To1
Relu6
Reshape
Only supported when the target Core ML version is 2; not supported when
targeting Core ML 3.
ResizeBilinear
SoftMax
Tanh
TransposeConv
Weights should be constant.
Feedback
If you run into issues, please create a
GitHub
issue with all the details
necessary to reproduce the problem.
FAQ
Does the CoreML delegate support fallback to the CPU if a graph contains unsupported
ops?
Yes.
Does the CoreML delegate work on the iOS Simulator?
Yes. The library includes x86 and x86_64 targets so it can run on a
simulator, but you will not see a
performance boost over the CPU.
Do LiteRT and the CoreML delegate support macOS?
LiteRT is only tested on iOS, not on macOS.
Are custom LiteRT ops supported?
No, the CoreML delegate does not support custom ops; they will fall back to
the CPU.
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Informasi yang saya butuhkan tidak ada","missingTheInformationINeed","thumb-down"],["Terlalu rumit/langkahnya terlalu banyak","tooComplicatedTooManySteps","thumb-down"],["Sudah usang","outOfDate","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Masalah kode / contoh","samplesCodeIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-07-24 UTC."],[],[],null,["# LiteRT Core ML delegate\n\nThe LiteRT Core ML delegate enables running LiteRT models on\n[Core ML framework](https://developer.apple.com/documentation/coreml), which\nresults in faster model inference on iOS devices.\n| **Note:** This delegate is in experimental (beta) phase. It is available from LiteRT 2.4.0 and latest nightly releases.\n| **Note:** Core ML delegate supports Core ML version 2 and later.\n\n**Supported iOS versions and devices:**\n\n- iOS 12 and later. In the older iOS versions, Core ML delegate will automatically fallback to CPU.\n- By default, Core ML delegate will only be enabled on devices with A12 SoC and later (iPhone Xs and later) to use Neural Engine for faster inference. If you want to use Core ML delegate also on the older devices, please see [best practices](#best_practices)\n\n**Supported models**\n\nThe Core ML delegate currently supports float (FP32 and FP16) models.\n\nTrying the Core ML delegate on your own model\n---------------------------------------------\n\nThe Core ML delegate is already included in nightly release of LiteRT\nCocoaPods. To use Core ML delegate, change your LiteRT pod to include\nsubspec `CoreML` in your `Podfile`.\n**Note:** If you want to use C API instead of Objective-C API, you can include `TensorFlowLiteC/CoreML` pod to do so. \n\n target 'YourProjectName'\n pod 'TensorFlowLiteSwift/CoreML', '~\u003e 2.4.0' # Or TensorFlowLiteObjC/CoreML\n\nOR \n\n # Particularily useful when you also want to include 'Metal' subspec.\n target 'YourProjectName'\n pod 'TensorFlowLiteSwift', '~\u003e 2.4.0', :subspecs =\u003e ['CoreML']\n\n**Note:** Core ML delegate can also use C API for Objective-C code. Prior to LiteRT 2.4.0 release, this was the only option. \n\n### Swift\n\n\u003cbr /\u003e\n\n```swift\n let coreMLDelegate = CoreMLDelegate()\n var interpreter: Interpreter\n\n // Core ML delegate will only be created for devices with Neural Engine\n if coreMLDelegate != nil {\n interpreter = try Interpreter(modelPath: modelPath,\n delegates: [coreMLDelegate!])\n } else {\n interpreter = try Interpreter(modelPath: modelPath)\n }\n \n```\n\n\u003cbr /\u003e\n\n### Objective-C\n\n\u003cbr /\u003e\n\n```objective-c\n // Import module when using CocoaPods with module support\n @import TFLTensorFlowLite;\n\n // Or import following headers manually\n # import \"tensorflow/lite/objc/apis/TFLCoreMLDelegate.h\"\n # import \"tensorflow/lite/objc/apis/TFLTensorFlowLite.h\"\n\n // Initialize Core ML delegate\n TFLCoreMLDelegate* coreMLDelegate = [[TFLCoreMLDelegate alloc] init];\n\n // Initialize interpreter with model path and Core ML delegate\n TFLInterpreterOptions* options = [[TFLInterpreterOptions alloc] init];\n NSError* error = nil;\n TFLInterpreter* interpreter = [[TFLInterpreter alloc]\n initWithModelPath:modelPath\n options:options\n delegates:@[ coreMLDelegate ]\n error:&error];\n if (error != nil) { /* Error handling... 
*/ }\n\n if (![interpreter allocateTensorsWithError:&error]) { /* Error handling... */ }\n if (error != nil) { /* Error handling... */ }\n\n // Run inference ...\n \n```\n\n\u003cbr /\u003e\n\n### C (Until 2.3.0)\n\n\u003cbr /\u003e\n\n```c\n #include \"tensorflow/lite/delegates/coreml/coreml_delegate.h\"\n\n // Initialize interpreter with model\n TfLiteModel* model = TfLiteModelCreateFromFile(model_path);\n\n // Initialize interpreter with Core ML delegate\n TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();\n TfLiteDelegate* delegate = TfLiteCoreMlDelegateCreate(NULL); // default config\n TfLiteInterpreterOptionsAddDelegate(options, delegate);\n TfLiteInterpreterOptionsDelete(options);\n\n TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);\n\n TfLiteInterpreterAllocateTensors(interpreter);\n\n // Run inference ...\n\n /* ... */\n\n // Dispose resources when it is no longer used.\n // Add following code to the section where you dispose of the delegate\n // (e.g. `dealloc` of class).\n\n TfLiteInterpreterDelete(interpreter);\n TfLiteCoreMlDelegateDelete(delegate);\n TfLiteModelDelete(model);\n \n```\n\n\u003cbr /\u003e\n\nBest practices\n--------------\n\n### Using Core ML delegate on devices without Neural Engine\n\nBy default, Core ML delegate will only be created if the device has Neural\nEngine, and will return `null` if the delegate is not created. If you want to\nrun Core ML delegate on other environments (for example, simulator), pass `.all`\nas an option while creating delegate in Swift. On C++ (and Objective-C), you can\npass `TfLiteCoreMlDelegateAllDevices`. Following example shows how to do this: \n\n### Swift\n\n\u003cbr /\u003e\n\n```swift\n var options = CoreMLDelegate.Options()\n options.enabledDevices = .all\n let coreMLDelegate = CoreMLDelegate(options: options)!\n let interpreter = try Interpreter(modelPath: modelPath,\n delegates: [coreMLDelegate])\n \n```\n\n\u003cbr /\u003e\n\n### Objective-C\n\n\u003cbr /\u003e\n\n```objective-c\n TFLCoreMLDelegateOptions* coreMLOptions = [[TFLCoreMLDelegateOptions alloc] init];\n coreMLOptions.enabledDevices = TFLCoreMLDelegateEnabledDevicesAll;\n TFLCoreMLDelegate* coreMLDelegate = [[TFLCoreMLDelegate alloc]\n initWithOptions:coreMLOptions];\n\n // Initialize interpreter with delegate\n \n```\n\n\u003cbr /\u003e\n\n### C\n\n\u003cbr /\u003e\n\n```c\n TfLiteCoreMlDelegateOptions options;\n options.enabled_devices = TfLiteCoreMlDelegateAllDevices;\n TfLiteDelegate* delegate = TfLiteCoreMlDelegateCreate(&options);\n // Initialize interpreter with delegate\n \n```\n\n\u003cbr /\u003e\n\n### Using Metal(GPU) delegate as a fallback.\n\nWhen the Core ML delegate is not created, alternatively you can still use\n[Metal delegate](../performance/gpu#ios) to get\nperformance benefits. 
Following example shows how to do this: \n\n### Swift\n\n\u003cbr /\u003e\n\n```swift\n var delegate = CoreMLDelegate()\n if delegate == nil {\n delegate = MetalDelegate() // Add Metal delegate options if necessary.\n }\n\n let interpreter = try Interpreter(modelPath: modelPath,\n delegates: [delegate!])\n \n```\n\n\u003cbr /\u003e\n\n### Objective-C\n\n\u003cbr /\u003e\n\n```objective-c\n TFLDelegate* delegate = [[TFLCoreMLDelegate alloc] init];\n if (!delegate) {\n // Add Metal delegate options if necessary\n delegate = [[TFLMetalDelegate alloc] init];\n }\n // Initialize interpreter with delegate\n \n```\n\n\u003cbr /\u003e\n\n### C\n\n\u003cbr /\u003e\n\n```c\n TfLiteCoreMlDelegateOptions options = {};\n delegate = TfLiteCoreMlDelegateCreate(&options);\n if (delegate == NULL) {\n // Add Metal delegate options if necessary\n delegate = TFLGpuDelegateCreate(NULL);\n }\n // Initialize interpreter with delegate\n \n```\n\n\u003cbr /\u003e\n\nThe delegate creation logic reads device's machine id (e.g. iPhone11,1) to\ndetermine its Neural Engine availability. See the\n[code](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/coreml/coreml_delegate.mm)\nfor more detail. Alternatively, you can implement your own set of denylist\ndevices using other libraries such as\n[DeviceKit](https://github.com/devicekit/DeviceKit).\n\n### Using older Core ML version\n\nAlthough iOS 13 supports Core ML 3, the model might work better when it is\nconverted with Core ML 2 model specification. The target conversion version is\nset to the latest version by default, but you can change this by setting\n`coreMLVersion` (in Swift, `coreml_version` in C API) in the delegate option to\nolder version.\n\nSupported ops\n-------------\n\nFollowing ops are supported by the Core ML delegate.\n\n- Add\n - Only certain shapes are broadcastable. In Core ML tensor layout, following tensor shapes are broadcastable. `[B, C, H, W]`, `[B, C, 1,\n 1]`, `[B, 1, H, W]`, `[B, 1, 1, 1]`.\n- AveragePool2D\n- Concat\n - Concatenation should be done along the channel axis.\n- Conv2D\n - Weights and bias should be constant.\n- DepthwiseConv2D\n - Weights and bias should be constant.\n- FullyConnected (aka Dense or InnerProduct)\n - Weights and bias (if present) should be constant.\n - Only supports single-batch case. Input dimensions should be 1, except the last dimension.\n- Hardswish\n- Logistic (aka Sigmoid)\n- MaxPool2D\n- MirrorPad\n - Only 4D input with `REFLECT` mode is supported. Padding should be constant, and is only allowed for H and W dimensions.\n- Mul\n - Only certain shapes are broadcastable. In Core ML tensor layout, following tensor shapes are broadcastable. `[B, C, H, W]`, `[B, C, 1,\n 1]`, `[B, 1, H, W]`, `[B, 1, 1, 1]`.\n- Pad and PadV2\n - Only 4D input is supported. Padding should be constant, and is only allowed for H and W dimensions.\n- Relu\n- ReluN1To1\n- Relu6\n- Reshape\n - Only supported when target Core ML version is 2, not supported when targeting Core ML 3.\n- ResizeBilinear\n- SoftMax\n- Tanh\n- TransposeConv\n - Weights should be constant.\n\nFeedback\n--------\n\nFor issues, please create a\n[GitHub](https://github.com/tensorflow/tensorflow/issues/new?template=50-other-issues.md)\nissue with all the necessary details to reproduce.\n\nFAQ\n---\n\n- Does CoreML delegate support fallback to CPU if a graph contains unsupported ops?\n - Yes\n- Does CoreML delegate work on iOS Simulator?\n - Yes. 
The library includes x86 and x86_64 targets so it can run on a simulator, but you will not see performance boost over CPU.\n- Does LiteRT and CoreML delegate support MacOS?\n - LiteRT is only tested on iOS but not MacOS.\n- Is custom LiteRT ops supported?\n - No, CoreML delegate does not support custom ops and they will fallback to CPU.\n\nAPIs\n----\n\n- [Core ML delegate Swift API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/swift/Sources/CoreMLDelegate.swift)\n- [Core ML delegate C API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/coreml/coreml_delegate.h)"]]