# TensorFlowLiteSwift Framework Reference

Options
=======

    public struct Options : Equatable, Hashable

Options for configuring the [Interpreter](../../Classes/Interpreter.html).

### threadCount

The maximum number of CPU threads that the interpreter should run on. The default is `nil`, indicating that the [Interpreter](../../Classes/Interpreter.html) will decide the number of threads to use.

#### Declaration

Swift

    public var threadCount: Int?

### isXNNPackEnabled

Indicates whether an optimized set of floating-point CPU kernels, provided by XNNPACK, is enabled.

Experiment

Enabling this flag enables a new, highly optimized set of CPU kernels provided via the XNNPACK delegate. Currently, this is restricted to a subset of floating-point operations. Eventually, we plan to enable this by default, as it can provide significant performance benefits for many classes of floating-point models. See <https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/xnnpack/README.md> for more details.

Important

Things to keep in mind when enabling this flag:

- Startup time and resize time may increase.
- Baseline memory consumption may increase.
- Compatibility with other delegates (e.g., GPU) has not been fully validated.
- Quantized models will not see any benefit.

Warning

This is an experimental interface that is subject to change.

#### Declaration

Swift

    public var isXNNPackEnabled: Bool

### init()

Creates a new instance with the default values.

#### Declaration

Swift

    public init()
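As a brief illustration, the following sketch configures these options and passes them to an `Interpreter`. The model path is hypothetical, and enabling `isXNNPackEnabled` assumes a floating-point model, since quantized models see no benefit:

```swift
import TensorFlowLite

// Configure interpreter options before creating the interpreter.
var options = Interpreter.Options()
options.threadCount = 2          // Cap CPU threads; leave nil to let the interpreter decide.
options.isXNNPackEnabled = true  // Experimental; benefits floating-point models only.

// "model.tflite" is a placeholder path for this example.
let interpreter = try Interpreter(modelPath: "model.tflite", options: options)
try interpreter.allocateTensors()
```

Options must be set before the `Interpreter` is created; changing them afterward has no effect on an existing instance.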
Last updated 2024-05-10 UTC.