# tf.lite.Optimize

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/2adc36c677a558b93a454705059baac2b0cdf5a3/tensorflow/lite/python/lite.py#L108-L163)

Enum defining the optimizations to apply when generating a tflite model.
DEFAULT
The default optimization strategy that enables post-training quantization.
The type of post-training quantization that will be used is dependent on
the other converter options supplied. Refer to the
[documentation](https://ai.google.dev/edge/litert/models/post_training_quantization)
for further information on the types available and how to use them.
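A minimal sketch of how `Optimize.DEFAULT` is typically used with the converter. The small Keras model here is purely illustrative; with no other converter options set, `DEFAULT` results in dynamic-range quantization, while supplying a `representative_dataset` switches to full-integer quantization:

```python
import tensorflow as tf

# A tiny illustrative model; any Keras model would do.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# DEFAULT enables post-training quantization; the exact kind is chosen
# from the other converter options (e.g. representative_dataset).
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # serialized model as bytes
```

The converted model can then be written to a `.tflite` file and loaded with `tf.lite.Interpreter`.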
OPTIMIZE_FOR_SIZE
Deprecated. Does the same as DEFAULT.
OPTIMIZE_FOR_LATENCY
Deprecated. Does the same as DEFAULT.
EXPERIMENTAL_SPARSITY
Experimental flag, subject to change.

Enable optimization by taking advantage of the sparse model weights
trained with pruning.

The converter will inspect the sparsity pattern of the model weights and
do its best to improve size and latency. The flag can be used alone to
optimize float32 models with sparse weights. It can also be used together
with the DEFAULT optimization mode to optimize quantized models with
sparse weights.
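A short sketch of both documented uses of `EXPERIMENTAL_SPARSITY`: alone on a float32 model, or combined with `DEFAULT` so quantization and sparsity are applied together. The model is illustrative only:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Used alone: optimize a float32 model with sparse weights.
converter.optimizations = [tf.lite.Optimize.EXPERIMENTAL_SPARSITY]

# Or combined with DEFAULT: quantize and exploit sparsity together.
converter.optimizations = [
    tf.lite.Optimize.DEFAULT,
    tf.lite.Optimize.EXPERIMENTAL_SPARSITY,
]
tflite_model = converter.convert()
```

Note that the benefit depends on the model actually having sparse weights (e.g. from pruning); a dense model like this one converts but gains nothing from the flag.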
Class Variables

| Name                  | Value                                                       |
|-----------------------|-------------------------------------------------------------|
| DEFAULT               | `<Optimize.DEFAULT: 'DEFAULT'>`                             |
| EXPERIMENTAL_SPARSITY | `<Optimize.EXPERIMENTAL_SPARSITY: 'EXPERIMENTAL_SPARSITY'>` |
| OPTIMIZE_FOR_LATENCY  | `<Optimize.OPTIMIZE_FOR_LATENCY: 'OPTIMIZE_FOR_LATENCY'>`   |
| OPTIMIZE_FOR_SIZE     | `<Optimize.OPTIMIZE_FOR_SIZE: 'OPTIMIZE_FOR_SIZE'>`         |

Last updated 2024-09-24 UTC.