# mediapipe_model_maker.quantization.QuantizationConfig

[View source on GitHub](https://github.com/google/mediapipe/blob/master/mediapipe/model_maker/python/core/utils/quantization.py#L58-L213)

Configuration for post-training quantization.

    mediapipe_model_maker.quantization.QuantizationConfig(
        optimizations: Optional[Union[tf.lite.Optimize, List[tf.lite.Optimize]]] = None,
        representative_data: Optional[ds.Dataset] = None,
        quantization_steps: Optional[int] = None,
        inference_input_type: Optional[tf.dtypes.DType] = None,
        inference_output_type: Optional[tf.dtypes.DType] = None,
        supported_ops: Optional[Union[tf.lite.OpsSet, List[tf.lite.OpsSet]]] = None,
        supported_types: Optional[Union[tf.dtypes.DType, List[tf.dtypes.DType]]] = None,
        experimental_new_quantizer: bool = False
    )

Refer to
<https://www.tensorflow.org/lite/performance/post_training_quantization>
for the available post-training quantization options.

| Args ||
|---|---|
| `optimizations` | A list of optimizations to apply when converting the model. If not set, defaults to `[Optimize.DEFAULT]`. |
| `representative_data` | A representative `ds.Dataset` for post-training quantization. |
| `quantization_steps` | Number of post-training quantization calibration steps to run (defaults to `DEFAULT_QUANTIZATION_STEPS`). |
| `inference_input_type` | Target data type of real-number input arrays. Allows for a different type for input arrays. Defaults to `None`. If set, must be one of `{tf.float32, tf.uint8, tf.int8}`. |
| `inference_output_type` | Target data type of real-number output arrays. Allows for a different type for output arrays. Defaults to `None`. If set, must be one of `{tf.float32, tf.uint8, tf.int8}`. |
| `supported_ops` | Set of `OpsSet` options supported by the device. Used to set `converter.target_spec.supported_ops`. |
| `supported_types` | List of types for constant values on the target device. Supported values are types exported by `lite.constants`. Frequently, an optimization choice is driven by the most compact (i.e. smallest) type in this list (defaults to `[constants.FLOAT]`). |
| `experimental_new_quantizer` | Whether to enable the experimental new quantizer. |
| Raises ||
|---|---|
| `ValueError` | If `inference_input_type` or `inference_output_type` is set but not in `{tf.float32, tf.uint8, tf.int8}`. |
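To make the constructor arguments concrete, here is a minimal hand-built config sketch; the classmethods under Methods below cover the common presets:

    import tensorflow as tf
    from mediapipe_model_maker import quantization

    # Equivalent to dynamic range quantization: only the DEFAULT
    # optimization is requested, so no representative data is needed.
    config = quantization.QuantizationConfig(
        optimizations=[tf.lite.Optimize.DEFAULT]
    )

    # An unsupported inference type is rejected at construction:
    # quantization.QuantizationConfig(inference_input_type=tf.int16)
    # -> ValueError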
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-07 UTC."],[],[],null,["# mediapipe_model_maker.quantization.QuantizationConfig\n\n\u003cbr /\u003e\n\n|-------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/google/mediapipe/blob/master/mediapipe/model_maker/python/core/utils/quantization.py#L58-L213) |\n\nConfiguration for post-training quantization.\n\n#### View aliases\n\n\n**Main aliases**\n\n[`mediapipe_model_maker.face_stylizer.face_stylizer.face_stylizer_options.model_opt.loss_functions.model_util.quantization.QuantizationConfig`](https://www.tensorflow.org/mediapipe/api/solutions/python/mediapipe_model_maker/quantization/QuantizationConfig), [`mediapipe_model_maker.face_stylizer.face_stylizer.loss_functions.model_util.quantization.QuantizationConfig`](https://www.tensorflow.org/mediapipe/api/solutions/python/mediapipe_model_maker/quantization/QuantizationConfig), [`mediapipe_model_maker.face_stylizer.face_stylizer.model_opt.loss_functions.model_util.quantization.QuantizationConfig`](https://www.tensorflow.org/mediapipe/api/solutions/python/mediapipe_model_maker/quantization/QuantizationConfig), [`mediapipe_model_maker.face_stylizer.face_stylizer.model_util.quantization.QuantizationConfig`](https://www.tensorflow.org/mediapipe/api/solutions/python/mediapipe_model_maker/quantization/QuantizationConfig), [`mediapipe_model_maker.face_stylizer.face_stylizer_options.model_opt.loss_functions.model_util.quantization.QuantizationConfig`](https://www.tensorflow.org/mediapipe/api/solutions/python/mediapipe_model_maker/quantization/QuantizationConfig), [`mediapipe_model_maker.face_stylizer.model_options.loss_functions.model_util.quantization.QuantizationConfig`](https://www.tensorflow.org/mediapipe/api/solutions/python/mediapipe_model_maker/quantization/QuantizationConfig), [`mediapipe_model_maker.model_util.quantization.QuantizationConfig`](https://www.tensorflow.org/mediapipe/api/solutions/python/mediapipe_model_maker/quantization/QuantizationConfig)\n\n\u003cbr /\u003e\n\n mediapipe_model_maker.quantization.QuantizationConfig(\n optimizations: Optional[Union[tf.lite.Optimize, List[tf.lite.Optimize]]] = None,\n representative_data: Optional[../../mediapipe_model_maker/model_util/dataset/Dataset] = None,\n quantization_steps: Optional[int] = None,\n inference_input_type: Optional[tf.dtypes.DType] = None,\n inference_output_type: Optional[tf.dtypes.DType] = None,\n supported_ops: Optional[Union[tf.lite.OpsSet, List[tf.lite.OpsSet]]] = None,\n supported_types: Optional[Union[tf.dtypes.DType, List[tf.dtypes.DType]]] = None,\n experimental_new_quantizer: bool = False\n )\n\nRefer to\n\u003chttps://www.tensorflow.org/lite/performance/post_training_quantization\u003e\nfor different post-training quantization options.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- 
### `for_int8`

[View source](https://github.com/google/mediapipe/blob/master/mediapipe/model_maker/python/core/utils/quantization.py#L147-L176)

    @classmethod
    for_int8(
        representative_data: ds.Dataset,
        quantization_steps: int = DEFAULT_QUANTIZATION_STEPS,
        inference_input_type: tf.dtypes.DType = tf.uint8,
        inference_output_type: tf.dtypes.DType = tf.uint8,
        supported_ops: tf.lite.OpsSet = tf.lite.OpsSet.TFLITE_BUILTINS_INT8
    ) -> 'QuantizationConfig'

Creates configuration for full integer quantization.

| Args ||
|---|---|
| `representative_data` | Representative data used for post-training quantization. |
| `quantization_steps` | Number of post-training quantization calibration steps to run. |
| `inference_input_type` | Target data type of real-number input arrays. |
| `inference_output_type` | Target data type of real-number output arrays. |
| `supported_ops` | Set of [`tf.lite.OpsSet`](https://www.tensorflow.org/lite/api_docs/python/tf/lite/OpsSet) options, where each option represents a set of operators supported by the target device. |

| Returns ||
|---|---|
| `QuantizationConfig`. ||
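A sketch of full integer quantization. `train_data` is an assumed, already-built Model Maker dataset; int8 conversion needs such representative inputs to calibrate activation ranges:

    from mediapipe_model_maker import quantization

    # `train_data` is assumed to be a Model Maker Dataset created
    # elsewhere; a slice of the training set is a common choice.
    int8_config = quantization.QuantizationConfig.for_int8(
        representative_data=train_data,
        # More calibration steps can improve accuracy at the cost of
        # a slower export; the default is DEFAULT_QUANTIZATION_STEPS.
        quantization_steps=500,
    )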
### `set_converter_with_quantization`

[View source](https://github.com/google/mediapipe/blob/master/mediapipe/model_maker/python/core/utils/quantization.py#L183-L213)

    set_converter_with_quantization(
        converter: tf.lite.TFLiteConverter, **kwargs
    ) -> tf.lite.TFLiteConverter

Sets the input TFLite converter with quantization configurations.

| Args ||
|---|---|
| `converter` | Input `tf.lite.TFLiteConverter`. |
| `**kwargs` | Arguments used by `ds.Dataset.gen_tf_dataset`. |

| Returns ||
|---|---|
| `tf.lite.TFLiteConverter` with quantization configurations. ||
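A sketch of applying a config to a converter by hand, assuming `saved_model_dir` points at an exported SavedModel; Model Maker's export path normally does this for you:

    import tensorflow as tf
    from mediapipe_model_maker import quantization

    config = quantization.QuantizationConfig.for_dynamic()

    # Build a converter, let the config set its quantization fields
    # (optimizations, representative dataset, target spec, ...), then
    # convert as usual. `saved_model_dir` is an assumed path.
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter = config.set_converter_with_quantization(converter)
    tflite_model = converter.convert()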