Quantizes a TensorBuffer with the given zeroPoint and scale.

Note: QuantizeOp does not cast the output to UINT8; it only performs the quantization math on the input. The data type of the output tensor is always FLOAT32, except when the Op is effectively an identity Op (in which case the output tensor is the same instance as the input). To connect to a quantized model, a CastOp is probably needed.
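The quantization math follows the standard TFLite affine scheme, output = input / scale + zeroPoint. Below is a minimal sketch of chaining QuantizeOp with a CastOp inside a TensorProcessor so the result can feed a UINT8 model input; the zeroPoint (128) and scale (1/128) values here are illustrative placeholders, not values from any real model.

```java
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.common.TensorProcessor;
import org.tensorflow.lite.support.common.ops.CastOp;
import org.tensorflow.lite.support.common.ops.QuantizeOp;
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;

// Illustrative quantization parameters; real values should come from the model.
float zeroPoint = 128f;
float scale = 1 / 128f;

TensorProcessor processor =
    new TensorProcessor.Builder()
        .add(new QuantizeOp(zeroPoint, scale)) // output stays FLOAT32
        .add(new CastOp(DataType.UINT8))       // cast so a quantized model can consume it
        .build();

TensorBuffer input = TensorBuffer.createFixedSize(new int[] {1, 4}, DataType.FLOAT32);
input.loadArray(new float[] {-1.0f, -0.5f, 0.5f, 1.0f});
TensorBuffer quantized = processor.process(input); // now a UINT8 buffer
```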
If both zeroPoint and scale are 0, the QuantizeOp is bypassed, which is equivalent to setting zeroPoint to 0 and scale to 1. This can be useful when passing in quantization parameters that are extracted directly from the TFLite model flatbuffer: if a tensor is not quantized, its zeroPoint and scale are both read as 0.
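For instance, a sketch of reading the parameters from a model tensor at runtime, assuming `interpreter` is an already-constructed org.tensorflow.lite.Interpreter; for a non-quantized (float) input tensor, quantizationParams() reports zeroPoint and scale as 0, so the resulting QuantizeOp is bypassed as described above.

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.Tensor;
import org.tensorflow.lite.support.common.ops.QuantizeOp;

// `interpreter` is assumed to be an existing org.tensorflow.lite.Interpreter.
Tensor inputTensor = interpreter.getInputTensor(0);
Tensor.QuantizationParams params = inputTensor.quantizationParams();

// For a non-quantized tensor, zeroPoint and scale are both 0,
// and this QuantizeOp becomes an identity Op.
QuantizeOp quantize = new QuantizeOp(params.getZeroPoint(), params.getScale());
```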
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-10 UTC."],[],[]]