DequantizeOp

public class DequantizeOp

Dequantizes a TensorBuffer with given zeroPoint and scale.

Note: The data type of the output tensor is always FLOAT32, except when the DequantizeOp is effectively an identity op (for example, zeroPoint set to 0 and scale set to 1); in that case, the output tensor is the same instance as the input.

If both zeroPoint and scale are 0, the DequantizeOp will be bypassed, which is equivalent to setting zeroPoint to 0 and scale to 1. This can be useful when passing in the quantization parameters that are extracted directly from the TFLite model flatbuffer. If the tensor is not quantized, both zeroPoint and scale will be read as 0.
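The arithmetic above can be sketched in plain Java. This is a minimal illustration of the standard dequantization formula, real = (quantized - zeroPoint) * scale, and the zero/zero bypass rule; the `DequantizeSketch` class and `dequantize` helper are hypothetical names for illustration, not part of the library:

```java
// Hypothetical sketch of what DequantizeOp computes; the class name and
// helper method are illustrative, not part of the TFLite Support API.
public class DequantizeSketch {

    // Dequantization formula: real = (quantized - zeroPoint) * scale.
    static float[] dequantize(float[] values, float zeroPoint, float scale) {
        // Bypass rule: zeroPoint == 0 and scale == 0 means the tensor is
        // not quantized, so the op acts as an identity (zeroPoint 0, scale 1).
        if (zeroPoint == 0f && scale == 0f) {
            return values;
        }
        float[] out = new float[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = (values[i] - zeroPoint) * scale;
        }
        return out;
    }

    public static void main(String[] args) {
        // Example: uint8-style quantized values with zeroPoint = 128, scale = 0.5
        float[] quantized = {128f, 130f, 126f};
        float[] real = dequantize(quantized, 128f, 0.5f);
        System.out.println(real[0] + " " + real[1] + " " + real[2]);
        // prints "0.0 1.0 -1.0"
    }
}
```

In the real op, the same computation is applied element-wise to a TensorBuffer, which is why the output type is FLOAT32 unless the op degenerates to an identity.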

Public Constructors

DequantizeOp(float zeroPoint, float scale)

Inherited Methods

From class org.tensorflow.lite.support.common.ops.NormalizeOp

TensorBuffer apply(TensorBuffer input)
Applies the defined normalization on given tensor and returns the result.

From class java.lang.Object

boolean equals(Object arg0)
final Class<?> getClass()
int hashCode()
final void notify()
final void notifyAll()
String toString()
final void wait(long arg0, int arg1)
final void wait(long arg0)
final void wait()

From interface org.tensorflow.lite.support.common.TensorOperator

abstract TensorBuffer apply(TensorBuffer input)

From interface org.tensorflow.lite.support.common.Operator

abstract TensorBuffer apply(TensorBuffer x)
Applies an operation on a T object, returning a T object.

Public Constructors

public DequantizeOp (float zeroPoint, float scale)

Parameters
zeroPoint: the zero point of the quantization parameters.
scale: the scale of the quantization parameters.