A typed multi-dimensional array used in TensorFlow Lite.

The native handle of a Tensor is managed by NativeInterpreterWrapper, and does not need to be closed by the client. However, once the NativeInterpreterWrapper has been closed, the tensor handle will be invalidated.
Nested Classes
class | Tensor.QuantizationParams | Quantization parameters that correspond to the table QuantizationParameters in the TFLite Model schema file. |
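The quantization parameters define how quantized integer values map back to real numbers via `real = scale * (q - zeroPoint)`. A minimal standalone sketch of that mapping (the `scale` and `zeroPoint` values here are made-up examples; in practice they come from `tensor.quantizationParams().getScale()` and `getZeroPoint()`):

```java
public class DequantizeExample {
    // Dequantize a single quantized value using the TFLite affine
    // quantization formula: real = scale * (q - zeroPoint).
    static float dequantize(int q, float scale, int zeroPoint) {
        return scale * (q - zeroPoint);
    }

    public static void main(String[] args) {
        // Hypothetical parameters for illustration only.
        float scale = 0.5f;
        int zeroPoint = 10;
        System.out.println(dequantize(14, scale, zeroPoint)); // 2.0
    }
}
```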
Public Methods
abstract ByteBuffer | asReadOnlyBuffer()
Returns a read-only ByteBuffer view of the tensor data. |
abstract DataType | dataType()
Returns the DataType of elements stored in the Tensor. |
abstract int | numBytes()
Returns the size, in bytes, of the tensor data. |
abstract int | numDimensions()
Returns the number of dimensions (sometimes referred to as rank) of the Tensor. |
abstract int | numElements()
Returns the number of elements in a flattened (1-D) view of the tensor. |
abstract Tensor.QuantizationParams | quantizationParams()
Returns the quantization parameters of the tensor within the owning interpreter. |
abstract int[] | shape()
Returns the shape of the Tensor, i.e., the sizes of each dimension. |
abstract int[] | shapeSignature()
Returns the original shape of the Tensor, i.e., the sizes of each dimension, before any resizing was performed. |
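The metadata accessors above are related: numDimensions() is the length of the shape array, and numElements() is the product of the dimension sizes. A standalone sketch of that relationship, assuming a made-up shape array (in practice it would come from `tensor.shape()`):

```java
public class ShapeMetadata {
    public static void main(String[] args) {
        // Hypothetical shape of a [batch, height, width, channels] tensor;
        // a real value would come from tensor.shape().
        int[] shape = {1, 224, 224, 3};

        int numDimensions = shape.length; // the tensor's rank
        int numElements = 1;
        for (int dim : shape) {
            numElements *= dim; // element count of the flattened (1-D) view
        }

        System.out.println(numDimensions); // 4
        System.out.println(numElements);   // 150528
    }
}
```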
Public Methods
public abstract ByteBuffer asReadOnlyBuffer ()
Returns a read-only ByteBuffer
view of the tensor data.
In general, this method is most useful for obtaining a read-only view of output tensor data,
*after* inference has been executed (e.g., via InterpreterApi.run(Object, Object)
). In
particular, some graphs have dynamically shaped outputs, which can make feeding a predefined
output buffer to the interpreter awkward. Example usage:
interpreter.run(input, null);
ByteBuffer outputBuffer = interpreter.getOutputTensor(0).asReadOnlyBuffer();
// Copy or read from outputBuffer.
WARNING: If the tensor has not yet been allocated, e.g., before inference has been executed, the result is undefined. Note that the underlying tensor pointer may also change when the tensor is invalidated in any way (e.g., if inference is executed, or the graph is resized), so it is *not* safe to hold a reference to the returned buffer beyond immediate use directly following inference.

Example *bad* usage:
ByteBuffer outputBuffer = interpreter.getOutputTensor(0).asReadOnlyBuffer();
interpreter.run(input, null);
// Copy or read from outputBuffer (which may now be invalid).
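Because the returned buffer is only valid until the tensor is next invalidated, the safe pattern is to copy its contents out immediately. A standalone sketch of that copy using plain java.nio (the `backing` buffer here is a stand-in for what asReadOnlyBuffer() would return; the interpreter calls are omitted):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class CopyOutputExample {
    public static void main(String[] args) {
        // Stand-in for the tensor's backing storage; in practice the buffer
        // would come from interpreter.getOutputTensor(0).asReadOnlyBuffer().
        ByteBuffer backing = ByteBuffer.allocateDirect(3 * Float.BYTES)
                .order(ByteOrder.nativeOrder());
        backing.asFloatBuffer().put(new float[] {0.1f, 0.7f, 0.2f});

        // A read-only view resets byte order to big-endian, so restore it.
        ByteBuffer outputBuffer =
                backing.asReadOnlyBuffer().order(ByteOrder.nativeOrder());

        // Copy out immediately, before anything can invalidate the tensor.
        FloatBuffer floats = outputBuffer.asFloatBuffer();
        float[] scores = new float[floats.remaining()];
        floats.get(scores);

        System.out.println(scores.length); // 3
    }
}
```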
Throws
IllegalArgumentException | if the tensor data has not been allocated. |