tflite::impl::Interpreter
Summary
| Constructors and Destructors | |
|---|---|
| Interpreter(ErrorReporter *error_reporter) | |
| Interpreter(const Interpreter &) | |
| ~Interpreter() | |
| Public types | |
|---|---|
| TfLiteDelegatePtr | using std::unique_ptr< TfLiteDelegate, void(*)(TfLiteDelegate *)> |
| Public static attributes | | |
|---|---|---|
| kTensorsCapacityHeadroom = 16 | constexpr int | The capacity headroom of the tensors_ vector before calling ops' prepare and invoke functions. |
| kTensorsReservedCapacity = 128 | constexpr int | |
| Friend classes | |
|---|---|
| tflite::impl::InterpreterBuilder | friend class |
| Public functions | | |
|---|---|---|
| AddProfiler(Profiler *profiler) | void | Warning: This is an experimental API and subject to change. |
| AddProfiler(std::unique_ptr< Profiler > profiler) | void | Warning: This is an experimental API and subject to change. |
| AllocateTensors() | TfLiteStatus | Update allocations for all tensors. |
| ApplyOptions(InterpreterOptions *options) | TfLiteStatus | Warning: This is an experimental API and subject to change. |
| Cancel() | TfLiteStatus | Warning: This is an experimental API and subject to change. |
| EnsureTensorDataIsReadable(int tensor_index) | TfLiteStatus | Warning: This is an experimental API and subject to change. |
| GetAllowFp16PrecisionForFp32() const | bool | Warning: Experimental interface, subject to change. |
| GetAsyncSignatureRunner(const char *signature_key) | async::AsyncSignatureRunner * | Warning: Experimental interface, subject to change. |
| GetBufferHandle(int tensor_index, TfLiteBufferHandle *buffer_handle, TfLiteDelegate **delegate) | TfLiteStatus | Warning: This is an experimental API and subject to change. |
| GetInputName(int index) const | const char * | Return the name of a given input. |
| GetOutputName(int index) const | const char * | Return the name of a given output. |
| GetProfiler() | Profiler * | Warning: This is an experimental API and subject to change. |
| GetSignatureRunner(const char *signature_key) | SignatureRunner * | Returns a pointer to the SignatureRunner instance to run the part of the graph identified by a SignatureDef. |
| GetSubgraphIndexFromSignature(const char *signature_key) const | int | Warning: Experimental interface, subject to change. |
| Invoke() | TfLiteStatus | Invoke the interpreter (run the whole graph in dependency order). |
| ModifyGraphWithDelegate(TfLiteDelegate *delegate) | TfLiteStatus | Allow a delegate to look at the graph and modify the graph to handle parts of the graph themselves. |
| ModifyGraphWithDelegate(TfLiteOpaqueDelegateStruct *delegate) | TfLiteStatus | |
| ModifyGraphWithDelegate(std::unique_ptr< Delegate, Deleter > delegate) | TfLiteStatus | Warning: This is an experimental API and subject to change. |
| ModifyGraphWithDelegate(std::unique_ptr< TfLiteDelegate > delegate)=delete | TfLiteStatus | This overload is never OK. |
| OpProfilingString(const TfLiteRegistration & op_reg, const TfLiteNode *node) const | const char * | Retrieve an operator's description of its work, for profiling purposes. |
| ReleaseNonPersistentMemory() | TfLiteStatus | Warning: Experimental interface, subject to change. |
| ResetVariableTensors() | TfLiteStatus | Warning: This is an experimental API and subject to change. |
| ResizeInputTensor(int tensor_index, const std::vector< int > & dims) | TfLiteStatus | Change the dimensionality of a given tensor. |
| ResizeInputTensorStrict(int tensor_index, const std::vector< int > & dims) | TfLiteStatus | Change the dimensionality of a given tensor. |
| SetAllowBufferHandleOutput(bool allow_buffer_handle_output) | void | Warning: This is an experimental API and subject to change. |
| SetAllowFp16PrecisionForFp32(bool allow) | void | Allow float16 precision for FP32 calculation when possible. |
| SetBufferHandle(int tensor_index, TfLiteBufferHandle buffer_handle, TfLiteDelegate *delegate) | TfLiteStatus | Warning: This is an experimental API and subject to change. |
| SetBufferHandle(TfLiteTensor *tensor, TfLiteBufferHandle buffer_handle, TfLiteDelegate *delegate) | TfLiteStatus | Warning: This is an experimental API and subject to change. |
| SetCancellationFunction(void *data, bool(*)(void *) check_cancelled_func) | void | Warning: This is an experimental API and subject to change. |
| SetCustomAllocationForTensor(int tensor_index, const TfLiteCustomAllocation & allocation, int64_t flags) | TfLiteStatus | Assigns (or reassigns) a custom memory allocation for the given tensor. |
| SetExternalContext(TfLiteExternalContextType type, TfLiteExternalContext *ctx) | void | |
| SetNumThreads(int num_threads) | TfLiteStatus | Set the number of threads available to the interpreter. |
| SetProfiler(Profiler *profiler) | void | Warning: This is an experimental API and subject to change. |
| SetProfiler(std::unique_ptr< Profiler > profiler) | void | Warning: This is an experimental API and subject to change. |
| error_reporter() const | ErrorReporter * | Warning: Experimental interface, subject to change. |
| execution_plan() const | const std::vector< int > & | Warning: Experimental interface, subject to change. |
| input_tensor(size_t index) | TfLiteTensor * | Return a mutable pointer to the given input tensor. |
| input_tensor(size_t index) const | const TfLiteTensor * | Return an immutable pointer to the given input tensor. |
| input_tensor_by_signature(const char *signature_input_name, const char *signature_key) | TfLiteTensor * | Returns the input tensor identified by 'signature_input_name' in the signature identified by 'signature_key'. |
| inputs() const | const std::vector< int > & | Read only access to list of inputs. |
| node_and_registration(int node_index) const | const std::pair< TfLiteNode, TfLiteRegistration > * | Returns a pointer to an operation and registration data structure if in bounds from the primary subgraph (subgraph_[0]). |
| node_and_registration(int subgraph_index, int node_index) const | const std::pair< TfLiteNode, TfLiteRegistration > * | Returns a pointer to an operation and registration data structure if in bounds. |
| nodes_size() const | size_t | Return the number of ops in the model. |
| operator=(const Interpreter &)=delete | | |
| output_tensor(size_t index) | TfLiteTensor * | Return a mutable pointer to the given output tensor. |
| output_tensor(size_t index) const | const TfLiteTensor * | Return an immutable pointer to the given output tensor. |
| output_tensor_by_signature(const char *signature_output_name, const char *signature_key) const | const TfLiteTensor * | Returns the output tensor identified by 'signature_output_name' in the signature identified by 'signature_key'. |
| outputs() const | const std::vector< int > & | Read only access to list of outputs. |
| signature_inputs(const char *signature_key) const | const std::map< std::string, uint32_t > & | Returns the mapping of inputs to tensor index in the signature specified through 'signature_key'. |
| signature_keys() const | std::vector< const std::string * > | Returns list of all keys of different method signatures defined in the model. |
| signature_outputs(const char *signature_key) const | const std::map< std::string, uint32_t > & | Returns the mapping of outputs to tensor index in the signature specified through 'signature_key'. |
| tensor(int tensor_index) | TfLiteTensor * | Get a mutable tensor data structure. |
| tensor(int tensor_index) const | const TfLiteTensor * | Get an immutable tensor data structure. |
| tensors_size() const | size_t | Return the number of tensors in the model. |
| typed_input_tensor(int index) | T * | Return a mutable pointer into the data of a given input tensor. |
| typed_input_tensor(int index) const | const T * | Return an immutable pointer into the data of a given input tensor. |
| typed_output_tensor(int index) | T * | Return a mutable pointer into the data of a given output tensor. |
| typed_output_tensor(int index) const | const T * | Return an immutable pointer into the data of a given output tensor. |
| typed_tensor(int tensor_index) | T * | Perform a checked cast to the appropriate tensor type (mutable pointer version). |
| typed_tensor(int tensor_index) const | const T * | Perform a checked cast to the appropriate tensor type (immutable pointer version). |
| variables() const | const std::vector< int > & | Read only access to list of variable tensors. |
Public types
TfLiteDelegatePtr
std::unique_ptr< TfLiteDelegate, void(*)(TfLiteDelegate *)> TfLiteDelegatePtr
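For orientation, here is a brief sketch of how this alias is typically used: it bundles a raw TfLiteDelegate* with the function that destroys it, so the delegate is released automatically. The factory and destroy functions below are hypothetical placeholders (not part of this API), and the tflite::Interpreter spelling is the alias user code normally sees for tflite::impl::Interpreter.

```cpp
// Hypothetical delegate factory and destroy functions, shown only to
// illustrate the alias; substitute the ones provided by your delegate library.
TfLiteDelegate* MyCreateDelegate();
void MyDeleteDelegate(TfLiteDelegate* delegate);

// The deleter type matches void (*)(TfLiteDelegate *), so a plain function
// pointer can be passed directly as the second constructor argument.
tflite::Interpreter::TfLiteDelegatePtr delegate(MyCreateDelegate(),
                                                MyDeleteDelegate);
```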
Public static attributes
kTensorsCapacityHeadroom
constexpr int kTensorsCapacityHeadroom = 16
The capacity headroom of the tensors_ vector before calling ops' prepare and invoke functions. Within these functions, it is guaranteed that allocating up to kTensorsCapacityHeadroom more tensors will not invalidate pointers to existing tensors.
kTensorsReservedCapacity
constexpr int kTensorsReservedCapacity = 128
Friend classes
tflite::impl::InterpreterBuilder
friend class tflite::impl::InterpreterBuilder
Public functions
AddProfiler
void AddProfiler( Profiler *profiler )
Warning: This is an experimental API and subject to change.
Adds a profiler for tracing execution. The caller retains ownership of the profiler and must ensure its validity. A nullptr profiler will be ignored.
AddProfiler
void AddProfiler( std::unique_ptr< Profiler > profiler )
Warning: This is an experimental API and subject to change.
Adds a profiler for tracing execution. Transfers ownership of the profiler to the interpreter. A nullptr profiler will be ignored.
AllocateTensors
TfLiteStatus AllocateTensors()
Update allocations for all tensors.
This will redim dependent tensors using the input tensor dimensionality as given. This is relatively expensive. This must be called after the interpreter has been created and before running inference (and accessing tensor buffers), and must be called again if (and only if) an input tensor is resized. Returns status of success or failure. Will fail if any of the ops in the model (other than those which were rewritten by delegates, if any) are not supported by the Interpreter's OpResolver.
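To make the lifecycle concrete, here is a minimal sketch of a typical call sequence; the model path, input shape, and float tensor types are illustrative assumptions, not requirements of the API.

```cpp
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Minimal sketch: build an interpreter, allocate tensors, run inference once.
int RunOnce() {
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Must be called before accessing tensor buffers or calling Invoke().
  if (interpreter->AllocateTensors() != kTfLiteOk) return -1;

  float* input = interpreter->typed_input_tensor<float>(0);  // assumes a float input
  // ... fill `input` ...
  if (interpreter->Invoke() != kTfLiteOk) return -1;
  const float* output = interpreter->typed_output_tensor<float>(0);
  (void)output;

  // Only if an input tensor is resized must AllocateTensors() be called again.
  interpreter->ResizeInputTensor(interpreter->inputs()[0], {1, 224, 224, 3});
  if (interpreter->AllocateTensors() != kTfLiteOk) return -1;
  return 0;
}
```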
ApplyOptions
TfLiteStatus ApplyOptions( InterpreterOptions *options )
Warning: This is an experimental API and subject to change.
Applies InterpreterOptions, which tune the behavior of the interpreter.
Cancel
TfLiteStatus Cancel()
Warning: This is an experimental API and subject to change.
Attempts to cancel an in-flight invocation, if any. This will not affect calls to Invoke that happen after the cancellation. Non-blocking and thread-safe. Returns kTfLiteError if cancellation is not enabled; otherwise returns kTfLiteOk.
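As a sketch of the intended call pattern, the snippet below cancels a long-running Invoke() from a second thread. It assumes cancellation has already been enabled for this interpreter (how that is configured is outside this excerpt) and that a five-second deadline is appropriate; both are illustrative assumptions.

```cpp
#include <chrono>
#include <thread>

#include "tensorflow/lite/interpreter.h"

// Sketch: cancel an in-flight Invoke() from another thread.
// Assumes cancellation is enabled; otherwise Cancel() returns kTfLiteError.
TfLiteStatus InvokeWithDeadline(tflite::Interpreter& interpreter) {
  std::thread watchdog([&interpreter] {
    std::this_thread::sleep_for(std::chrono::seconds(5));  // illustrative deadline
    interpreter.Cancel();  // non-blocking and thread-safe, per the note above
  });
  // Invoke() may return early if the cancellation takes effect.
  TfLiteStatus status = interpreter.Invoke();
  watchdog.join();
  return status;
}
```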
EnsureTensorDataIsReadable
TfLiteStatus EnsureTensorDataIsReadable( int tensor_index )
Warning: This is an experimental API and subject to change.
Ensure the data in tensor.data is readable. If a delegate has been used and SetAllowBufferHandleOutput(true) has been called, tensor outputs may be stored as delegate buffer handles whose data is not directly readable until this method has been called. In such cases, this method will copy the data from the delegate buffer handle to CPU memory.
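A short sketch of the read-back path this method supports; it assumes a delegate has already been applied via ModifyGraphWithDelegate() and that the first output holds floats.

```cpp
// Sketch: let outputs stay in delegate buffer handles, then copy one back to
// CPU memory before reading it. Assumes a delegate is already applied.
const float* ReadFirstOutput(tflite::Interpreter& interpreter) {
  interpreter.SetAllowBufferHandleOutput(true);  // outputs may remain on the accelerator
  if (interpreter.Invoke() != kTfLiteOk) return nullptr;

  const int output_index = interpreter.outputs()[0];
  // Copies data from the delegate buffer handle into tensor.data if needed.
  if (interpreter.EnsureTensorDataIsReadable(output_index) != kTfLiteOk) return nullptr;
  return interpreter.typed_output_tensor<float>(0);  // assumes a float output
}
```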
GetAllowFp16PrecisionForFp32
bool GetAllowFp16PrecisionForFp32() const
Warning: Experimental interface, subject to change.
Get the half-precision flag.
GetAsyncSignatureRunner
async::AsyncSignatureRunner * GetAsyncSignatureRunner( const char *signature_key )
Warning: Experimental interface, subject to change.
Returns a pointer to the AsyncSignatureRunner instance to run the part of the graph identified by a SignatureDef. Returns nullptr if the given signature key is not valid. If the model does not have a signature def, pass nullptr as signature_key and the AsyncSignatureRunner will be created using the primary subgraph (0). The async delegate should be applied before calling this function.
GetBufferHandle
TfLiteStatus GetBufferHandle( int tensor_index, TfLiteBufferHandle *buffer_handle, TfLiteDelegate **delegate )
Warning: This is an experimental API and subject to change.
Get the delegate buffer handle, and the delegate which can process the buffer handle.
GetInputName
const char * GetInputName( int index ) const
Return the name of a given input.
The given index must be between 0 and inputs().size().
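For example, a small sketch that lists every input index alongside its name (assuming `interpreter` is a fully built tflite::Interpreter):

```cpp
#include <cstdio>

// Sketch: print each input tensor's index and name.
void PrintInputNames(const tflite::Interpreter& interpreter) {
  for (size_t i = 0; i < interpreter.inputs().size(); ++i) {
    std::printf("input %zu: %s\n", i, interpreter.GetInputName(static_cast<int>(i)));
  }
}
```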