TensorFlowLiteSwift Framework Reference

Interpreter

public final class Interpreter

A TensorFlow Lite interpreter that performs inference from a given model.

  • The configuration options for the Interpreter.

    Declaration

    Swift

    public let options: Options?
  • An Array of Delegates for the Interpreter to use to perform graph operations.

    Declaration

    Swift

    public let delegates: [Delegate]?
  • The total number of input Tensors associated with the model.

    Declaration

    Swift

    public var inputTensorCount: Int { get }
  • The total number of output Tensors associated with the model.

    Declaration

    Swift

    public var outputTensorCount: Int { get }
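
    For example, once tensors have been allocated, these counts can be used to inspect the model's inputs and outputs (a minimal sketch; the interpreter value is assumed to have been created elsewhere):

        // After allocation the counts describe the model's input/output signature.
        try interpreter.allocateTensors()
        print("inputs:", interpreter.inputTensorCount, "outputs:", interpreter.outputTensorCount)
        for index in 0..<interpreter.inputTensorCount {
          print(try interpreter.input(at: index).shape.dimensions)
        }
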
  • Creates a new instance with the given values.

    Throws

    An error if the model could not be loaded or the interpreter could not be created.

    Declaration

    Swift

    public init(modelPath: String, options: Options? = nil, delegates: [Delegate]? = nil) throws

    Parameters

    modelPath

    The local file path to a TensorFlow Lite model.

    options

    Configurations for the Interpreter. The default is nil, indicating that the Interpreter will determine the configuration options.

    delegates

    Array of Delegates for the Interpreter to use to perform graph operations. The default is nil.
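
    For example, a minimal setup might look like the following (the model file name, thread count, and use of the Metal delegate are illustrative assumptions, not requirements):

        import TensorFlowLite

        // Locate a .tflite model bundled with the app (placeholder file name).
        guard let modelPath = Bundle.main.path(forResource: "model", ofType: "tflite") else {
          fatalError("Failed to find the model file.")
        }

        // Optional configuration: run inference on two threads.
        var options = Interpreter.Options()
        options.threadCount = 2

        // Optional hardware acceleration (requires the Metal delegate dependency).
        let delegates: [Delegate] = [MetalDelegate()]

        let interpreter = try Interpreter(modelPath: modelPath,
                                          options: options,
                                          delegates: delegates)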

  • Invokes the interpreter to perform inference from the loaded graph.

    Throws

    An error if the model was not ready because the tensors were not allocated.

    Declaration

    Swift

    public func invoke() throws
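
    For example, invoke() is typically called after tensors have been allocated and input data has been copied in (a sketch assuming interpreter and inputData were prepared elsewhere):

        try interpreter.allocateTensors()
        try interpreter.copy(inputData, toInputAt: 0)
        try interpreter.invoke()
        let outputTensor = try interpreter.output(at: 0)
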
  • Returns the input Tensor at the given index.

    Throws

    An error if the index is invalid or the tensors have not been allocated.

    Declaration

    Swift

    public func input(at index: Int) throws -> Tensor

    Parameters

    index

    The index for the input Tensor.

    Return Value

    The input Tensor at the given index.
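
    For example, the returned Tensor can be inspected to determine how input data should be prepared (index 0 is an assumption; consult the model for the actual layout):

        let inputTensor = try interpreter.input(at: 0)
        print(inputTensor.shape.dimensions)  // e.g. [1, 224, 224, 3]
        print(inputTensor.dataType)          // e.g. float32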

  • Returns the output Tensor at the given index.

    Throws

    An error if the index is invalid, the tensors have not been allocated, or the interpreter has not been invoked for models that dynamically compute output tensors based on the values of their input tensors.

    Declaration

    Swift

    public func output(at index: Int) throws -> Tensor

    Parameters

    index

    The index for the output Tensor.

    Return Value

    The output Tensor at the given index.
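
    For example, after invoking the interpreter, the output Tensor's bytes can be reinterpreted as an array of the model's element type (a sketch assuming a float32 output at index 0):

        let outputTensor = try interpreter.output(at: 0)
        let results: [Float] = outputTensor.data.withUnsafeBytes {
          Array($0.bindMemory(to: Float.self))
        }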

  • Resizes the input Tensor at the given index to the specified Tensor.Shape.

    Note

    After resizing an input tensor, the client must explicitly call allocateTensors() before attempting to access the resized tensor data or invoking the interpreter to perform inference.

    Throws

    An error if the input tensor at the given index could not be resized.

    Declaration

    Swift

    public func resizeInput(at index: Int, to shape: Tensor.Shape) throws

    Parameters

    index

    The index for the input Tensor.

    shape

    The shape to resize the input Tensor to.
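
    For example, resizing is typically followed immediately by allocateTensors() (the shape below is an illustrative placeholder):

        // Resize input 0 to a batch of one 224x224 RGB image, then re-allocate.
        try interpreter.resizeInput(at: 0, to: Tensor.Shape([1, 224, 224, 3]))
        try interpreter.allocateTensors()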

  • Copies the given data to the input Tensor at the given index.

    Throws

    An error if the data.count does not match the input tensor’s data.count or if the given index is invalid.

    Declaration

    Swift

    @discardableResult
    public func copy(_ data: Data, toInputAt index: Int) throws -> Tensor

    Parameters

    data

    The data to be copied to the input Tensor's data buffer.

    index

    The index for the input Tensor.

    Return Value

    The input Tensor with the copied data.
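
    For example, a [Float] buffer can be converted to Data and copied into the first input (placeholder values; the byte count must match the tensor's data count):

        let inputValues: [Float] = [0.1, 0.2, 0.3]
        let inputData = inputValues.withUnsafeBufferPointer { Data(buffer: $0) }
        try interpreter.copy(inputData, toInputAt: 0)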

  • Allocates memory for all input Tensors based on their Tensor.Shapes.

    Note

    This is a relatively expensive operation and should only be called after creating the interpreter and resizing any input tensors.

    Throws

    An error if memory could not be allocated for the input tensors.

    Declaration

    Swift

    public func allocateTensors() throws
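
    For example, a typical call order places allocateTensors() after creation and any resizing, and before copying input data or invoking (modelPath and inputData are assumed to exist):

        let interpreter = try Interpreter(modelPath: modelPath)
        try interpreter.resizeInput(at: 0, to: Tensor.Shape([1, 224, 224, 3]))  // optional
        try interpreter.allocateTensors()
        try interpreter.copy(inputData, toInputAt: 0)
        try interpreter.invoke()
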
  • Options for configuring the Interpreter.