ImageEmbedder

class ImageEmbedder : NSObject

Performs embedding extraction on images.

The API expects a TFLite model with optional, but strongly recommended, TFLite Model Metadata.

The API supports models with one image input tensor and one or more output tensors. To be more specific, here are the requirements:

Input image tensor (kTfLiteUInt8/kTfLiteFloat32)
- image input of size [batch x height x width x channels].
- batch inference is not supported (batch is required to be 1).
- only RGB inputs are supported (channels is required to be 3).
- if type is kTfLiteFloat32, NormalizationOptions are required to be attached to the metadata for input normalization.

At least one output tensor (kTfLiteUInt8/kTfLiteFloat32) with shape [1 x N], where N is the number of dimensions in the produced embeddings.
Creates a new instance of ImageEmbedder from an absolute path to a TensorFlow Lite model file stored locally on the device and the default ImageEmbedderOptions.

Declaration

Swift

convenience init(modelPath: String) throws

Parameters

modelPath
An absolute path to a TensorFlow Lite model file stored locally on the device.

Return Value

A new instance of ImageEmbedder with the given model path. nil if there is an error in initializing the image embedder.
Creates a new instance of ImageEmbedder from the given ImageEmbedderOptions.

Declaration

Swift

init(options: ImageEmbedderOptions) throws

Parameters

options
The options of type ImageEmbedderOptions to use for configuring the ImageEmbedder.

Return Value

A new instance of ImageEmbedder with the given options. nil if there is an error in initializing the image embedder.
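For context, creating an embedder with explicit options might look like the following sketch. The model file name is a placeholder, and the `l2Normalize`/`quantize` property names are assumptions based on the wider MediaPipe Tasks embedder API; verify them against your SDK version.

```swift
import MediaPipeTasksVision

// Hypothetical model file name; replace with your own TFLite embedding model.
guard let modelPath = Bundle.main.path(forResource: "mobilenet_v3_small",
                                       ofType: "tflite") else {
  fatalError("Model file not found in the app bundle.")
}

let options = ImageEmbedderOptions()
options.baseOptions.modelAssetPath = modelPath
options.runningMode = .image
// Optional embedder settings (assumed property names):
options.l2Normalize = true   // L2-normalize the returned embeddings
options.quantize = false     // keep float embeddings rather than scalar-quantized ones

do {
  let embedder = try ImageEmbedder(options: options)
  // Use `embedder` for embedding extraction.
} catch {
  print("Failed to initialize ImageEmbedder: \(error)")
}
```

For the common case of default options, the `init(modelPath:)` convenience initializer above achieves the same result in one line.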
Performs embedding extraction on the provided MPImage using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the ImageEmbedder is created with running mode .image.

This method supports embedding extraction on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If your MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

Declaration

Swift

func embed(image: MPImage) throws -> ImageEmbedderResult

Parameters

image
The MPImage on which embedding extraction is to be performed.

Return Value

An ImageEmbedderResult object that contains a list of generated image embeddings.
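A minimal usage sketch, assuming an `embedder` created in `.image` mode and a bundled image named "photo.jpg" (both placeholders). The `embeddingResult.embeddings` and `floatEmbedding` member names follow the MediaPipe Tasks result types and should be verified against your SDK version.

```swift
import MediaPipeTasksVision
import UIKit

// Embeds a bundled image and reports the size of the first embedding.
func printEmbeddingSize(using embedder: ImageEmbedder) throws {
  guard let uiImage = UIImage(named: "photo.jpg") else { return }
  let mpImage = try MPImage(uiImage: uiImage)
  let result = try embedder.embed(image: mpImage)
  // Each output tensor of the model yields one embedding in the result.
  let first = result.embeddingResult.embeddings.first
  print("Embedding dimensions: \(first?.floatEmbedding?.count ?? 0)")
}
```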
Performs embedding extraction on the provided MPImage cropped to the specified region of interest. Rotation will be applied on the cropped image according to the orientation property of the provided MPImage. Only use this method when the ImageEmbedder is created with running mode .image.

This method supports embedding extraction on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If your MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

Declaration

Swift

func embed(image: MPImage, regionOfInterest roi: CGRect) throws -> ImageEmbedderResult

Parameters

image
The MPImage on which embedding extraction is to be performed.

roi
A CGRect specifying the region of interest within the given MPImage, on which embedding extraction should be performed.

Return Value

An ImageEmbedderResult object that contains a list of generated image embeddings.
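A brief sketch of the region-of-interest variant, assuming `embedder` was created in `.image` mode and `mpImage` already exists. The rectangle below is a placeholder; verify the expected coordinate space of the region of interest against your SDK version.

```swift
import CoreGraphics
import MediaPipeTasksVision

// Embeds only a sub-region of the image instead of the whole frame.
func embedTopLeftRegion(of mpImage: MPImage,
                        with embedder: ImageEmbedder) throws -> ImageEmbedderResult {
  let roi = CGRect(x: 0, y: 0, width: 200, height: 200)  // placeholder region
  return try embedder.embed(image: mpImage, regionOfInterest: roi)
}
```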
Performs embedding extraction on the provided video frame of type MPImage using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the ImageEmbedder is created with running mode .video.

It’s required to provide the video frame’s timestamp (in milliseconds). The input timestamps must be monotonically increasing.

This method supports embedding extraction on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If your MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

Declaration

Swift

func embed(videoFrame image: MPImage, timestampInMilliseconds: Int) throws -> ImageEmbedderResult

Parameters

image
The MPImage on which embedding extraction is to be performed.

timestampInMilliseconds
The video frame’s timestamp (in milliseconds). The input timestamps must be monotonically increasing.

Return Value

An ImageEmbedderResult object that contains a list of generated image embeddings.
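A sketch of processing a frame sequence in video mode, illustrating the monotonically increasing timestamp requirement. It assumes `embedder` was created with running mode `.video` and that `frames` holds pre-decoded MPImage objects (both placeholders); the result field names follow the MediaPipe Tasks API.

```swift
import MediaPipeTasksVision

// Embeds each frame with a strictly increasing timestamp (~30 fps spacing).
func embedFrames(_ frames: [MPImage], with embedder: ImageEmbedder) {
  let frameIntervalMs = 33  // timestamps must increase monotonically
  for (index, frame) in frames.enumerated() {
    do {
      let result = try embedder.embed(videoFrame: frame,
                                      timestampInMilliseconds: index * frameIntervalMs)
      print("Frame \(index): \(result.embeddingResult.embeddings.count) embedding(s)")
    } catch {
      print("Embedding failed for frame \(index): \(error)")
    }
  }
}
```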
Performs embedding extraction on the provided video frame of type MPImage cropped to the specified region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the ImageEmbedder is created with running mode .video.

It’s required to provide the video frame’s timestamp (in milliseconds). The input timestamps must be monotonically increasing.

This method supports embedding extraction on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If your MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

Declaration

Swift

func embed(videoFrame image: MPImage, timestampInMilliseconds: Int, regionOfInterest roi: CGRect) throws -> ImageEmbedderResult

Parameters

image
The video frame of type MPImage on which embedding extraction is to be performed.

timestampInMilliseconds
The video frame’s timestamp (in milliseconds). The input timestamps must be monotonically increasing.

roi
A CGRect specifying the region of interest within the video frame of type MPImage, on which embedding extraction should be performed.

Return Value

An ImageEmbedderResult object that contains a list of generated image embeddings.
Sends live stream image data of type MPImage to perform embedding extraction using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the ImageEmbedder is created with running mode .liveStream.

The object which needs to be continuously notified of the available results of image embedding extraction must conform to the ImageEmbedderLiveStreamDelegate protocol and implement the imageEmbedder(_:didFinishEmbeddingWithResult:timestampInMilliseconds:error:) delegate method.

It’s required to provide a timestamp (in milliseconds) to indicate when the input image is sent to the image embedder. The input timestamps must be monotonically increasing.

This method supports embedding extraction on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If the input MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

If this method is used for embedding live camera frames using AVFoundation, ensure that you request AVCaptureVideoDataOutput to output frames in kCMPixelFormat_32BGRA using its videoSettings property.

Declaration

Swift

func embedAsync(image: MPImage, timestampInMilliseconds: Int) throws

Parameters

image
A live stream image data of type MPImage on which embedding extraction is to be performed.

timestampInMilliseconds
The timestamp (in milliseconds) which indicates when the input image is sent to the image embedder. The input timestamps must be monotonically increasing.

Return Value

This method does not return a value; it throws an error if the image could not be sent to the task successfully.
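A sketch of a live-stream consumer. The delegate protocol and method name are taken from the documentation above; the `imageEmbedderLiveStreamDelegate` property name on the options object is an assumption and should be verified against your SDK version.

```swift
import MediaPipeTasksVision

// Receives embedding results asynchronously for frames sent via embedAsync.
class EmbeddingStreamHandler: NSObject, ImageEmbedderLiveStreamDelegate {
  var embedder: ImageEmbedder?

  func setUp(modelPath: String) throws {
    let options = ImageEmbedderOptions()
    options.baseOptions.modelAssetPath = modelPath
    options.runningMode = .liveStream
    options.imageEmbedderLiveStreamDelegate = self  // assumed property name
    embedder = try ImageEmbedder(options: options)
  }

  // Delegate callback invoked with the result for each submitted frame.
  func imageEmbedder(_ imageEmbedder: ImageEmbedder,
                     didFinishEmbeddingWithResult result: ImageEmbedderResult?,
                     timestampInMilliseconds: Int,
                     error: Error?) {
    if let error = error {
      print("Embedding error at \(timestampInMilliseconds) ms: \(error)")
      return
    }
    print("Got result at \(timestampInMilliseconds) ms")
  }

  // Feed a camera frame; timestamps must be monotonically increasing.
  func process(frame: MPImage, timestampMs: Int) {
    try? embedder?.embedAsync(image: frame, timestampInMilliseconds: timestampMs)
  }
}
```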
Sends live stream image data of type MPImage to perform embedding extraction, cropped to the specified region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the ImageEmbedder is created with running mode .liveStream.

The object which needs to be continuously notified of the available results of image embedding extraction must conform to the ImageEmbedderLiveStreamDelegate protocol and implement the imageEmbedder(_:didFinishEmbeddingWithResult:timestampInMilliseconds:error:) delegate method.

It’s required to provide a timestamp (in milliseconds) to indicate when the input image is sent to the image embedder. The input timestamps must be monotonically increasing.

This method supports embedding extraction on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If the input MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

If this method is used for embedding live camera frames using AVFoundation, ensure that you request AVCaptureVideoDataOutput to output frames in kCMPixelFormat_32BGRA using its videoSettings property.

Declaration

Swift

func embedAsync(image: MPImage, timestampInMilliseconds: Int, regionOfInterest roi: CGRect) throws

Parameters

image
A live stream image data of type MPImage on which embedding extraction is to be performed.

timestampInMilliseconds
The timestamp (in milliseconds) which indicates when the input image is sent to the image embedder. The input timestamps must be monotonically increasing.

roi
A CGRect specifying the region of interest within the given live stream image data of type MPImage, on which embedding extraction should be performed.

Return Value

This method does not return a value; it throws an error if the image could not be sent to the task successfully.
Utility function to compute cosine similarity between two MPPEmbedding objects.

Declaration

Swift

class func cosineSimilarity(embedding1: MPPEmbedding, embedding2: MPPEmbedding) throws -> NSNumber

Parameters

embedding1
One of the two MPPEmbeddings between whom cosine similarity is to be computed.

embedding2
One of the two MPPEmbeddings between whom cosine similarity is to be computed.

error
An optional error parameter populated when there is an error in calculating cosine similarity between two embeddings.

Return Value

An NSNumber which holds the cosine similarity of type double.
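A sketch of comparing two images by the cosine similarity of their embeddings, assuming `embedder` runs in `.image` mode and the model produces at least one embedding. The result field names follow the MediaPipe Tasks API and should be verified against your SDK version.

```swift
import MediaPipeTasksVision

// Computes cosine similarity between the first embeddings of two images.
// Values near 1.0 indicate highly similar images; near 0, unrelated ones.
func similarity(between a: MPImage, and b: MPImage,
                using embedder: ImageEmbedder) throws -> Double {
  let embeddingA = try embedder.embed(image: a).embeddingResult.embeddings[0]
  let embeddingB = try embedder.embed(image: b).embeddingResult.embeddings[0]
  let value = try ImageEmbedder.cosineSimilarity(embedding1: embeddingA,
                                                 embedding2: embeddingB)
  return value.doubleValue
}
```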