# MediaPipeTasksVision Framework Reference

MPPImageEmbedderOptions
=======================

Last updated 2024-05-08 UTC.

    @interface MPPImageEmbedderOptions : MPPTaskOptions <NSCopying>

Options for setting up an `ImageEmbedder`.

### runningMode

Running mode of the image embedder task. Defaults to `.image`. `ImageEmbedder` can be created with one of the following running modes:

1. `.image`: The mode for performing embedding extraction on single image inputs.
2. `.video`: The mode for performing embedding extraction on the decoded frames of a video.
3. `.liveStream`: The mode for performing embedding extraction on a live stream of input data, such as from the camera.

#### Declaration

Objective-C

    @property (nonatomic) MPPRunningMode runningMode;

### imageEmbedderLiveStreamDelegate

An object that conforms to the `ImageEmbedderLiveStreamDelegate` protocol. This object must implement `imageEmbedder(_:didFinishEmbeddingWithResult:timestampInMilliseconds:error:)` to receive the results of asynchronous embedding extraction on images (i.e., when `runningMode = .liveStream`).
#### Declaration

Objective-C

    @property (nonatomic, weak, nullable) id<MPPImageEmbedderLiveStreamDelegate> imageEmbedderLiveStreamDelegate;

### l2Normalize

Sets whether L2 normalization should be performed on the returned embeddings. Use this option only if the model does not already contain a native `L2_NORMALIZATION` TF Lite Op. Most models do, in which case L2 normalization is achieved through TF Lite inference and this option is not needed.

`NO` by default.

#### Declaration

Objective-C

    @property (nonatomic) BOOL l2Normalize;

### quantize

Sets whether the returned embedding should be quantized to bytes via scalar quantization. Embeddings are implicitly assumed to be unit-norm, and each dimension is therefore guaranteed to have a value in [-1.0, 1.0]. Use the `l2Normalize` property if this is not the case.

`NO` by default.

#### Declaration

Objective-C

    @property (nonatomic) BOOL quantize;
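The math behind the `l2Normalize` and `quantize` options can be sketched language-agnostically. The Python snippet below is an illustration of L2 normalization followed by scalar quantization of each dimension from [-1.0, 1.0] to a signed byte; it is not the MediaPipe API, and the exact scale factor and rounding scheme MediaPipe uses internally are assumptions for the sake of the example:

```python
import math

def l2_normalize(embedding):
    """Scale the embedding to unit L2 norm — the transform `l2Normalize`
    requests when the model has no built-in L2_NORMALIZATION op."""
    norm = math.sqrt(sum(v * v for v in embedding))
    return [v / norm for v in embedding]

def scalar_quantize(embedding):
    """Map each dimension from [-1.0, 1.0] to a signed byte in [-128, 127],
    one plausible scalar-quantization scheme for what `quantize` requests.
    Assumes the embedding is already unit-norm."""
    return [max(-128, min(127, int(round(v * 128)))) for v in embedding]

embedding = [3.0, 4.0]             # not unit-norm
unit = l2_normalize(embedding)     # [0.6, 0.8]
quantized = scalar_quantize(unit)  # [77, 102]
```

This also shows why the two options interact: if the model's output is not unit-norm and `quantize` is enabled without `l2Normalize`, values outside [-1.0, 1.0] would saturate at the byte bounds and information would be lost.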