# MediaPipeTasksVision Framework Reference

Last updated 2024-05-08 UTC.

ImageSegmenterOptions
=====================

    class ImageSegmenterOptions : TaskOptions, NSCopying

Options for setting up an [ImageSegmenter](../Classes/ImageSegmenter.html).

### [runningMode](#/c:objc(cs)MPPImageSegmenterOptions(py)runningMode)

Running mode of the image segmenter task. Defaults to [image](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeImage).
An [ImageSegmenter](../Classes/ImageSegmenter.html) can be created with one of the following running modes:

1. [image](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeImage): The mode for performing segmentation on single image inputs.
2. `video`: The mode for performing segmentation on the decoded frames of a video.
3. `liveStream`: The mode for performing segmentation on a live stream of input data, such as from the camera.

#### Declaration

Swift

    var runningMode: RunningMode { get set }

### [imageSegmenterLiveStreamDelegate](#/c:objc(cs)MPPImageSegmenterOptions(py)imageSegmenterLiveStreamDelegate)

An object that conforms to the [ImageSegmenterLiveStreamDelegate](../Protocols/ImageSegmenterLiveStreamDelegate.html) protocol.
This object must implement
`imageSegmenter(_:didFinishSegmentationWithResult:timestampInMilliseconds:error:)` to
receive the results of performing asynchronous segmentation on images (i.e., when
[runningMode](../Classes/ImageSegmenterOptions.html#/c:objc(cs)MPPImageSegmenterOptions(py)runningMode) = `liveStream`).

#### Declaration

Swift

    weak var imageSegmenterLiveStreamDelegate: ImageSegmenterLiveStreamDelegate? { get set }

### [displayNamesLocale](#/c:objc(cs)MPPImageSegmenterOptions(py)displayNamesLocale)

The locale to use for display names specified through the TFLite Model Metadata, if any. Defaults to English.

#### Declaration

Swift

    var displayNamesLocale: String { get set }

### [shouldOutputConfidenceMasks](#/c:objc(cs)MPPImageSegmenterOptions(py)shouldOutputConfidenceMasks)

Indicates whether to output confidence masks.

#### Declaration

Swift

    var shouldOutputConfidenceMasks: Bool { get set }

### [shouldOutputCategoryMask](#/c:objc(cs)MPPImageSegmenterOptions(py)shouldOutputCategoryMask)

Indicates whether to output a category mask.

#### Declaration

Swift

    var shouldOutputCategoryMask: Bool { get set }
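To show how these options fit together, here is a minimal sketch that configures an `ImageSegmenterOptions` for live-stream segmentation with a delegate. It assumes the MediaPipeTasksVision framework is linked; the delegate class name, model file path, and the exact parameter labels of the delegate method are illustrative assumptions, not taken from this page.

```swift
import MediaPipeTasksVision

// Hypothetical delegate implementation. The method corresponds to the
// imageSegmenter(_:didFinishSegmentationWithResult:timestampInMilliseconds:error:)
// requirement documented above; parameter labels are assumed.
class SegmentationResultProcessor: NSObject, ImageSegmenterLiveStreamDelegate {
  func imageSegmenter(
    _ imageSegmenter: ImageSegmenter,
    didFinishSegmentationWithResult result: ImageSegmenterResult?,
    timestampInMilliseconds: Int,
    error: Error?
  ) {
    // Inspect result's confidence masks and/or category mask here.
  }
}

// Keep a strong reference to the delegate: the
// imageSegmenterLiveStreamDelegate property is declared weak, so an
// unowned temporary would be deallocated immediately.
let resultProcessor = SegmentationResultProcessor()

let options = ImageSegmenterOptions()
options.baseOptions.modelAssetPath = "selfie_segmenter.tflite"  // hypothetical model path
options.runningMode = .liveStream
options.shouldOutputConfidenceMasks = true
options.shouldOutputCategoryMask = false
options.imageSegmenterLiveStreamDelegate = resultProcessor

let segmenter = try ImageSegmenter(options: options)
```

Because the delegate property is `weak`, storing the delegate in a long-lived property (for example, on your view controller) is what keeps result callbacks flowing; assigning a freshly constructed object inline would silently drop all results.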