# MediaPipeTasksVision Framework Reference

FaceDetectorOptions
===================

```swift
class FaceDetectorOptions : TaskOptions, NSCopying
```

Options for setting up a [FaceDetector](../Classes/FaceDetector.html).

### [runningMode](#/c:objc(cs)MPPFaceDetectorOptions(py)runningMode)

The running mode of the face detector task. Defaults to [.image](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeImage).
A [FaceDetector](../Classes/FaceDetector.html) can be created with one of the following running modes:

1. [.image](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeImage): The mode for performing face detection on single image inputs.
2. `.video`: The mode for performing face detection on the decoded frames of a video.
3. `.liveStream`: The mode for performing face detection on a live stream of input data, such as from the camera.

#### Declaration

Swift

```swift
var runningMode: RunningMode { get set }
```

### [faceDetectorLiveStreamDelegate](#/c:objc(cs)MPPFaceDetectorOptions(py)faceDetectorLiveStreamDelegate)

An object that conforms to the [FaceDetectorLiveStreamDelegate](../Protocols/FaceDetectorLiveStreamDelegate.html) protocol. This object must implement `faceDetector(_:didFinishDetectionWithResult:timestampInMilliseconds:error:)` to receive the results of performing asynchronous face detection on images (i.e., when [runningMode](../Classes/FaceDetectorOptions.html#/c:objc(cs)MPPFaceDetectorOptions(py)runningMode) = `.liveStream`).

#### Declaration

Swift

```swift
weak var faceDetectorLiveStreamDelegate: FaceDetectorLiveStreamDelegate? { get set }
```

### [minDetectionConfidence](#/c:objc(cs)MPPFaceDetectorOptions(py)minDetectionConfidence)

The minimum confidence score for a face detection to be considered successful. Defaults to 0.5.

#### Declaration

Swift

```swift
var minDetectionConfidence: Float { get set }
```

### [minSuppressionThreshold](#/c:objc(cs)MPPFaceDetectorOptions(py)minSuppressionThreshold)

The minimum non-maximum-suppression threshold for face detections to be considered overlapped. Defaults to 0.3.

#### Declaration

Swift

```swift
var minSuppressionThreshold: Float { get set }
```

Last updated 2024-05-08 UTC.
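As a sketch of how these options fit together, the following example configures a detector for `.liveStream` use. It is illustrative only: the model asset name (`face_detector.task`) and the wrapper classes are assumptions, not part of this reference, and the delegate method's Swift spelling (`didFinishDetection result:`) follows the usual Swift rendering of the Objective-C selector documented above.

```swift
import MediaPipeTasksVision

// Hypothetical delegate conforming to FaceDetectorLiveStreamDelegate.
// Receives results asynchronously when runningMode == .liveStream.
final class FaceDetectorResultProcessor: NSObject, FaceDetectorLiveStreamDelegate {
  func faceDetector(_ faceDetector: FaceDetector,
                    didFinishDetection result: FaceDetectorResult?,
                    timestampInMilliseconds: Int,
                    error: Error?) {
    guard error == nil, let result = result else { return }
    print("Detected \(result.detections.count) face(s) at \(timestampInMilliseconds) ms")
  }
}

// Hypothetical wrapper that owns both the detector and its delegate.
final class LiveStreamFaceDetection {
  // Keep a strong reference: faceDetectorLiveStreamDelegate is declared weak,
  // so a delegate with no other owner would be deallocated immediately.
  private let processor = FaceDetectorResultProcessor()
  let detector: FaceDetector

  init() throws {
    let options = FaceDetectorOptions()
    options.baseOptions.modelAssetPath = "face_detector.task"  // assumed bundled model
    options.runningMode = .liveStream
    options.minDetectionConfidence = 0.5   // default, shown for clarity
    options.minSuppressionThreshold = 0.3  // default, shown for clarity
    // In .liveStream mode the delegate must be set before creating the detector.
    options.faceDetectorLiveStreamDelegate = processor
    detector = try FaceDetector(options: options)
  }
}
```

Because the delegate property is `weak`, holding the delegate strongly elsewhere (as the wrapper above does) is essential; otherwise no results would ever be delivered.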