# MediaPipeTasksVision Framework Reference

MPPFaceLandmarkerOptions
========================

*Last updated 2024-05-08 UTC.*

    @interface MPPFaceLandmarkerOptions : MPPTaskOptions <NSCopying>

Options for setting up a `FaceLandmarker`.

### [runningMode](#/c:objc(cs)MPPFaceLandmarkerOptions(py)runningMode)

Running mode of the face landmark detection task. Defaults to `.image`. `FaceLandmarker` can be created with one of the following running modes:

1. `.image`: The mode for performing face landmark detection on single image inputs.
2. `.video`: The mode for performing face landmark detection on the decoded frames of a video.
3. `.liveStream`: The mode for performing face landmark detection on a live stream of input data, such as from the camera.

#### Declaration

Objective-C

    @property (nonatomic) MPPRunningMode runningMode;

### [faceLandmarkerLiveStreamDelegate](#/c:objc(cs)MPPFaceLandmarkerOptions(py)faceLandmarkerLiveStreamDelegate)

An object that conforms to the `FaceLandmarkerLiveStreamDelegate` protocol. This object must implement `faceLandmarker(_:didFinishDetectionWithResult:timestampInMilliseconds:error:)` to receive the results of performing asynchronous face landmark detection on images (i.e. when [runningMode](#/c:objc(cs)MPPFaceLandmarkerOptions(py)runningMode) = `.liveStream`).
#### Declaration

Objective-C

    @property (nonatomic, weak, nullable) id<MPPFaceLandmarkerLiveStreamDelegate> faceLandmarkerLiveStreamDelegate;

### [numFaces](#/c:objc(cs)MPPFaceLandmarkerOptions(py)numFaces)

The maximum number of faces that can be detected by the `FaceLandmarker`. Defaults to 1.

#### Declaration

Objective-C

    @property (nonatomic) NSInteger numFaces;

### [minFaceDetectionConfidence](#/c:objc(cs)MPPFaceLandmarkerOptions(py)minFaceDetectionConfidence)

The minimum confidence score for the face detection to be considered successful. Defaults to 0.5.

#### Declaration

Objective-C

    @property (nonatomic) float minFaceDetectionConfidence;

### [minFacePresenceConfidence](#/c:objc(cs)MPPFaceLandmarkerOptions(py)minFacePresenceConfidence)

The minimum confidence score of face presence in the face landmark detection. Defaults to 0.5.

#### Declaration

Objective-C

    @property (nonatomic) float minFacePresenceConfidence;

### [minTrackingConfidence](#/c:objc(cs)MPPFaceLandmarkerOptions(py)minTrackingConfidence)

The minimum confidence score for the face tracking to be considered successful. Defaults to 0.5.

#### Declaration

Objective-C

    @property (nonatomic) float minTrackingConfidence;

### [outputFaceBlendshapes](#/c:objc(cs)MPPFaceLandmarkerOptions(py)outputFaceBlendshapes)

Whether `FaceLandmarker` outputs face blendshapes classification. Face blendshapes are used for rendering the 3D face model.
#### Declaration

Objective-C

    @property (nonatomic) BOOL outputFaceBlendshapes;

### [outputFacialTransformationMatrixes](#/c:objc(cs)MPPFaceLandmarkerOptions(py)outputFacialTransformationMatrixes)

Whether `FaceLandmarker` outputs facial transformation matrixes. The facial transformation matrix is used to transform the face landmarks from the canonical face model to the detected face, so that users can apply face effects on the detected landmarks.

#### Declaration

Objective-C

    @property (nonatomic) BOOL outputFacialTransformationMatrixes;
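
---

As a rough sketch of how these options fit together, the following Swift snippet configures a `FaceLandmarker` for live-stream use with the properties documented above. It assumes the Swift surface of the MediaPipeTasksVision framework (where the `MPP` prefix is dropped); the delegate class name `LandmarkReceiver` and the model file path are hypothetical placeholders, and the exact Swift rendering of the delegate method signature may differ by framework version.

```swift
import MediaPipeTasksVision

// Hypothetical delegate that receives asynchronous results
// when runningMode = .liveStream.
class LandmarkReceiver: NSObject, FaceLandmarkerLiveStreamDelegate {
  func faceLandmarker(
    _ faceLandmarker: FaceLandmarker,
    didFinishDetection result: FaceLandmarkerResult?,
    timestampInMilliseconds: Int,
    error: Error?
  ) {
    // Handle landmarks, blendshapes, and transformation matrixes here.
  }
}

let options = FaceLandmarkerOptions()
options.baseOptions.modelAssetPath = "face_landmarker.task"  // placeholder path
options.runningMode = .liveStream
options.numFaces = 2                            // detect up to 2 faces
options.minFaceDetectionConfidence = 0.5
options.minFacePresenceConfidence = 0.5
options.minTrackingConfidence = 0.5
options.outputFaceBlendshapes = true            // for 3D face model rendering
options.outputFacialTransformationMatrixes = true  // for applying face effects

// The delegate property is weak, so keep a strong reference elsewhere.
let receiver = LandmarkReceiver()
options.faceLandmarkerLiveStreamDelegate = receiver

let landmarker = try FaceLandmarker(options: options)
```

Note that because `faceLandmarkerLiveStreamDelegate` is declared `weak`, the delegate object must be retained by the caller for the lifetime of the task, or results will silently stop arriving.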