This protocol defines an interface for the delegates of FaceDetector to receive the
results of asynchronous face detection performed on images (i.e., when runningMode =
.liveStream).

The delegate of FaceDetector must adopt the FaceDetectorLiveStreamDelegate protocol.
The methods in this protocol are optional.
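For orientation, the Swift sketch below shows one way a class might adopt this protocol and register itself as the live-stream delegate through the detector's options before submitting camera frames. The property and initializer names used here (faceDetectorLiveStreamDelegate, MPImage(sampleBuffer:), detectAsync(image:timestampInMilliseconds:)) reflect the MediaPipeTasksVision Swift surface as the author recalls it and should be verified against the current release; the model path handling is illustrative only.

Swift (sketch):

    import AVFoundation
    import MediaPipeTasksVision

    /// Owns a live-stream FaceDetector and receives its asynchronous results.
    final class FaceStreamProcessor: NSObject, FaceDetectorLiveStreamDelegate {

      private var faceDetector: FaceDetector?

      func start(modelPath: String) throws {
        let options = FaceDetectorOptions()
        options.baseOptions.modelAssetPath = modelPath
        // Live-stream mode delivers results through the delegate instead of a return value.
        options.runningMode = .liveStream
        options.faceDetectorLiveStreamDelegate = self
        faceDetector = try FaceDetector(options: options)
      }

      /// Call once per camera frame; timestamps must be monotonically increasing.
      func process(sampleBuffer: CMSampleBuffer) {
        guard let detector = faceDetector,
              let image = try? MPImage(sampleBuffer: sampleBuffer) else { return }
        let millis = Int(CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds * 1000)
        try? detector.detectAsync(image: image, timestampInMilliseconds: millis)
      }

      // MARK: - FaceDetectorLiveStreamDelegate
      // A fuller version of this callback is sketched after the parameter descriptions below.
      func faceDetector(_ faceDetector: FaceDetector,
                        didFinishDetection result: FaceDetectorResult?,
                        timestampInMilliseconds: Int,
                        error: Error?) {
        // Handle the result or error here.
      }
    }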
-faceDetector:didFinishDetectionWithResult:timestampInMilliseconds:error:

This method notifies a delegate that the results of asynchronous face detection on an
image submitted to the FaceDetector are available. It is called on a private serial
dispatch queue created by the FaceDetector for performing the asynchronous delegate
calls.

Declaration (Objective-C):

    - (void)faceDetector:(nonnull MPPFaceDetector *)faceDetector
        didFinishDetectionWithResult:(nullable MPPFaceDetectorResult *)result
             timestampInMilliseconds:(NSInteger)timestampInMilliseconds
                               error:(nullable NSError *)error;

Parameters:

faceDetector
The face detector that performed the face detection. This is useful for testing equality
when there are multiple instances of FaceDetector.

result
The FaceDetectorResult object that contains a list of detections. Each detection has a
bounding box expressed in the coordinate system of the unrotated input frame, i.e. in
[0, image_width) x [0, image_height), where image_width and image_height are the
dimensions of the underlying image data.

timestampInMilliseconds
The timestamp (in milliseconds) indicating when the input image was sent to the face
detector.

error
An optional error populated when an error occurs while performing face detection on the
input live-stream image data.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-08 UTC."],[],[],null,["# MediaPipeTasksVision Framework Reference\n\nMPPFaceDetectorLiveStreamDelegate\n=================================\n\n @protocol MPPFaceDetectorLiveStreamDelegate \u003cNSObject\u003e\n\nThis protocol defines an interface for the delegates of `FaceDetector` face to receive\nresults of performing asynchronous face detection on images (i.e, when `runningMode` =\n`.liveStream`).\n\nThe delegate of `FaceDetector` must adopt `FaceDetectorLiveStreamDelegate` protocol.\nThe methods in this protocol are optional.\n- `\n ``\n ``\n `\n\n ### [-faceDetector:didFinishDetectionWithResult:timestampInMilliseconds:error:](#/c:objc(pl)MPPFaceDetectorLiveStreamDelegate(im)faceDetector:didFinishDetectionWithResult:timestampInMilliseconds:error:)\n\n `\n ` \n This method notifies a delegate that the results of asynchronous face detection of\n an image submitted to the `FaceDetector` is available.\n\n This method is called on a private serial dispatch queue created by the `FaceDetector`\n for performing the asynchronous delegates calls. \n\n #### Declaration\n\n Objective-C \n\n - (void)faceDetector:(nonnull ../Classes/MPPFaceDetector.html *)faceDetector\n didFinishDetectionWithResult:(nullable ../Classes/MPPFaceDetectorResult.html *)result\n timestampInMilliseconds:(NSInteger)timestampInMilliseconds\n error:(nullable NSError *)error;\n\n #### Parameters\n\n |---------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n | ` `*faceDetector*` ` | The face detector which performed the face detection. This is useful to test equality when there are multiple instances of `FaceDetector`. |\n | ` `*result*` ` | The `FaceDetectorResult` object that contains a list of detections, each detection has a bounding box that is expressed in the unrotated input frame of reference coordinates system, i.e. in `[0,image_width) x [0,image_height)`, which are the dimensions of the underlying image data. |\n | ` `*timestampInMilliseconds*` ` | The timestamp (in milliseconds) which indicates when the input image was sent to the face detector. |\n | ` `*error*` ` | An optional error parameter populated when there is an error in performing face detection on the input live stream image data. |"]]