FaceDetector
class FaceDetector : NSObject
Class that performs face detection on images.
The API expects a TFLite model with mandatory TFLite Model Metadata.
The API supports models with one image input tensor and one or more output tensors. To be more
specific, here are the requirements:
Input tensor (kTfLiteUInt8/kTfLiteFloat32):
- image input of size [batch x height x width x channels].
- batch inference is not supported (batch is required to be 1).
- only RGB inputs are supported (channels is required to be 3).
- if type is kTfLiteFloat32, NormalizationOptions are required to be attached to the metadata for input normalization.
Output tensors must be the 4 outputs of a DetectionPostProcess op, i.e.:
- (kTfLiteFloat32) locations tensor of size [num_results x 4], the inner array representing bounding boxes in the form [top, left, right, bottom]. BoundingBoxProperties are required to be attached to the metadata and must specify type=BOUNDARIES and coordinate_type=RATIO.
- (kTfLiteFloat32) classes tensor of size [num_results], each value representing the integer index of a class.
- (kTfLiteFloat32) scores tensor of size [num_results], each value representing the score of the detected face. Optional score calibration can be attached using ScoreCalibrationOptions and an AssociatedFile with type TENSOR_AXIS_SCORE_CALIBRATION; see metadata_schema.fbs [1] for more details.
- (kTfLiteFloat32) integer num_results as a tensor of size [1].
init(modelPath:)
Creates a new instance of FaceDetector from an absolute path to a TensorFlow Lite model
file stored locally on the device and the default FaceDetectorOptions.
Declaration
Swift
convenience init(modelPath: String) throws
Parameters
modelPath
An absolute path to a TensorFlow Lite model file stored locally on the device.
Return Value
A new instance of FaceDetector with the given model path. nil if there is an
error in initializing the face detector.

init(options:)
Creates a new instance of FaceDetector from the given FaceDetectorOptions.
Declaration
Swift
init(options: FaceDetectorOptions) throws
Parameters
options
The options of type FaceDetectorOptions to use for configuring the FaceDetector.
Return Value
A new instance of FaceDetector with the given options. nil if there is an
error in initializing the face detector.
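For illustration, a minimal sketch of both initializers. The model file name "face_detector" is a placeholder for whatever TFLite face detection model ships with your app, and the FaceDetectorOptions properties used (baseOptions.modelAssetPath, runningMode) are assumed from the MediaPipe Tasks Vision API rather than documented on this page:

    import Foundation
    import MediaPipeTasksVision

    // "face_detector" is a placeholder asset name; use your bundled model.
    guard let modelPath = Bundle.main.path(forResource: "face_detector",
                                           ofType: "tflite") else {
      fatalError("Model file not found in the app bundle.")
    }

    // Option 1: default options.
    let detector = try FaceDetector(modelPath: modelPath)

    // Option 2: explicit options, e.g. to select the .video running mode.
    let options = FaceDetectorOptions()
    options.baseOptions.modelAssetPath = modelPath
    options.runningMode = .video
    let videoDetector = try FaceDetector(options: options)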
detect(image:)
Performs face detection on the provided MPImage using the whole image as region of
interest. Rotation will be applied according to the orientation property of the provided
MPImage. Only use this method when the FaceDetector is created with running mode .image.
This method supports performing face detection on RGBA images. If your MPImage has a source
type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use
kCVPixelFormatType_32BGRA as its pixel format.
If your MPImage has a source type of .image, ensure that the color space is
RGB with an alpha channel.
Declaration
Swift
func detect(image: MPImage) throws -> FaceDetectorResult
Parameters
image
The MPImage on which face detection is to be performed.
Return Value
A FaceDetectorResult that contains a list of detections, each with a bounding box
expressed in the coordinate system of the unrotated input frame, i.e. in
[0,image_width) x [0,image_height), which are the dimensions of the underlying
image data.
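For illustration, a minimal sketch of single-image detection, assuming detector was created with running mode .image and uiImage is an RGBA UIImage:

    import UIKit

    // Wrap the UIImage; rotation follows the image's orientation property.
    let mpImage = try MPImage(uiImage: uiImage)
    let result = try detector.detect(image: mpImage)
    for detection in result.detections {
      // boundingBox is in unrotated input-image coordinates.
      print(detection.boundingBox, detection.categories.first?.score ?? 0)
    }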
detect(videoFrame:timestampInMilliseconds:)
Performs face detection on the provided video frame of type MPImage using the whole
image as region of interest. Rotation will be applied according to the orientation property of
the provided MPImage. Only use this method when the FaceDetector is created with running
mode .video.
This method supports performing face detection on RGBA images. If your MPImage has a source
type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use
kCVPixelFormatType_32BGRA as its pixel format.
If your MPImage has a source type of .image, ensure that the color space is RGB with an alpha
channel.
Declaration
Swift
func detect(videoFrame image: MPImage, timestampInMilliseconds: Int) throws -> FaceDetectorResult
Parameters
image
The MPImage on which face detection is to be performed.
timestampInMilliseconds
The video frame’s timestamp (in milliseconds). The input
timestamps must be monotonically increasing.
Return Value
A FaceDetectorResult that contains a list of detections, each with a bounding box
expressed in the coordinate system of the unrotated input frame, i.e. in
[0,image_width) x [0,image_height), which are the dimensions of the underlying
image data.
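A minimal sketch of per-frame detection in .video mode, assuming videoDetector was created as shown earlier and sampleBuffer is a BGRA CMSampleBuffer read from a video asset:

    import CoreMedia

    let frame = try MPImage(sampleBuffer: sampleBuffer)
    // Derive a monotonically increasing timestamp from the frame itself.
    let seconds = CMTimeGetSeconds(
        CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    let result = try videoDetector.detect(videoFrame: frame,
                                          timestampInMilliseconds: Int(seconds * 1000))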
detectAsync(image:timestampInMilliseconds:)
Sends live stream image data of type MPImage to perform face detection using the whole
image as region of interest. Rotation will be applied according to the orientation property of
the provided MPImage. Only use this method when the FaceDetector is created with running mode
.liveStream.
The object which needs to be continuously notified of the available results of face
detection must conform to the FaceDetectorLiveStreamDelegate protocol and implement the
faceDetector(_:didFinishDetectionWithResult:timestampInMilliseconds:error:) delegate method.
It’s required to provide a timestamp (in milliseconds) to indicate when the input image is sent
to the face detector. The input timestamps must be monotonically increasing.
This method supports performing face detection on RGBA images. If your MPImage has a source
type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use
kCVPixelFormatType_32BGRA as its pixel format.
If the input MPImage has a source type of .image, ensure that the color
space is RGB with an alpha channel.
If this method is used for performing face detection on live camera frames using AVFoundation,
ensure that you request AVCaptureVideoDataOutput to output frames in kCMPixelFormat_32BGRA
using its videoSettings property.
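For example (a sketch; the capture session wiring is elided):

    import AVFoundation

    let output = AVCaptureVideoDataOutput()
    // Request BGRA frames so the resulting pixel buffers can be wrapped in MPImage.
    output.videoSettings = [
      kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]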
Declaration
Swift
func detectAsync(image: MPImage, timestampInMilliseconds: Int) throws
Parameters
image
Live stream image data of type MPImage on which face detection is to be
performed.
timestampInMilliseconds
The timestamp (in milliseconds) which indicates when the input
image is sent to the face detector. The input timestamps must be monotonically increasing.
Return Value
true if the image was sent to the task successfully, otherwise false.
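A minimal sketch of submitting a camera frame, assuming streamDetector was created with running mode .liveStream and a delegate assigned through the options' faceDetectorLiveStreamDelegate property (that property name is an assumption from the MediaPipe Tasks Vision API, not documented on this page), and pixelBuffer is a BGRA CVPixelBuffer from the capture callback:

    import CoreVideo
    import Foundation

    let frame = try MPImage(pixelBuffer: pixelBuffer)
    // Timestamps must increase monotonically across calls.
    let timestampMs = Int(Date().timeIntervalSince1970 * 1000)
    try streamDetector.detectAsync(image: frame,
                                   timestampInMilliseconds: timestampMs)
    // Results arrive asynchronously via
    // faceDetector(_:didFinishDetectionWithResult:timestampInMilliseconds:error:).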
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-08 UTC."],[],[],null,["# MediaPipeTasksVision Framework Reference\n\nFaceDetector\n============\n\n class FaceDetector : NSObject\n\n@brief Class that performs face detection on images.\n\nThe API expects a TFLite model with mandatory TFLite Model Metadata.\n\nThe API supports models with one image input tensor and one or more output tensors. To be more\nspecific, here are the requirements:\n\nInput tensor\n(kTfLiteUInt8/kTfLiteFloat32)\n\n- image input of size `[batch x height x width x channels]`.\n- batch inference is not supported (`batch` is required to be 1).\n- only RGB inputs are supported (`channels` is required to be 3).\n- if type is kTfLiteFloat32, NormalizationOptions are required to be attached to the metadata for input normalization.\n\nOutput tensors must be the 4 outputs of a `DetectionPostProcess` op, i.e:(kTfLiteFloat32)\n(kTfLiteUInt8/kTfLiteFloat32)\n\n- locations tensor of size `[num_results x 4]`, the inner array representing bounding boxes in the form \\[top, left, right, bottom\\].\n- BoundingBoxProperties are required to be attached to the metadata and must specify type=BOUNDARIES and coordinate_type=RATIO. (kTfLiteFloat32)\n- classes tensor of size `[num_results]`, each value representing the integer index of a class.\n- scores tensor of size `[num_results]`, each value representing the score of the detected face.\n- optional score calibration can be attached using ScoreCalibrationOptions and an AssociatedFile with type TENSOR_AXIS_SCORE_CALIBRATION. See metadata_schema.fbs \\[1\\] for more details. (kTfLiteFloat32)\n- integer num_results as a tensor of size `[1]`\n- `\n ``\n ``\n `\n\n ### [init(modelPath:)](#/c:objc(cs)MPPFaceDetector(im)initWithModelPath:error:)\n\n `\n ` \n Creates a new instance of `FaceDetector` from an absolute path to a TensorFlow Lite model\n file stored locally on the device and the default `FaceDetector`. \n\n #### Declaration\n\n Swift \n\n convenience init(modelPath: String) throws\n\n #### Parameters\n\n |-------------------|--------------------------------------------------------------------------------|\n | ` `*modelPath*` ` | An absolute path to a TensorFlow Lite model file stored locally on the device. |\n\n #### Return Value\n\n A new instance of `FaceDetector` with the given model path. `nil` if there is an\n error in initializing the face detector.\n- `\n ``\n ``\n `\n\n ### [init(options:)](#/c:objc(cs)MPPFaceDetector(im)initWithOptions:error:)\n\n `\n ` \n Creates a new instance of `FaceDetector` from the given [FaceDetectorOptions](../Classes/FaceDetectorOptions.html). \n\n #### Declaration\n\n Swift \n\n init(options: ../Classes/FaceDetectorOptions.html) throws\n\n #### Parameters\n\n |-----------------|---------------------------------------------------------------------------------------------------------------------------|\n | ` `*options*` ` | The options of type [FaceDetectorOptions](../Classes/FaceDetectorOptions.html) to use for configuring the `FaceDetector`. |\n\n #### Return Value\n\n A new instance of `FaceDetector` with the given options. 
`nil` if there is an error\n in initializing the face detector.\n- `\n ``\n ``\n `\n\n ### [detect(image:)](#/c:objc(cs)MPPFaceDetector(im)detectImage:error:)\n\n `\n ` \n Performs face detection on the provided [MPImage](../Classes/MPImage.html) using the whole image as region of\n interest. Rotation will be applied according to the `orientation` property of the provided\n [MPImage](../Classes/MPImage.html). Only use this method when the `FaceDetector` is created with running mode [.image](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeImage).\n\n This method supports performing face detection on RGBA images. If your [MPImage](../Classes/MPImage.html) has a source\n type of [.pixelBuffer](../Constants.html#/c:MPPImage.h@MPPImageSourceTypePixelBuffer) or [.sampleBuffer](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeSampleBuffer), the underlying pixel buffer must use\n `kCVPixelFormatType_32BGRA` as its pixel format.\n\n If your [MPImage](../Classes/MPImage.html) has a source type of [.image](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeImage) ensure that the color space is\n RGB with an Alpha channel. \n\n #### Declaration\n\n Swift \n\n func detect(image: ../Classes/MPImage.html) throws -\u003e ../Classes/FaceDetectorResult.html\n\n #### Parameters\n\n |---------------|------------------------------------------------------------------------------------|\n | ` `*image*` ` | The [MPImage](../Classes/MPImage.html) on which face detection is to be performed. |\n\n #### Return Value\n\n An [FaceDetectorResult](../Classes/FaceDetectorResult.html) face that contains a list of detections, each detection\n has a bounding box that is expressed in the unrotated input frame of reference coordinates\n system, i.e. in `[0,image_width) x [0,image_height)`, which are the dimensions of the underlying\n image data.\n- `\n ``\n ``\n `\n\n ### [detect(videoFrame:timestampInMilliseconds:)](#/c:objc(cs)MPPFaceDetector(im)detectVideoFrame:timestampInMilliseconds:error:)\n\n `\n ` \n Performs face detection on the provided video frame of type [MPImage](../Classes/MPImage.html) using the whole\n image as region of interest. Rotation will be applied according to the `orientation` property of\n the provided [MPImage](../Classes/MPImage.html). Only use this method when the `FaceDetector` is created with running\n mode `.video`.\n\n This method supports performing face detection on RGBA images. If your [MPImage](../Classes/MPImage.html) has a source\n type of [.pixelBuffer](../Constants.html#/c:MPPImage.h@MPPImageSourceTypePixelBuffer) or [.sampleBuffer](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeSampleBuffer), the underlying pixel buffer must use\n `kCVPixelFormatType_32BGRA` as its pixel format.\n\n If your [MPImage](../Classes/MPImage.html) has a source type of [.image](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeImage) ensure that the color space is RGB with an Alpha\n channel. \n\n #### Declaration\n\n Swift \n\n func detect(videoFrame image: ../Classes/MPImage.html, timestampInMilliseconds: Int) throws -\u003e ../Classes/FaceDetectorResult.html\n\n #### Parameters\n\n |---------------------------------|-------------------------------------------------------------------------------------------------------|\n | ` `*image*` ` | The [MPImage](../Classes/MPImage.html) on which face detection is to be performed. |\n | ` `*timestampInMilliseconds*` ` | The video frame's timestamp (in milliseconds). The input timestamps must be monotonically increasing. 
|\n\n #### Return Value\n\n An [FaceDetectorResult](../Classes/FaceDetectorResult.html) face that contains a list of detections, each detection\n has a bounding box that is expressed in the unrotated input frame of reference coordinates\n system, i.e. in `[0,image_width) x [0,image_height)`, which are the dimensions of the underlying\n image data.\n- `\n ``\n ``\n `\n\n ### [detectAsync(image:timestampInMilliseconds:)](#/c:objc(cs)MPPFaceDetector(im)detectAsyncImage:timestampInMilliseconds:error:)\n\n `\n ` \n Sends live stream image data of type [MPImage](../Classes/MPImage.html) to perform face detection using the whole\n image as region of interest. Rotation will be applied according to the `orientation` property of\n the provided [MPImage](../Classes/MPImage.html). Only use this method when the `FaceDetector` is created with\n `.liveStream`.\n\n The object which needs to be continuously notified of the available results of face\n detection must confirm to [FaceDetectorLiveStreamDelegate](../Protocols/FaceDetectorLiveStreamDelegate.html) protocol and implement the\n `faceDetector(_:didFinishDetectionWithResult:timestampInMilliseconds:error:)` delegate method.\n\n It's required to provide a timestamp (in milliseconds) to indicate when the input image is sent\n to the face detector. The input timestamps must be monotonically increasing.\n\n This method supports performing face detection on RGBA images. If your [MPImage](../Classes/MPImage.html) has a source\n type of [.pixelBuffer](../Constants.html#/c:MPPImage.h@MPPImageSourceTypePixelBuffer) or [.sampleBuffer](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeSampleBuffer), the underlying pixel buffer must use\n `kCVPixelFormatType_32BGRA` as its pixel format.\n\n If the input [MPImage](../Classes/MPImage.html) has a source type of [.image](../Constants.html#/c:MPPImage.h@MPPImageSourceTypeImage) ensure that the color\n space is RGB with an Alpha channel.\n\n If this method is used for classifying live camera frames using `AVFoundation`, ensure that you\n request `AVCaptureVideoDataOutput` to output frames in `kCMPixelFormat_32BGRA` using its\n `videoSettings` property. \n\n #### Declaration\n\n Swift \n\n func detectAsync(image: ../Classes/MPImage.html, timestampInMilliseconds: Int) throws\n\n #### Parameters\n\n |---------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|\n | ` `*image*` ` | A live stream image data of type [MPImage](../Classes/MPImage.html) on which face detection is to be performed. |\n | ` `*timestampInMilliseconds*` ` | The timestamp (in milliseconds) which indicates when the input image is sent to the face detector. The input timestamps must be monotonically increasing. |\n\n #### Return Value\n\n `true` if the image was sent to the task successfully, otherwise `false`.\n- `\n ``\n ``\n `\n\n ### [-init](#/c:objc(cs)MPPFaceDetector(im)init)\n\n `\n ` \n Undocumented\n- `\n ``\n ``\n `\n\n ### [+new](#/c:objc(cs)MPPFaceDetector(cm)new)\n\n `\n ` \n Undocumented"]]