FaceDetector
class FaceDetector : NSObject

Class that performs face detection on images.
The API expects a TFLite model with mandatory TFLite Model Metadata.
The API supports models with one image input tensor and one or more output tensors. To be more specific, here are the requirements:
Input tensor (kTfLiteUInt8/kTfLiteFloat32)
- image input of size [batch x height x width x channels].
- batch inference is not supported (batch is required to be 1).
- only RGB inputs are supported (channels is required to be 3).
- if type is kTfLiteFloat32, NormalizationOptions are required to be attached to the metadata for input normalization.
Output tensors must be the 4 outputs of a DetectionPostProcess op, i.e.:

(kTfLiteFloat32)
- locations tensor of size [num_results x 4], the inner array representing bounding boxes in the form [top, left, right, bottom].
- BoundingBoxProperties are required to be attached to the metadata and must specify type=BOUNDARIES and coordinate_type=RATIO.

(kTfLiteFloat32)
- classes tensor of size [num_results], each value representing the integer index of a class.

(kTfLiteFloat32)
- scores tensor of size [num_results], each value representing the score of the detected face.
- optional score calibration can be attached using ScoreCalibrationOptions and an AssociatedFile with type TENSOR_AXIS_SCORE_CALIBRATION. See metadata_schema.fbs [1] for more details.

(kTfLiteFloat32)
- integer num_results as a tensor of size [1]
- Creates a new instance of FaceDetector from an absolute path to a TensorFlow Lite model file stored locally on the device and the default FaceDetectorOptions.

Declaration
Swift
convenience init(modelPath: String) throws

Parameters
modelPath
An absolute path to a TensorFlow Lite model file stored locally on the device.

Return Value
A new instance of FaceDetector with the given model path. nil if there is an error in initializing the face detector.
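For illustration, a minimal sketch of this initializer, assuming a model file named face_detector.tflite (a placeholder name) is bundled with the app:

import Foundation
import MediaPipeTasksVision

// face_detector.tflite is a placeholder name for a model bundled with the app.
guard let modelPath = Bundle.main.path(forResource: "face_detector",
                                       ofType: "tflite") else {
    fatalError("Face detection model not found in the app bundle.")
}

do {
    // Initializes the detector with the default FaceDetectorOptions.
    let faceDetector = try FaceDetector(modelPath: modelPath)
    _ = faceDetector // use the detector...
} catch {
    print("Failed to initialize the face detector: \(error)")
}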
- Creates a new instance of FaceDetector from the given FaceDetectorOptions.

Declaration
Swift
init(options: FaceDetectorOptions) throws

Parameters
options
The options of type FaceDetectorOptions to use for configuring the FaceDetector.

Return Value
A new instance of FaceDetector with the given options. nil if there is an error in initializing the face detector.
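A minimal sketch of configuring options before initialization; the model path and confidence threshold below are placeholder values:

import MediaPipeTasksVision

let options = FaceDetectorOptions()
options.baseOptions.modelAssetPath = "/path/to/face_detector.tflite" // placeholder path
options.runningMode = .image
options.minDetectionConfidence = 0.5 // example threshold

do {
    let faceDetector = try FaceDetector(options: options)
    _ = faceDetector // use the detector...
} catch {
    print("Failed to initialize the face detector: \(error)")
}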
- Performs face detection on the provided MPImage using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the FaceDetector is created with running mode .image.

This method supports performing face detection on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If your MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

Declaration
Swift
func detect(image: MPImage) throws -> FaceDetectorResult

Parameters
image
The MPImage on which face detection is to be performed.

Return Value
A FaceDetectorResult that contains a list of detections. Each detection has a bounding box that is expressed in the unrotated input frame of reference coordinate system, i.e. in [0, image_width) x [0, image_height), which are the dimensions of the underlying image data.
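A minimal usage sketch, assuming a detector created with running mode .image and a UIImage source:

import MediaPipeTasksVision
import UIKit

func detectFaces(in uiImage: UIImage, using faceDetector: FaceDetector) {
    do {
        // MPImage wraps the UIImage; its orientation property drives rotation handling.
        let mpImage = try MPImage(uiImage: uiImage)
        let result = try faceDetector.detect(image: mpImage)

        // Bounding boxes are in the unrotated input coordinate system.
        for detection in result.detections {
            print("Face at \(detection.boundingBox)")
        }
    } catch {
        print("Face detection failed: \(error)")
    }
}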
- Performs face detection on the provided video frame of type MPImage using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the FaceDetector is created with running mode .video.

This method supports performing face detection on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If your MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

Declaration
Swift
func detect(videoFrame image: MPImage, timestampInMilliseconds: Int) throws -> FaceDetectorResult

Parameters
image
The MPImage on which face detection is to be performed.
timestampInMilliseconds
The video frame’s timestamp (in milliseconds). The input timestamps must be monotonically increasing.

Return Value
A FaceDetectorResult that contains a list of detections. Each detection has a bounding box that is expressed in the unrotated input frame of reference coordinate system, i.e. in [0, image_width) x [0, image_height), which are the dimensions of the underlying image data.
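A minimal sketch of per-frame detection in video mode; the timestamp handling shown is illustrative:

import MediaPipeTasksVision

// faceDetector must have been created with runningMode = .video.
func detectFaces(inVideoFrame mpImage: MPImage,
                 atMilliseconds timestampMs: Int,
                 using faceDetector: FaceDetector) {
    do {
        // Timestamps must increase monotonically across successive calls.
        let result = try faceDetector.detect(videoFrame: mpImage,
                                             timestampInMilliseconds: timestampMs)
        print("Detected \(result.detections.count) face(s) at \(timestampMs) ms")
    } catch {
        print("Face detection failed: \(error)")
    }
}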
- Sends live stream image data of type MPImage to perform face detection using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the FaceDetector is created with running mode .liveStream.

The object which needs to be continuously notified of the available results of face detection must conform to the FaceDetectorLiveStreamDelegate protocol and implement the faceDetector(_:didFinishDetectionWithResult:timestampInMilliseconds:error:) delegate method.

It’s required to provide a timestamp (in milliseconds) to indicate when the input image is sent to the face detector. The input timestamps must be monotonically increasing.

This method supports performing face detection on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format. If the input MPImage has a source type of .image, ensure that the color space is RGB with an Alpha channel.

If this method is used for detecting faces in live camera frames using AVFoundation, ensure that you request AVCaptureVideoDataOutput to output frames in kCMPixelFormat_32BGRA using its videoSettings property.

Declaration
Swift
func detectAsync(image: MPImage, timestampInMilliseconds: Int) throws

Parameters
image
Live stream image data of type MPImage on which face detection is to be performed.
timestampInMilliseconds
The timestamp (in milliseconds) which indicates when the input image is sent to the face detector. The input timestamps must be monotonically increasing.

Return Value
true if the image was sent to the task successfully, otherwise false.
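A minimal sketch of live stream usage; the Swift spelling of the delegate method follows the Swift-renamed form of the Objective-C selector above and may differ slightly in your SDK version, and the model path is a placeholder:

import MediaPipeTasksVision

// Minimal delegate that receives asynchronous detection results.
final class DetectionHandler: NSObject, FaceDetectorLiveStreamDelegate {
    func faceDetector(_ faceDetector: FaceDetector,
                      didFinishDetection result: FaceDetectorResult?,
                      timestampInMilliseconds: Int,
                      error: Error?) {
        if let error = error {
            print("Detection error: \(error)")
            return
        }
        print("Got \(result?.detections.count ?? 0) face(s) at \(timestampInMilliseconds) ms")
    }
}

let handler = DetectionHandler()
let options = FaceDetectorOptions()
options.baseOptions.modelAssetPath = "/path/to/face_detector.tflite" // placeholder path
options.runningMode = .liveStream
options.faceDetectorLiveStreamDelegate = handler

do {
    let faceDetector = try FaceDetector(options: options)
    _ = faceDetector
    // For each camera frame (e.g. from AVCaptureVideoDataOutput configured for
    // kCMPixelFormat_32BGRA), wrap it in an MPImage and send it with a
    // monotonically increasing timestamp:
    // try faceDetector.detectAsync(image: mpImage, timestampInMilliseconds: frameTimestampMs)
} catch {
    print("Failed to initialize the face detector: \(error)")
}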
- Undocumented

- Undocumented