MediaPipeTasksVision Framework Reference

FaceLandmarker

class FaceLandmarker : NSObject

Class that performs face landmark detection on images.

The API expects a TFLite model with mandatory TFLite Model Metadata.

  • Creates a new instance of FaceLandmarker from an absolute path to a TensorFlow Lite model file stored locally on the device and the default FaceLandmarkerOptions.

    Declaration

    Swift

    convenience init(modelPath: String) throws

    Parameters

    modelPath

    An absolute path to a TensorFlow Lite model file stored locally on the device.

    Return Value

    A new instance of FaceLandmarker with the given model path. Throws an error if the face landmarker fails to initialize.
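
    A minimal sketch of the path-based initializer. The model file name is a placeholder, and note that the initializer throws on failure rather than returning nil:

    ```swift
    import MediaPipeTasksVision

    // Locate a face landmarker model bundled with the app.
    // The file name "face_landmarker.task" is illustrative.
    guard let modelPath = Bundle.main.path(forResource: "face_landmarker",
                                           ofType: "task") else {
        fatalError("Face landmarker model not found in the app bundle.")
    }

    do {
        let faceLandmarker = try FaceLandmarker(modelPath: modelPath)
        // Use faceLandmarker for detection...
    } catch {
        print("Failed to initialize FaceLandmarker: \(error)")
    }
    ```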

  • Creates a new instance of FaceLandmarker from the given FaceLandmarkerOptions.

    Declaration

    Swift

    init(options: FaceLandmarkerOptions) throws

    Parameters

    options

    The options of type FaceLandmarkerOptions to use for configuring the FaceLandmarker.

    Return Value

    A new instance of FaceLandmarker with the given options. Throws an error if the face landmarker fails to initialize.
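
    A sketch of the options-based initializer, assuming FaceLandmarkerOptions exposes baseOptions.modelAssetPath, runningMode, and numFaces as in other MediaPipe vision tasks; the model path is a placeholder:

    ```swift
    import MediaPipeTasksVision

    let options = FaceLandmarkerOptions()
    // Placeholder path; point this at your .task model file.
    options.baseOptions.modelAssetPath = "/path/to/face_landmarker.task"
    options.runningMode = .image
    options.numFaces = 1  // assumed option controlling the maximum number of faces

    do {
        let faceLandmarker = try FaceLandmarker(options: options)
        // Use faceLandmarker for detection...
    } catch {
        print("Failed to initialize FaceLandmarker: \(error)")
    }
    ```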

  • Performs face landmark detection on the provided MPImage using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the FaceLandmarker is created with running mode .image.

    This method supports performing face landmark detection on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format.

    If your MPImage has a source type of .image ensure that the color space is RGB with an Alpha channel.

    Declaration

    Swift

    func detect(image: MPImage) throws -> FaceLandmarkerResult

    Parameters

    image

    The MPImage on which face landmark detection is to be performed.

    Return Value

    A FaceLandmarkerResult that contains a list of landmarks. Throws an error if face landmark detection fails.
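
    A sketch of single-image detection, assuming MPImage offers a UIImage-based initializer and FaceLandmarkerResult exposes a faceLandmarks array:

    ```swift
    import MediaPipeTasksVision
    import UIKit

    // `faceLandmarker` is assumed to have been created with running mode .image.
    func detectLandmarks(in uiImage: UIImage, using faceLandmarker: FaceLandmarker) {
        do {
            // MPImage carries the image and its orientation; rotation is
            // applied according to the orientation property.
            let mpImage = try MPImage(uiImage: uiImage)
            let result = try faceLandmarker.detect(image: mpImage)
            print("Detected landmarks for \(result.faceLandmarks.count) face(s).")
        } catch {
            print("Face landmark detection failed: \(error)")
        }
    }
    ```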

  • Performs face landmark detection on the provided video frame of type MPImage using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the FaceLandmarker is created with running mode .video.

    This method supports performing face landmark detection on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format.

    If your MPImage has a source type of .image ensure that the color space is RGB with an Alpha channel.

    Declaration

    Swift

    func detect(videoFrame image: MPImage, timestampInMilliseconds: Int) throws -> FaceLandmarkerResult

    Parameters

    image

    The MPImage on which face landmark detection is to be performed.

    timestampInMilliseconds

    The video frame’s timestamp (in milliseconds). The input timestamps must be monotonically increasing.

    Return Value

    A FaceLandmarkerResult that contains a list of landmarks. Throws an error if face landmark detection fails.
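
    A sketch of video-mode detection with monotonically increasing timestamps, under the same MPImage and FaceLandmarkerResult assumptions as above:

    ```swift
    import MediaPipeTasksVision
    import UIKit

    // `faceLandmarker` is assumed to have been created with running mode .video.
    // Timestamps must be monotonically increasing across calls.
    func detectLandmarks(inFrames frames: [UIImage],
                         frameDurationMs: Int,
                         using faceLandmarker: FaceLandmarker) {
        for (index, frame) in frames.enumerated() {
            do {
                let mpImage = try MPImage(uiImage: frame)
                let result = try faceLandmarker.detect(
                    videoFrame: mpImage,
                    timestampInMilliseconds: index * frameDurationMs)
                print("Frame \(index): \(result.faceLandmarks.count) face(s).")
            } catch {
                print("Detection failed on frame \(index): \(error)")
            }
        }
    }
    ```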

  • Sends live stream image data of type MPImage to perform face landmark detection using the whole image as region of interest. Rotation will be applied according to the orientation property of the provided MPImage. Only use this method when the FaceLandmarker is created with running mode .liveStream.

    The object which needs to be continuously notified of the available results of face landmark detection must conform to the FaceLandmarkerLiveStreamDelegate protocol and implement the faceLandmarker(_:didFinishDetectionWithResult:timestampInMilliseconds:error:) delegate method.

    It’s required to provide a timestamp (in milliseconds) to indicate when the input image is sent to the face landmarker. The input timestamps must be monotonically increasing.

    This method supports performing face landmark detection on RGBA images. If your MPImage has a source type of .pixelBuffer or .sampleBuffer, the underlying pixel buffer must use kCVPixelFormatType_32BGRA as its pixel format.

    If the input MPImage has a source type of .image ensure that the color space is RGB with an Alpha channel.

    If this method is used for classifying live camera frames using AVFoundation, ensure that you request AVCaptureVideoDataOutput to output frames in kCMPixelFormat_32BGRA using its videoSettings property.

    Declaration

    Swift

    func detectAsync(image: MPImage, timestampInMilliseconds: Int) throws

    Parameters

    image

    Live stream image data of type MPImage on which face landmark detection is to be performed.

    timestampInMilliseconds

    The timestamp (in milliseconds) which indicates when the input image is sent to the face landmarker. The input timestamps must be monotonically increasing.

    Return Value

    This method does not return a value; it throws an error if the image could not be sent to the task for asynchronous processing.
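
    A sketch of live-stream usage. The delegate method name below mirrors the selector given above, and the delegate property name on the options object is an assumption based on the delegate type's name; consult the FaceLandmarkerLiveStreamDelegate reference for the exact Swift signature:

    ```swift
    import MediaPipeTasksVision

    // Receives asynchronous results; the method name mirrors the selector
    // faceLandmarker(_:didFinishDetectionWithResult:timestampInMilliseconds:error:).
    class LandmarkStreamHandler: NSObject, FaceLandmarkerLiveStreamDelegate {
        func faceLandmarker(_ faceLandmarker: FaceLandmarker,
                            didFinishDetection result: FaceLandmarkerResult?,
                            timestampInMilliseconds: Int,
                            error: Error?) {
            if let result = result {
                print("[\(timestampInMilliseconds) ms] \(result.faceLandmarks.count) face(s).")
            } else if let error = error {
                print("Live stream detection error: \(error)")
            }
        }
    }

    let handler = LandmarkStreamHandler()  // keep a strong reference alive
    let options = FaceLandmarkerOptions()
    options.runningMode = .liveStream
    options.faceLandmarkerLiveStreamDelegate = handler  // assumed property name

    // After creating the landmarker with these options, send frames with
    // detectAsync(image:timestampInMilliseconds:) using increasing timestamps.
    ```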

  • Returns the connections between all the landmarks in the lips.

    Declaration

    Swift

    class func lipsConnections() -> [Connection]

    Return Value

    An array of connections between all the landmarks in the lips.

  • Returns the connections between all the landmarks in the left eye.

    Declaration

    Swift

    class func leftEyeConnections() -> [Connection]

    Return Value

    An array of connections between all the landmarks in the left eye.

  • Returns the connections between all the landmarks in the left eyebrow.

    Declaration

    Swift

    class func leftEyebrowConnections() -> [Connection]

    Return Value

    An array of connections between all the landmarks in the left eyebrow.

  • Returns the connections between all the landmarks in the left iris.

    Declaration

    Swift

    class func leftIrisConnections() -> [Connection]

    Return Value

    An array of connections between all the landmarks in the left iris.

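  • The connection accessors above can be used to draw face-mesh overlays. A sketch, assuming each Connection exposes start and end landmark indices (the property names are an assumption about the Connection type):

    ```swift
    import MediaPipeTasksVision

    let lipConnections = FaceLandmarker.lipsConnections()
    let leftEyeConnections = FaceLandmarker.leftEyeConnections()

    for connection in lipConnections {
        // Draw a line segment between the two landmarks of a
        // FaceLandmarkerResult indexed by this connection.
        print("lip edge: \(connection.start) -> \(connection.end)")
    }
    ```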