MediaPipeTasksVision Framework Reference

PoseLandmarkerOptions

class PoseLandmarkerOptions : TaskOptions, NSCopying

Options for setting up a PoseLandmarker. A short configuration sketch follows the property list below.

  • Running mode of the pose landmark detection task. Defaults to .image. PoseLandmarker can be created with one of the following running modes:

    1. .image: The mode for performing pose landmark detection on single image inputs.
    2. .video: The mode for performing pose landmark detection on the decoded frames of a video.
    3. .liveStream: The mode for performing pose landmark detection on a live stream of input data, such as from the camera.

    Declaration

    Swift

    var runningMode: RunningMode { get set }
  • An object that conforms to the PoseLandmarkerLiveStreamDelegate protocol. This object must implement poseLandmarker(_:didFinishDetectionWithResult:timestampInMilliseconds:error:) to receive the results of performing asynchronous pose landmark detection on images (i.e., when runningMode = .liveStream).

    Declaration

    Swift

    weak var poseLandmarkerLiveStreamDelegate: PoseLandmarkerLiveStreamDelegate? { get set }
  • The maximum number of poses that can be detected by the PoseLandmarker. Defaults to 1.

    Declaration

    Swift

    var numPoses: Int { get set }
  • The minimum confidence score for pose detection to be considered successful. Defaults to 0.5.

    Declaration

    Swift

    var minPoseDetectionConfidence: Float { get set }
  • The minimum confidence score of pose presence in the pose landmark detection. Defaults to 0.5.

    Declaration

    Swift

    var minPosePresenceConfidence: Float { get set }
  • The minimum confidence score for pose tracking to be considered successful. Defaults to 0.5.

    Declaration

    Swift

    var minTrackingConfidence: Float { get set }
  • Whether to output segmentation masks. Defaults to false.

    Declaration

    Swift

    var shouldOutputSegmentationMasks: Bool { get set }
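
The sketch below configures PoseLandmarkerOptions for video-mode detection using the properties documented on this page. The makePoseLandmarker function and its modelPath parameter are illustrative, and the baseOptions.modelAssetPath property (inherited from the base task options) and the throwing PoseLandmarker(options:) initializer are assumptions taken from the wider MediaPipeTasksVision API rather than from this page.

    Swift

    import MediaPipeTasksVision

    // Illustrative helper: builds a landmarker for video-mode detection.
    // The modelPath parameter is hypothetical and should point to a pose
    // landmarker .task model bundled with the app.
    func makePoseLandmarker(modelPath: String) throws -> PoseLandmarker {
      let options = PoseLandmarkerOptions()
      // Assumed: the model asset path is set through the inherited base options.
      options.baseOptions.modelAssetPath = modelPath
      options.runningMode = .video
      options.numPoses = 2
      options.minPoseDetectionConfidence = 0.5
      options.minPosePresenceConfidence = 0.5
      options.minTrackingConfidence = 0.5
      options.shouldOutputSegmentationMasks = true
      // Assumed initializer from the wider API; it throws if the options are invalid.
      return try PoseLandmarker(options: options)
    }

For .liveStream mode, additionally assign an object conforming to PoseLandmarkerLiveStreamDelegate to poseLandmarkerLiveStreamDelegate before creating the PoseLandmarker.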