Classes
The following classes are available globally.
- Holds the base options used for creating any type of task. It has fields for important information such as the acceleration configuration, the TFLite model source, etc. Declaration (Swift): `class BaseOptions : NSObject, NSCopying`
- Category is a utility class that contains a label, its display name, a float score, and the index of the label in the corresponding label file. Typically used as the result of classification tasks. Declaration (Swift): `class ResultCategory : NSObject`
- Represents the list of classifications for a given classifier head. Typically used as a result for classification tasks. Declaration (Swift): `class Classifications : NSObject`
- Represents the classification results of a model. Typically used as a result for classification tasks. Declaration (Swift): `class ClassificationResult : NSObject`
- Classifier options shared across MediaPipe iOS classification tasks. Declaration (Swift): `class ClassifierOptions : NSObject, NSCopying`
- The value class representing a landmark connection. Declaration (Swift): `class Connection : NSObject`
- A normalized keypoint represents a point in 2D space with x, y coordinates. x and y are normalized to [0.0, 1.0] by the image width and height, respectively. Declaration (Swift): `class NormalizedKeypoint : NSObject`
- Represents one detected object in the results of ObjectDetector. Declaration (Swift): `class Detection : NSObject`
- Represents the embedding for a given embedder head. Typically used in embedding tasks. Exactly one of `floatEmbedding` and `quantizedEmbedding` will contain data, depending on whether the embedder was configured to perform scalar quantization. Declaration (Swift): `class Embedding : NSObject`
- Represents the embedding results of a model. Typically used as a result for embedding tasks. Declaration (Swift): `class EmbeddingResult : NSObject`
- @brief Class that performs face detection on images. The API expects a TFLite model with mandatory TFLite Model Metadata. The API supports models with one image input tensor and one or more output tensors. To be more specific, here are the requirements.
  Input tensor (`kTfLiteUInt8`/`kTfLiteFloat32`):
  - image input of size `[batch x height x width x channels]`.
  - batch inference is not supported (`batch` is required to be 1).
  - only RGB inputs are supported (`channels` is required to be 3).
  - if the type is `kTfLiteFloat32`, NormalizationOptions are required to be attached to the metadata for input normalization.
  Output tensors must be the 4 outputs of a `DetectionPostProcess` op, i.e.:
  - (`kTfLiteFloat32`) locations tensor of size `[num_results x 4]`, the inner array representing bounding boxes in the form `[top, left, right, bottom]`. BoundingBoxProperties are required to be attached to the metadata and must specify `type=BOUNDARIES` and `coordinate_type=RATIO`.
  - (`kTfLiteFloat32`) classes tensor of size `[num_results]`, each value representing the integer index of a class.
  - (`kTfLiteFloat32`) scores tensor of size `[num_results]`, each value representing the score of the detected face. Optional score calibration can be attached using ScoreCalibrationOptions and an AssociatedFile with type `TENSOR_AXIS_SCORE_CALIBRATION`. See metadata_schema.fbs [1] for more details.
  - (`kTfLiteFloat32`) integer `num_results` as a tensor of size `[1]`.
  Declaration (Swift): `class FaceDetector : NSObject`
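As a hedged illustration of how this class is typically used, the sketch below creates a `FaceDetector` in image mode and reads back the detections. The model file name `face_detector.task` and the placeholder `UIImage` are assumptions for the example, not part of the documented API.

```swift
import MediaPipeTasksVision
import UIKit

// "face_detector.task" is a hypothetical bundled model file.
let options = FaceDetectorOptions()
options.baseOptions.modelAssetPath =
  Bundle.main.path(forResource: "face_detector", ofType: "task") ?? ""
options.runningMode = .image

do {
  let faceDetector = try FaceDetector(options: options)
  // `photo` stands in for an image supplied by the caller.
  let photo = UIImage()
  let mpImage = try MPImage(uiImage: photo)
  let result = try faceDetector.detect(image: mpImage)
  for detection in result.detections {
    let score = detection.categories.first?.score ?? 0
    print("face at \(detection.boundingBox), score \(score)")
  }
} catch {
  print("Face detection failed: \(error)")
}
```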
- Options for setting up a FaceDetector. Declaration (Swift): `class FaceDetectorOptions : TaskOptions, NSCopying`
- Represents the detection results generated by FaceDetector. Declaration (Swift): `class FaceDetectorResult : TaskResult`
- @brief Class that performs face landmark detection on images. The API expects a TFLite model with mandatory TFLite Model Metadata. Declaration (Swift): `class FaceLandmarker : NSObject`
- Options for setting up a FaceLandmarker. Declaration (Swift): `class FaceLandmarkerOptions : TaskOptions, NSCopying`
- A matrix that can be used for transformations. Declaration (Swift): `class TransformMatrix : NSObject`
- Represents the detection results generated by FaceLandmarker. Declaration (Swift): `class FaceLandmarkerResult : TaskResult`
- Class that performs face stylization on images. Declaration (Swift): `class FaceStylizer : NSObject`
- Options for setting up a FaceStylizer. Declaration (Swift): `class FaceStylizerOptions : TaskOptions, NSCopying`
- Represents the stylized image generated by FaceStylizer. Declaration (Swift): `class FaceStylizerResult : TaskResult`
- @brief Performs gesture recognition on images. This API expects a pre-trained TFLite hand gesture recognizer model or a custom one created using MediaPipe Solutions Model Maker. See https://developers.google.com/mediapipe/solutions/model_maker. Declaration (Swift): `class GestureRecognizer : NSObject`
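A minimal sketch of running the recognizer in video mode, where each frame carries a monotonically increasing timestamp. The model file name and the placeholder frame image are assumptions for illustration only.

```swift
import MediaPipeTasksVision
import UIKit

// "gesture_recognizer.task" is a hypothetical bundled model file.
let options = GestureRecognizerOptions()
options.baseOptions.modelAssetPath =
  Bundle.main.path(forResource: "gesture_recognizer", ofType: "task") ?? ""
options.runningMode = .video

do {
  let recognizer = try GestureRecognizer(options: options)
  // `frameImage` stands in for a decoded video frame; timestamps must
  // increase monotonically across calls in video mode.
  let frameImage = try MPImage(uiImage: UIImage())
  let result = try recognizer.recognize(videoFrame: frameImage,
                                        timestampInMilliseconds: 33)
  // `gestures` holds one list of candidate categories per detected hand.
  if let topGesture = result.gestures.first?.first {
    print("\(topGesture.categoryName ?? "unknown") (\(topGesture.score))")
  }
} catch {
  print("Gesture recognition failed: \(error)")
}
```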
- Options for setting up a GestureRecognizer. Declaration (Swift): `class GestureRecognizerOptions : TaskOptions, NSCopying`
- Represents the gesture recognition results generated by GestureRecognizer. Declaration (Swift): `class GestureRecognizerResult : TaskResult`
- @brief Performs hand landmarks detection on images. This API expects a pre-trained hand landmarks model asset bundle. Declaration (Swift): `class HandLandmarker : NSObject`
- Options for setting up a HandLandmarker. Declaration (Swift): `class HandLandmarkerOptions : TaskOptions, NSCopying`
- Represents the hand landmarker results generated by HandLandmarker. Declaration (Swift): `class HandLandmarkerResult : TaskResult`
- An image used for on-device machine learning with the MediaPipe Tasks library. Declaration (Swift): `class MPImage : NSObject`
- @brief Performs classification on images. The API expects a TFLite model with optional, but strongly recommended, TFLite Model Metadata. The API supports models with one image input tensor and one or more output tensors. To be more specific, here are the requirements.
  Input tensor (`kTfLiteUInt8`/`kTfLiteFloat32`):
  - image input of size `[batch x height x width x channels]`.
  - batch inference is not supported (`batch` is required to be 1).
  - only RGB inputs are supported (`channels` is required to be 3).
  - if the type is `kTfLiteFloat32`, NormalizationOptions are required to be attached to the metadata for input normalization.
  At least one output tensor (`kTfLiteUInt8`/`kTfLiteFloat32`) with:
  - `N` classes and either 2 or 4 dimensions, i.e. `[1 x N]` or `[1 x 1 x 1 x N]`.
  - optional (but recommended) label map(s) as AssociatedFiles with type `TENSOR_AXIS_LABELS`, containing one label per line. The first such AssociatedFile (if any) is used to fill the `class_name` field of the results. The `display_name` field is filled from the AssociatedFile (if any) whose locale matches the `display_names_locale` field of the ImageClassifierOptions used at creation time ("en" by default, i.e. English). If none of these are available, only the `index` field of the results will be filled.
  - optional score calibration can be attached using ScoreCalibrationOptions and an AssociatedFile with type `TENSOR_AXIS_SCORE_CALIBRATION`. See metadata_schema.fbs [1] for more details.
  Declaration (Swift): `class ImageClassifier : NSObject`
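The sketch below shows a typical classification call, filtering results with `maxResults` and `scoreThreshold` on the options. The model file name and the placeholder input image are assumptions for the example.

```swift
import MediaPipeTasksVision
import UIKit

// "classifier.task" is a hypothetical bundled model file.
let options = ImageClassifierOptions()
options.baseOptions.modelAssetPath =
  Bundle.main.path(forResource: "classifier", ofType: "task") ?? ""
options.maxResults = 3        // keep only the top 3 categories
options.scoreThreshold = 0.25 // drop low-confidence categories

do {
  let classifier = try ImageClassifier(options: options)
  let mpImage = try MPImage(uiImage: UIImage()) // placeholder input
  let result = try classifier.classify(image: mpImage)
  // One `Classifications` entry per classifier head.
  for classifications in result.classificationResult.classifications {
    for category in classifications.categories {
      print(category.index, category.categoryName ?? "?", category.score)
    }
  }
} catch {
  print("Classification failed: \(error)")
}
```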
- Options for setting up an ImageClassifier. Declaration (Swift): `class ImageClassifierOptions : TaskOptions, NSCopying`
- Represents the classification results generated by ImageClassifier. Declaration (Swift): `class ImageClassifierResult : TaskResult`
- @brief Performs embedding extraction on images. The API expects a TFLite model with optional, but strongly recommended, TFLite Model Metadata. The API supports models with one image input tensor and one or more output tensors. To be more specific, here are the requirements.
  Input image tensor (`kTfLiteUInt8`/`kTfLiteFloat32`):
  - image input of size `[batch x height x width x channels]`.
  - batch inference is not supported (`batch` is required to be 1).
  - only RGB inputs are supported (`channels` is required to be 3).
  - if the type is `kTfLiteFloat32`, NormalizationOptions are required to be attached to the metadata for input normalization.
  At least one output tensor (`kTfLiteUInt8`/`kTfLiteFloat32`) with shape `[1 x N]`, where N is the number of dimensions in the produced embeddings.
  Declaration (Swift): `class ImageEmbedder : NSObject`
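As a hedged example, embeddings from two images can be compared with the embedder's cosine-similarity utility (a class method named `cosineSimilarity(embedding1:embedding2:)` in this sketch; the model file name and the placeholder inputs are assumptions).

```swift
import MediaPipeTasksVision
import UIKit

// "embedder.task" is a hypothetical bundled model file.
let options = ImageEmbedderOptions()
options.baseOptions.modelAssetPath =
  Bundle.main.path(forResource: "embedder", ofType: "task") ?? ""

do {
  let embedder = try ImageEmbedder(options: options)
  // Placeholder inputs; real code would pass two distinct photos.
  let imageA = try MPImage(uiImage: UIImage())
  let imageB = try MPImage(uiImage: UIImage())
  let embeddingA = try embedder.embed(image: imageA)
    .embeddingResult.embeddings.first
  let embeddingB = try embedder.embed(image: imageB)
    .embeddingResult.embeddings.first
  if let a = embeddingA, let b = embeddingB {
    // Utility for comparing two embeddings; higher means more similar.
    let similarity = try ImageEmbedder.cosineSimilarity(embedding1: a,
                                                        embedding2: b)
    print("cosine similarity: \(similarity)")
  }
} catch {
  print("Embedding failed: \(error)")
}
```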
- Options for setting up an ImageEmbedder. Declaration (Swift): `class ImageEmbedderOptions : TaskOptions, NSCopying`
- Represents the embedding results generated by ImageEmbedder. Declaration (Swift): `class ImageEmbedderResult : TaskResult`
- @brief Class that performs segmentation on images. The API expects a TFLite model with mandatory TFLite Model Metadata. Declaration (Swift): `class ImageSegmenter : NSObject`
- Options for setting up an ImageSegmenter. Declaration (Swift): `class ImageSegmenterOptions : TaskOptions, NSCopying`
- Represents the segmentation results generated by ImageSegmenter. Declaration (Swift): `class ImageSegmenterResult : TaskResult`
- @brief Class that performs interactive segmentation on images. Users can represent user interaction through RegionOfInterest, which gives a hint to InteractiveSegmenter to perform segmentation focusing on the given region of interest. The API expects a TFLite model with mandatory TFLite Model Metadata.
  Input tensor (`kTfLiteUInt8`/`kTfLiteFloat32`):
  - image input of size `[batch x height x width x channels]`.
  - batch inference is not supported (`batch` is required to be 1).
  - RGB and greyscale inputs are supported (`channels` is required to be 1 or 3).
  - if the type is `kTfLiteFloat32`, NormalizationOptions are required to be attached to the metadata for input normalization.
  Output tensors (`kTfLiteUInt8`/`kTfLiteFloat32`):
  - list of segmented masks.
  - if `output_type` is `CATEGORY_MASK`, uint8 Image, Image vector of size 1.
  - if `output_type` is `CONFIDENCE_MASK`, float32 Image list of size `channels`.
  - batch is always 1.
  An example of such a model can be found at: https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2
  Declaration (Swift): `class InteractiveSegmenter : NSObject`
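A hedged sketch of segmenting around a tapped point. The `NormalizedKeypoint` and `RegionOfInterest` initializer signatures shown here, the `segment(image:regionOfInterest:)` method name, and the model file name are written from memory and should be checked against the current API before use.

```swift
import MediaPipeTasksVision
import UIKit

// "magic_touch.task" is a hypothetical bundled model file.
let options = InteractiveSegmenterOptions()
options.baseOptions.modelAssetPath =
  Bundle.main.path(forResource: "magic_touch", ofType: "task") ?? ""

do {
  let segmenter = try InteractiveSegmenter(options: options)
  let mpImage = try MPImage(uiImage: UIImage()) // placeholder input
  // Hint at the object to segment with a single point at the image
  // center, in normalized [0.0, 1.0] coordinates.
  let tap = NormalizedKeypoint(location: CGPoint(x: 0.5, y: 0.5),
                               label: nil, score: 0)
  let roi = RegionOfInterest(normalizedKeyPoint: tap)
  let result = try segmenter.segment(image: mpImage, regionOfInterest: roi)
  if let mask = result.categoryMask {
    print("mask size: \(mask.width) x \(mask.height)")
  }
} catch {
  print("Interactive segmentation failed: \(error)")
}
```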
- Options for setting up an InteractiveSegmenter. Declaration (Swift): `class InteractiveSegmenterOptions : TaskOptions, NSCopying`
- Represents the segmentation results generated by InteractiveSegmenter. Declaration (Swift): `class InteractiveSegmenterResult : TaskResult`
- Landmark represents a point in 3D space with x, y, z coordinates. The landmark coordinates are in meters. z represents the landmark depth, and the smaller the value the closer the world landmark is to the camera. Declaration (Swift): `class Landmark : NSObject`
- NormalizedLandmark represents a point in 3D space with x, y, z coordinates. x and y are normalized to [0.0, 1.0] by the image width and height, respectively. z represents the landmark depth, and the smaller the value the closer the landmark is to the camera. The magnitude of z uses roughly the same scale as x. Declaration (Swift): `class NormalizedLandmark : NSObject`
- The wrapper class for MediaPipe segmentation masks. Masks are stored as `UInt8 *` or `float *` objects. Every mask has an underlying type, which can be accessed using `dataType`. You can access the mask as any other type using the appropriate properties. For example, if the underlying type is `uInt8`, in addition to accessing the mask using `uint8Data`, you can access `float32Data` to get the 32-bit float data (with values ranging from 0.0 to 1.0). The first time you access the data as a type different from the underlying type, an expensive type conversion is performed. Subsequent accesses return a pointer to the memory location of the same type-converted array. As type conversions can be expensive, it is recommended to limit accesses to data of types different from the underlying type. Masks returned from MediaPipe Tasks are owned by the underlying C++ task. If you need to extend their lifetime, you can invoke the `copy()` method. Declaration (Swift): `class Mask : NSObject, NSCopying`
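The access and ownership rules above can be sketched as follows; the `Mask` value is assumed to come from a segmentation task result, and the enum case names mirror the `dataType` values described above.

```swift
import MediaPipeTasksVision

// `mask` stands in for a Mask returned by a segmentation task result.
func inspect(mask: Mask) {
  switch mask.dataType {
  case .uInt8:
    // Cheap: this is the underlying storage, no conversion happens.
    print("first byte:", mask.uint8Data[0])
    // First access as float32 triggers a one-time conversion; later
    // accesses reuse the converted buffer.
    print("as float:", mask.float32Data[0])
  case .float32:
    print("first float:", mask.float32Data[0])
  @unknown default:
    break
  }
  // The underlying C++ task owns the mask's memory; copy it if it must
  // outlive the result object it came from.
  let retained = mask.copy() as! Mask
  _ = retained
}
```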
- @brief Class that performs object detection on images. The API expects a TFLite model with mandatory TFLite Model Metadata. The API supports models with one image input tensor and one or more output tensors. To be more specific, here are the requirements.
  Input tensor (`kTfLiteUInt8`/`kTfLiteFloat32`):
  - image input of size `[batch x height x width x channels]`.
  - batch inference is not supported (`batch` is required to be 1).
  - only RGB inputs are supported (`channels` is required to be 3).
  - if the type is `kTfLiteFloat32`, NormalizationOptions are required to be attached to the metadata for input normalization.
  Output tensors must be the 4 outputs of a `DetectionPostProcess` op, i.e.:
  - (`kTfLiteFloat32`) locations tensor of size `[num_results x 4]`, the inner array representing bounding boxes in the form `[top, left, right, bottom]`. BoundingBoxProperties are required to be attached to the metadata and must specify `type=BOUNDARIES` and `coordinate_type=RATIO`.
  - (`kTfLiteFloat32`) classes tensor of size `[num_results]`, each value representing the integer index of a class. Optional (but recommended) label map(s) can be attached as AssociatedFiles with type `TENSOR_VALUE_LABELS`, containing one label per line. The first such AssociatedFile (if any) is used to fill the `class_name` field of the results. The `display_name` field is filled from the AssociatedFile (if any) whose locale matches the `display_names_locale` field of the ObjectDetectorOptions used at creation time ("en" by default, i.e. English). If none of these are available, only the `index` field of the results will be filled.
  - (`kTfLiteFloat32`) scores tensor of size `[num_results]`, each value representing the score of the detected object. Optional score calibration can be attached using ScoreCalibrationOptions and an AssociatedFile with type `TENSOR_AXIS_SCORE_CALIBRATION`. See metadata_schema.fbs [1] for more details.
  - (`kTfLiteFloat32`) integer `num_results` as a tensor of size `[1]`.
  Declaration (Swift): `class ObjectDetector : NSObject`
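A minimal usage sketch: create a detector, run it on an image, and read the labeled bounding boxes. The model file name and the placeholder input image are assumptions for the example.

```swift
import MediaPipeTasksVision
import UIKit

// "object_detector.task" is a hypothetical bundled model file.
let options = ObjectDetectorOptions()
options.baseOptions.modelAssetPath =
  Bundle.main.path(forResource: "object_detector", ofType: "task") ?? ""
options.maxResults = 5

do {
  let detector = try ObjectDetector(options: options)
  let mpImage = try MPImage(uiImage: UIImage()) // placeholder input
  let result = try detector.detect(image: mpImage)
  for detection in result.detections {
    let label = detection.categories.first?.categoryName ?? "unknown"
    let score = detection.categories.first?.score ?? 0
    print("\(label) (\(score)) at \(detection.boundingBox)")
  }
} catch {
  print("Object detection failed: \(error)")
}
```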
- Options for setting up an ObjectDetector. Declaration (Swift): `class ObjectDetectorOptions : TaskOptions, NSCopying`
- Represents the detection results generated by ObjectDetector. Declaration (Swift): `class ObjectDetectorResult : TaskResult`
- @brief Performs pose landmarks detection on images. This API expects a pre-trained pose landmarks model asset bundle. Declaration (Swift): `class PoseLandmarker : NSObject`
- Options for setting up a PoseLandmarker. Declaration (Swift): `class PoseLandmarkerOptions : TaskOptions, NSCopying`
- Represents the pose landmarks detection results generated by PoseLandmarker. Declaration (Swift): `class PoseLandmarkerResult : TaskResult`
- The Region-Of-Interest (ROI) to interact with in an interactive segmentation inference. An instance can contain either a single normalized point pointing to the object that the user wants to segment, or an array of normalized keypoints that make up scribbles over the object that the user wants to segment. Declaration (Swift): `class RegionOfInterest : NSObject`
- MediaPipe Tasks options base class. Any MediaPipe task-specific options class should extend this class. Declaration (Swift): `class TaskOptions : NSObject, NSCopying`
- MediaPipe Tasks result base class. Any MediaPipe task result class should extend this class. Declaration (Swift): `class TaskResult : NSObject, NSCopying`