mp.tasks.vision.ImageSegmenter

Class that performs image segmentation on images.

The API expects a TFLite model with mandatory TFLite Model Metadata.

Input tensor (kTfLiteUInt8/kTfLiteFloat32)

  • image input of size [batch x height x width x channels].
  • batch inference is not supported (batch is required to be 1).
  • RGB and greyscale inputs are supported (channels is required to be 1 or 3).
  • if type is kTfLiteFloat32, NormalizationOptions are required to be attached to the metadata for input normalization.

Output tensors (kTfLiteUInt8/kTfLiteFloat32)

  • list of segmented masks.
  • if output_category_mask is True, a uint8 Image vector of size 1.
  • if output_confidence_masks is True, a float32 Image list of size channels (one mask per category).
  • batch is always 1

An example of such a model can be found at: https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2
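
For illustration, a minimal, non-authoritative sketch that runs a model like the one above on a single RGB image and inspects the category mask. The model and image paths are placeholders, and the result field name category_mask is assumed from the current ImageSegmenterResult container:

```python
import mediapipe as mp

BaseOptions = mp.tasks.BaseOptions
ImageSegmenter = mp.tasks.vision.ImageSegmenter
ImageSegmenterOptions = mp.tasks.vision.ImageSegmenterOptions

# Placeholder paths; any model matching the tensor requirements above works.
options = ImageSegmenterOptions(
    base_options=BaseOptions(model_asset_path='deeplab_v3.tflite'),
    output_category_mask=True)

with ImageSegmenter.create_from_options(options) as segmenter:
    image = mp.Image.create_from_file('input.jpg')  # single RGB image, batch is handled as 1
    result = segmenter.segment(image)
    # Assumed field name: a single uint8 category mask of shape [height, width].
    mask = result.category_mask.numpy_view()
    print(mask.shape, mask.dtype)
```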

Attributes

labels Get the category label list the ImageSegmenter can recognize.

For CATEGORY_MASK type, each value in the category mask is an index into this label list. For CONFIDENCE_MASK type, the output mask at a given index corresponds to the category at the same index in the label list.

If no label map is provided in the model file, an empty label list is returned.
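
As a hedged sketch of how the label list is typically paired with confidence masks (the field name confidence_masks and the file paths are assumptions):

```python
import mediapipe as mp

options = mp.tasks.vision.ImageSegmenterOptions(
    base_options=mp.tasks.BaseOptions(model_asset_path='deeplab_v3.tflite'),
    output_confidence_masks=True)

with mp.tasks.vision.ImageSegmenter.create_from_options(options) as segmenter:
    labels = segmenter.labels  # empty list if the model has no label map
    result = segmenter.segment(mp.Image.create_from_file('input.jpg'))
    # The confidence mask at index i corresponds to labels[i].
    for i, mask in enumerate(result.confidence_masks):
        name = labels[i] if labels else str(i)
        print(name, float(mask.numpy_view().mean()))
```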

Methods

close

Shuts down the mediapipe vision task instance.

Raises
RuntimeError If the mediapipe vision task failed to close.
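
A short sketch of explicit cleanup when the segmenter is not used as a context manager (paths are placeholders):

```python
import mediapipe as mp

segmenter = mp.tasks.vision.ImageSegmenter.create_from_model_path('deeplab_v3.tflite')
try:
    result = segmenter.segment(mp.Image.create_from_file('input.jpg'))
finally:
    segmenter.close()  # shuts down the underlying MediaPipe graph
```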

convert_to_normalized_rect

Converts from ImageProcessingOptions to NormalizedRect, performing sanity checks on-the-fly.

If the input ImageProcessingOptions is not present, returns a default NormalizedRect covering the whole image with rotation set to 0. If 'roi_allowed' is False, an error is raised when the input ImageProcessingOptions has its 'region_of_interest' field set.

Args
options Options for image processing.
image The image to process.
roi_allowed Indicates if the region_of_interest field is allowed to be set. By default, it's set to True.

Returns
A normalized rect proto that represents the image processing options.
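
A hedged sketch of converting an ImageProcessingOptions that carries a rotation; the exposure of ImageProcessingOptions under mp.tasks.vision and the placeholder paths are assumptions:

```python
import mediapipe as mp

with mp.tasks.vision.ImageSegmenter.create_from_model_path('deeplab_v3.tflite') as segmenter:
    image = mp.Image.create_from_file('input.jpg')
    # Rotate the input by 90 degrees; no region_of_interest is set, so
    # roi_allowed=False does not raise.
    proc_options = mp.tasks.vision.ImageProcessingOptions(rotation_degrees=90)
    norm_rect = segmenter.convert_to_normalized_rect(proc_options, image, roi_allowed=False)
    print(norm_rect)  # full-image rect carrying the rotation
```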

create_from_model_path

Creates an ImageSegmenter object from a TensorFlow Lite model and the default ImageSegmenterOptions.

Note that the created ImageSegmenter instance is in image mode, for performing image segmentation on single image inputs.

Args
model_path Path to the model.

Returns
ImageSegmenter object that's created from the model file and the default ImageSegmenterOptions.

Raises
ValueError If the ImageSegmenter object cannot be created from the provided file, e.g. due to an invalid file path.
RuntimeError If other types of error occurred.
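
A minimal sketch (the model path is a placeholder); the returned segmenter is in image mode with default ImageSegmenterOptions:

```python
import mediapipe as mp

# Default options, image running mode.
with mp.tasks.vision.ImageSegmenter.create_from_model_path('deeplab_v3.tflite') as segmenter:
    result = segmenter.segment(mp.Image.create_from_file('input.jpg'))
```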

create_from_options

Creates the ImageSegmenter object from image segmenter options.

Args
options Options for the image segmenter task.

Returns
ImageSegmenter object that's created from options.

Raises
ValueError If the ImageSegmenter object cannot be created from ImageSegmenterOptions, e.g. because the model is missing.
RuntimeError If other types of error occurred.
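
A sketch of creating the segmenter from explicit options (the model path and the choice of output flags are placeholders):

```python
import mediapipe as mp

BaseOptions = mp.tasks.BaseOptions
ImageSegmenter = mp.tasks.vision.ImageSegmenter
ImageSegmenterOptions = mp.tasks.vision.ImageSegmenterOptions
VisionRunningMode = mp.tasks.vision.RunningMode

options = ImageSegmenterOptions(
    base_options=BaseOptions(model_asset_path='deeplab_v3.tflite'),
    running_mode=VisionRunningMode.IMAGE,
    output_confidence_masks=True)

with ImageSegmenter.create_from_options(options) as segmenter:
    ...  # call segmenter.segment(...) here, see below
```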

get_graph_config

Returns the canonicalized CalculatorGraphConfig of the underlying graph.
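
A tiny sketch (placeholder model path); the returned proto can be printed in text form for debugging:

```python
import mediapipe as mp

with mp.tasks.vision.ImageSegmenter.create_from_model_path('deeplab_v3.tflite') as segmenter:
    graph_config = segmenter.get_graph_config()
    print(graph_config)  # text-format CalculatorGraphConfig of the underlying graph
```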

segment

Performs the actual segmentation task on the provided MediaPipe Image.

Args
image MediaPipe Image.
image_processing_options Options for image processing.

Returns
A segmentation result object that contains a list of segmentation masks as images. If output_category_mask is True, the result contains a single uint8 category mask; if output_confidence_masks is True, it contains one float32 confidence mask per category.

Raises
ValueError If any of the input arguments is invalid.
RuntimeError If image segmentation failed to run.
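
A hedged sketch of image-mode segmentation that lists the categories present in the returned category mask (the field name category_mask, the label pairing, and the paths are assumptions):

```python
import mediapipe as mp
import numpy as np

options = mp.tasks.vision.ImageSegmenterOptions(
    base_options=mp.tasks.BaseOptions(model_asset_path='deeplab_v3.tflite'),
    output_category_mask=True)

with mp.tasks.vision.ImageSegmenter.create_from_options(options) as segmenter:
    result = segmenter.segment(mp.Image.create_from_file('input.jpg'))
    mask = result.category_mask.numpy_view()  # uint8, shape [height, width]
    for index in np.unique(mask):
        label = segmenter.labels[index] if segmenter.labels else str(index)
        print(index, label)
```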

segment_async

Sends live image data (an Image with a unique timestamp) to perform image segmentation.

Only use this method when the ImageSegmenter is created with the live stream running mode. The input timestamps should be monotonically increasing for adjacent calls of this method. This method returns immediately after the input image is accepted. The results will be available via the result_callback provided in the ImageSegmenterOptions. The segment_async method is designed to process live stream data such as camera input. To lower the overall latency, the image segmenter may drop input images if needed. In other words, an output is not guaranteed for every input image.

The result_callback provides:

  • A segmentation result object that contains a list of segmentation masks as images.
  • The input image that the image segmenter runs on.
  • The input timestamp in milliseconds.

Args
image MediaPipe Image.
timestamp_ms The timestamp of the input image in milliseconds.
image_processing_options Options for image processing.

Raises
ValueError If the current input timestamp is smaller than what the image segmenter has already processed.
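
A hedged live-stream sketch using OpenCV camera capture (the OpenCV usage, the callback body, and the paths are illustrative assumptions):

```python
import time

import cv2
import mediapipe as mp

ImageSegmenter = mp.tasks.vision.ImageSegmenter
ImageSegmenterOptions = mp.tasks.vision.ImageSegmenterOptions
VisionRunningMode = mp.tasks.vision.RunningMode

def on_result(result, output_image: mp.Image, timestamp_ms: int):
    # Called asynchronously; frames may be dropped under load.
    print('segmentation result at', timestamp_ms, 'ms')

options = ImageSegmenterOptions(
    base_options=mp.tasks.BaseOptions(model_asset_path='deeplab_v3.tflite'),
    running_mode=VisionRunningMode.LIVE_STREAM,
    output_category_mask=True,
    result_callback=on_result)

cap = cv2.VideoCapture(0)
with ImageSegmenter.create_from_options(options) as segmenter:
    while cap.isOpened():
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=frame_rgb)
        # Timestamps must be monotonically increasing across calls.
        segmenter.segment_async(mp_image, int(time.monotonic() * 1000))
cap.release()
```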

segment_for_video

Performs segmentation on the provided video frames.

Only use this method when the ImageSegmenter is created with the video running mode. It's required to provide the video frame's timestamp (in milliseconds) along with the video frame. The input timestamps should be monotonically increasing for adjacent calls of this method.

Args
image MediaPipe Image.
timestamp_ms The timestamp of the input video frame in milliseconds.
image_processing_options Options for image processing.

Returns
A segmentation result object that contains a list of segmentation masks as images. If output_category_mask is True, the result contains a single uint8 category mask; if output_confidence_masks is True, it contains one float32 confidence mask per category.
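
A hedged video-mode sketch that derives per-frame timestamps from the video's frame rate (the OpenCV decoding and the paths are illustrative assumptions):

```python
import cv2
import mediapipe as mp

options = mp.tasks.vision.ImageSegmenterOptions(
    base_options=mp.tasks.BaseOptions(model_asset_path='deeplab_v3.tflite'),
    running_mode=mp.tasks.vision.RunningMode.VIDEO,
    output_category_mask=True)

cap = cv2.VideoCapture('input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
with mp.tasks.vision.ImageSegmenter.create_from_options(options) as segmenter:
    frame_index = 0
    while cap.isOpened():
        ok, frame_bgr = cap.read()
        if not ok:
            break
        mp_image = mp.Image(image_format=mp.ImageFormat.SRGB,
                            data=cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        timestamp_ms = int(frame_index * 1000 / fps)  # monotonically increasing
        result = segmenter.segment_for_video(mp_image, timestamp_ms)
        frame_index += 1
cap.release()
```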