Class that performs image classification on images.
mp.tasks.vision.ImageClassifier(
    graph_config: calculator_pb2.CalculatorGraphConfig,
    running_mode: mp.tasks.vision.RunningMode,
    packet_callback: Optional[Callable[[Mapping[str, packet_module.Packet]], None]] = None
) -> None
The API expects a TFLite model with optional, but strongly recommended,
TFLite Model Metadata.
Input tensor (kTfLiteUInt8/kTfLiteFloat32):
- image input of size [batch x height x width x channels].
- batch inference is not supported (batch is required to be 1).
- only RGB inputs are supported (channels is required to be 3).
- if type is kTfLiteFloat32, NormalizationOptions are required to be
  attached to the metadata for input normalization.
At least one output tensor (kTfLiteUInt8/kTfLiteFloat32) with:
- N classes and either 2 or 4 dimensions, i.e. [1 x N] or [1 x 1 x 1 x N].
- optional (but recommended) label map(s) as AssociatedFiles with type
TENSOR_AXIS_LABELS, containing one label per line. The first such
AssociatedFile (if any) is used to fill the
class_name
field of the
results. The display_name
field is filled from the AssociatedFile (if
any) whose locale matches the display_names_locale
field of the
ImageClassifierOptions
used at creation time ("en" by default, i.e.
English). If none of these are available, only the index
field of the
results will be filled.
- optional score calibration can be attached using ScoreCalibrationOptions
and an AssociatedFile with type TENSOR_AXIS_SCORE_CALIBRATION. See
metadata_schema.fbs for more details.
An example of such a model can be found at:
https://tfhub.dev/bohemian-visual-recognition-alliance/lite-model/models/mushroom-identification_v1/1
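For orientation, here is a minimal sketch of running such a model through this class in the default image mode. The model and image paths are placeholders to replace with your own files; the imports follow the mediapipe Python package layout, and this is a usage sketch rather than an official sample.

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Placeholder paths: substitute a TFLite image classification model with
# metadata (e.g. the mushroom identification model linked above) and an image.
MODEL_PATH = 'classifier.tflite'
IMAGE_PATH = 'image.jpg'

options = vision.ImageClassifierOptions(
    base_options=python.BaseOptions(model_asset_path=MODEL_PATH),
    max_results=3)

with vision.ImageClassifier.create_from_options(options) as classifier:
  image = mp.Image.create_from_file(IMAGE_PATH)
  result = classifier.classify(image)
  # Each category carries index, score, category_name and display_name,
  # populated from the label map metadata described above.
  for category in result.classifications[0].categories:
    print(f'{category.category_name}: {category.score:.3f}')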
Args:
  graph_config: The MediaPipe vision task graph config proto.
  running_mode: The running mode of the MediaPipe vision task.
  packet_callback: The optional packet callback for getting results
    asynchronously in the live stream mode.
Raises:
  ValueError: The packet callback is not properly set based on the task's
    running mode.
Methods
classify
classify(
    image: mp.Image,
    image_processing_options: Optional[ImageProcessingOptions] = None
) -> mp.tasks.vision.ImageClassifierResult
Performs image classification on the provided MediaPipe Image.
Args:
  image: MediaPipe Image.
  image_processing_options: Options for image processing.
Returns:
  A classification result object that contains a list of classifications.
Raises:
  ValueError: If any of the input arguments is invalid.
  RuntimeError: If image classification failed to run.
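A hedged sketch of supplying image_processing_options to classify, here only setting rotation_degrees. The model and image paths are placeholders, and the ImageProcessingOptions import path shown is the one used inside the MediaPipe source tree (depending on the installed version it may also be re-exported at a higher level).

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from mediapipe.tasks.python.vision.core.image_processing_options import (
    ImageProcessingOptions)

options = vision.ImageClassifierOptions(
    base_options=python.BaseOptions(model_asset_path='classifier.tflite'))

with vision.ImageClassifier.create_from_options(options) as classifier:
  image = mp.Image.create_from_file('image.jpg')
  # Rotate the input before classification; the rotation must be a multiple
  # of 90 degrees. A region_of_interest rect could be supplied as well.
  result = classifier.classify(
      image,
      image_processing_options=ImageProcessingOptions(rotation_degrees=90))
  top = result.classifications[0].categories[0]
  print(top.category_name, top.score)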
classify_async
classify_async(
    image: mp.Image,
    timestamp_ms: int,
    image_processing_options: Optional[ImageProcessingOptions] = None
) -> None
Sends live image data (an Image with a unique timestamp) to perform image classification.
Only use this method when the ImageClassifier is created with the live
stream running mode. The input timestamps should be monotonically increasing
for adjacent calls of this method. This method returns immediately after the
input image is accepted. The results will be available via the result_callback
provided in the ImageClassifierOptions. The classify_async method is designed
to process live stream data such as camera input. To lower the overall
latency, the image classifier may drop input images if needed. In other words,
it is not guaranteed to produce output for every input image.
The result_callback provides:
- A classification result object that contains a list of classifications.
- The input image that the image classifier runs on.
- The input timestamp in milliseconds.
Args:
  image: MediaPipe Image.
  timestamp_ms: The timestamp of the input image in milliseconds.
  image_processing_options: Options for image processing.
Raises:
  ValueError: If the current input timestamp is smaller than what the image
    classifier has already processed.
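A sketch of the live stream flow, under the assumptions that the model path is a placeholder and that OpenCV is used here purely to grab camera frames; only the ImageClassifier calls themselves come from this API.

import time

import cv2  # assumption: OpenCV is used here only to capture camera frames
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

def print_result(result, output_image, timestamp_ms):
  # Receives the classification result, the input image and its timestamp.
  # Invoked asynchronously; some input frames may have been dropped.
  if result.classifications and result.classifications[0].categories:
    top = result.classifications[0].categories[0]
    print(f'{timestamp_ms} ms: {top.category_name} ({top.score:.2f})')

options = vision.ImageClassifierOptions(
    base_options=python.BaseOptions(model_asset_path='classifier.tflite'),
    running_mode=vision.RunningMode.LIVE_STREAM,
    result_callback=print_result)

cap = cv2.VideoCapture(0)
with vision.ImageClassifier.create_from_options(options) as classifier:
  while cap.isOpened():
    ok, bgr = cap.read()
    if not ok:
      break
    frame = mp.Image(image_format=mp.ImageFormat.SRGB,
                     data=cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
    # Timestamps must be monotonically increasing across calls.
    classifier.classify_async(frame, int(time.monotonic() * 1000))
cap.release()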
classify_for_video
classify_for_video(
    image: mp.Image,
    timestamp_ms: int,
    image_processing_options: Optional[ImageProcessingOptions] = None
) -> mp.tasks.vision.ImageClassifierResult
Performs image classification on the provided video frames.
Only use this method when the ImageClassifier is created with the video
running mode. It's required to provide the video frame's timestamp (in
milliseconds) along with the video frame. The input timestamps should be
monotonically increasing for adjacent calls of this method.
Args:
  image: MediaPipe Image.
  timestamp_ms: The timestamp of the input video frame in milliseconds.
  image_processing_options: Options for image processing.
Returns:
  A classification result object that contains a list of classifications.
Raises:
  ValueError: If any of the input arguments is invalid.
  RuntimeError: If image classification failed to run.
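A sketch of the video flow, assuming the video path is a placeholder and that OpenCV is used only to decode frames; timestamps are derived from the frame index and the stream's frame rate so they increase monotonically.

import cv2  # assumption: OpenCV is used here only to decode the video file
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

options = vision.ImageClassifierOptions(
    base_options=python.BaseOptions(model_asset_path='classifier.tflite'),
    running_mode=vision.RunningMode.VIDEO)

cap = cv2.VideoCapture('video.mp4')  # placeholder video path
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

with vision.ImageClassifier.create_from_options(options) as classifier:
  frame_index = 0
  while True:
    ok, bgr = cap.read()
    if not ok:
      break
    frame = mp.Image(image_format=mp.ImageFormat.SRGB,
                     data=cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
    # Derive a monotonically increasing timestamp from the frame index.
    result = classifier.classify_for_video(
        frame, int(1000 * frame_index / fps))
    frame_index += 1
cap.release()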
close
close() -> None
Shuts down the MediaPipe vision task instance.
Raises:
  RuntimeError: If the MediaPipe vision task failed to close.
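Resources can be released either by calling close() explicitly or, since the vision tasks support the context-manager protocol, by using a with statement. A short sketch with a placeholder model path:

from mediapipe.tasks.python import vision

classifier = vision.ImageClassifier.create_from_model_path('classifier.tflite')
try:
  ...  # classify(), classify_for_video() or classify_async() calls go here
finally:
  classifier.close()  # equivalent to leaving a `with classifier:` block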
convert_to_normalized_rect
convert_to_normalized_rect(
    options: ImageProcessingOptions,
    image: mp.Image,
    roi_allowed: bool = True
) -> mp.tasks.components.containers.NormalizedRect
Converts from ImageProcessingOptions to NormalizedRect, performing sanity checks on-the-fly.
If the input ImageProcessingOptions is not present, returns a default
NormalizedRect covering the whole image with rotation set to 0. If
'roi_allowed' is false, an error will be returned if the input
ImageProcessingOptions has its 'region_of_interest' field set.
Args:
  options: Options for image processing.
  image: The image to process.
  roi_allowed: Indicates if the region_of_interest field is allowed to be set.
    By default, it's set to True.
Returns:
  A normalized rect proto that represents the image processing options.
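A short sketch of what this conversion yields, assuming placeholder model and image paths and the ImageProcessingOptions import path from the MediaPipe source tree:

import mediapipe as mp
from mediapipe.tasks.python import vision
from mediapipe.tasks.python.vision.core.image_processing_options import (
    ImageProcessingOptions)

classifier = vision.ImageClassifier.create_from_model_path('classifier.tflite')
image = mp.Image.create_from_file('image.jpg')

# No options: a default rect covering the whole image, with rotation 0.
print(classifier.convert_to_normalized_rect(None, image))
# With a rotation set (a multiple of 90 degrees), the value is folded into
# the returned rect's rotation field.
print(classifier.convert_to_normalized_rect(
    ImageProcessingOptions(rotation_degrees=90), image))
classifier.close()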
create_from_model_path
@classmethod
create_from_model_path(
model_path: str
) -> 'ImageClassifier'
Creates an ImageClassifier object from a TensorFlow Lite model and the default
ImageClassifierOptions.
Note that the created ImageClassifier instance is in image mode, for
classifying objects on single image inputs.
Args:
  model_path: Path to the model.
Returns:
  ImageClassifier object that's created from the model file and the default
  ImageClassifierOptions.
Raises:
  ValueError: If failed to create ImageClassifier object from the provided
    file such as invalid file path.
  RuntimeError: If other types of error occurred.
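A one-step sketch with a placeholder model path; the resulting classifier runs in image mode only:

import mediapipe as mp
from mediapipe.tasks.python import vision

with vision.ImageClassifier.create_from_model_path(
    'classifier.tflite') as classifier:
  result = classifier.classify(mp.Image.create_from_file('image.jpg'))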
create_from_options
@classmethod
create_from_options(
    options: mp.tasks.vision.ImageClassifierOptions
) -> 'ImageClassifier'
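For completeness, a minimal sketch of this factory with explicitly configured options (placeholder model path); the running mode chosen here has to match the classify method used afterwards:

from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# IMAGE pairs with classify(), VIDEO with classify_for_video(), and
# LIVE_STREAM with classify_async() plus a result_callback.
options = vision.ImageClassifierOptions(
    base_options=python.BaseOptions(model_asset_path='classifier.tflite'),
    running_mode=vision.RunningMode.VIDEO,
    score_threshold=0.5)
classifier = vision.ImageClassifier.create_from_options(options)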