# mp.tasks.vision.GestureRecognizerOptions

[View source on GitHub](https://github.com/google/mediapipe/blob/master/mediapipe/tasks/python/vision/gesture_recognizer.py#L163-L251)

Options for the gesture recognizer task.

    mp.tasks.vision.GestureRecognizerOptions(
        base_options: mp.tasks.BaseOptions,
        running_mode: mp.tasks.vision.RunningMode = mp.tasks.vision.RunningMode.IMAGE,
        num_hands: int = 1,
        min_hand_detection_confidence: float = 0.5,
        min_hand_presence_confidence: float = 0.5,
        min_tracking_confidence: float = 0.5,
        canned_gesture_classifier_options: mp.tasks.components.processors.ClassifierOptions = dataclasses.field(default_factory=_ClassifierOptions),
        custom_gesture_classifier_options: mp.tasks.components.processors.ClassifierOptions = dataclasses.field(default_factory=_ClassifierOptions),
        result_callback: Optional[Callable[[GestureRecognizerResult, image_module.Image, int], None]] = None
    )

Attributes
----------

base_options
Base options for the hand gesture recognizer task.
running_mode
The running mode of the task. Defaults to the image mode. The gesture
recognizer task has three running modes: 1) The image mode for recognizing
hand gestures on single image inputs. 2) The video mode for recognizing hand
gestures on the decoded frames of a video. 3) The live stream mode for
recognizing hand gestures on a live stream of input data, such as frames from
a camera. A minimal image-mode configuration is sketched after this attribute list.
num_hands
The maximum number of hands that can be detected by the recognizer.
min_hand_detection_confidence
The minimum confidence score for the hand
detection to be considered successful.
min_hand_presence_confidence
The minimum confidence score of hand presence in the hand landmark detection.
min_tracking_confidence
The minimum confidence score for the hand tracking
to be considered successful.
canned_gesture_classifier_options
Options for configuring the canned
gestures classifier, such as score threshold, allow list and deny list of
gestures. The categories for canned gesture classifiers are: ["None",
"Closed_Fist", "Open_Palm", "Pointing_Up", "Thumb_Down", "Thumb_Up",
"Victory", "ILoveYou"]. Note this option is subject to change.
custom_gesture_classifier_options
Options for configuring the custom
gestures classifier, such as score threshold, allow list and deny list of
gestures. Note this option is subject to change.
result_callback
The user-defined result callback for processing live stream data. The result
callback should only be specified when the running mode is set to the live
stream mode; a live stream configuration with a callback is sketched at the end of this page.
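The attributes above correspond one-to-one to the constructor fields shown at the top of this page. Below is a minimal sketch of an image-mode configuration; the model bundle path `gesture_recognizer.task` and the input file `hand.jpg` are placeholder assumptions, not files provided by the library.

    import mediapipe as mp

    BaseOptions = mp.tasks.BaseOptions
    GestureRecognizer = mp.tasks.vision.GestureRecognizer
    GestureRecognizerOptions = mp.tasks.vision.GestureRecognizerOptions
    VisionRunningMode = mp.tasks.vision.RunningMode

    # Placeholder model bundle path (assumption): point model_asset_path at a
    # downloaded gesture recognizer .task bundle.
    options = GestureRecognizerOptions(
        base_options=BaseOptions(model_asset_path='gesture_recognizer.task'),
        running_mode=VisionRunningMode.IMAGE,  # the default running mode
        num_hands=2,                           # detect up to two hands
        min_hand_detection_confidence=0.5,
        min_hand_presence_confidence=0.5,
        min_tracking_confidence=0.5)

    with GestureRecognizer.create_from_options(options) as recognizer:
        # 'hand.jpg' is a placeholder input image.
        image = mp.Image.create_from_file('hand.jpg')
        result = recognizer.recognize(image)
        # result.gestures holds one ranked category list per detected hand.
        for hand_gestures in result.gestures:
            print(hand_gestures[0].category_name, hand_gestures[0].score)

In video mode the equivalent call is `recognize_for_video(image, timestamp_ms)`, with frame timestamps supplied by the caller.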
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-01 UTC."],[],[],null,["# mp.tasks.vision.GestureRecognizerOptions\n\n\u003cbr /\u003e\n\n|----------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/google/mediapipe/blob/master/mediapipe/tasks/python/vision/gesture_recognizer.py#L163-L251) |\n\nOptions for the gesture recognizer task. \n\n mp.tasks.vision.GestureRecognizerOptions(\n base_options: ../../../mp/tasks/BaseOptions,\n running_mode: ../../../mp/tasks/vision/RunningMode = ../../../mp/tasks/vision/FaceDetectorOptions#running_mode,\n num_hands: int = 1,\n min_hand_detection_confidence: float = 0.5,\n min_hand_presence_confidence: float = 0.5,\n min_tracking_confidence: float = 0.5,\n canned_gesture_classifier_options: ../../../mp/tasks/components/processors/ClassifierOptions = dataclasses.field(default_factory=_ClassifierOptions),\n custom_gesture_classifier_options: ../../../mp/tasks/components/processors/ClassifierOptions = dataclasses.field(default_factory=_ClassifierOptions),\n result_callback: Optional[Callable[[GestureRecognizerResult, image_module.Image, int], None]] = None\n )\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Attributes ---------- ||\n|-------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `base_options` | Base options for the hand gesture recognizer task. |\n| `running_mode` | The running mode of the task. Default to the image mode. Gesture recognizer task has three running modes: 1) The image mode for recognizing hand gestures on single image inputs. 2) The video mode for recognizing hand gestures on the decoded frames of a video. 3) The live stream mode for recognizing hand gestures on a live stream of input data, such as from camera. |\n| `num_hands` | The maximum number of hands can be detected by the recognizer. |\n| `min_hand_detection_confidence` | The minimum confidence score for the hand detection to be considered successful. |\n| `min_hand_presence_confidence` | The minimum confidence score of hand presence score in the hand landmark detection. |\n| `min_tracking_confidence` | The minimum confidence score for the hand tracking to be considered successful. |\n| `canned_gesture_classifier_options` | Options for configuring the canned gestures classifier, such as score threshold, allow list and deny list of gestures. The categories for canned gesture classifiers are: \\[\"None\", \"Closed_Fist\", \"Open_Palm\", \"Pointing_Up\", \"Thumb_Down\", \"Thumb_Up\", \"Victory\", \"ILoveYou\"\\]. Note this option is subject to change. 
|\n| `custom_gesture_classifier_options` | Options for configuring the custom gestures classifier, such as score threshold, allow list and deny list of gestures. Note this option is subject to change. |\n| `result_callback` | The user-defined result callback for processing live stream data. The result callback should only be specified when the running mode is set to the live stream mode. |\n\n\u003cbr /\u003e\n\nMethods\n-------\n\n### `__eq__`\n\n __eq__(\n other\n )\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Class Variables --------------- ||\n|-------------------------------|------------------------------------------|\n| min_hand_detection_confidence | `0.5` |\n| min_hand_presence_confidence | `0.5` |\n| min_tracking_confidence | `0.5` |\n| num_hands | `1` |\n| result_callback | `None` |\n| running_mode | `\u003cVisionTaskRunningMode.IMAGE: 'IMAGE'\u003e` |\n\n\u003cbr /\u003e"]]
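To close the reference, here is a hedged sketch of the live stream mode combined with a canned gesture allowlist. The callback signature mirrors the `Callable` in the constructor above; the frame source `camera_frames()` and the model path are illustrative assumptions, and `ClassifierOptions` refers to `mp.tasks.components.processors.ClassifierOptions`.

    import mediapipe as mp

    BaseOptions = mp.tasks.BaseOptions
    ClassifierOptions = mp.tasks.components.processors.ClassifierOptions
    GestureRecognizer = mp.tasks.vision.GestureRecognizer
    GestureRecognizerOptions = mp.tasks.vision.GestureRecognizerOptions
    VisionRunningMode = mp.tasks.vision.RunningMode

    def on_result(result, output_image, timestamp_ms):
        # Invoked asynchronously with (GestureRecognizerResult, mp.Image, int)
        # for every frame submitted through recognize_async().
        if result.gestures:
            print(timestamp_ms, result.gestures[0][0].category_name)

    def camera_frames():
        # Hypothetical frame source; replace with real capture (e.g. OpenCV
        # frames converted to RGB numpy arrays). Yields nothing in this sketch.
        return []

    options = GestureRecognizerOptions(
        base_options=BaseOptions(model_asset_path='gesture_recognizer.task'),  # placeholder path
        running_mode=VisionRunningMode.LIVE_STREAM,
        # Report only these canned gestures, and only above a 0.6 score.
        canned_gesture_classifier_options=ClassifierOptions(
            score_threshold=0.6,
            category_allowlist=['Thumb_Up', 'Victory']),
        result_callback=on_result)  # only valid in live stream mode

    with GestureRecognizer.create_from_options(options) as recognizer:
        # Timestamps passed to recognize_async() must be monotonically increasing.
        for timestamp_ms, frame_rgb in enumerate(camera_frames()):
            mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=frame_rgb)
            recognizer.recognize_async(mp_image, timestamp_ms)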