[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-07 UTC."],[],[],null,["# mediapipe_model_maker.gesture_recognizer.HandDataPreprocessingParams\n\n\u003cbr /\u003e\n\n|----------------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/google/mediapipe/blob/master/mediapipe/model_maker/python/vision/gesture_recognizer/dataset.py#L37-L46) |\n\nA dataclass wraps the hand data preprocessing hyperparameters. \n\n mediapipe_model_maker.gesture_recognizer.HandDataPreprocessingParams(\n shuffle: bool = True, min_detection_confidence: float = 0.7\n )\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Attributes ---------- ||\n|----------------------------|----------------------------------------------------------------|\n| `shuffle` | A boolean controlling if shuffle the dataset. Default to true. |\n| `min_detection_confidence` | confidence threshold for hand detection. |\n\n\u003cbr /\u003e\n\nMethods\n-------\n\n### `__eq__`\n\n __eq__(\n other\n )\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Class Variables --------------- ||\n|--------------------------|--------|\n| min_detection_confidence | `0.7` |\n| shuffle | `True` |\n\n\u003cbr /\u003e"]]