Learning rate to use for gradient descent training.
batch_size
Batch size for training.
epochs
Number of training iterations over the dataset.
do_fine_tuning
If true, the base module is trained together with the classification layer on top.
l1_regularizer
A regularizer that applies an L1 regularization penalty.
l2_regularizer
A regularizer that applies an L2 regularization penalty.
label_smoothing
Amount of label smoothing to apply. See tf.keras.losses for more details.
do_data_augmentation
A boolean controlling whether the training dataset is augmented by randomly distorting input images, including random cropping, flipping, etc. See the utils.image_preprocessing documentation for details.
decay_samples
Number of training samples used to calculate the decay steps and create the training optimizer.
warmup_steps
Number of warmup steps for a linearly increasing warmup schedule on the learning rate. Used to set up the warmup schedule via model_util.WarmUp.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-07 UTC."],[],[],null,["# mediapipe_model_maker.image_classifier.HParams\n\n\u003cbr /\u003e\n\n|----------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/google/mediapipe/blob/master/mediapipe/model_maker/python/vision/image_classifier/hyperparameters.py#L21-L61) |\n\nThe hyperparameters for training image classifiers.\n\nInherits From: [`BaseHParams`](../../mediapipe_model_maker/face_stylizer/hyperparameters/hp/BaseHParams) \n\n mediapipe_model_maker.image_classifier.HParams(\n learning_rate: float = 0.001,\n batch_size: int = 2,\n epochs: int = 10,\n steps_per_epoch: Optional[int] = None,\n class_weights: Optional[Mapping[int, float]] = None,\n shuffle: bool = False,\n repeat: bool = False,\n export_dir: str = tempfile.mkdtemp(),\n distribution_strategy: str = 'off',\n num_gpus: int = 0,\n tpu: str = '',\n do_fine_tuning: bool = False,\n l1_regularizer: float = 0.0,\n l2_regularizer: float = 0.0001,\n label_smoothing: float = 0.1,\n do_data_augmentation: bool = True,\n decay_samples: int = (10000 * 256),\n warmup_epochs: int = 2,\n checkpoint_frequency: int = 1,\n one_hot: bool = True,\n multi_labels: bool = False\n )\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Attributes ---------- ||\n|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `learning_rate` | Learning rate to use for gradient descent training. |\n| `batch_size` | Batch size for training. |\n| `epochs` | Number of training iterations over the dataset. |\n| `do_fine_tuning` | If true, the base module is trained together with the classification layer on top. |\n| `l1_regularizer` | A regularizer that applies a L1 regularization penalty. |\n| `l2_regularizer` | A regularizer that applies a L2 regularization penalty. |\n| `label_smoothing` | Amount of label smoothing to apply. See tf.keras.losses for more details. |\n| `do_data_augmentation` | A boolean controlling whether the training dataset is augmented by randomly distorting input images, including random cropping, flipping, etc. See utils.image_preprocessing documentation for details. |\n| `decay_samples` | Number of training samples used to calculate the decay steps and create the training optimizer. |\n| `warmup_steps` | Number of warmup steps for a linear increasing warmup schedule on learning rate. Used to set up warmup schedule by model_util.WarmUp. |\n| `checkpoint_frequency` | Frequency to save checkpoint. |\n| `one_hot` | Whether the label data is score input or one-hot. |\n| `multi_labels` | Whether the model predict multi labels. 
|\n| `steps_per_epoch` | Dataclass field |\n| `class_weights` | Dataclass field |\n| `shuffle` | Dataclass field |\n| `repeat` | Dataclass field |\n| `export_dir` | Dataclass field |\n| `distribution_strategy` | Dataclass field |\n| `num_gpus` | Dataclass field |\n| `tpu` | Dataclass field |\n| `warmup_epochs` | Dataclass field |\n\n\u003cbr /\u003e\n\nMethods\n-------\n\n### `get_strategy`\n\n[View source](https://github.com/google/mediapipe/blob/master/mediapipe/model_maker/python/core/hyperparameters.py#L86-L87) \n\n get_strategy()\n\n### `__eq__`\n\n __eq__(\n other\n )\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Class Variables --------------- ||\n|-----------------------|----------------------------|\n| batch_size | `2` |\n| checkpoint_frequency | `1` |\n| class_weights | `None` |\n| decay_samples | `2560000` |\n| distribution_strategy | `'off'` |\n| do_data_augmentation | `True` |\n| do_fine_tuning | `False` |\n| epochs | `10` |\n| export_dir | `'/tmpfs/tmp/tmpnt_h4p9w'` |\n| l1_regularizer | `0.0` |\n| l2_regularizer | `0.0001` |\n| label_smoothing | `0.1` |\n| learning_rate | `0.001` |\n| multi_labels | `False` |\n| num_gpus | `0` |\n| one_hot | `True` |\n| repeat | `False` |\n| shuffle | `False` |\n| steps_per_epoch | `None` |\n| tpu | `''` |\n| warmup_epochs | `2` |\n\n\u003cbr /\u003e"]]