Hyperparameters used for training models.
mediapipe_model_maker.face_stylizer.hyperparameters.hp.BaseHParams(
learning_rate: float,
batch_size: int,
epochs: int,
steps_per_epoch: Optional[int] = None,
class_weights: Optional[Mapping[int, float]] = None,
shuffle: bool = False,
repeat: bool = False,
export_dir: str = tempfile.mkdtemp(),
distribution_strategy: str = 'off',
num_gpus: int = 0,
tpu: str = ''
)
A common set of hyperparameters shared by the training jobs of all model
maker tasks.
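For example, a minimal construction could look like the following sketch; the numeric values and export directory are illustrative, not recommended settings.

from mediapipe_model_maker.face_stylizer.hyperparameters import hp

# Illustrative values only; tune them for your own dataset.
hparams = hp.BaseHParams(
    learning_rate=0.001,
    batch_size=4,
    epochs=100,
    export_dir='/tmp/face_stylizer_checkpoints',
)
print(hparams.shuffle)          # False (default)
print(hparams.steps_per_epoch)  # None (default; see steps_per_epoch below)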
Attributes

learning_rate
    The learning rate to use for gradient descent training.

batch_size
    Batch size for training.

epochs
    Number of training iterations over the dataset.

steps_per_epoch
    An optional integer indicating the number of training steps per epoch.
    If not set, the training pipeline calculates the default steps per
    epoch as the training dataset size divided by batch size (see the
    sketch after this table).

class_weights
    An optional mapping of indices to weights for weighting the loss
    function during training.

shuffle
    True if the dataset is shuffled before training.

repeat
    True if the training dataset is repeated infinitely to support
    training without checking the dataset size.

export_dir
    The location of the model checkpoint files.

distribution_strategy
    A string specifying which Distribution Strategy to use. Accepted
    values are 'off', 'one_device', 'mirrored', 'parameter_server',
    'multi_worker_mirrored', and 'tpu' -- case insensitive. 'off' means
    not to use Distribution Strategy; 'tpu' means to use TPUStrategy using
    tpu_address. See the tf.distribute.Strategy documentation for more
    details: https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy.
    A usage sketch appears under get_strategy below.

num_gpus
    How many GPUs to use at each worker with the DistributionStrategies
    API. The default is 0.

tpu
    The TPU resource to be used for training. This should be either the
    name used when creating the Cloud TPU, a grpc://ip.address.of.tpu:8470
    URL, or an empty string if using a local TPU.
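As a minimal sketch of the steps_per_epoch fallback described above; the dataset size is an assumed number for illustration, not something this class computes for you.

from mediapipe_model_maker.face_stylizer.hyperparameters import hp

dataset_size = 100  # assumed size of the training dataset, for illustration
hparams = hp.BaseHParams(learning_rate=0.001, batch_size=4, epochs=100)

# When steps_per_epoch is None, the pipeline falls back to
# dataset size divided by batch size.
steps = hparams.steps_per_epoch or dataset_size // hparams.batch_size
print(steps)  # 25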
Methods
get_strategy
get_strategy()
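get_strategy() is expected to return the tf.distribute.Strategy implied by distribution_strategy, num_gpus, and tpu, so model construction typically happens inside its scope. A hedged sketch, assuming two visible GPUs:

from mediapipe_model_maker.face_stylizer.hyperparameters import hp

hparams = hp.BaseHParams(
    learning_rate=0.001,
    batch_size=4,
    epochs=100,
    distribution_strategy='mirrored',  # case insensitive, per the table above
    num_gpus=2,                        # illustrative; requires two visible GPUs
)
strategy = hparams.get_strategy()
with strategy.scope():
    # Build and compile the model here so its variables are created under
    # the selected distribution strategy.
    pass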
__eq__
__eq__(other)
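Given the documented other parameter, __eq__ supports direct comparison of two hyperparameter instances. The sketch below assumes field-by-field equality, as is typical for a dataclass, with illustrative values:

from mediapipe_model_maker.face_stylizer.hyperparameters import hp

a = hp.BaseHParams(learning_rate=0.001, batch_size=4, epochs=10, export_dir='/tmp/run')
b = hp.BaseHParams(learning_rate=0.001, batch_size=4, epochs=10, export_dir='/tmp/run')
c = hp.BaseHParams(learning_rate=0.01, batch_size=4, epochs=10, export_dir='/tmp/run')

print(a == b)  # True: every field matches
print(a == c)  # False: learning_rate differs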
Class Variables

class_weights
    None

distribution_strategy
    'off'

export_dir
    A temporary directory created by tempfile.mkdtemp() (the literal path
    shown in generated docs varies per build)

num_gpus
    0

repeat
    False

shuffle
    False

steps_per_epoch
    None

tpu
    ''