The hyperparameters for training face stylizers.

Inherits From: BaseHParams

learning_rate Learning rate to use for gradient descent training.
batch_size Batch size for training.
epochs Number of training epochs (full passes over the dataset).
beta_1 beta_1 parameter used in tf.keras.optimizers.Adam.
beta_2 beta_2 parameter used in tf.keras.optimizers.Adam.
steps_per_epoch Number of training steps per epoch. If None, it is inferred from the dataset size and batch size.
class_weights Optional mapping of class indices to weights, used to weight the loss for imbalanced data.
shuffle Whether to shuffle the training dataset before each epoch.
repeat Whether the training dataset repeats indefinitely rather than being iterated once per epoch.
export_dir Directory where checkpoints and the exported model are written.
distribution_strategy TensorFlow distribution strategy to use, e.g. 'off', 'mirrored', or 'tpu'.
num_gpus Number of GPUs to use for training. 0 means CPU only.
tpu Name or address of the TPU to use, if any.
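
The fields above can be mirrored as a plain Python dataclass for illustration. This is a hedged sketch using the documented default values; the real class is defined in MediaPipe Model Maker and inherits from BaseHParams, and the class name here is hypothetical:

```python
from dataclasses import dataclass
from typing import Mapping, Optional

@dataclass
class FaceStylizerHParams:
    # Core training hyperparameters (defaults from the table below).
    learning_rate: float = 0.0008
    batch_size: int = 4
    epochs: int = 100
    beta_1: float = 0.0   # forwarded to tf.keras.optimizers.Adam
    beta_2: float = 0.99  # forwarded to tf.keras.optimizers.Adam
    # Dataset handling.
    steps_per_epoch: Optional[int] = None  # None: inferred from dataset size
    class_weights: Optional[Mapping[int, float]] = None
    shuffle: bool = False
    repeat: bool = False
    # Export and hardware placement.
    export_dir: str = "."  # the real default is a generated temp directory
    distribution_strategy: str = "off"
    num_gpus: int = 0
    tpu: str = ""

# Override only the fields you need; the rest keep their defaults.
hp = FaceStylizerHParams(epochs=50, shuffle=True)
```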


Default values:

batch_size 4
beta_1 0.0
beta_2 0.99
class_weights None
distribution_strategy 'off'
epochs 100
export_dir '/tmpfs/tmp/tmpnt_h4p9w'
learning_rate 0.0008
num_gpus 0
repeat False
shuffle False
steps_per_epoch None
tpu ''
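
When steps_per_epoch is left as None, training frameworks typically derive it from the dataset size and batch size. A minimal sketch of that convention (the helper name is hypothetical, not part of the documented API):

```python
def infer_steps_per_epoch(steps_per_epoch, dataset_size, batch_size):
    """Return the explicit value if set, else floor-divide dataset by batch."""
    if steps_per_epoch is not None:
        return steps_per_epoch
    return dataset_size // batch_size

# With the documented default batch_size of 4, a 100-example dataset
# yields 25 steps per epoch.
print(infer_steps_per_epoch(None, dataset_size=100, batch_size=4))  # 25
```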