Perceptual loss based on VGG19 pretrained on the ImageNet dataset.
Inherits From: PerceptualLoss
mediapipe_model_maker.face_stylizer.face_stylizer.loss_functions.VGGPerceptualLoss(
    loss_weight: Optional[mediapipe_model_maker.face_stylizer.face_stylizer.loss_functions.PerceptualLossWeight] = None
)
Perceptual loss measures high-level perceptual and semantic differences between images.
Args | |
---|---|
`loss_weight` | Loss weight coefficients. |
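A minimal usage sketch follows. The constructor call matches the signature above; the invocation on two image batches (stylized output vs. reference) and the input value range are assumptions, not part of this reference.

```python
import tensorflow as tf

from mediapipe_model_maker.face_stylizer.face_stylizer import loss_functions

# Construct the loss; passing loss_weight=None keeps the default
# PerceptualLossWeight coefficients (assumed behavior).
loss_fn = loss_functions.VGGPerceptualLoss(loss_weight=None)

# Two NHWC image batches; the [0, 1] float range is an assumption.
stylized = tf.random.uniform((1, 256, 256, 3))
reference = tf.random.uniform((1, 256, 256, 3))

# Assumed call convention: the loss object is invoked on the two batches
# and returns the VGG19-feature-based perceptual loss.
loss = loss_fn(stylized, reference)
```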
Attributes | |
---|---|
`activity_regularizer` | Optional regularizer function for the output of this layer. |
`autotune_steps_per_execution` | Settable property to enable tuning for `steps_per_execution`. |
`compute_dtype` | The dtype of the layer's computations. Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This casting is done by the base `Layer` class in `Layer.__call__`. Layers often perform certain internal computations in higher precision when `compute_dtype` is float16 or bfloat16, for numeric stability. |
`distribute_reduction_method` | The method employed to reduce per-replica values during training. Unless specified, the value "auto" will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. |
`distribute_strategy` | The `tf.distribute.Strategy` this model was created under. |
`dtype` | The dtype of the layer weights. This is equivalent to `Layer.dtype_policy.variable_dtype`. |
`dtype_policy` | The dtype policy associated with this layer. This is an instance of a `tf.keras.mixed_precision.Policy`. |
`dynamic` | Whether the layer is dynamic (eager-only); set in the constructor. |
`input` | Retrieves the input tensor(s) of a layer. Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer. |
`input_spec` | `InputSpec` instance(s) describing the input format for this layer. When you create a layer subclass, you can set `self.input_spec` to enable the layer to run input compatibility checks when it is called. For more information, see `tf.keras.layers.InputSpec`. |
`jit_compile` | Specify whether to compile the model with XLA. XLA is an optimizing compiler for machine learning. For more information on supported operations, refer to the XLA documentation; also refer to the list of known XLA issues for more details. |
`layers` | |
`losses` | List of losses added using the `add_loss()` API. Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a `tf.GradientTape` will propagate gradients back to the corresponding variables. |
`metrics` | Return metrics added using `compile()` or `add_metric()`. |
`metrics_names` | Returns the model's display labels for all outputs. |
`name` | Name of the layer (string), set in the constructor. |
`name_scope` | Returns a `tf.name_scope` instance for this class. |
`non_trainable_weights` | List of all non-trainable weights tracked by this layer. Non-trainable weights are not updated during training. They are expected to be updated manually in `call()`. |
`output` | Retrieves the output tensor(s) of a layer. Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer. |
`run_eagerly` | Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier to debug by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance. |
`steps_per_execution` | Settable `steps_per_execution` variable. Requires a compiled model. |
`submodules` | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). |
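Because `VGGPerceptualLoss` is a Keras model, the inherited attributes in the table above can be inspected directly. The snippet below is a hedged sketch reusing the `loss_fn` instance from the earlier example; which weights end up trainable depends on how the VGG19 backbone is configured internally.

```python
# Inspecting inherited tf.keras attributes (names follow the table above).
print(loss_fn.name)                        # layer name set in the constructor
print(loss_fn.dtype)                       # dtype of the layer weights
print(loss_fn.compute_dtype)               # dtype used for computations
print(len(loss_fn.non_trainable_weights))  # weights not updated during training
print(len(list(loss_fn.submodules)))       # all nested sub-modules
```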