Class that performs embedding extraction on audio clips or an audio stream.
mp.tasks.audio.AudioEmbedder(
    graph_config: mp.calculators.core.constant_side_packet_calculator_pb2.mediapipe_dot_framework_dot_calculator__pb2.CalculatorGraphConfig,
    running_mode: mp.tasks.audio.RunningMode,
    packet_callback: Optional[Callable[[Mapping[str, packet_module.Packet]], None]] = None
) -> None
This API expects a TFLite model with mandatory TFLite Model Metadata that contains the mandatory AudioProperties of the solo input audio tensor and the optional (but recommended) label items as AssociatedFiles with type TENSOR_AXIS_LABELS per output embedding tensor.
Input tensor (kTfLiteFloat32)

At least one output tensor (kTfLiteUInt8/kTfLiteFloat32) with:
- `N` components corresponding to the `N` dimensions of the returned feature vector for this output layer.
- Either 2 or 4 dimensions, i.e. `[1 x N]` or `[1 x 1 x 1 x N]`.
Raises | |
---|---|
`ValueError` | The packet callback is not properly set based on the task's running mode. |
Methods
close
close() -> None
Shuts down the MediaPipe audio task instance.
Raises | |
---|---|
`RuntimeError` | If the MediaPipe audio task failed to close. |
create_audio_record

create_audio_record(
    num_channels: int, sample_rate: int, required_input_buffer_size: int
) -> audio_record.AudioRecord

Creates an AudioRecord instance to record an audio stream. The returned AudioRecord instance is initialized; the client needs to call the appropriate method to start recording.

Note that MediaPipe audio tasks will automatically up/down-sample the audio to match the sample rate required by the model. The default sample rate of the MediaPipe pretrained audio model, YAMNet, is 16 kHz.
Args | |
---|---|
`num_channels` | The number of audio channels. |
`sample_rate` | The audio sample rate. |
`required_input_buffer_size` | The required input buffer size in number of float elements. |
Returns | |
---|---|
An AudioRecord instance. |

Raises | |
---|---|
`ValueError` | If there's a problem creating the AudioRecord instance. |
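The required buffer size is commonly derived from the model's sample rate and input length. A minimal sketch, assuming a 16 kHz mono model and an embedder already created in audio-stream mode (`required_buffer_size` and `start_recording` are illustrative helpers, not part of the API):

```python
def required_buffer_size(sample_rate: int, num_channels: int, seconds: float) -> int:
    # Number of float elements needed to hold `seconds` of audio.
    return int(sample_rate * num_channels * seconds)


def start_recording(embedder):
    # Assumes `embedder` is an AudioEmbedder created in audio-stream mode.
    # 16 kHz mono matches the default YAMNet-based pretrained model.
    buffer_size = required_buffer_size(16000, 1, 0.975)
    record = embedder.create_audio_record(
        num_channels=1,
        sample_rate=16000,
        required_input_buffer_size=buffer_size,
    )
    record.start_recording()  # the returned AudioRecord does not start by itself
    return record
```

The explicit `start_recording()` call reflects the note above: the returned instance is initialized but not yet recording.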
create_from_model_path

@classmethod
create_from_model_path(model_path: str) -> 'AudioEmbedder'

Creates an AudioEmbedder object from a TensorFlow Lite model and the default AudioEmbedderOptions.

Note that the created AudioEmbedder instance is in audio clips mode, for embedding extraction on independent audio clips.
Args | |
---|---|
`model_path` | Path to the model. |

Returns | |
---|---|
AudioEmbedder object that's created from the model file and the default AudioEmbedderOptions. |

Raises | |
---|---|
`ValueError` | If failed to create AudioEmbedder object from the provided file such as invalid file path. |
`RuntimeError` | If other types of error occurred. |
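A sketch of clip-mode construction; the model path below is hypothetical, and `check_model_path` is an illustrative pre-check that surfaces path mistakes before MediaPipe raises:

```python
def check_model_path(model_path: str) -> str:
    # Fail early with a clear message; AudioEmbedder expects a TFLite model.
    if not model_path.endswith(".tflite"):
        raise ValueError(f"Expected a .tflite model file, got: {model_path}")
    return model_path


def load_clip_embedder(model_path: str = "embedder.tflite"):  # hypothetical path
    from mediapipe.tasks.python import audio

    # The returned instance is in audio clips mode with default options.
    return audio.AudioEmbedder.create_from_model_path(check_model_path(model_path))
```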
create_from_options

@classmethod
create_from_options(
    options: mp.tasks.audio.AudioEmbedderOptions
) -> 'AudioEmbedder'

Creates the AudioEmbedder object from audio embedder options.
Args | |
---|---|
`options` | Options for the audio embedder task. |

Returns | |
---|---|
AudioEmbedder object that's created from `options`. |

Raises | |
---|---|
`ValueError` | If failed to create AudioEmbedder object from AudioEmbedderOptions such as missing the model. |
`RuntimeError` | If other types of error occurred. |
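A sketch of stream-mode construction (the model path is hypothetical). The collector below is an illustrative pattern for gathering (timestamp, result) pairs delivered by the callback:

```python
def make_result_collector():
    # Gathers (timestamp_ms, result) pairs delivered by the stream callback.
    results = []

    def on_result(result, timestamp_ms: int):
        results.append((timestamp_ms, result))

    return results, on_result


def make_stream_embedder(model_path: str = "embedder.tflite"):  # hypothetical path
    from mediapipe.tasks.python import audio
    from mediapipe.tasks.python.core.base_options import BaseOptions

    results, on_result = make_result_collector()
    options = audio.AudioEmbedderOptions(
        base_options=BaseOptions(model_asset_path=model_path),
        running_mode=audio.RunningMode.AUDIO_STREAM,
        result_callback=on_result,  # required in audio-stream mode
    )
    return audio.AudioEmbedder.create_from_options(options), results
```

Constructing with `AUDIO_STREAM` but no `result_callback` is the mismatch that triggers the ValueError described above.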
embed

embed(
    audio_clip: mp.tasks.components.containers.AudioData
) -> List[mp.tasks.audio.AudioEmbedderResult]

Performs embedding extraction on the provided audio clip.

The audio clip is represented as a MediaPipe AudioData object. The method accepts audio clips of various lengths and sample rates; the corresponding sample rate must be provided within the AudioData object.

The input audio clip may be longer than what the model is able to process in a single inference. When this occurs, the input audio clip is split into multiple chunks starting at different timestamps. For this reason, this function returns a list of EmbeddingResult objects, each associated with a timestamp corresponding to the start (in milliseconds) of the chunk on which embedding extraction was carried out.
Args | |
---|---|
`audio_clip` | MediaPipe AudioData. |

Returns | |
---|---|
An AudioEmbedderResult object that contains a list of embedding result objects, each associated with a timestamp corresponding to the start (in milliseconds) of the chunk data on which embedding extraction was carried out. |

Raises | |
---|---|
`ValueError` | If any of the input arguments is invalid, such as the sample rate is not provided in the AudioData object. |
`RuntimeError` | If audio embedding extraction failed to run. |
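A sketch of clip-mode usage, assuming an embedder already created in audio clips mode; `cosine_similarity` is a hypothetical helper for comparing the returned feature vectors, not part of the API:

```python
import math


def cosine_similarity(u, v) -> float:
    # Standard cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def embed_clip(embedder, samples, sample_rate: int):
    from mediapipe.tasks.python.components import containers

    # The sample rate must accompany the buffer inside the AudioData object,
    # otherwise embed() raises ValueError.
    clip = containers.AudioData.create_from_array(samples, sample_rate)
    # One result per processed chunk, each tagged with its start timestamp (ms).
    return embedder.embed(clip)
```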
embed_async

embed_async(
    audio_block: mp.tasks.components.containers.AudioData,
    timestamp_ms: int
) -> None

Sends audio data (a block in a continuous audio stream) to perform audio embedding extraction.

Only use this method when the AudioEmbedder is created with the audio stream running mode. The input timestamps should be monotonically increasing for adjacent calls of this method. This method returns immediately after the input audio data is accepted. The results will be available via the `result_callback` provided in the AudioEmbedderOptions. The `embed_async` method is designed to process audio stream data such as microphone input.

The input audio data may be longer than what the model is able to process in a single inference. When this occurs, the input audio block is split into multiple chunks. For this reason, the callback may be called multiple times (once per chunk) for each call to this function.

The `result_callback` provides:
- An AudioEmbedderResult object that contains a list of embeddings.
- The input timestamp in milliseconds.
Args | |
---|---|
`audio_block` | MediaPipe AudioData. |
`timestamp_ms` | The timestamp of the input audio data in milliseconds. |

Raises | |
---|---|
`ValueError` | If any of the input arguments is invalid, such as the sample rate not being provided in the AudioData object. |
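Timestamps for consecutive blocks must increase monotonically; a sketch deriving them from the block length (the helper names are illustrative, not part of the API):

```python
def block_timestamps_ms(sample_rate: int, block_size: int, num_blocks: int):
    # Start timestamp (ms) of each consecutive, gap-free audio block.
    ms_per_block = 1000.0 * block_size / sample_rate
    return [int(i * ms_per_block) for i in range(num_blocks)]


def stream_blocks(embedder, blocks, sample_rate: int):
    # Assumes `embedder` was created in audio-stream mode with a result_callback.
    from mediapipe.tasks.python.components import containers

    timestamps = block_timestamps_ms(sample_rate, len(blocks[0]), len(blocks))
    for ts, block in zip(timestamps, blocks):
        data = containers.AudioData.create_from_array(block, sample_rate)
        # Returns immediately; results arrive via the result_callback.
        embedder.embed_async(data, ts)
```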
__enter__

__enter__()

Returns `self` upon entering the runtime context.

__exit__

__exit__(
    unused_exc_type, unused_exc_value, unused_traceback
)

Shuts down the MediaPipe audio task instance on exit of the context manager.

Raises | |
---|---|
`RuntimeError` | If the MediaPipe audio task failed to close. |
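The stand-in class below (not part of MediaPipe) illustrates the same protocol: `__enter__` returns the instance and `__exit__` guarantees `close()` runs, which is what using AudioEmbedder in a `with` statement relies on:

```python
class ClosableTask:
    # Stand-in illustrating the context-manager protocol AudioEmbedder follows.
    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self  # the task instance itself is bound to the `as` target

    def __exit__(self, unused_exc_type, unused_exc_value, unused_traceback):
        self.close()  # runs even if the body raised

    def close(self):
        self.closed = True


with ClosableTask() as task:
    pass
assert task.closed  # close() was invoked by __exit__
```

With the real task, `with audio.AudioEmbedder.create_from_model_path(path) as embedder:` gives the same guarantee that `close()` is called on exit.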