A container for the input tensor metadata information of Bert models.
tflite_support.metadata_writers.metadata_info.BertInputTensorsMd(
    model_buffer: bytearray,
    ids_name: str,
    mask_name: str,
    segment_name: str,
    ids_md: Optional[tflite_support.metadata_writers.metadata_info.TensorMd] = None,
    mask_md: Optional[tflite_support.metadata_writers.metadata_info.TensorMd] = None,
    segment_ids_md: Optional[tflite_support.metadata_writers.metadata_info.TensorMd] = None,
    tokenizer_md: Union[None, tflite_support.metadata_writers.metadata_info.BertTokenizerMd, tflite_support.metadata_writers.metadata_info.SentencePieceTokenizerMd] = None
)
Args | |
---|---|
`model_buffer` | Valid buffer of the model file. |
`ids_name` | Name of the ids tensor, which represents the tokenized ids of the input text. |
`mask_name` | Name of the mask tensor, which represents the mask with 1 for real tokens and 0 for padding tokens. |
`segment_name` | Name of the segment ids tensor, where 0 stands for the first sequence, and 1 stands for the second sequence if it exists. |
`ids_md` | Input ids tensor information. |
`mask_md` | Input mask tensor information. |
`segment_ids_md` | Input segment tensor information. |
`tokenizer_md` | Information of the tokenizer used to process the input string, if any. Supported tokenizers are BertTokenizer and SentencePieceTokenizer. If the tokenizer is RegexTokenizer, refer to nl_classifier.MetadataWriter. |
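To make the semantics of the three named tensors concrete, here is a minimal, self-contained sketch (not part of tflite_support; the helper name, toy vocabulary, and padding scheme are illustrative) of how a Bert-style ids/mask/segment-ids triple is typically built from one or two input sequences:

```python
def make_bert_inputs(first, second, max_len, vocab):
    """Builds toy ids, mask, and segment_ids lists for two token sequences.

    ids: token ids for [CLS] first [SEP] second [SEP], zero-padded.
    mask: 1 for real tokens, 0 for padding tokens.
    segment_ids: 0 for the first sequence (and its [CLS]/[SEP]),
                 1 for the second sequence, 0 for padding.
    """
    tokens = ["[CLS]"] + first + ["[SEP]"] + second + ["[SEP]"]
    segment_ids = [0] * (len(first) + 2) + [1] * (len(second) + 1)
    ids = [vocab[t] for t in tokens]
    mask = [1] * len(ids)
    pad = max_len - len(ids)
    return ids + [0] * pad, mask + [0] * pad, segment_ids + [0] * pad
```

For example, `make_bert_inputs(["hello", "world"], ["hi"], 8, vocab)` yields a mask of six ones followed by two padding zeros, and segment ids that flip from 0 to 1 at the second sequence. The `ids_name`, `mask_name`, and `segment_name` arguments above tell the metadata writer which model tensors play these three roles.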
Methods
create_input_process_unit_metadata
create_input_process_unit_metadata() -> List[tflite_support.metadata_schema_py_generated.ProcessUnitT]
Creates the input process unit metadata.
create_input_tesnor_metadata
create_input_tesnor_metadata() -> List[tflite_support.metadata_schema_py_generated.TensorMetadataT]
Creates the input metadata for the three input tensors.
get_tokenizer_associated_files
get_tokenizer_associated_files() -> List[str]
Gets the associated files that are packed in the tokenizer.