Image embedders allow embedding images into a high-dimensional feature vector representing the semantic meaning of an image, which can then be compared with the feature vector of other images to evaluate their semantic similarity.

As opposed to [image search](./image_searcher), the image embedder allows computing the similarity between images on-the-fly instead of searching through a predefined index built from a corpus of images.

Use the Task Library `ImageEmbedder` API to deploy your custom image embedder into your mobile apps.
Key features of the ImageEmbedder API
- Input image processing, including rotation, resizing, and color space conversion.

- Region of interest of the input image.

- Built-in utility function to compute the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) between feature vectors.
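To illustrate what the built-in utility computes, here is a minimal pure-Python sketch of cosine similarity between two feature vectors. It is a stand-in for explanation only, not the Task Library implementation:

```python
import math

def cosine_similarity(u, v):
    # Dot product of the two vectors, divided by the product of their L2 norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical vectors give a similarity of (approximately) 1.0;
# orthogonal vectors give 0.0.
print(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```

In practice you would pass the `feature_vector` values returned by `ImageEmbedder` to the library's own helper rather than reimplementing it.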
Supported image embedder models

The following models are guaranteed to be compatible with the `ImageEmbedder` API.

- Feature vector models from the [Google Image Modules collection on Kaggle Models](https://www.kaggle.com/models?id=141,200,270,271,241,301,295,268,265,229,288).

- Custom models that meet the [model compatibility requirements](#model-compatibility-requirements).

Run inference in C++
```cpp
// Initialization.
ImageEmbedderOptions options;
options.mutable_model_file_with_metadata()->set_file_name(model_path);
options.set_l2_normalize(true);
std::unique_ptr<ImageEmbedder> image_embedder = ImageEmbedder::CreateFromOptions(options).value();

// Create input frame_buffer_1 and frame_buffer_2 from your inputs `image_data1`, `image_data2`, `image_dimension1` and `image_dimension2`.
// See more information here: tensorflow_lite_support/cc/task/vision/utils/frame_buffer_common_utils.h
std::unique_ptr<FrameBuffer> frame_buffer_1 = CreateFromRgbRawBuffer(
    image_data1, image_dimension1);
std::unique_ptr<FrameBuffer> frame_buffer_2 = CreateFromRgbRawBuffer(
    image_data2, image_dimension2);

// Run inference on two images.
const EmbeddingResult result_1 = image_embedder->Embed(*frame_buffer_1);
const EmbeddingResult result_2 = image_embedder->Embed(*frame_buffer_2);

// Compute cosine similarity.
double similarity = ImageEmbedder::CosineSimilarity(
    result_1.embeddings[0].feature_vector(),
    result_2.embeddings[0].feature_vector());
```
See the [source code](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/cc/task/vision/image_embedder.h) for more options to configure `ImageEmbedder`.
Run inference in Python

Step 1: Install the TensorFlow Lite Support Pypi package.

You can install the TensorFlow Lite Support Pypi package using the following command:
pip install tflite-support
Step 2: Using the model
```python
from tflite_support.task import vision

# Initialization.
image_embedder = vision.ImageEmbedder.create_from_file(model_path)

# Run inference on two images.
image_1 = vision.TensorImage.create_from_file('/path/to/image1.jpg')
result_1 = image_embedder.embed(image_1)
image_2 = vision.TensorImage.create_from_file('/path/to/image2.jpg')
result_2 = image_embedder.embed(image_2)

# Compute cosine similarity.
feature_vector_1 = result_1.embeddings[0].feature_vector
feature_vector_2 = result_2.embeddings[0].feature_vector
similarity = image_embedder.cosine_similarity(feature_vector_1, feature_vector_2)
```
See the [source code](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/python/task/vision/image_embedder.py) for more options to configure `ImageEmbedder`.
Example results
Cosine similarity between normalized feature vectors returns a score between -1 and 1. Higher is better, i.e. a cosine similarity of 1 means the two vectors are identical.
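This is also why the C++ example sets `set_l2_normalize(true)`: once feature vectors are L2-normalized (unit length), cosine similarity reduces to a plain dot product, which is guaranteed to fall in [-1, 1]. A small illustrative sketch in plain Python, not the Task Library API:

```python
import math

def l2_normalize(v):
    # Scale the vector to unit L2 norm.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = l2_normalize([3.0, 4.0])  # becomes [0.6, 0.8]
v = l2_normalize([4.0, 3.0])  # becomes [0.8, 0.6]

# For unit vectors, the dot product equals the cosine similarity
# and always lies in [-1, 1].
similarity = dot(u, v)
print(similarity)
```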
    Cosine similarity: 0.954312

Try out the simple [CLI demo tool for ImageEmbedder](https://github.com/tensorflow/tflite-support/tree/master/tensorflow_lite_support/examples/task/vision/desktop#imageembedder) with your own model and test data.

Model compatibility requirements

The `ImageEmbedder` API expects a TFLite model with optional, but strongly recommended, [TFLite Model Metadata](../../models/metadata).

The compatible image embedder models should meet the following requirements:

- An input image tensor (kTfLiteUInt8/kTfLiteFloat32)

  - image input of size `[batch x height x width x channels]`.
  - batch inference is not supported (`batch` is required to be 1).
  - only RGB inputs are supported (`channels` is required to be 3).
  - if type is kTfLiteFloat32, NormalizationOptions are required to be attached to the metadata for input normalization.

- At least one output tensor (kTfLiteUInt8/kTfLiteFloat32)

  - with `N` components corresponding to the `N` dimensions of the returned feature vector for this output layer.
  - Either 2 or 4 dimensions, i.e. `[1 x N]` or `[1 x 1 x 1 x N]`.

Last updated: 2025-07-28 UTC.
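The shape rules in the compatibility requirements can be expressed as a small check. The sketch below is a hypothetical pure-Python helper (not part of the Task Library) that validates candidate input and output tensor shapes against those rules:

```python
def is_compatible(input_shape, output_shape):
    """Check tensor shapes against the ImageEmbedder model requirements."""
    # Input: [batch x height x width x channels], with batch == 1 and
    # channels == 3 (RGB only).
    if len(input_shape) != 4 or input_shape[0] != 1 or input_shape[3] != 3:
        return False
    # Output: either [1 x N] or [1 x 1 x 1 x N].
    if len(output_shape) == 2:
        return output_shape[0] == 1
    if len(output_shape) == 4:
        return output_shape[:3] == (1, 1, 1)
    return False

print(is_compatible((1, 224, 224, 3), (1, 1024)))        # True
print(is_compatible((2, 224, 224, 3), (1, 1024)))        # False: batch must be 1
print(is_compatible((1, 224, 224, 3), (1, 1, 1, 1024)))  # True
```

Note this only checks shapes; the dtype (kTfLiteUInt8/kTfLiteFloat32) and metadata requirements must still be verified separately.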