[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-07 UTC."],[],[],null,["# FaceDetector class\n\n\u003cbr /\u003e\n\nPerforms face detection on images.\n\n**Signature:** \n\n export declare class FaceDetector extends VisionTaskRunner \n\n**Extends:** VisionTaskRunner\n\nMethods\n-------\n\n| Method | Modifiers | Description |\n|-------------------------------------------------------------------------------------------------------------------------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [createFromModelBuffer(wasmFileset, modelAssetBuffer)](./tasks-vision.facedetector#facedetectorcreatefrommodelbuffer) | `static` | Initializes the Wasm runtime and creates a new face detector based on the provided model asset buffer. |\n| [createFromModelPath(wasmFileset, modelAssetPath)](./tasks-vision.facedetector#facedetectorcreatefrommodelpath) | `static` | Initializes the Wasm runtime and creates a new face detector based on the path to the model asset. |\n| [createFromOptions(wasmFileset, faceDetectorOptions)](./tasks-vision.facedetector#facedetectorcreatefromoptions) | `static` | Initializes the Wasm runtime and creates a new face detector from the provided options. |\n| [detect(image, imageProcessingOptions)](./tasks-vision.facedetector#facedetectordetect) | | Performs face detection on the provided single image and waits synchronously for the response. 
Only use this method when the FaceDetector is created with running mode `image`. |\n| [detectForVideo(videoFrame, timestamp, imageProcessingOptions)](./tasks-vision.facedetector#facedetectordetectforvideo) | | Performs face detection on the provided video frame and waits synchronously for the response. Only use this method when the FaceDetector is created with running mode `video`. |\n| [setOptions(options)](./tasks-vision.facedetector#facedetectorsetoptions) | | Sets new options for the FaceDetector.Calling `setOptions()` with a subset of options only affects those options. You can reset an option back to its default value by explicitly setting it to `undefined`. |\n\nFaceDetector.createFromModelBuffer()\n------------------------------------\n\nInitializes the Wasm runtime and creates a new face detector based on the provided model asset buffer.\n\n**Signature:** \n\n static createFromModelBuffer(wasmFileset: WasmFileset, modelAssetBuffer: Uint8Array): Promise\u003cFaceDetector\u003e;\n\n### Parameters\n\n| Parameter | Type | Description |\n|------------------|-------------|--------------------------------------------------------------------------------------|\n| wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader. |\n| modelAssetBuffer | Uint8Array | A binary representation of the model. 
|\n\n**Returns:**\n\nPromise\\\u003c[FaceDetector](./tasks-vision.facedetector#facedetector_class)\\\u003e\n\nFaceDetector.createFromModelPath()\n----------------------------------\n\nInitializes the Wasm runtime and creates a new face detector based on the path to the model asset.\n\n**Signature:** \n\n static createFromModelPath(wasmFileset: WasmFileset, modelAssetPath: string): Promise\u003cFaceDetector\u003e;\n\n### Parameters\n\n| Parameter | Type | Description |\n|----------------|-------------|--------------------------------------------------------------------------------------|\n| wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader. |\n| modelAssetPath | string | The path to the model asset. |\n\n**Returns:**\n\nPromise\\\u003c[FaceDetector](./tasks-vision.facedetector#facedetector_class)\\\u003e\n\nFaceDetector.createFromOptions()\n--------------------------------\n\nInitializes the Wasm runtime and creates a new face detector from the provided options.\n\n**Signature:** \n\n static createFromOptions(wasmFileset: WasmFileset, faceDetectorOptions: FaceDetectorOptions): Promise\u003cFaceDetector\u003e;\n\n### Parameters\n\n| Parameter | Type | Description |\n|---------------------|-----------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------|\n| wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader. |\n| faceDetectorOptions | [FaceDetectorOptions](./tasks-vision.facedetectoroptions#facedetectoroptions_interface) | The options for the FaceDetector. Note that either a path to the model asset or a model buffer needs to be provided (via `baseOptions`). 
|\n\n**Returns:**\n\nPromise\\\u003c[FaceDetector](./tasks-vision.facedetector#facedetector_class)\\\u003e\n\nFaceDetector.detect()\n---------------------\n\nPerforms face detection on the provided single image and waits synchronously for the response. Only use this method when the FaceDetector is created with running mode `image`.\n\n**Signature:** \n\n detect(image: ImageSource, imageProcessingOptions?: ImageProcessingOptions): DetectionResult;\n\n### Parameters\n\n| Parameter | Type | Description |\n|------------------------|-------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|\n| image | [ImageSource](./tasks-vision#imagesource) | An image to process. |\n| imageProcessingOptions | ImageProcessingOptions | the `ImageProcessingOptions` specifying how to process the input image before running inference. A result containing the list of detected faces. |\n\n**Returns:**\n\n[DetectionResult](./tasks-vision.detectionresult#detectionresult_interface)\n\nFaceDetector.detectForVideo()\n-----------------------------\n\nPerforms face detection on the provided video frame and waits synchronously for the response. Only use this method when the FaceDetector is created with running mode `video`.\n\n**Signature:** \n\n detectForVideo(videoFrame: ImageSource, timestamp: number, imageProcessingOptions?: ImageProcessingOptions): DetectionResult;\n\n### Parameters\n\n| Parameter | Type | Description |\n|------------------------|-------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|\n| videoFrame | [ImageSource](./tasks-vision#imagesource) | A video frame to process. |\n| timestamp | number | The timestamp of the current frame, in ms. 
|\n| imageProcessingOptions | ImageProcessingOptions | the `ImageProcessingOptions` specifying how to process the input image before running inference. A result containing the list of detected faces. |\n\n**Returns:**\n\n[DetectionResult](./tasks-vision.detectionresult#detectionresult_interface)\n\nFaceDetector.setOptions()\n-------------------------\n\nSets new options for the FaceDetector.\n\nCalling `setOptions()` with a subset of options only affects those options. You can reset an option back to its default value by explicitly setting it to `undefined`.\n\n**Signature:** \n\n setOptions(options: FaceDetectorOptions): Promise\u003cvoid\u003e;\n\n### Parameters\n\n| Parameter | Type | Description |\n|-----------|-----------------------------------------------------------------------------------------|-----------------------------------|\n| options | [FaceDetectorOptions](./tasks-vision.facedetectoroptions#facedetectoroptions_interface) | The options for the FaceDetector. |\n\n**Returns:**\n\nPromise\\\u003cvoid\\\u003e"]]