Performs hand gesture recognition on images.
Signature:
export declare class GestureRecognizer extends VisionTaskRunner
Extends: VisionTaskRunner
Properties
Property | Modifiers | Type | Description |
---|---|---|---|
HAND_CONNECTIONS | static | Connection[] | An array containing the pairs of hand landmark indices to be rendered with connections. |
Methods
Method | Modifiers | Description |
---|---|---|
createFromModelBuffer(wasmFileset, modelAssetBuffer) | static | Initializes the Wasm runtime and creates a new gesture recognizer based on the provided model asset buffer. |
createFromModelPath(wasmFileset, modelAssetPath) | static | Initializes the Wasm runtime and creates a new gesture recognizer based on the path to the model asset. |
createFromOptions(wasmFileset, gestureRecognizerOptions) | static | Initializes the Wasm runtime and creates a new gesture recognizer from the provided options. |
recognize(image, imageProcessingOptions) | | Performs gesture recognition on the provided single image and waits synchronously for the response. Only use this method when the GestureRecognizer is created with running mode "image". |
recognizeForVideo(videoFrame, timestamp, imageProcessingOptions) | | Performs gesture recognition on the provided video frame and waits synchronously for the response. Only use this method when the GestureRecognizer is created with running mode "video". |
setOptions(options) | | Sets new options for the gesture recognizer. Calling setOptions() with a subset of options only affects those options. You can reset an option back to its default value by explicitly setting it to undefined. |
GestureRecognizer.HAND_CONNECTIONS
An array containing the pairs of hand landmark indices to be rendered with connections.
Signature:
static HAND_CONNECTIONS: Connection[];
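A minimal drawing sketch follows. It assumes the DrawingUtils helper exported by @mediapipe/tasks-vision and a GestureRecognizerResult obtained from recognize() or recognizeForVideo(); neither is documented in this section, so treat the call pattern as illustrative.

```ts
import { DrawingUtils, GestureRecognizer } from "@mediapipe/tasks-vision";
import type { GestureRecognizerResult } from "@mediapipe/tasks-vision";

// Hypothetical helper: draw the hand skeleton for every detected hand.
function drawHands(
  result: GestureRecognizerResult,
  canvasCtx: CanvasRenderingContext2D
): void {
  const drawingUtils = new DrawingUtils(canvasCtx);
  for (const landmarks of result.landmarks) {
    // HAND_CONNECTIONS pairs landmark indices into the line segments of the
    // hand skeleton; drawConnectors renders one line per pair.
    drawingUtils.drawConnectors(landmarks, GestureRecognizer.HAND_CONNECTIONS);
    drawingUtils.drawLandmarks(landmarks);
  }
}
```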
GestureRecognizer.createFromModelBuffer()
Initializes the Wasm runtime and creates a new gesture recognizer based on the provided model asset buffer.
Signature:
static createFromModelBuffer(wasmFileset: WasmFileset, modelAssetBuffer: Uint8Array): Promise<GestureRecognizer>;
Parameters
Parameter | Type | Description |
---|---|---|
wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader. |
modelAssetBuffer | Uint8Array | A binary representation of the model. |
Returns:
Promise<GestureRecognizer>
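A rough usage sketch, assuming the FilesetResolver helper from @mediapipe/tasks-vision and placeholder URLs for the Wasm files and the .task model (both depend on how you host the assets):

```ts
import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

// Placeholder locations: point these at your hosted Wasm files and model.
const wasmFileset = await FilesetResolver.forVisionTasks("/wasm");
const response = await fetch("/models/gesture_recognizer.task");
const modelAssetBuffer = new Uint8Array(await response.arrayBuffer());

const recognizer = await GestureRecognizer.createFromModelBuffer(
  wasmFileset,
  modelAssetBuffer
);
```

This variant is useful when the model bytes are already in memory; otherwise createFromModelPath() below avoids the manual fetch.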
GestureRecognizer.createFromModelPath()
Initializes the Wasm runtime and creates a new gesture recognizer based on the path to the model asset.
Signature:
static createFromModelPath(wasmFileset: WasmFileset, modelAssetPath: string): Promise<GestureRecognizer>;
Parameters
Parameter | Type | Description |
---|---|---|
wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader. |
modelAssetPath | string | The path to the model asset. |
Returns:
Promise<GestureRecognizer>
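A rough usage sketch along the same lines, again with the assumed FilesetResolver helper and placeholder asset locations:

```ts
import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

// Placeholder locations: point these at your hosted Wasm files and model.
const wasmFileset = await FilesetResolver.forVisionTasks("/wasm");
const recognizer = await GestureRecognizer.createFromModelPath(
  wasmFileset,
  "/models/gesture_recognizer.task"
);
```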
GestureRecognizer.createFromOptions()
Initializes the Wasm runtime and creates a new gesture recognizer from the provided options.
Signature:
static createFromOptions(wasmFileset: WasmFileset, gestureRecognizerOptions: GestureRecognizerOptions): Promise<GestureRecognizer>;
Parameters
Parameter | Type | Description |
---|---|---|
wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader. |
gestureRecognizerOptions | GestureRecognizerOptions | The options for the gesture recognizer. Note that either a path to the model asset or a model buffer needs to be provided (via baseOptions). |
Returns:
Promise<GestureRecognizer>
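A sketch of the options-based factory, assuming the FilesetResolver helper, placeholder asset paths, and option values chosen only for illustration:

```ts
import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

const wasmFileset = await FilesetResolver.forVisionTasks("/wasm"); // placeholder path
const recognizer = await GestureRecognizer.createFromOptions(wasmFileset, {
  baseOptions: {
    // Either modelAssetPath or modelAssetBuffer must be set here.
    modelAssetPath: "/models/gesture_recognizer.task", // placeholder URL
  },
  runningMode: "VIDEO",
  numHands: 2,
});
```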
GestureRecognizer.recognize()
Performs gesture recognition on the provided single image and waits synchronously for the response. Only use this method when the GestureRecognizer is created with running mode "image".
Signature:
recognize(image: ImageSource, imageProcessingOptions?: ImageProcessingOptions): GestureRecognizerResult;
Parameters
Parameter | Type | Description |
---|---|---|
image | ImageSource | A single image to process. |
imageProcessingOptions | ImageProcessingOptions | The ImageProcessingOptions specifying how to process the input image before running inference. |
Returns:
GestureRecognizerResult
The detected gestures.
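A minimal sketch of calling recognize(), assuming a recognizer created with running mode "image" and an HTMLImageElement supplied by the caller:

```ts
import type { GestureRecognizer } from "@mediapipe/tasks-vision";

// Hypothetical helper: log the first gesture candidate for each detected hand.
function recognizeImage(
  recognizer: GestureRecognizer,
  image: HTMLImageElement
): void {
  const result = recognizer.recognize(image);
  for (const gestures of result.gestures) {
    // Each outer entry holds the gesture candidates for one hand.
    console.log(gestures[0]?.categoryName, gestures[0]?.score);
  }
}
```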
GestureRecognizer.recognizeForVideo()
Performs gesture recognition on the provided video frame and waits synchronously for the response. Only use this method when the GestureRecognizer is created with running mode "video".
Signature:
recognizeForVideo(videoFrame: ImageSource, timestamp: number, imageProcessingOptions?: ImageProcessingOptions): GestureRecognizerResult;
Parameters
Parameter | Type | Description |
---|---|---|
videoFrame | ImageSource | A video frame to process. |
timestamp | number | The timestamp of the current frame, in ms. |
imageProcessingOptions | ImageProcessingOptions | The ImageProcessingOptions specifying how to process the input image before running inference. |
Returns:
GestureRecognizerResult
The detected gestures.
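A minimal sketch of a per-frame loop, assuming a recognizer created with running mode "video" and a playing HTMLVideoElement; the requestAnimationFrame-driven loop is one common pattern, not the only one:

```ts
import type { GestureRecognizer } from "@mediapipe/tasks-vision";

// Hypothetical helper: run recognition on every rendered video frame.
function onFrame(recognizer: GestureRecognizer, video: HTMLVideoElement): void {
  // performance.now() supplies the monotonically increasing timestamp (in ms)
  // that recognizeForVideo expects.
  const result = recognizer.recognizeForVideo(video, performance.now());
  console.log(result.gestures);
  requestAnimationFrame(() => onFrame(recognizer, video));
}
```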
GestureRecognizer.setOptions()
Sets new options for the gesture recognizer.
Calling setOptions() with a subset of options only affects those options. You can reset an option back to its default value by explicitly setting it to undefined.
Signature:
setOptions(options: GestureRecognizerOptions): Promise<void>;
Parameters
Parameter | Type | Description |
---|---|---|
options | GestureRecognizerOptions | The options for the gesture recognizer. |
Returns:
Promise<void>
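A minimal sketch, assuming a recognizer created as in the earlier examples; the option values are illustrative only:

```ts
import type { GestureRecognizer } from "@mediapipe/tasks-vision";

// Hypothetical helper: reconfigure an existing recognizer in place.
async function reconfigureRecognizer(recognizer: GestureRecognizer): Promise<void> {
  // Only the options mentioned here change; everything else keeps its value.
  await recognizer.setOptions({ runningMode: "VIDEO", numHands: 2 });

  // Explicitly passing undefined resets an option to its default.
  await recognizer.setOptions({ numHands: undefined });
}
```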