Face stylization guide for Web

The MediaPipe Face Stylizer task lets you apply face stylizations to faces in an image. You can use this task to create virtual avatars in various styles.

The code sample described in these instructions is available on GitHub. For more information about the capabilities, models, and configuration options of this task, see the Overview.

Code example

The example code for Face Stylizer provides a complete implementation of this task in JavaScript for your reference. This code helps you test this task and get started on building your own face stylization app. You can view, run, and edit the Face Stylizer example code using just your web browser.


Setup

This section describes key steps for setting up your development environment specifically to use Face Stylizer. For general information on setting up your web and JavaScript development environment, including platform version requirements, see the Setup guide for web.

JavaScript packages

Face Stylizer code is available through the MediaPipe @mediapipe/tasks-vision NPM package. You can find and download these libraries by following the instructions in the platform Setup guide.

You can install the required packages through NPM using the following command:

npm install @mediapipe/tasks-vision

If you want to import the task code with a content delivery network (CDN) service, add the following code in the <head> tag in your HTML file:

<!-- You can replace jsDelivr with another CDN if you prefer -->
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/vision_bundle.js"
  crossorigin="anonymous"></script>


Model

The MediaPipe Face Stylizer task requires a trained model that is compatible with this task. For more information on available trained models for Face Stylizer, see the task overview Models section.

Select and download a model, and then store it within your project directory.


Create the task

Use one of the Face Stylizer createFrom...() functions to prepare the task for running inferences. Use the createFromModelPath() function with a relative or absolute path to the trained model file. If your model is already loaded into memory, you can use the createFromModelBuffer() method.
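As a minimal sketch, a path-based setup might look like the following. The wasm directory and model file location shown here are placeholder assumptions for illustration, not required paths:

```javascript
// Sketch only: "app/wasm" and the model path below are placeholder
// assumptions, not fixed locations. Requires @mediapipe/tasks-vision.
const vision = await FilesetResolver.forVisionTasks("app/wasm");
const faceStylizer = await FaceStylizer.createFromModelPath(
  vision,
  "app/shared/models/face_stylizer.task"
);
```

This code runs in a browser context, since the task loads its WebAssembly runtime and model over the page's network stack.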

The code example below demonstrates using the createFromOptions() function to set up the task. The createFromOptions function lets you customize the Face Stylizer with configuration options.

The following code demonstrates how to build and configure the task with custom options:

const vision = await FilesetResolver.forVisionTasks(
  // path/to/wasm/root
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
);
const facestylizer = await FaceStylizer.createFromOptions(
  vision,
  {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/face_stylizer/blaze_face_stylizer/float32/latest/face_stylizer_color_sketch.task"
    },
  }
);

Prepare data

Face Stylizer can stylize faces in images in any format supported by the host browser. The task also handles data input preprocessing, including resizing, rotation, and value normalization.
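For example, a frame drawn to a canvas can be passed to the task as ImageData. The element id below is an assumption for illustration; `<img>`, `<video>`, and ImageBitmap sources can be passed in the same way:

```javascript
// Assumed element id "canvas": any canvas holding the frame to stylize.
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d");
const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
// The task accepts the ImageData directly; no manual resizing needed.
const stylized = faceStylizer.stylize(frame);
```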

Run the task

The Face Stylizer uses the stylize() method to trigger inferences. The task processes the data, attempts to stylize faces, and then reports the results. Calls to the Face Stylizer stylize() method run synchronously and block the user interface thread.

The following code demonstrates how to execute the processing with the task model:

const image = document.getElementById("image") as HTMLImageElement;
const faceStylizerResult = faceStylizer.stylize(image);

Handle and display results

The Face Stylizer returns an MPImage object with a stylization of the most prominent face within the input image.
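One way to display the result, sketched below under the assumption of an "output" canvas element, is to convert the returned image to ImageData and paint it onto the canvas, guarding for the case where no face is found and no stylized image is produced:

```javascript
// Sketch only: "output" is an assumed canvas id, not a required name.
const result = faceStylizer.stylize(image);
if (result) {
  const canvas = document.getElementById("output");
  canvas.width = result.width;
  canvas.height = result.height;
  // Convert the MPImage to ImageData and draw it to the canvas.
  canvas.getContext("2d").putImageData(result.getAsImageData(), 0, 0);
  result.close(); // release the underlying image resources when done
}
```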

The following shows an example of the output data from this task:

The output above was created by applying the Color sketch model to the following input image: