# Google AI Edge Portal

**AI Edge's Google Cloud solution for testing and benchmarking on-device machine learning (ML) at scale.**

Optimizing ML model performance across diverse mobile devices can be challenging.
Manual testing is slow, costly, and often inaccessible to most developers,
leading to uncertainties in real-world model performance. Google AI Edge
Portal solves this by enabling LiteRT model benchmarking across a wide range
of mobile devices, helping developers find the best configurations for
large-scale ML model deployment.
Optimizing mobile ML deployment
-------------------------------
- **Simplify & accelerate testing cycles across the diverse hardware landscape**:
  Effortlessly assess model performance across hundreds of representative
  mobile devices in minutes.

- **Proactively assure model quality & identify issues early**: Pinpoint
  hardware-specific performance variations or regressions (such as on particular
  chipsets or memory-constrained devices) before deployment.

- **Lower device testing costs & access the latest hardware**: Test on a diverse,
  continually growing fleet of physical devices (currently 100+ device models
  from various Android OEMs) without the expense and complexity of maintaining
  your own lab.

- **Unlock powerful, data-driven decisions & business intelligence**: Google AI
  Edge Portal delivers rich performance data and comparisons, providing the
  business intelligence needed to confidently guide model optimization and
  validate deployment readiness.
Example benchmark:
How Google AI Edge Portal helps you benchmark your LiteRT models
----------------------------------------------------------------
1. **Upload & configure**: Upload your model file via the UI, or point to it in
   your Google Cloud Storage bucket.

2. **Select accelerators**: Specify testing against CPU or GPU (with automatic
   CPU fallback). NPU support is planned for future releases.

3. **Select devices**: Choose target devices from our diverse pool using filters
   (device tier, brand, chipset, RAM), or select curated lists with convenient
   shortcuts.
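Step 1 can also be scripted when your models live in Cloud Storage. The sketch below, which uses the `google-cloud-storage` Python client, stages a `.litert` model file in your own bucket and builds the `gs://` URI you would then point the Portal at; the bucket and object names are hypothetical, and the Portal job itself is still configured through the UI.

```python
def model_gcs_uri(bucket_name, object_name):
    """Build the gs:// URI you would point Google AI Edge Portal at."""
    return f"gs://{bucket_name}/{object_name}"


def upload_model(bucket_name, local_path, object_name):
    """Upload a .litert model file to your own Cloud Storage bucket.

    Requires the third-party google-cloud-storage package and application
    default credentials; imported inside the function so the URI helper
    above stays importable without the dependency.
    """
    from google.cloud import storage  # pip install google-cloud-storage

    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    blob.upload_from_filename(local_path)
    return model_gcs_uri(bucket_name, object_name)
```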
*Create a New Benchmark Job on 100+ Devices. (Note: GIF is accelerated and edited for brevity)*
From there, submit your job and await completion. Once ready, explore the
results in the Interactive Dashboard:
- **Compare configurations**: Quickly visualize how performance metrics (e.g.,
  average latency, peak memory) differ across all tested devices when using
  different accelerators.

- **Analyze device impact**: See how a specific model configuration performs
  across the range of selected devices. Use histograms and scatter plots to
  quickly identify performance variations tied to device characteristics.

- **Detailed metrics**: Access a sortable table showing specific metrics
  (initialization time, inference latency, memory usage) for each individual
  device, alongside its hardware specifications.
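The kinds of comparisons the dashboard surfaces can also be reproduced offline. The Portal's export format is not public, so the rows below are illustrative stand-ins for per-device results; the sketch aggregates average latency by accelerator and computes a per-device CPU-to-GPU speedup, using only the Python standard library.

```python
from statistics import mean

# Hypothetical per-device benchmark rows (device names and numbers are made up).
rows = [
    {"device": "Pixel 8", "accelerator": "GPU", "avg_latency_ms": 4.2, "peak_memory_mb": 96},
    {"device": "Pixel 8", "accelerator": "CPU", "avg_latency_ms": 11.8, "peak_memory_mb": 88},
    {"device": "Galaxy A54", "accelerator": "GPU", "avg_latency_ms": 7.5, "peak_memory_mb": 102},
    {"device": "Galaxy A54", "accelerator": "CPU", "avg_latency_ms": 19.3, "peak_memory_mb": 90},
]


def latency_by_accelerator(rows):
    """Mean inference latency per accelerator across all tested devices."""
    grouped = {}
    for r in rows:
        grouped.setdefault(r["accelerator"], []).append(r["avg_latency_ms"])
    return {acc: mean(vals) for acc, vals in grouped.items()}


def speedup_per_device(rows):
    """CPU latency divided by GPU latency per device (higher = GPU helps more)."""
    by_dev = {}
    for r in rows:
        by_dev.setdefault(r["device"], {})[r["accelerator"]] = r["avg_latency_ms"]
    return {d: round(v["CPU"] / v["GPU"], 2)
            for d, v in by_dev.items() if {"CPU", "GPU"} <= v.keys()}
```

A per-device speedup table like this is one quick way to spot hardware where GPU delegation pays off least, which is exactly the kind of variation the histograms and scatter plots highlight.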
*View Benchmark Results on the interactive Dashboard. (Note: GIF is accelerated and edited for brevity)*
Join the Google AI Edge Portal private preview
----------------------------------------------
Google AI Edge Portal is available in private preview for allowlisted Google
Cloud customers. During this private preview period, access is provided at no
charge, subject to the preview terms.
This preview is ideal for developers and teams building mobile ML applications
with LiteRT who need reliable benchmarking data across diverse Android hardware
and are willing to provide feedback to help shape the product's future. To
request access, complete our [sign-up form](https://docs.google.com/forms/d/e/1FAIpQLSfTcGPycQve8TLAsfH46pBlXBZe9FrgJAClwbF7DeL1LgVn4Q/viewform?usp=header)
to express interest. Access is granted via allowlisting.
Last updated 2025-06-17 UTC.