LiteRT-LM is a production-ready, open-source inference framework designed to deliver high-performance, cross-platform LLM deployments on edge devices.
- Cross-Platform Support: Run on Android, iOS, Web, Desktop, and IoT (e.g. Raspberry Pi).
- Hardware Acceleration: Get peak performance and system stability by leveraging GPU and NPU accelerators across diverse hardware.
- Multi-Modality: Build with LLMs that have vision and audio support.
- Tool Use: Function calling support for agentic workflows with constrained decoding for improved accuracy.
- Broad Model Support: Run Gemma, Llama, Phi-4, Qwen and more.
On-Device GenAI Showcase
The Google AI Edge Gallery is an experimental app designed to showcase on-device Generative AI capabilities running entirely offline using LiteRT-LM.
- Google Play: Use LLMs locally on supported Android devices.
- App Store: Experience on-device AI on your iOS device.
- GitHub Source: View the source code for the gallery app to learn how to integrate LiteRT-LM inside your own projects.
Featured Model: Gemma4-E2B
- Model Size: 2.58 GB
Additional technical details are available in the Hugging Face model card.
| Platform (Device) | Backend | Prefill (tk/s) | Decode (tk/s) | Time to First Token (s) | Peak CPU Memory (MB) |
|---|---|---|---|---|---|
| Android (S26 Ultra) | CPU | 557 | 47 | 1.8 | 1733 |
| Android (S26 Ultra) | GPU | 3808 | 52 | 0.3 | 676 |
| iOS (iPhone 17 Pro) | CPU | 532 | 25 | 1.9 | 607 |
| iOS (iPhone 17 Pro) | GPU | 2878 | 56 | 0.3 | 1450 |
| Linux (Arm 2.3 & 2.8 GHz, NVIDIA GeForce RTX 4090) | CPU | 260 | 35 | 4 | 1628 |
| Linux (Arm 2.3 & 2.8 GHz, NVIDIA GeForce RTX 4090) | GPU | 11234 | 143 | 0.1 | 913 |
| macOS (MacBook Pro M4) | CPU | 901 | 42 | 1.1 | 736 |
| macOS (MacBook Pro M4) | GPU | 7835 | 160 | 0.1 | 1623 |
| IoT (Raspberry Pi 5 16GB) | CPU | 133 | 8 | 7.8 | 1546 |
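As a rough sanity check on these numbers, time to first token is approximately prompt length divided by prefill throughput. The sketch below back-derives the implied prompt size from a few rows above; the resulting ~1000-token figure is an inference from the table, not a documented benchmark setting.

```python
# Rough relationship: TTFT ≈ prompt_tokens / prefill_throughput.
# Back-deriving the implied prompt length from rows of the table above.
rows = [
    ("Android S26 Ultra, GPU", 3808, 0.3),  # (name, prefill tk/s, TTFT s)
    ("MacBook Pro M4, CPU",     901, 1.1),
    ("Raspberry Pi 5, CPU",     133, 7.8),
]
for name, prefill_tps, ttft_s in rows:
    implied_prompt = prefill_tps * ttft_s
    print(f"{name}: ~{implied_prompt:.0f} prompt tokens")
```

All three rows imply a prompt of roughly a thousand tokens, which suggests the benchmark used a common prompt across devices.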
Start Building
The following snippets show how to get started with the LiteRT-LM CLI, as well as Python, Kotlin, and C++ APIs.
CLI
```shell
litert-lm run model.litertlm --prompt="What is the capital of France?"
```
Python
```python
import litert_lm

engine = litert_lm.Engine("model.litertlm")
with engine.create_conversation() as conversation:
    response = conversation.send_message("What is the capital of France?")
    print(f"Response: {response['content'][0]['text']}")
```
Kotlin
```kotlin
val engineConfig = EngineConfig(
    modelPath = "/path/to/your/model.litertlm",
    backend = Backend.CPU(),
)
val engine = Engine(engineConfig)
engine.initialize()
val conversation = engine.createConversation()
print(conversation.sendMessage("What is the capital of France?"))
```
C++
```cpp
auto model_assets = ModelAssets::Create(model_path);
CHECK_OK(model_assets);

auto engine_settings = EngineSettings::CreateDefault(
    model_assets, /*backend=*/litert::lm::Backend::CPU);

absl::StatusOr<std::unique_ptr<Engine>> engine =
    Engine::CreateEngine(engine_settings);
CHECK_OK(engine);

auto conversation_config = ConversationConfig::CreateDefault(**engine);
CHECK_OK(conversation_config);

absl::StatusOr<std::unique_ptr<Conversation>> conversation =
    Conversation::Create(**engine, *conversation_config);
CHECK_OK(conversation);

absl::StatusOr<Message> model_message = (*conversation)->SendMessage(JsonMessage{
    {"role", "user"},
    {"content", "What is the capital of France?"},
});
CHECK_OK(model_message);

std::cout << *model_message << std::endl;
```
| Language | Status | Best For... | Documentation |
|---|---|---|---|
| CLI | 🚀 Early Preview | Getting started with LiteRT-LM in less than 1 min. | CLI Guide |
| Python | ✅ Stable | Rapid prototyping and development on desktop & Raspberry Pi. | Python Guide |
| Kotlin | ✅ Stable | Native Android apps and JVM-based desktop tools. Optimized for Coroutines. | Android (Kotlin) Guide |
| C++ | ✅ Stable | High-performance, cross-platform core logic and embedded systems. | C++ Guide |
| Swift | 🚀 In Dev | Native iOS and macOS integration with specialized Metal support. | Coming Soon |
Supported Backends & Platforms
| Acceleration | Android | iOS | macOS | Windows | Linux | IoT |
|---|---|---|---|---|---|---|
| CPU | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPU | ✅ | ✅ | ✅ | ✅ | ✅ | - |
| NPU | ✅ | - | - | - | - | - |
Supported Models
The following table lists models supported by LiteRT-LM. For more detailed performance numbers and model cards, visit the LiteRT Community on Hugging Face.
| Model | Type | Size (MB) | Details | Device | CPU Prefill (tk/s) | CPU Decode (tk/s) | GPU Prefill (tk/s) | GPU Decode (tk/s) |
|---|---|---|---|---|---|---|---|---|
| Gemma4-E2B | Chat | 2583 | Model Card | Samsung S26 Ultra | 557 | 47 | 3808 | 52 |
| | | | | iPhone 17 Pro | 532 | 25 | 2878 | 57 |
| | | | | MacBook Pro M4 | 901 | 42 | 7835 | 160 |
| Gemma4-E4B | Chat | 3654 | Model Card | Samsung S26 Ultra | 195 | 18 | 1293 | 22 |
| | | | | iPhone 17 Pro | 159 | 10 | 1189 | 25 |
| | | | | MacBook Pro M4 | 277 | 27 | 2560 | 101 |
| Gemma-3n-E2B | Chat | 2965 | Model Card | MacBook Pro M3 | 233 | 28 | - | - |
| | | | | Samsung S24 Ultra | 111 | 16 | 816 | 16 |
| Gemma-3n-E4B | Chat | 4235 | Model Card | MacBook Pro M3 | 170 | 20 | - | - |
| | | | | Samsung S24 Ultra | 74 | 9 | 548 | 9 |
| Gemma3-1B | Chat | 1005 | Model Card | Samsung S24 Ultra | 177 | 33 | 1191 | 24 |
| FunctionGemma | Base | 289 | Model Card | Samsung S25 Ultra | 2238 | 154 | - | - |
| phi-4-mini | Chat | 3906 | Model Card | Samsung S24 Ultra | 67 | 7 | 314 | 10 |
| Qwen2.5-1.5B | Chat | 1598 | Model Card | Samsung S25 Ultra | 298 | 34 | 1668 | 31 |
| Qwen3-0.6B | Chat | 586 | Model Card | Vivo X300 Pro | 165 | 9 | 580 | 21 |
| Qwen2.5-0.5B | Chat | 521 | Model Card | Samsung S24 Ultra | 251 | 30 | - | - |
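The sizes in the table can drive a simple device-side choice of which variant to ship. A hypothetical helper (not a LiteRT-LM API) that picks the largest supported model fitting a download-size budget, using sizes from the table above:

```python
# Hypothetical helper (not part of LiteRT-LM): choose the largest model
# that fits a size budget, using the Size (MB) column from the table above.
MODELS_MB = {
    "Gemma4-E2B": 2583,
    "Gemma4-E4B": 3654,
    "Gemma3-1B": 1005,
    "FunctionGemma": 289,
    "Qwen2.5-1.5B": 1598,
    "Qwen3-0.6B": 586,
    "Qwen2.5-0.5B": 521,
}

def pick_model(budget_mb):
    """Largest model whose on-disk size fits the budget, else None."""
    fitting = {m: s for m, s in MODELS_MB.items() if s <= budget_mb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_model(2000))  # -> Qwen2.5-1.5B
print(pick_model(100))   # -> None
```

Note that download size is only a lower bound on footprint; peak runtime memory (see the benchmark table earlier) also depends on backend and context length.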
Reporting Issues
If you encounter a bug or have a feature request, please file it on LiteRT-LM GitHub Issues.