Last updated: 2025-07-24 (UTC).

# MediaPipe Tasks

MediaPipe Tasks provides the core programming interface of the MediaPipe
Solutions suite, including a set of libraries for deploying innovative ML
solutions onto devices with a minimum of code. It supports multiple platforms,
including Android, Web/JavaScript, and Python, with support for iOS coming soon.

**Easy-to-use, well-defined cross-platform APIs**

Run ML inference with just five lines of code. Use the powerful and easy-to-use
solution APIs in MediaPipe Tasks as building blocks for your own ML features.

**Customizable solutions**

You can leverage all the benefits MediaPipe Tasks provides and easily customize
it with models built on your own data via [Model
Maker](/edge/mediapipe/solutions/model_maker). For example, you can create a model
that recognizes the custom gestures you defined using the [Model Maker
GestureRecognizer API](/mediapipe/solutions/customization/gesture_recognizer),
and deploy the model onto your desired platforms using the [Tasks GestureRecognizer
API](/edge/mediapipe/solutions/vision/gesture_recognizer).

**High-performance ML pipelines**

Typical on-device ML solutions combine multiple ML and non-ML blocks, which can
slow performance. MediaPipe Tasks provides optimized ML pipelines with
end-to-end acceleration on CPU, GPU, and TPU to meet the needs of real-time
on-device use cases.

Supported platforms
-------------------

This section provides an overview of MediaPipe Tasks for each supported
platform.
For specific implementations, see the platform-specific development
[guides](/edge/mediapipe/solutions/vision/object_detector/android) for each task. For
help getting your development environment set up to use MediaPipe Tasks on a
platform, check out the platform [setup
guides](/mediapipe/solutions/setup_android).

| **Attention:** This MediaPipe Solutions Preview is an early release. [Learn more](/edge/mediapipe/solutions/about#notice).

### Android

The MediaPipe Tasks [Java
API](/edge/api/mediapipe/java/com/google/mediapipe/framework/image/package-summary)
for Android is divided into packages that perform ML tasks in major domains,
including vision, natural language, and audio. The following is a list of
dependencies you can add to your Android app development project to enable these
APIs:

    dependencies {
        implementation 'com.google.mediapipe:tasks-vision:latest.release'
        implementation 'com.google.mediapipe:tasks-text:latest.release'
        implementation 'com.google.mediapipe:tasks-audio:latest.release'
    }

For specific implementation details, see the platform-specific development
[guides](/edge/mediapipe/solutions/vision/object_detector/android) for each solution
in MediaPipe Tasks.

### Python

The MediaPipe Tasks [Python API](/edge/api/mediapipe/python/mp) has a few main
modules for solutions that perform ML tasks in major domains, including vision,
natural language, and audio.
The following shows the install command and a
list of imports you can add to your Python development project to enable these
APIs:

    $ python -m pip install mediapipe

    import mediapipe as mp
    from mediapipe.tasks import python
    from mediapipe.tasks.python import vision
    from mediapipe.tasks.python import text
    from mediapipe.tasks.python import audio

For specific implementation details, see the platform-specific development
[guides](/edge/mediapipe/solutions/vision/object_detector/python) for each solution
in MediaPipe Tasks.

### Web and JavaScript

The MediaPipe Tasks [Web JavaScript API](/edge/api/mediapipe/js/tasks-vision) is
divided into packages that perform ML tasks in major domains, including vision,
natural language, and audio. The following is a list of script imports you can
add to your Web and JavaScript development project to enable these APIs:

    <head>
      <script src="https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/vision_bundle.js"
        crossorigin="anonymous"></script>
      <script src="https://cdn.jsdelivr.net/npm/@mediapipe/tasks-text/text_bundle.js"
        crossorigin="anonymous"></script>
      <script src="https://cdn.jsdelivr.net/npm/@mediapipe/tasks-audio/audio_bundle.js"
        crossorigin="anonymous"></script>
    </head>

For specific implementation details, see the platform-specific development
[guides](/edge/mediapipe/solutions/vision/object_detector/web_js) for each solution
in MediaPipe Tasks.