[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-08 UTC."],[],[],null,["# MediaPipeTasksVision Framework Reference\n\nRunningMode\n===========\n\n enum RunningMode : UInt, @unchecked Sendable\n\nMediaPipe vision task running mode. A MediaPipe vision task can be run with three different\nmodes: image, video and live stream.\n- `\n ``\n ``\n `\n\n ### [image](#/c:@E@MPPRunningMode@MPPRunningModeImage)\n\n `\n ` \n The mode for running a mediapipe vision task on single image inputs. \n\n #### Declaration\n\n Swift \n\n case image = 0\n\n- `\n ``\n ``\n `\n\n ### [video](#/c:@E@MPPRunningMode@MPPRunningModeVideo)\n\n `\n ` \n The mode for running a mediapipe vision task on the decoded frames of a video. \n\n #### Declaration\n\n Swift \n\n case video = 1\n\n- `\n ``\n ``\n `\n\n ### [liveStream](#/c:@E@MPPRunningMode@MPPRunningModeLiveStream)\n\n `\n ` \n The mode for running a mediapipe vision task on a live stream of input data, such as from the\n camera. \n\n #### Declaration\n\n Swift \n\n case liveStream = 2"]]