# MediaPipe Solutions guide

MediaPipe Solutions provides a suite of libraries and tools for you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques in your applications. You can plug these solutions into your applications immediately, customize them to your needs, and use them across multiple development platforms. MediaPipe Solutions is part of the MediaPipe [open source project](https://github.com/google/mediapipe), so you can further customize the solutions code to meet your application needs. The MediaPipe Solutions suite includes the following:
These libraries and resources provide the core functionality for each MediaPipe Solution:

- **MediaPipe Tasks**: Cross-platform APIs and libraries for deploying solutions. [Learn more](/edge/mediapipe/solutions/tasks)
- **MediaPipe Models**: Pre-trained, ready-to-run models for use with each solution.

These tools let you customize and evaluate solutions:

- **MediaPipe Model Maker**: Customize models for solutions with your data. [Learn more](/edge/mediapipe/solutions/model_maker)
- **MediaPipe Studio**: Visualize, evaluate, and benchmark solutions in your browser. [Learn more](/edge/mediapipe/solutions/studio)
Available solutions
-------------------
MediaPipe Solutions are available across multiple platforms. Each solution includes one or more models, and you can customize models for some solutions as well. The following list shows what solutions are available for each supported platform and if you can use Model Maker to customize the model:

| Solution | Android | Web | Python | iOS | Customize model |
|----------|---------|-----|--------|-----|-----------------|
| [LLM Inference API](/edge/mediapipe/solutions/genai/llm_inference) | | | | | |
| [Object detection](/edge/mediapipe/solutions/vision/object_detector) | | | | | |
| [Image classification](/edge/mediapipe/solutions/vision/image_classifier) | | | | | |
| [Image segmentation](/edge/mediapipe/solutions/vision/image_segmenter) | | | | | |
| [Interactive segmentation](/edge/mediapipe/solutions/vision/interactive_segmenter) | | | | | |
| [Hand landmark detection](/edge/mediapipe/solutions/vision/hand_landmarker) | | | | | |
| [Gesture recognition](/edge/mediapipe/solutions/vision/gesture_recognizer) | | | | | |
| [Image embedding](/edge/mediapipe/solutions/vision/image_embedder) | | | | | |
| [Face detection](/edge/mediapipe/solutions/vision/face_detector) | | | | | |
| [Face landmark detection](/edge/mediapipe/solutions/vision/face_landmarker) | | | | | |
| [Face stylization](/edge/mediapipe/solutions/vision/face_stylizer) | | | | | |
| [Pose landmark detection](/edge/mediapipe/solutions/vision/pose_landmarker) | | | | | |
| [Image generation](/edge/mediapipe/solutions/vision/image_generator) | | | | | |
| [Text classification](/edge/mediapipe/solutions/text/text_classifier) | | | | | |
| [Text embedding](/edge/mediapipe/solutions/text/text_embedder) | | | | | |
| [Language detector](/edge/mediapipe/solutions/text/language_detector) | | | | | |
| [Audio classification](/edge/mediapipe/solutions/audio/audio_classifier) | | | | | |

Get started
-----------

You can get started with MediaPipe Solutions by selecting any of the tasks listed in the left navigation tree, including [vision](/edge/mediapipe/solutions/vision/object_detector), [text](/edge/mediapipe/solutions/text/text_classifier), and [audio](/edge/mediapipe/solutions/audio/audio_classifier) tasks. If you need help setting up a development environment for use with MediaPipe Tasks, check out the setup guides for [Android](/edge/mediapipe/solutions/setup_android), [web apps](/edge/mediapipe/solutions/setup_web), and [Python](/edge/mediapipe/solutions/setup_python).

Legacy solutions
----------------

We have ended support for the MediaPipe Legacy Solutions listed below as of March 1, 2023. All other MediaPipe Legacy Solutions will be upgraded to a new MediaPipe Solution. See the list below for details. The [code repository](https://github.com/google/mediapipe/tree/master/mediapipe) and prebuilt binaries for all MediaPipe Legacy Solutions will continue to be provided on an as-is basis.

| Legacy Solution | Status | New MediaPipe Solution |
|-----------------|--------|------------------------|
| Face Detection ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/face_detection.md)) | [Upgraded](./vision/face_detector) | [Face detection](./vision/face_detector) |
| Face Mesh ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/face_mesh.md)) | [Upgraded](./vision/face_landmarker) | [Face landmark detection](./vision/face_landmarker) |
| Iris ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/iris.md)) | [Upgraded](./vision/face_landmarker) | [Face landmark detection](./vision/face_landmarker) |
| Hands ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/hands.md)) | [Upgraded](./vision/hand_landmarker) | [Hand landmark detection](./vision/hand_landmarker) |
| Pose ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/pose.md)) | [Upgraded](./vision/pose_landmarker) | [Pose landmark detection](./vision/pose_landmarker) |
| Holistic ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/holistic.md)) | Upgraded | [Holistic landmarks detection](./vision/holistic_landmarker) |
| Selfie segmentation ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/selfie_segmentation.md)) | [Upgraded](./vision/image_segmenter) | [Image segmentation](./vision/image_segmenter) |
| Hair segmentation ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/hair_segmentation.md)) | [Upgraded](./vision/image_segmenter) | [Image segmentation](./vision/image_segmenter) |
| Object detection ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/object_detection.md)) | [Upgraded](./vision/object_detector) | [Object detection](./vision/object_detector) |
| Box tracking ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/box_tracking.md)) | Support ended | |
| Instant motion tracking ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/instant_motion_tracking.md)) | Support ended | |
| Objectron ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/objectron.md)) | Support ended | |
| KNIFT ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/knift.md)) | Support ended | |
| AutoFlip ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/autoflip.md)) | Support ended | |
| MediaSequence ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/media_sequence.md)) | Support ended | |
| YouTube 8M ([info](https://github.com/google/mediapipe/blob/master/docs/solutions/youtube_8m.md)) | Support ended | |

Last updated 2025-07-24 (UTC).