# LiteRT overview

*Last updated 2025-05-26 UTC*

| **Note:** LiteRT Next is available in Alpha. The new APIs improve and simplify on-device hardware acceleration. For more information, see the [LiteRT Next documentation](./next/overview).

LiteRT (short for Lite Runtime), formerly known as TensorFlow Lite, is Google's high-performance runtime for on-device AI. You can find ready-to-run LiteRT models for a wide range of ML/AI tasks, or convert TensorFlow, PyTorch, and JAX models to the `.tflite` format using the AI Edge conversion and optimization tools.

Key features
------------

- **Optimized for on-device machine learning**: LiteRT addresses five key ODML constraints: latency (there's no round-trip to a server), privacy (no personal data leaves the device), connectivity (internet connectivity is not required), size (reduced model and binary size), and power consumption (efficient inference and no network connections).

- **Multi-platform support**: Compatible with [Android](./android) and [iOS](./ios/quickstart) devices, [embedded Linux](./microcontrollers/python), and [microcontrollers](./microcontrollers/overview).

- **Multi-framework model options**: AI Edge provides tools to convert TensorFlow, PyTorch, and JAX models into the FlatBuffers format (`.tflite`), enabling you to use a wide range of state-of-the-art models on LiteRT.
  You also have access to model optimization tools that can handle quantization and metadata.

- **Diverse language support**: Includes SDKs for Java/Kotlin, Swift, Objective-C, C++, and Python.

- **High performance**: [Hardware acceleration](./performance/delegates) through specialized delegates, such as the GPU and iOS Core ML delegates.

Development workflow
--------------------

The LiteRT development workflow involves identifying an ML/AI problem, choosing a model that solves that problem, and implementing the model on-device. The following steps walk you through the workflow and provide links to further instructions.

### 1. Identify the most suitable solution to the ML problem

LiteRT offers users a high level of flexibility and customizability when it comes to solving machine learning problems, making it a good fit for users who require a specific model or a specialized implementation. Users looking for plug-and-play solutions may prefer [MediaPipe Tasks](https://ai.google.dev/edge/mediapipe/solutions/tasks), which provides ready-made solutions for common machine learning tasks such as object detection, text classification, and LLM inference.

Choose one of the following AI Edge frameworks:

- **LiteRT**: A flexible and customizable runtime that can run a wide range of models. Choose a model for your use case, convert it to the LiteRT format (if necessary), and run it on-device. If you intend to use LiteRT, keep reading.
- **MediaPipe Tasks**: Plug-and-play solutions with default models that allow for customization. Choose the task that solves your AI/ML problem, and implement it on multiple platforms. If you intend to use MediaPipe Tasks, refer to the [MediaPipe Tasks](https://ai.google.dev/edge/mediapipe/solutions/tasks) documentation.

### 2. Choose a model

A LiteRT model is represented in an efficient portable format known as [FlatBuffers](https://google.github.io/flatbuffers/), which uses the `.tflite` file extension.

You can use a LiteRT model in the following ways:

- **Use an existing LiteRT model:** The simplest approach is to use a LiteRT model that is already in the `.tflite` format. These models do not require any additional conversion steps. You can find LiteRT models on [Kaggle Models](https://www.kaggle.com/models?framework=tfLite).

- **Convert a model into a LiteRT model:** You can use the [TensorFlow Converter](./models/convert_tf), [PyTorch Converter](./models/convert_pytorch), or [JAX converter](./models/convert_jax) to convert models to the FlatBuffers format (`.tflite`) and run them in LiteRT. To get started, you can find models on the following sites:

  - **TensorFlow models** on [Kaggle Models](https://www.kaggle.com/models?framework=tensorFlow2) and [Hugging Face](https://huggingface.co/models?library=tf)
  - **PyTorch models** on [Hugging Face](https://huggingface.co/models?library=pytorch) and [`torchvision`](https://pytorch.org/vision/0.9/models.html)
  - **JAX models** on [Hugging Face](https://huggingface.co/models?library=jax)

A LiteRT model can optionally include *metadata* that contains human-readable model descriptions and machine-readable data for automatic generation of pre- and post-processing pipelines during on-device inference. Refer to [Add metadata](./models/metadata) for more details.

### 3. Integrate the model into your app

You can deploy LiteRT models to run inference completely on-device on web, embedded, and mobile platforms. LiteRT contains APIs for [Python](https://ai.google.dev/edge/api/tflite/python/tf/lite), [Java and Kotlin](https://ai.google.dev/edge/api/tflite/java/org/tensorflow/lite/package-summary) for Android, [Swift](https://ai.google.dev/edge/api/tflite/swift/Classes) for iOS, and [C++](https://ai.google.dev/edge/api/tflite/cc) for microcontrollers.

Use the following guides to implement a LiteRT model on your preferred platform:

- [Run on Android](./android/index): Run models on Android devices using the Java/Kotlin APIs.
- [Run on iOS](./ios/quickstart): Run models on iOS devices using the Swift APIs.
- [Run on Micro](./microcontrollers/overview): Run models on embedded devices using the C++ APIs.

On Android and iOS devices, you can improve performance using hardware acceleration. On either platform, you can use a [GPU Delegate](./performance/gpu), and on iOS you can also use the [Core ML Delegate](./ios/coreml).
To add support for new hardware accelerators, you can [define your own delegate](./performance/implementing_delegate).

You can run inference in the following ways, depending on the model type:

- **Models without metadata**: Use the [LiteRT Interpreter](/edge/litert/inference) API. Supported on multiple platforms and languages, such as Java, Swift, C++, Objective-C, and Python.

- **Models with metadata**: You can build custom inference pipelines with the [LiteRT Support Library](./android/metadata/lite_support).

Migrate from TF Lite
--------------------

Applications that use TF Lite libraries will continue to function, but all new active development and updates will only be included in LiteRT packages. The LiteRT APIs contain the same method names as the TF Lite APIs, so migrating to LiteRT does not require detailed code changes.

For more information, refer to the [migration guide](./migration).

Next steps
----------

New users should get started with the [LiteRT quickstart](./inference). For specific information, see the following sections:

**Model conversion**

- [Convert TensorFlow models](./models/convert_tf)
- [Convert PyTorch models](./models/convert_pytorch)
- [Convert PyTorch Generative AI models](./models/edge_generative)
- [Convert JAX models](./models/convert_jax)

**Platform guides**

- [Run on Android](./android/index)
- [Run on iOS](./ios/quickstart)
- [Run on Micro](./microcontrollers/overview)
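**A minimal end-to-end sketch**

To make the workflow above concrete (choose or build a model, convert it to the `.tflite` FlatBuffers format, run it with the Interpreter API), here is a minimal sketch using the Python APIs. It assumes a desktop Python environment with TensorFlow installed; the trivial `double_plus_one` function is purely illustrative and stands in for a real model. On-device deployments would instead use the platform SDKs linked above.

```python
import numpy as np
import tensorflow as tf

# A trivial stand-in for a real model: y = 2x + 1 (illustrative only).
@tf.function(input_signature=[tf.TensorSpec(shape=[1, 4], dtype=tf.float32)])
def double_plus_one(x):
    return x * 2.0 + 1.0

# Step 2 of the workflow: convert to the FlatBuffers (.tflite) format.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double_plus_one.get_concrete_function()])
tflite_model = converter.convert()  # bytes; could be written to a .tflite file

# Step 3 of the workflow: run inference with the Interpreter API.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

x = np.array([[1.0, 2.0, 3.0, 4.0]], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(output_details[0]["index"])
print(y)  # [[3. 5. 7. 9.]]
```

The same `.tflite` bytes produced here are what the Android (Java/Kotlin), iOS (Swift), and microcontroller (C++) runtimes consume, so conversion is done once and the artifact is shared across platforms.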