Running inference on models with metadata can take as little as
a few lines of code. LiteRT metadata contains a rich description of
what the model does and how to use it. This description empowers code generators to
generate the inference code for you automatically, for example through the Android
Studio ML Model Binding feature or the LiteRT
Android code generator. It can also be used to
configure your custom inference pipeline.
Tools and libraries
LiteRT provides a variety of tools and libraries to serve different
tiers of deployment requirements, as follows:
Generate model interface with Android code generators
There are two ways to automatically generate the necessary Android wrapper code
for a LiteRT model with metadata:
1. Android Studio ML Model Binding is tooling available
within Android Studio for importing a LiteRT model through a graphical
interface. Android Studio automatically configures settings for the
project and generates wrapper classes based on the model metadata.
2. LiteRT Code Generator is an executable that
generates a model interface automatically based on the metadata. It currently
supports Android with Java. The wrapper code removes the need to interact
directly with ByteBuffer. Instead, developers can interact with the
LiteRT model using typed objects such as Bitmap and Rect, as sketched
after this list. Android Studio users can also access the codegen feature through
Android Studio ML Binding.
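For illustration, here is a minimal sketch of how code generated from a classification model's metadata might be used. The class name MyClassifierModel and the output accessor are hypothetical; the actual class and method names are derived from your model file and its metadata.

```java
import android.content.Context;
import android.graphics.Bitmap;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.label.Category;

public final class ClassifierExample {
  public static void classify(Context context, Bitmap bitmap) {
    try {
      // Hypothetical wrapper class generated from the model metadata.
      MyClassifierModel model = MyClassifierModel.newInstance(context);

      // Wrap the Bitmap in a TensorImage; the generated wrapper handles the
      // conversion to the underlying ByteBuffer for you.
      TensorImage image = TensorImage.fromBitmap(bitmap);

      // Run inference and read typed outputs instead of raw buffers. The
      // accessor name depends on the output tensor name in the metadata.
      MyClassifierModel.Outputs outputs = model.process(image);
      List<Category> probability = outputs.getProbabilityAsCategoryList();

      // Release resources once inference is done.
      model.close();
    } catch (IOException e) {
      // The model file could not be loaded.
    }
  }
}
```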
Build custom inference pipelines with the LiteRT Support Library
LiteRT Support Library is a cross-platform library
that helps you customize the model interface and build inference pipelines. It
contains a variety of utility methods and data structures for pre- and post-
processing and data conversion. It is also designed to match the behavior of
TensorFlow modules, such as TF.Image and TF.Text, ensuring consistency from
training to inference.
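For example, here is a minimal sketch of image pre-processing with the Support Library, assuming a hypothetical model that expects a 224x224 float input normalized to [-1, 1]; the sizes and normalization parameters are illustrative and should come from your model's metadata.

```java
import android.graphics.Bitmap;
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.common.ops.NormalizeOp;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;

public final class PreprocessExample {
  public static TensorImage preprocess(Bitmap bitmap) {
    // Chain pre-processing ops: resize to the model's input size, then
    // normalize pixel values to [-1, 1] with (value - 127.5) / 127.5.
    ImageProcessor imageProcessor =
        new ImageProcessor.Builder()
            .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
            .add(new NormalizeOp(127.5f, 127.5f))
            .build();

    // Load the Bitmap into a TensorImage and run the processing pipeline.
    TensorImage tensorImage = new TensorImage(DataType.FLOAT32);
    tensorImage.load(bitmap);
    return imageProcessor.process(tensorImage);
  }
}
```

The resulting TensorImage can then be passed to the interpreter or to a generated wrapper class as the model input.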
Explore pretrained models with metadata
Browse Kaggle Models to
download pretrained models with metadata for both vision and text tasks. Also
see the different options for visualizing the
metadata.
LiteRT Support GitHub repo
Visit the LiteRT Support GitHub repo (https://github.com/tensorflow/tflite-support) for more examples and source
code.