The document describes a Kubeflow Pipelines component for KFServing that makes KFServing's model-serving capabilities available from within Kubeflow Pipelines. The component uses the KFServing Python SDK to deploy InferenceServices and to perform canary rollouts, and a sample pipeline demonstrates deploying a TensorFlow model. The document's analysis concludes that passing a raw InferenceService YAML specification to the component is the most flexible deployment path, since it exposes the full customizability of the InferenceService spec rather than only a fixed set of flat parameters.
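To make the workflow concrete, below is a minimal sketch of a pipeline that loads such a component and deploys a TensorFlow model, then rolls out a second revision as a canary. It assumes the KFP v1 SDK; the component URL and the parameter names used here (`action`, `model_name`, `model_uri`, `framework`, `canary_traffic_percent`, `namespace`) are assumptions based on the component's typical interface and may differ in a given release.

```python
# A minimal sketch, assuming the KFP v1 SDK and that the KFServing
# component is published as a component.yaml in the kubeflow/pipelines
# repository. Parameter names below are assumptions and may differ.
import kfp
from kfp import dsl

# Hypothetical location of the component definition; adjust to the
# actual path in the kubeflow/pipelines repository.
KFSERVING_COMPONENT_URL = (
    "https://raw.githubusercontent.com/kubeflow/pipelines/"
    "master/components/kubeflow/kfserving/component.yaml"
)
kfserving_op = kfp.components.load_component_from_url(KFSERVING_COMPONENT_URL)


@dsl.pipeline(
    name="kfserving-tensorflow-deploy",
    description="Deploy a TensorFlow model as a KFServing InferenceService.",
)
def deploy_pipeline(
    model_name: str = "flowers-sample",
    model_uri: str = "gs://kfserving-samples/models/tensorflow/flowers",
    canary_model_uri: str = "gs://kfserving-samples/models/tensorflow/flowers-v2",
    namespace: str = "kubeflow",
):
    # Step 1: create (or update) the InferenceService for the model.
    deploy = kfserving_op(
        action="apply",
        model_name=model_name,
        model_uri=model_uri,
        framework="tensorflow",
        namespace=namespace,
    )

    # Step 2: roll out a new revision as a canary, sending a small
    # fraction of traffic to it before full promotion (the
    # canary_traffic_percent input is an assumed parameter).
    kfserving_op(
        action="apply",
        model_name=model_name,
        model_uri=canary_model_uri,
        framework="tensorflow",
        canary_traffic_percent=10,
        namespace=namespace,
    ).after(deploy)


if __name__ == "__main__":
    # Compile to a workflow package that can be uploaded to KFP.
    kfp.compiler.Compiler().compile(deploy_pipeline, "deploy_pipeline.yaml")
```

For deployments that need settings these flat parameters do not expose, the document's preferred path is to hand the component a complete InferenceService manifest instead (for example through an `inferenceservice_yaml`-style input, an assumed name here), which keeps every field of the spec under the pipeline author's control.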