This document provides an overview of machine learning interpretability, defined as the ability to explain a model's decisions in terms understandable to humans. Not all systems require interpretability. The document discusses the goals of interpretability, such as building trust in models and ensuring fairness, and distinguishes global explanations of a model's overall behavior from local explanations of individual predictions. Popular techniques covered include LIME, LRP, DeepLIFT, and SHAP. LIME approximates a model's behavior around a single prediction with a simple interpretable surrogate model, while LRP, DeepLIFT, and SHAP attribute importance to individual input features of a prediction.
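The LIME idea described above can be sketched in a few lines: perturb the instance being explained, query the black-box model on the perturbed samples, weight the samples by proximity to the instance, and fit a weighted linear model whose coefficients serve as local feature importances. The following is a minimal illustration of that idea, not the `lime` library's actual API; the model `black_box`, the Gaussian perturbation scale, and the RBF kernel width are all illustrative assumptions.

```python
import numpy as np

def black_box(X):
    # Stand-in black-box model: nonlinear in feature 0, nearly ignores feature 1.
    return X[:, 0] ** 2 + 0.1 * X[:, 1]

def lime_explain(predict, x, n_samples=500, width=0.75, seed=0):
    """LIME-style sketch: local linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise (assumed perturbation scheme).
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict(Z)
    # 2. Weight each sample by proximity to x (RBF kernel).
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width ** 2)
    # 3. Fit a weighted linear model (with intercept) as the local surrogate.
    A = np.hstack([Z - x, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # per-feature local importances (drop the intercept)

x = np.array([1.0, 0.0])
importances = lime_explain(black_box, x)
# Near x = (1, 0), the model's local slope is ~2 in feature 0 and 0.1 in
# feature 1, so the surrogate should rank feature 0 as far more important.
print(importances)
```

In a real deployment one would use the `lime` or `shap` packages, which add interpretable feature representations (e.g. superpixels for images) and more careful sampling, but the core mechanism is the weighted local fit shown here.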