The document covers automatic and interpretable machine learning with H2O and LIME. After an introduction and agenda, it explains why model interpretability matters, introduces the LIME framework for locally interpreting the predictions of complex machine learning models, and describes H2O AutoML for automatically training and tuning many candidate models. A worked regression example on the Boston Housing dataset trains a random forest model and explains its individual predictions locally with LIME.
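The core LIME idea described above can be sketched without the `lime` package itself: perturb the instance being explained, query the black-box model on the perturbations, weight each perturbation by its proximity to the instance, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. The sketch below is a minimal, hand-rolled illustration of that recipe (not the document's actual code); since `load_boston` has been removed from recent scikit-learn releases, a synthetic regression dataset stands in for Boston Housing.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Synthetic stand-in for the Boston Housing data (assumption:
# load_boston is no longer shipped with current scikit-learn).
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

# The "black box" to be explained: a random forest, as in the document.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def lime_explain(model, x, X_train, n_samples=1000):
    """LIME-style local explanation for one instance x:
    perturb x, weight perturbations by proximity, and fit a
    weighted linear surrogate to the model's predictions."""
    rng = np.random.default_rng(0)
    scale = X_train.std(axis=0)
    # Gaussian perturbations around the instance, one row per sample.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    preds = model.predict(Z)
    # Exponential kernel on scaled Euclidean distance (LIME's default
    # kernel width heuristic is 0.75 * sqrt(n_features)).
    dists = np.sqrt((((Z - x) / scale) ** 2).sum(axis=1))
    kernel_width = 0.75 * np.sqrt(x.shape[0])
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # Weighted linear surrogate; its coefficients are the local
    # feature attributions for this instance.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_explain(model, X[0], X)
print(len(coefs))  # one attribution per feature
```

The surrogate's coefficients approximate the model's behavior only in the neighborhood of the explained instance, which is exactly the "local" fidelity that LIME trades for global accuracy.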