The document describes research on human-centered AI and interactive explanation methods. It discusses explainable AI and its two goals: explaining model outcomes to increase user trust and acceptance, and enabling users to interact with the explanation process to improve the underlying models. It then gives an overview of the Augment/HCI research group at KU Leuven and its work on explanation methods, recommendation techniques, and the evaluation of explanations through user studies.