The presentation discusses advancements in detecting deepfake media through explainable AI methods, emphasizing the need for more transparent detection processes. It proposes an evaluation framework that uses explanation mechanisms to enhance both user trust and the effectiveness of deepfake detectors. A comparative analysis shows that, among the explanation methods evaluated, LIME consistently performs best at explaining deepfake detection decisions across the various classes of fake videos.
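To illustrate how such an explanation mechanism might be attached to a detector, the following is a minimal sketch using the `lime` package to highlight which image regions drive a detector's real/fake prediction. The `detector_predict` function and the input frame are hypothetical placeholders, not the presentation's actual model or data.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Hypothetical stand-in for a trained deepfake detector: takes a batch of
# RGB frames (N, H, W, 3) and returns per-class probabilities [real, fake].
def detector_predict(frames: np.ndarray) -> np.ndarray:
    n = frames.shape[0]
    fake_scores = np.random.rand(n, 1)  # placeholder scores; swap in the real model
    return np.hstack([1.0 - fake_scores, fake_scores])

# A single video frame to explain (assumed already extracted and resized).
frame = np.random.rand(224, 224, 3)  # placeholder frame

# LIME perturbs superpixels of the frame and fits a local surrogate model
# to estimate which regions push the prediction toward "fake".
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    frame,
    detector_predict,
    top_labels=2,
    hide_color=0,
    num_samples=1000,
)

# Overlay the most influential superpixels for the top predicted label.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)
```

An evaluation framework of the kind described could then compare such overlays produced by different explanation methods on the same detector and fake-video classes.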