This document summarizes a presentation on achieving trustable artificial intelligence (AI) for complex systems. The presentation addresses three areas: making data, systems understanding, and AI algorithms more trustable. To make data more trustable, it suggests extracting data more deeply, integrating multi-modal data more widely, and augmenting limited data. Systems understanding benefits from a holistic view and from balancing simplification against necessary complexity. Trust in algorithms improves by moving beyond correlation to causation and by explaining AI models rather than treating them as black boxes. The overall goal is explainable, reliable AI that humans can confidently use to understand and manage complex life-science and information-technology systems.
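One common way the "explain rather than treat AI as a black box" idea is made concrete is permutation feature importance: shuffle one input feature at a time and measure how much the model's error grows. A minimal pure-Python sketch follows; the toy dataset, the stand-in `model` function, and the feature names are all illustrative assumptions, not content from the presentation.

```python
import random

# Hypothetical toy data: y depends strongly on feature 0, weakly on feature 1.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [3.0 * a + 0.2 * b for a, b in X]

def model(row):
    # Stand-in for any trained "black box" model; here simply the true function.
    return 3.0 * row[0] + 0.2 * row[1]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

baseline = mse([model(r) for r in X], y)

def permutation_importance(feature):
    # Shuffle one feature's column, keep the rest, and report the error increase.
    shuffled = [row[feature] for row in X]
    random.shuffle(shuffled)
    perturbed = [row[:feature] + [v] + row[feature + 1:]
                 for row, v in zip(X, shuffled)]
    return mse([model(r) for r in perturbed], y) - baseline

importances = [permutation_importance(i) for i in range(2)]
# Feature 0 should score far higher than feature 1, matching its larger weight.
```

Even this crude probe gives a human-readable ranking of which inputs the model actually relies on, which is the kind of explanation the presentation argues builds trust.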