Abstract
While trust is foundational to the doctor-patient relationship, the introduction of AI into healthcare settings risks eroding this trust, and such erosion cannot be countered simply by appealing to the notion of “trustworthy AI.” We argue that trust presupposes specific epistemic attitudes that cannot be meaningfully applied to AI systems. Accordingly, our focus is not on specifying which capabilities AI must exhibit to appear trustworthy, but on examining from an epistemological perspective how the use of AI reshapes the dynamics of trust within the doctor-patient relationship. To this end, we first sketch conceptions of trust and show how trust differs from reliance. We then combine the model of Computational Reliabilism with an epistemic framework to develop a matrix for the ethical analysis of our use cases. Finally, we apply this framework to three scenarios: melanoma detection, risk prediction, and psychotherapy chatbots. We construct these scenarios by mapping epistemic stances across different modes of human-machine interaction, ranging from collaborative support with varying degrees of autonomy to the replacement of human-human interaction. We argue that the application of AI in the doctor-patient relationship exposes what we call a “reliability gap”: a conceptual space in which the opaque nature of advanced AI systems prevents both doctors and patients from independently verifying their reliability. This creates a dynamic in which the reliability of the AI’s performance is increasingly mediated by the doctor, who acts as a proxy. Our use cases demonstrate that the more autonomous and opaque AI systems are, the more trust in the doctor becomes essential for bridging reliability gaps, while this reliance threatens to overburden the doctor’s central role.