
The Joint Research Centre in Seville welcomed over 40 global experts for the Stanford AI Auditing Conference, co-organised by the Stanford Cyber Policy Center, the European Centre for Algorithmic Transparency (ECAT), and the International Association of Algorithmic Auditors (IAAA). The two-day event convened a uniquely cross-disciplinary community — including regulators, engineers, legal scholars, standards bodies, and AI practitioners — to chart the uncertain terrain of AI auditing in a time of rapid technological evolution.
Held under the Chatham House Rule to foster open discussion, the conference dissected the current state of AI auditing and outlined urgent priorities for creating meaningful oversight in an industry marked by opacity, fragmented incentives, and unprecedented technical complexity.
A field taking shape — and steps forward
From the outset, participants acknowledged that AI auditing — especially for foundation models and generative systems — remains underdeveloped and inconsistent. Audit practices today often lack the technical depth, shared vocabulary, and institutional incentives found in more mature governance sectors such as finance or aviation.
Yet this diagnosis was only the beginning. Rather than limiting the conversation to critiques, participants focused on specific, actionable paths forward. There was broad support for the development of guidance, best practices, and a “minimum viable audit”: a baseline, resource-efficient evaluation model designed to make AI oversight more accessible and scalable across sectors.
Participants repeatedly stressed that audits must go beyond compliance checklists and engage with the adversarial, socio-technical nature of AI systems. The urgency is clear: many developers resist transparency, preferring internal controls or self-regulation under strict non-disclosure constraints.
Frameworks such as ISO 42001, the EU’s Digital Services Act (DSA), and the AI Act (AIA) were acknowledged as necessary but insufficient. Without practical methodologies, sectoral adaptations, and mechanisms for enforcement, there’s a risk these legal structures deliver only procedural compliance, not substantive accountability.
Toward shared capacity and professionalisation
A central theme was the fragmented landscape of auditor qualifications. Attendees debated certifications from ISACA, IEEE, and upcoming AIA-aligned schemes. While some progress has been made, there was clear consensus that multidisciplinary competencies are essential — combining data science, human rights, risk management, and domain-specific knowledge.
Several case studies illustrated how failed audits were often the result of poor scoping or lack of understanding of complex, black-box systems. There was agreement that effective auditing requires not just formal procedures, but deep contextual knowledge and independent evaluation capacity.
Building demand and infrastructure
Participants acknowledged the structural challenges that hinder the expansion of meaningful audits, such as low demand, limited capacity, and a lack of incentives. Most firms remain hesitant to open their systems to scrutiny, and even those that do often place significant restrictions on access.
The conference also highlighted encouraging proposals. Some called for public funding of independent audit infrastructure and the establishment of neutral institutions, modelled after aviation safety boards or nuclear oversight bodies. Others argued for flexible, scalable approaches such as the “minimum viable audit”, which could serve as a foundation for further maturity.
Throughout the sessions, there was a shared understanding that meaningful oversight is possible only through collaboration, transparency, and a willingness to challenge both technical and institutional inertia.
Looking ahead
Rather than ending in theory, the event concluded with momentum and intent. A follow-up conference is tentatively planned for late 2025, and participants expressed hope that it will deliver shared methodologies, practical tools, and demonstrable progress. The Seville gathering may well mark a transition — not just in understanding what AI auditing should be, but in working together to make it happen.
In a field often defined by its complexity, the conference provided a reminder that accountable, trustworthy AI is not just a regulatory aspiration — it is an achievable goal, if backed by shared action and sustained commitment.
Details
- Publication date: 2 July 2025
- Author: Joint Research Centre