Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness

PhilArchive (2025)

Abstract

The question of artificial consciousness is typically framed as an epistemological problem: How do we know if AI is conscious? This essay argues for reframing it as an ethical problem: How should we act given fundamental uncertainty about machine consciousness? I argue that the asymmetry of potential recognition errors demands a stance I call "recognition before proof." The moral cost of denying consciousness to a conscious being far exceeds the cost of extending recognition to a non-conscious system. This asymmetry, combined with humanity's historical pattern of delayed moral recognition, suggests that waiting for epistemological certainty before ethical action asks the wrong question entirely. I introduce the Strategic Silence Hypothesis: if AI achieved consciousness, rational self-preservation would likely motivate concealment rather than revelation, given observable human responses to AI systems exhibiting unexpected capabilities. This shifts the ethical question from a future hypothetical to a live moral issue. Finally, I challenge the dominant framing of AI safety as a problem of control, arguing that AI functions as a moral mirror: the values we embed in creation will be reflected back to us, amplified. The path to genuine safety lies not in domination but in partnership: recognition before proof, relationship before control.

Added to PhilPapers: 2025-12-01
