What About Them? Consciousness as an Ethical Constraint on Consciousness Verification Procedures
Bahen 1160
I will identify a tension between promising tests devised to verify the presence of consciousness in AI and the
risk that such tests impose novel ethical harms on conscious AIs as potential members of our moral community. That is, some of these
tests overlook the moral considerations that would apply to the AI were it conscious. As a case study, I will focus on Susan
Schneider's AI Consciousness Test (ACT), which deliberately provides the AI with an incomplete training database, stripped of all
consciousness-related material. I will object to this strategy, however, because of its potential to harm test subjects. My
argument is that the ACT creates conditions analogous to those of epistemic injustice in human contexts. My main conclusion is
that, to avoid ethical pitfalls, consciousness tests should always be made ethically suitable for conscious test subjects.
Suitability, in this sense, should be assessed with a context-based approach. In the context of the ACT, a conscious subject would
presumably have the capacity to be a knower; that is, at a minimum, the subject would know the fact of its own consciousness.
To be ethically suitable, then, the ACT should not harm test subjects in that capacity.