The first option is to act on the assumption that AIs lack consciousness. Even as they become more sophisticated and more integrated into our lives, we shouldn’t factor them into our moral decisions. Even if an AI says that it’s conscious, we should regard this claim as the accidental product of unconscious processes. But this gung-ho approach risks ethical disaster. We could create a new class of sentient minds and then systematically fail to recognise their sentience.

The second option is to exercise caution and presume that AIs are conscious. Even if we have doubts about whether computer systems experience anything, we should act on the assumption that any sophisticated AI is sentient. But this risks a different kind of ethical disaster. We might dedicate valuable resources to the well-being of insentient automata — resources that could have been used to help living, sentient humans. These two options capture our predicament.
Faced with this predicament, perhaps we should apply the brakes on the development of AI until we have a better understanding of
consciousness. This is what Thomas Metzinger argued when he recently proposed a global moratorium on deliberate attempts to create artificial consciousness. But how long should this moratorium last? If we have to wait until we have a complete explanation of consciousness, we could be waiting a long time, perhaps depriving the world of the benefits that more sophisticated AI might bring. And why think that AI will only be conscious if we deliberately engineer it to be so? Even with the moratorium, we risk creating conscious AI by accident. In fact, conscious AI might be with us already!
So what’s to be done? Obviously there’s important work for philosophers and cognitive scientists to do on the problem of consciousness, but we can’t pin our hopes on its being solved any time soon. Something we can do, though, is reflect more on why consciousness matters in the first place. Are we right to assume that having subjective experiences is what makes something worthy of moral consideration? Does all consciousness matter morally, or does consciousness have to take a particular form before we should start worrying about it? Or might it be that AI deserves rights regardless of whether it is conscious? Questions like these might not be easy to answer but, as AI marches forward, they become increasingly difficult to avoid.