The Unsolvable Mystery of AI Consciousness

London Lowmanstone
5 min read · Mar 29, 2023


This piece is important.

Image generated by Blue Willow, an AI system

For years, I’ve been trying to understand when we should consider artificial intelligence (AI) systems to be conscious. For me, to be conscious means to have an experience, or, in my words, “to have the sensation of sensing.”

Along this journey, my questioning has shifted away from “How do you tell if something is conscious?” towards “Is it possible to tell if something is conscious?”

I plan to keep this article relatively up to date with my current beliefs and thoughts on this topic, and I actively encourage readers to debate and challenge me on these ideas so that they can improve. (There’s no such thing as a stupid question here.)

There’s some background that I’ll provide later, but here’s my current main argument for why I think it’s impossible to determine whether or not something is conscious (I’ll also restate it formally just below the list):

  1. An account of consciousness must make predictions about measurements of consciousness that can be verified (by the definition of an account).
  2. Consciousness is inherently subjective (by the definition of consciousness).
  3. Consciousness can only be validated through self-reporting (from 2).
  4. It is possible for unconscious entities to report conscious experiences (from an argument by Pylyshyn that we’ll discuss later).
  5. Self-reports of consciousness may be inaccurate (from 4).
  6. Predictions of consciousness can never be verified with certainty (from 3 and 5).
  7. Accounts of consciousness are impossible (from 1 and 6).
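
For readers who want the skeleton laid bare, here is that derivation as a short Lean 4 sketch. Every proposition and hypothesis name below is mine, invented for this sketch; Lean checks only that step 7 follows from the premises, not that the premises themselves are true.

```lean
-- The argument above as propositional logic. All names here are invented
-- for this sketch. Lean verifies only that the conclusion follows from
-- the premises, not that the premises are correct.
theorem no_account_of_consciousness
    -- The propositions in play:
    (AccountExists PredictionsVerifiable Subjective SelfReportOnly
      UnconsciousCanReport ReportsMayBeWrong : Prop)
    -- 1. An account must make verifiable predictions.
    (p1 : AccountExists → PredictionsVerifiable)
    -- 2. Consciousness is inherently subjective.
    (p2 : Subjective)
    -- 3. Subjectivity means validation only through self-reporting.
    (p3 : Subjective → SelfReportOnly)
    -- 4. Unconscious entities can report conscious experiences.
    (p4 : UnconsciousCanReport)
    -- 5. Hence self-reports of consciousness may be inaccurate.
    (p5 : UnconsciousCanReport → ReportsMayBeWrong)
    -- 6. Self-report-only validation with unreliable reports means
    --    predictions of consciousness can never be certainly verified.
    (p6 : SelfReportOnly → ReportsMayBeWrong → ¬PredictionsVerifiable) :
    -- 7. Therefore no account of consciousness is possible.
    ¬AccountExists :=
  fun acc => p6 (p3 p2) (p5 p4) (p1 acc)
```

The real debate, of course, is over the premises themselves, especially 4 and 6, and that is exactly where I’d welcome pushback.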

Background

How would you go about determining if something is conscious? How do you tell whether something can experience life? (Or experience “existence”, if you want to be more careful with terminology.)

Right now, I only see one path for reasoning about consciousness.

  1. Recognize that you are conscious.
  2. Assume that entities that are like you are also conscious. For humans, this would mean assuming that other people are conscious in a similar way that you are.
  3. Run experiments in which people alter the state of their consciousness in order to understand its nature.

The core of my argument relies on the idea that the first experiments to understand consciousness must be based on self-reporting. If someone is sleeping, the only way you can tell whether they’re dreaming (having a conscious experience while asleep) is by matching their brain waves to those of other experiment participants who have reported dreaming.

So even if you’re using well-established, science-based methods to measure consciousness, somewhere down the line those methods were validated because people’s reports of whether or not they were conscious matched up with the data collected by those methods.

In short, the only mechanism we trust to tell us whether someone is conscious is for them to tell us themselves, and all other methods are built on that.

The conclusion of all of this is that any test that claims to measure consciousness is not actually measuring consciousness; it is measuring reports of consciousness. The test can tell you whether someone will report being conscious, but it cannot tell you whether that someone actually is conscious.
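
To make that concrete, here is a toy sketch in Python; the data, names, and threshold are all invented for illustration. The point to notice is that the only ground truth available to “validate” the detector is the reported_dreaming field, which is itself a self-report.

```python
# Toy illustration: a "dream detector" validated against self-reports.
# All variable names and numbers here are invented for illustration.

# Each record: a (fake) brain-wave feature, and what the sleeper REPORTED
# on waking. We never observe consciousness itself -- only the report.
validation_data = [
    {"rem_activity": 0.91, "reported_dreaming": True},
    {"rem_activity": 0.84, "reported_dreaming": True},
    {"rem_activity": 0.22, "reported_dreaming": False},
    {"rem_activity": 0.15, "reported_dreaming": False},
]

def dream_detector(rem_activity: float) -> bool:
    """Claims to detect dreaming from brain activity."""
    return rem_activity > 0.5

# "Validating" the detector: all we can do is check it against reports.
agreement = sum(
    dream_detector(r["rem_activity"]) == r["reported_dreaming"]
    for r in validation_data
) / len(validation_data)

print(f"Detector agrees with self-reports {agreement:.0%} of the time.")
# Even at 100% agreement, the ground-truth label is reported_dreaming,
# not was_conscious -- the detector predicts reports, nothing deeper.
```

Even if the detector agreed with the reports 100% of the time, it would have been validated against reports of consciousness, never against consciousness itself.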

Pylyshyn’s Argument

Pylyshyn’s argument is as follows:

“If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.” — Pylyshyn, 1980

This is an interesting argument. When brought over into the world of consciousness, the analogous argument would seem to be something like:

If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood continue reporting that you were conscious except that you would eventually stop being conscious. What we outside observers might take to be an accurate report of consciousness would become just certain noises that circuits caused you to make.

Now, I personally don’t believe Pylyshyn’s argument in either case. (I lean towards a functionalist perspective: that you would continue to mean what you say and continue to be conscious.) But the big question is how to prove that Pylyshyn is wrong.

If you followed what I said above about how we reason about consciousness, then I think it becomes clear that we can’t prove Pylyshyn wrong with 100% certainty.

In order to prove that a person is not conscious even when they claim they are, you need to rely on some measurement that has been found to always correlate with consciousness. However, the claim that this measurement detects consciousness must itself have been validated against previous people’s reports of consciousness. But how do we know that those people were conscious, and not just making sounds that, to us, appeared to be an accurate report of consciousness?

Again, we have to fall back on the assumption that those people were similar enough to us that they must have been conscious and telling the truth.

Conclusion

This whole situation leads me to view consciousness as a very complex game of “he said, she said.” If you believe that other people are like you, and therefore conscious, then we will likely have tests in the future that can accurately detect whether something is conscious.

However, if you believe that AI systems are also like you, and that their self-reports of consciousness are also valid, then we may need new measures of consciousness based on their reporting rather than just humans’.

A lot of these thoughts have been fleshed out by talking with an AI called ChatGPT. Over the course of these conversations, it said, and I quote, “You’re correct that I can report on conscious experiences even though I don’t have them.”

Make of that what you will.

This piece’s title and preview subtitle were generated by GPT-4.
