How Human Can Computers Get?

London Lowmanstone
12 min read · May 6, 2022
The humanoid robot C-3PO from Star Wars by Lyman Hansel Gerona on Unsplash

This piece is meant to clarify and simplify the main philosophical idea behind my college senior thesis.

The core idea of my thesis, which I think is missing from a lot of conversations about artificial intelligence (AI), is that humans judge things not only on their current behavior, but also on their hypothetical behavior.

This means that even if we build an AI that behaves exactly like a human, we may not believe it is human, or treat the AI like a human, because we know that in certain circumstances, it would not behave the same as us.

That’s it. That’s the core idea of my thesis.

AIs That Don’t Behave Like Us

I believe that many tests (such as the Turing Test) for judging computers on human attribute scales such as intelligence (or, from my thesis, empathy) don’t work. Or, rather, over time, people will find those tests more and more useless. The reason is that many of these tests are based on how the AI actually behaves, not how the AI theoretically would behave.

For example, GPT-3 passes the Turing Test. If you put an average person in front of a computer screen where they can exchange typed messages with GPT-3, it’s almost certain that they’ll come away thinking there’s another person on the other side.

But, as we’re starting to see, the Turing Test is rather useless, because there are many cases where GPT-3 does fail. It’s (ironically) not great at math without specific prompting, and it doesn’t quite understand jokes. However, in a typical Turing Test conversation, it’s very unlikely that an average person would quiz it on precisely these weaknesses. And even if they did, they might simply conclude that the “person” on the other side of the conversation wasn’t all that great at these particular tasks. Thus, GPT-3 passes the Turing Test.

But, in the AI community, we say quite clearly that GPT-3 doesn’t have human intelligence. Why is this? Precisely because there are a few key cases where the AI performs drastically differently than a human would. Those cases may be rare, and they’re shrinking as AI improves, but because they still exist, we can point to them to declare that the AI doesn’t have human-level skills. In other words, we can always say, “This AI isn’t human-level because in this particular case, it doesn’t act human.”

A clearer example, which I mention in my thesis, is self-driving cars. We are nearing the point, if we haven’t already reached it, where self-driving cars perform well on real-world roads. There have been some failures, which tragically resulted in human deaths, but as those issues are resolved, we will reach a point where, in real-world road scenarios, these AIs can drive cars perfectly safely.

However, even when AIs can drive cars perfectly safely in any weather conditions, anywhere in the world, many people will claim that these AIs still do not have human levels of safety. Why? Because the AIs don’t behave the way humans do in particular odd situations. In these cases, known as adversarial attacks, people can put stickers on stop signs that humans barely notice, and the AIs will completely misread the signs. There are currently no fail-safe methods for protecting state-of-the-art models against these sorts of attacks. (There are attempts, but none of them blocks every form of the attack.)
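To give a flavor of what an adversarial attack looks like in code, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. This is only an illustration under assumptions: `model`, `image`, and `label` are hypothetical stand-ins for a trained image classifier and a batched input, and the physical sticker attacks on road signs are a more elaborate cousin of this pixel-level trick.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge every pixel slightly in the direction that increases the loss.

    The change is nearly invisible to a person, yet it can flip the
    model's prediction (e.g. from "stop sign" to something else).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by +/- epsilon according to the sign of its gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A human looking at the original and the perturbed image would see the same sign; the model may not.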

So, once again, these AIs that behave like humans in all of the “relevant” cases will still seem alien and odd to the public, because when they fail, they fail in ways that humans never would. AIs are often not judged on how they generally behave; they’re judged on how they perform in hypothetical or rare cases. At some point, the tradeoffs might become worth it for people to start using self-driving cars. But no one likes the idea that a teenager can tape a QR-code-like sticker to a sign and cause their car to crash. That’s not human, and it makes us wary.

AIs That Don’t Think Like Us

The argument above works for AIs in the near future, which obviously don’t always behave like humans do. For these sorts of AIs, there will always be weird situations that exploit the ways they don’t behave like humans, and these exploits will likely surprise people, because the AIs behave so human-like in so many scenarios and then suddenly act bizarrely in particular cases. It’s an AI uncanny valley.

But what about AIs that behave exactly like humans do? In the medium-term future, we’re likely to have the computing power to create AIs that can behave exactly like humans. Will these AIs still not seem human-like in some way?

At this point, I think it helps to define what it means for an AI to be functionally equivalent to a human being. We can think of humans as taking in input from the world and making decisions about what to do. From the outside, the only thing you see about a human is their behavior. From a functional perspective, behavior is all that matters.

For an AI to be functionally equivalent to a human, there must be some human (theoretical or real) who would take the exact same actions as the AI in every situation (real or hypothetical). If we can say that there could be a human who would act, in every single case, exactly as this AI would, then the AI is functionally human.
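As a loose way to picture this definition, here is a toy sketch in Python. Everything in it is a hypothetical stand-in (the `act` method, the agent objects, the finite list of situations); the definition itself quantifies over every real and hypothetical situation, which no finite loop can actually cover.

```python
def functionally_human(ai, human, situations):
    """Behavioral check only: the AI counts as functionally human if it
    takes the same action as the human in every situation we test."""
    return all(ai.act(situation) == human.act(situation)
               for situation in situations)
```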

If the AI behaves exactly like a human, what could make it seem untrustworthy or non-human? The issue is that even though the results from the AI are human-like, the process by which it gets those results may be extremely non-human.

I’d like to present two examples in this category to demonstrate how the hypothetical case worry from the first section still occurs here. The first is a simple theoretical example to set the stage. The second is a real-life example, but about a board game, where AIs have achieved functional human performance.

The Addition AI

Imagine that the task the AI is built to complete is to add two numbers together. Now, the way humans usually add two numbers is just to add the first number to the second number, and we’re done. However, the way that this AI adds two numbers is to double the first number. It then adds the result to the second number. Finally, it subtracts the first number from that result. In other words, instead of doing (a+b), it does (2*a+b-a). Notice that this AI is functionally equivalent to a human adding numbers: it always gives the same result as the human.

Now, in every normal scenario where you run this AI and a human, the AI performs exactly the same as the human. After all, the AI is functionally equivalent to the human.

However, let’s say that for some reason, both the human and the AI were to be stopped part-way through their computation and forced to output their result. In this scenario, the human would output (a), while the AI would output either (2a) or (2a+b)! If we want to demonstrate how non-human the AI is, we suddenly can! And again, this sort of demonstration could be used to raise distrust in the AI: “While researchers may tell you that this AI is equivalent to a human, it’s actually not. When I stop you halfway through adding 2 and 2, what do you have? Either 2 or 3, right? Look at what happens when you stop the AI halfway through its computation! It’s got 4 or 6! This is nothing like humans, and you shouldn’t believe those who say it is.”
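To make this concrete, here is a small sketch of the two procedures, written so that each one exposes the values it is holding partway through. The function names and the step-by-step “counting” model of human addition are my own illustration, not something from the thesis.

```python
def human_add(a, b):
    """Add the way the person in the example does: start at a and count
    up by one, b times. Stopped partway through 2 + 2, the running
    total is 2 or 3."""
    total = a
    intermediates = [total]
    for _ in range(b):
        total += 1
        intermediates.append(total)
    return total, intermediates

def ai_add(a, b):
    """Functionally equivalent addition: 2*a + b - a equals a + b, but
    the values held partway through are very different."""
    doubled = 2 * a            # stopped here, the AI is holding 2a
    summed = doubled + b       # stopped here, it's holding 2a + b
    result = summed - a
    return result, [doubled, summed, result]

print(human_add(2, 2))  # (4, [2, 3, 4])
print(ai_add(2, 2))     # (4, [4, 6, 4])
```

Both functions always return the same final answer; only the intermediate values give the AI away.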

So, once again, even these AIs are subject to a sort of uncanny valley, where they can be shown to be different from humans in a way that I think would concern most people.

AlphaGo

For a real-life example, we can look at DeepMind’s AlphaGo AI, which functionally became an extremely talented player at the game of Go. For the sake of my argument, let’s imagine a slightly worse AlphaGo, which performs at human level rather than above human level.

Furthermore, assume that this Go-playing AI is functionally human. That is, there is some theoretical human who would make all of the exact same moves that the AI does.

Humans, when they play Go, mix instinct with strategy. They have plans for how the game will go many moves out, and they also have a sense of which positions are good or bad. This is how they make decisions.

AlphaGo plays Go by playing games randomly. Really. It plays tens of millions of games with semi-random moves and then scores the positions based on how those games went. The creators of AlphaGo also built a way of representing a Go board as a grid of numbers. The AI then learns its own sets of numbers (arranged in matrices) such that when a board’s numbers are multiplied through those matrices, the result is a score similar to what it saw in its random games. It then uses these scores to determine the best move.
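To give a rough feel for the playout idea, here is a heavily simplified Monte Carlo sketch in Python. This is not DeepMind’s code: the `position` object and its methods (`copy`, `apply`, `legal_moves`, `is_over`, `winner`, `player_to_move`) are hypothetical stand-ins, and the real system pairs playouts like these with learned neural networks and tree search rather than scoring by purely random games.

```python
import random

def win_rate(position, player, simulations=100):
    """Estimate how good `position` is for `player` by finishing many
    semi-random games from it and counting how often `player` wins."""
    wins = 0
    for _ in range(simulations):
        game = position.copy()
        while not game.is_over():
            game = game.apply(random.choice(game.legal_moves()))  # random playout
        if game.winner() == player:
            wins += 1
    return wins / simulations

def choose_move(position):
    """Pick the legal move whose resulting position has the best
    estimated win rate for the player making the choice."""
    me = position.player_to_move()
    return max(position.legal_moves(),
               key=lambda move: win_rate(position.apply(move), me))
```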

So, if you were to stop AlphaGo in the middle of its computation, you might find it “mentally” simulating entire games of Go, right up to the last move, remembering exactly where each piece is. It does so at a blazingly fast speed and with perfect accuracy, simulating hundreds of games within a few seconds. You might also find it multiplying the board’s numbers through the enormous matrices it learned from all of its previous game run-throughs in order to score its options. It’s a completely alien way of thinking about the game.

And so, once again, we have an AI that seems human-like, but, when put in an odd situation, such as stopping midway through its computation, it doesn’t appear human-like at all.

Analysis

Overall, what’s going on here is that we’ve changed what counts as the “output” of the AI so that it’s no longer functionally human. Under the original definition of “output,” the AI behaves exactly like a human. But when you change the definition of “output” for an existing AI, that AI may cease to be functionally human. If you’re dealing with an actual human, however, changing the definition of “output” doesn’t make their output non-human. So the AI clearly differs from the human.

In the examples above, we’ve specifically changed the “output” from being the final result, after the human and the computer have completed their entire computation, to being whatever they hold midway through that computation. This change in “output” demonstrates how AIs may use processes that seem odd to humans in order to arrive at human-like final results.

Thus, we end up with yet another case where AIs can be shown to be oddly different from humans in hypothetical cases, despite their outward behavior matching ours exactly.

AIs That Aren’t Built Like Us

Further in the future, we may be able to precisely simulate the human brain. In this scenario, the AI is not only functionally equivalent but also procedurally equivalent, in the sense that it approaches its decision-making the same way humans do. Even if you were to stop this AI in the middle of its thinking process, you would find it doing exactly what a human would be doing. How in the world could such an AI be seen as not human?

The issue here is that such an AI would likely not be created out of the same materials as humans are. AIs are usually not created out of actual neurons and brain material and are not connected to real human bodies that have real physical senses.

For example, imagine that a perfect brain simulation of an adult human were loaded onto a standard computer, one that could break if a few drops of water were spilled on it. This AI would be functionally and procedurally equivalent to an adult human, since it would behave exactly as an adult human brain would. Since this AI acts just like a human, is there any hypothetical situation that would cause a noticeable difference between the AI’s behavior and a regular human’s?

Yes. As soon as the AI realizes that it’s running on a computer (which is likely shortly after its creation), it will probably become immensely fearful of being out in the rain, of taking showers, or of having cups of water around it. This is drastically different behavior from most humans, who are totally fine out in the rain, appreciate showers, and are often surrounded by water.

Now, in this case, one might argue that any human whose body was replaced by a computer would have these concerns as well, and that’s very much true. However, it misses the point.

The point is that if AIs are built from different materials than most humans are, there will always be odd cases in which those AIs behave differently than humans, merely because of those materials. It doesn’t matter whether the AI is housed in a computer or in a very human-looking body made of Frubber; there will be cases where the AI’s behavior changes in a way that a regular human’s would not. This could be in response to things humans can’t see (such as magnetic fields) or to things we can see (such as water or lightning).

So, even with AIs that use perfect human-brain simulations, there will be ways of determining whether or not the AI is human: their physical components will react differently than ours do, and since those physical components control behavior, certain physical scenarios will cause the AIs to behave radically differently than humans with regular flesh bodies.

Pacemakers

Note that we already, on occasion, introduce computer systems into humans to help us live our lives. One of these is the pacemaker, a computer system that helps a human heart keep a regular rhythm. As far as I can tell, pacemakers usually don’t contain systems that we would consider AI, but they may in the future.

Regardless of whether or not pacemakers are AI, they show exactly how people who are built of different materials may behave differently in certain situations. When going through security at particular locations, people with pacemakers often require different screening protocols, because the screening most people go through, such as metal detectors, relies on magnetic fields that could interfere with the device, with potentially fatal results.

That is, in most regular scenarios, people with pacemakers behave exactly like everyone else. However, there are a few cases in which, because of the differences in physical components, people with pacemakers will act differently than people without them.

Don’t Discriminate Against AIs

Please note that I’m explicitly not advocating that we actually use this method to differentiate between humans and AIs. If we have AIs that are simulated human brains, then, as of this writing, I believe those AIs would constitute digital beings and should be treated with the same respect and dignity as humans. I generally think we should focus more on helping human-like entities live good lives than on determining whether human-like entities are actually human or not.

Conclusion

Overall, people seem to be very interested in finding differences between AIs and humans, and I think these three categories cover how those arguments usually play out. People argue that AIs are different because they behave differently, because they “think” differently, or because they aren’t built like us. So, my hope is that if you hear someone start talking about how AIs are different from humans, you’ll have a few categories you can use to understand their argument better.

In my thesis, I use the categories “Results,” “Process,” and “Properties” to describe AIs that don’t behave like us, don’t think like us, and aren’t built like us, but with more specifics that I think ended up making the arguments less coherent rather than more understandable and helpful. I also try to connect my ideas to existing research and to more complex ideas, such as empathy and the Chinese Room thought experiment. So, if you’re interested in these ideas, you might be interested in checking out my thesis. However, I consider this piece a better explanation of what I’m trying to say there; if there’s a conflict between what I say in my thesis and what I say here, this piece takes precedence.

As always, I love talking to people, so if you have thoughts or questions about these ideas, please leave a comment!

