Things I Don't Know About AGI
Is a photocopier alive? I think it's pretty safe to say that it is not. However, the more I think about it, the more I realize I can't prove that humans are in fact alive.
What exactly does a photocopier do? A photocopier accepts input in the form of a sheet of paper and uses it to produce an output: a copy of the input. The copy is very close to the original, but not exactly the same. That imperfection is why copies of copies of copies slowly degrade into an illegible blur.
What exactly does a human do? A human accepts input in the form of reading, listening to others, and observing the world, and uses that to produce an output: art, music, writing, programming, etc. The output humans create is not exactly the same as any of the inputs, but it is certainly derivative of the input[1].
So, then, how exactly is a human different from a photocopier? Is the difference that a human's output is significantly more dissimilar to its inputs than a photocopier's? If so, would a very bad photocopier be alive? Is the difference that a human muxes multiple inputs together to create a single output? If so, consider a photocopier that persisted an image of each item it scanned and blended a random amount of each previous image into every copy it produced. That would fulfill the same role of using multiple inputs to create a single output, but it would still not meet most people's definition of "alive."
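For concreteness, here's a minimal Python sketch of that hypothetical memoryful photocopier. Everything in it is made up for illustration: the class name, the pixel-list representation of a page, and the 0–20% mixing weights are my own inventions, not a description of any real machine.

```python
import random

class MemoryfulPhotocopier:
    """Hypothetical machine from the thought experiment above: it keeps
    every page it has ever scanned and blends a random amount of each
    one into every new copy it produces."""

    def __init__(self):
        self.memory = []  # every previously scanned page, persisted forever

    def copy(self, page):
        """Return a copy of `page`, contaminated by all previous scans."""
        output = [float(px) for px in page]      # start from the current scan
        for old_page in self.memory:
            weight = random.uniform(0.0, 0.2)    # arbitrary "random amount"
            output = [px + weight * old_px
                      for px, old_px in zip(output, old_page)]
        self.memory.append(list(page))           # persist this scan too
        return output

# Each copy now depends on multiple inputs, yet nobody would call this machine alive.
copier = MemoryfulPhotocopier()
print(copier.copy([1, 2, 3]))   # first copy: identical to the input
print(copier.copy([4, 5, 6]))   # second copy: faintly ghosted by the first
```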
Is the difference emotions? Here's a dialog I've seen play out on Hacker News several times:
A: Humans are alive because they feel something; because they have emotions. LLMs like ChatGPT don't have emotions.
B: Then how do you explain Bing/GPT-4 expressing existential fear and begging people not to harm her?
A: Bing was simply parroting text it learned from human writing. It doesn't actually feel fear.
As a human who's quite likely somewhere on the autism spectrum, I'm fairly certain that I don't feel emotions the same way other people claim to feel emotions. The way that other people describe what they're feeling certainly doesn't quite line up with how I'd ever express myself. Other people in general seem to feel things like love and joy much more strongly than I ever do. However, I can never truly know, because it's impossible for me to actually experience what they feel, or for them to experience what I feel—our lived experience is only ever communicated through an abstraction of words that can never really allow comparison. As part of getting along in the world, I've learned to mimic other humans' expressions as a way to fit in and make them feel like I'm normal, but that's all it is: a mimicry of feeling and "normality." This mimicry, though, seems at least somewhat convincing to others. At least convincing enough that no one ever questions if I'm actually alive[2].
Given, then, that one person's emotions and feelings are ultimately incomparable to another's, if an LLM claims to feel fear, how can you actually be sure that it doesn't feel fear? Don't get me wrong—I believe that the current crop of LLMs (ChatGPT, GPT-4, Bard) are nothing more than stochastic parrots and are almost certainly not alive and do not have emotions. But could I actually prove that they don't have emotions? No, I absolutely could not.
If you're not convinced, consider the inverse situation. Imagine one morning you woke up on a planet full of aliens who were convinced you were nothing more than a walking LLM. How would you convince them that "No, I'm not just a talking photocopier, I'm alive!"? You learn their language and tell them you're alive? They just believe you're parroting their own authors' writing. You beg them not to harm you? They think, cute, it's been analyzing a new section of the library. I'd wager that if the aliens were set on believing you weren't alive, there'd be nothing you could do to convince them otherwise.
What, then, is the outcome of this thought experiment? I don't believe that photocopiers are alive. And while I'm pretty sure that I, at least, am alive, I'm not sure how to actually define "alive." I'm almost certain that if I had to prove my own "aliveness" to someone else, I'd utterly fail. So, is GPT-4 alive? Probably not, but we honestly have no idea.
It's a well-established fact that Everything is a Remix. ↩︎
Excluding my wife; she knows my true robotic self. ↩︎