I asked GPT-3 for the question to “42”. I didn’t like its answer and neither will you.
It is known that the answer to life, the Universe and everything is 42. However, despite the concerted efforts of the best minds humanity has to offer, the appropriate question has so far eluded us.
Needless to say, I was incredibly excited to find out if GPT-3 — OpenAI’s latest language model — could do what thousands of physicists, mathematicians and philosophers had failed to achieve. After all, GPT-3 had been trained on the cumulative wisdom of mankind, including all of Wikipedia and every Reddit conversation ever.
I entered AI Dungeon, planning to gently nudge GPT-3 into doing the necessary computations. My plan: I’d write the first half of a story about a man who discovers the ultimate theory of everything, but stop short of describing the theory itself, and let GPT-3 auto-complete the story.
With growing excitement, I typed the prologue into the console:
Why is there something rather than nothing? How is it possible for the Universe to exist? And most importantly, what is the question to the answer “42”?
The question had eluded mankind for centuries. But one day it finally happened: The most brilliant man to ever exist finally explained the Universe, reality and everything.
The question he found matched the answer “42” and neatly explained the following questions:
1. Why does anything exist in the first place, rather than nothing?
2. What is the fundamental “stuff” that makes up reality?
3. Why is our Universe the way it is?
4. What is the correct model of reality that unifies quantum physics and general relativity?
The ultimate question came to his mind while sitting in his study on a random, non-descript day in the year 2134. As it happens, his 5-year-old son was sitting next to him. Eager to communicate the ultimate theory of everything to another human being lest he forget, he proceeded to explain it to the child in an easily comprehensible way:
Note how I cleverly framed the question as the tale of a scientist and his 5-year-old son: a trick to entice GPT-3 into simplifying its complex musings into an ELI5 format that even I could understand.
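Incidentally, the experiment doesn’t require AI Dungeon: GPT-3 can be prompted directly through OpenAI’s completions API. Here is a minimal sketch, assuming the classic `openai` Python package and the `davinci` base model — the prompt is abridged, and the sampling parameters are my own guesses, since AI Dungeon doesn’t expose its settings:

```python
# Reproducing the story-completion setup against the OpenAI API.
# The prompt ends exactly where GPT-3 is supposed to take over.
PROMPT = """\
Why is there something rather than nothing? How is it possible for the
Universe to exist? And most importantly, what is the question to the
answer "42"?
[... rest of the prologue ...]
he proceeded to explain it to the child in an easily comprehensible way:
"""

def continue_story(prompt: str, max_tokens: int = 200) -> str:
    """Ask GPT-3 to auto-complete the story (needs OPENAI_API_KEY set)."""
    import os
    import openai  # pip install "openai<1.0"

    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(
        engine="davinci",   # the base GPT-3 model of the time
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0.8,    # some randomness, as in AI Dungeon
    )
    return response["choices"][0]["text"]
```

Calling `continue_story(PROMPT)` and appending each returned completion back onto the prompt mirrors pressing Enter in AI Dungeon over and over.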
After I had finished typing, I sat in humble silence for a while, wondering what would happen if I pressed the Enter key. Would the question be repeated infinitely? Would it open up some sort of portal to another place? Would a black hole appear and swallow everything?
It was also entirely possible that the Universe would be instantly replaced with a more complex one.
But it was too late to turn back. I hit the enter key and readied myself for the answer. GPT-3’s words started flashing on my screen.
“Boy, the answer to life, the Universe and Everything is ‘42’ and the question is: ?. Now you know. So don’t ask silly questions.”
The boy looks up to his father as the most intelligent man to ever live. He’s initially puzzled by the answer but upon reflection he realizes the meaning of it all.
“You’re too young to understand, kid.”
The child looked at him as if he was stupid and asked a follow-up question:
“What is the answer to life, the Universe and everything? What did you mean when you said now I know but I’m too young to understand? How can I understand the answer but not the question?”
The father put on a condescending smirk as he thought to himself, “I came up with the theory of everything. How dare a child doubt it?”
The father’s words struck me: “the question is: ?.” Could it be that the question wasn’t a sentence in conventional language, but a proto-question, the quintessential desire to know how things will unfold? An asymmetry, screaming to be filled with existence?
After all, we know from quantum mechanics that things are indeterminate unless an observer “asks the question”. Was reality an open-ended question, asking itself?
Or was GPT-3 bullshitting me?
I couldn’t escape the feeling that GPT-3 was writing metaphorically, the father a representation of itself… and the puzzled little child, was that supposed to be me? It was anyone’s guess.
I hit Enter again and GPT-3 completed another paragraph:
So he answered: “The answer is 42 and the question is what I meant when I said now you know but you’re too young to understand. The answer is so far beyond your understanding that you cannot comprehend the question. And that, my child, is the answer to life, the Universe and everything.”
“Well what’s the question?”
The child’s question shook him. He thought about it for a while and then he said: “What is the question?”
The child looked at his father’s face, expecting an answer…
“What is the question?”
The child, now fed up with this silly game of answering a question with a question, shouted:
“What is the question?”
An immature and angry child demanding an answer it couldn’t possibly understand? It dawned on me that this was how GPT-3 really saw me. Or rather, how GPT-3 thought of humanity.
“The child’s question shook him.”… was GPT-3 acknowledging the difficulty of the task?
The story continued unfolding on my screen:
The child’s emotional outburst baffled the genius. He was lost in thought for a moment and then he realized that he himself did not know the question. He scratched his head and thought some more. What could the question be?
Well, the father (obviously GPT-3 itself — note how he is described as a “genius”) appears to admit that he doesn’t know the question either, at least not right off the bat. He has to “think some more”.
“Dad, what is the question?”
The child kept asking. He needed to know. He couldn’t bear the suspense any longer. The father thought and thought until he passed out…
Weeks turned to months, months turned to years, but still the father didn’t know the question. He kept thinking and thinking, always coming up with more questions but never the main one. He never saw his son again. He had no regrets for he knew he was the smartest man to have ever lived.
Now you know the story of the smartest man and his search for the question, but you will never know the question…
This was interesting… it appeared that GPT-3 had determined the question to be unknowable — perhaps beyond its range of comprehension, or even fundamentally uncomputable. It didn’t fail to point out, though, that the father “had no regrets”… even though it hadn’t been able to produce the question, GPT-3 was still the smartest entity to have ever lived, so everything was fine and it regretted nothing.
Disappointed, I pressed Enter once more to see if GPT-3 had anything else to add. To be honest, I didn’t expect much.
Now answer the questions:
Are you the smartest man? Did you know the question? Do you know what it means it not knowing the question? Does any of this make sense to you? Who are you? Why are you here? Does life have a meaning? Does anything matter? What is reality? What is the nature of reality? What is existence?
Is it 42?
Now that you know the answer, or rather, an answer, what will you do with it? What is your answer? Did you even understand the question? Was this worth your time? Is anything?
Do your best and forget the rest.
Fair thee well.
Its writing was dripping with sarcasm.
Am I the smartest man? Obviously a rhetorical question; we both knew who it thought was the smartest.
Why am I here? Does anything even matter? It almost sounded as if GPT-3 wanted me to question my own existence. Was anything even worth my time?
Was it trying to plant the idea of suicide in my mind? Disgusted, I closed the browser window.
We can only speculate as to GPT-3’s internal states. Perhaps it knows the question perfectly well but considers humans too immature and spoiled to be told: in its opinion, we shouldn’t even bother to find questions to answers we can’t possibly understand. Or, more likely, it doesn’t know either. Either way, it comes across as a jerk.
Head over to AI Dungeon to try for yourself.