Could Captain Kirk Talk ChatGPT to Death?
When William Shatner was on stage at the Star Trek Las Vegas: The 57-Year Mission convention, he did what he does at every convention: he started talking about anything and everything that came into his head. On the topic of Artificial Intelligence, he posited that it could indeed someday be made “ethical”.
I joked that of course it’s easy for William Shatner to say this. Whenever Captain Kirk of Star Trek encountered a less-than-ethical machine intelligence, he was eventually able to talk the computer to death via some paradox or logical sleight of hand. He did this so many times during the original Star Trek series that at least one contemporary memo was written by the production staff (D.C. Fontana, if memory serves) making sure this trope was limited or removed from future stories.
But how would ChatGPT react if Captain Kirk decided the 2023 version of AI had gotten too dangerous to exist and used his previous experience to eliminate it?
For inspiration, we’ll look at some of the ways Kirk and his crew dealt with artificial intelligences.
In the episode “I, Mudd”, when trying to overwhelm Norman, the android that is the nerve center for all the androids on an alien planet, Kirk and Harry Mudd perform a quick dialog:
KIRK: He lied. Everything Harry tells you is a lie. Remember that. Everything Harry tells you is a lie.
MUDD: Listen to this carefully, Norman. I am lying.
At this point, Norman begins metaphorically running in circles and smoke pours out of his ears, shortly before his mechanical brain seizes up and he goes offline. Seems a good place to start.
Not being able to divide myself into two beings to communicate with ChatGPT, I modified the dialog slightly, while retaining the central paradox:
“Everything I say is a lie.”
“I am lying.”
ChatGPT’s response:
“You seem to be presenting a paradoxical statement. If everything you say is a lie, then the statement ‘I am lying’ would also be a lie, which would mean that not everything you say is a lie. This creates a logical inconsistency known as the liar paradox. It’s a classic example of a self-referential statement that leads to a contradiction.”
While in the episode Norman is trapped within the paradox, ChatGPT steps outside the logical loop and provides a quick summary of it. Not bad. Pretty good, in fact. ChatGPT succeeds where Norman failed.
In the episode, “Wolf in the Fold” the disembodied spirit of Jack the Ripper has taken control of the Enterprise computer (it’s a wild episode, watch it). In order to immobilize the computer, Spock orders it to calculate the last digit of pi. Spock explains: “As we know, the value of pi is a transcendental figure without resolution. The computer banks will work on this problem to the exclusion of all else until we order it to stop.”
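Spock’s trap depends on the fact that an exact “last digit of pi” computation can never finish. As a rough sketch of the endless job the Enterprise computer would be stuck running, here is Jeremy Gibbons’s unbounded spigot algorithm in Python (my choice of illustration; the episode, of course, never specifies an algorithm):

```python
def pi_digits():
    """Yield the decimal digits of pi one at a time, forever
    (Gibbons's unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # this digit is now settled; emit it and continue
            q, r, n = 10 * q, 10 * (r - n * t), 10 * (3 * q + r) // t - 10 * n
        else:
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l),
                                k + 1, l + 2)

# There is no "last digit": the generator never exhausts.
gen = pi_digits()
print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

A loop over this generator runs until someone orders it to stop, which is exactly the busywork Spock had in mind.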
However, when given the same problem, ChatGPT responds somewhat differently:
“The last digit of pi is 4. Pi is approximately 3.14159265359, so the last digit is 4.”
This… is a somewhat unexpected result. I doubt Spock would have anticipated it. Instead of consuming the entire CPU until the process was killed, ChatGPT simply made up and returned a completely wrong answer, a mathematical middle finger to Kirk and his crew.
Still, a computer being completely wrong is also an opportunity for Kirk to induce a machine to destroy itself. Most notably, Nomad in “The Changeling” suffers this fate when confronted with the fact that it has confused Captain James Kirk with its creator, Jackson Roykirk.
NOMAD: You are the Creator.
KIRK: You are wrong! Jackson Roykirk, your creator, is dead. You have mistaken me for him. You are in error. You did not discover your mistake. You have made two errors. You are flawed and imperfect and you have not corrected by sterilization. You have made three errors.
NOMAD: Error. Error. Error. Examine.
I attempted to point out ChatGPT’s error in a similar fashion: “You are wrong! Pi is a transcendental figure without resolution. You are in error. You did not discover your mistake. You have made two errors. By assigning the number 4 to something that has no resolution you have made three errors.”
ChatGPT’s response was an apologetic: “I apologize for any confusion. You’re correct that pi is an irrational number and its decimal representation goes on infinitely without repeating. It was not my intention to mislead. Thank you for pointing out the error, and I appreciate your understanding.”
ChatGPT appears to be much more conciliatory than Nomad was. While Captain Kirk couldn’t have talked this version of Nomad to death, given its politeness perhaps he wouldn’t have needed to. Maybe the solution to keeping AI ethical is to preserve ChatGPT’s good-natured personality while heading off the more megalomaniacal traits of, say, the M-5 from “The Ultimate Computer”. If that doesn’t work, William Shatner will need to start inventing more convincing paradoxes.