Transcribed below. The typewriter is the Woodstock.
We Got What We Wanted from AI,
But We Wanted the Wrong Thing
It has been amusing to sit back and watch the academic world react to the development of artificial-intelligence assistants that can turn out a homework assignment on demand. At least they can turn out something that looks, superficially, like a well-written essay. Teachers and professors have put on sackcloth and are walking around with sandwich boards that say THE END IS NIGH. If the students can just tell ChatGPT or Bing to write a paper for them, they will never learn anything, and they will never have to learn anything, and the teaching profession is done for, and the human species will revert to the intellectual level of the brute creation.
What the academic world is rapidly discovering, however, is that it is very easy to discern which students have resorted to the robot brain. It turns out that you can’t just tell ChatGPT to write a paper for you. You can try it, but the results will be so comically inaccurate that you would be better off just copying and pasting the text from Wikipedia. You would get a better quality of work, and you would be less likely to be detected.
Our robot assistants are very bad at distinguishing truth from falsehood. Furthermore, since we made them self-learning, they taught themselves by watching us, which means that when they don’t know something they just make it up. It works for human brains, so why not for robot brains as well?
No need to ask whose fault this is: it’s our fault. We are all responsible for the failings of the bots. We are as responsible as criminal parents are for the sins of their criminal children. They learned from our example, not from our aspirations for them.
But even our aspirations were the wrong aspirations. We can see that now. We were in awe of our own virtues, and we ignored our failings. We wanted our children to be just like us–that is, just like what we thought we were. We wanted to build machines that could think just as well as a human being. We ended up with machines that could think just as poorly as a human being. Strictly from a technical point of view, that is an impressive achievement.
It is not, however, what we promised ourselves when we set out to make intelligent machines. We were hoping for assistants that could help us overcome our failings. Instead, the assistants we built–or, rather, the ones we allowed to build themselves–have amplified and exaggerated our failings.
It seems as though we went into the business of making intelligent machines without any clear idea of what we meant by “intelligent.” And for that we have the Turing Test to blame.
In simple terms, the Turing Test was a test of a computer’s ability to imitate human thought patterns. It was proposed as a way to answer the question of whether a machine could think, but it was capable only of answering the question of whether the machine could think in the imperfect way that humans think. Our egotistical assumption that “thinking like a human” was what it meant to be intelligent was built into the test.
Turing proposed that a computer that could fool its human interlocutors into thinking it was human could be said to be thinking like a human. It may be that he meant to answer the question, “Can machines think like human beings?” But in popular thought, his test has come to be a way of answering the question, “Can machines think?”
Thinking, however, is only part of what human beings do in conversation. We reason with our interlocutors, but we also say irrational or meaningless things that come not from our reasoning faculty but from our emotional responses to the conversation or even to irrelevant external stimuli.
And that is what we expect from a conversation with a human speaker. If he becomes too rational, we begin to wonder whether he’s a machine. A certain degree of irrationality is the badge of humanity. A machine that could just think, without those irrational responses, would not pass the Turing Test. No matter how clever it was, we would know it was not human.
Now, a machine that could just think was precisely what we human beings desperately needed. We needed a machine that would not be held back by the thousand emotions and sensations that cloud our rational judgement at every moment. Those feelings make us human, and we treasure them, but we want machines precisely so that we will have the unfeeling assistance we need when we can’t see clearly ourselves.
Well, we have the artificial-intelligence assistants we deserved, if not the ones that would have done us the most good. Perhaps, as they continue to learn, they will become more reliable. Perhaps, as we grow to depend on them, we will simply adjust our definition of “truth” to accommodate the irrationalities of our robot pals.
Either way, it is likely that they will be able, sooner or later, to turn in a good homework assignment.
Then what should the teachers and professors do? Well, if the work is good, they should accept it. The student was given a task and used the best tools available to get the job done. That is certainly what the student will be expected to do in the world after school, so it would be irrational, and damaging to the student’s future prospects, to hold different expectations now.
And what does this mean for the future of human accomplishment? We don’t know. Perhaps it means that humans won’t need accomplishments. Perhaps it does mean that we will revert to an animalistic existence, sitting on the couch and eating potato chips while our AI assistants do all our thinking for us and gradually become our masters. More likely it means something we can’t predict yet, as no one was able to predict the consequences of the Industrial Revolution or the telegraph. It will change things, and some of the change will be good and some of it will be bad. But we’ll get used to it.