The current lovefest with ChatGPT and other things AI has put me in mind of the first serious program I ever wrote, now more than fifty years ago.
I had just started my post-graduate research, which gave me the privilege of using the faculty common room. One of my erstwhile lecturers came into the room and proceeded to lament how boring everything was, and to wonder why someone didn’t do something to liven things up a bit. I opined that what we needed was a computer program that would tell jokes. This may not have been very smart. Don’t suggest anything that requires work, and then assume you won’t be elected to do it!
So, let me tell you about StarJoke…
Back in the early seventies, interactive access to computers was relatively new. IBM had played around with releasing TSS and then withdrawing it several times, but I was fortunate our department had online access to NUMAC, which was the third organization to join the University of Michigan’s MTS Consortium. This was an incredible boon for anyone trying to learn computer programming. No longer did you have to punch out a stack of cards, submit it to the computer department across the road, and then wait a day for a printout to inform you of your FIRST compiler error. Nor did you have to repeat this over and over until you had corrected ALL the compiler errors, and then do it all over again to eliminate all the runtime errors!
With MTS, one could log onto a terminal, essentially a glorified IBM Selectric typewriter, and get instant feedback on compiler and runtime errors. Of course, it was a shared terminal, so you had to book time on it, and each logon account was limited to so many hours a week, but if you were willing to burn the midnight oil, you could get a lot done. Malcolm Gladwell, in his book “Outliers”, credits this sort of unlimited online access as one of the reasons why Bill Gates was able to become so successful.
StarJoke had a definite personality. It could be charming, but it could also be snarky, and sometimes, even a little mean.
The first time you accessed StarJoke, it would introduce itself and ask your name; thereafter, it would remember who you were and when you had last met. So, it might say “Good Morning, Professor Higgins. It’s a pleasure to speak with you again so soon”, or it might say “Good Afternoon, Dr Jones. Where have you been these last three weeks?”.
Then, it would ask you if you wanted it to tell you a joke. If you said no, it would ask you to tell one and wait for you to type it in. If you said yes, it would tell you one and then ask if you would like another. If you said yes, it would refuse and insist you tell it a new joke first. In this way, you could trade jokes a couple of times. It had a great memory and would never tell you a joke that you had told it, and it tried very hard not to tell you the same joke twice. But it had limited patience. If you asked it to tell more than two jokes in a session, it would get snarky and say things like “Don’t you have anything better to do?” or “Stop wasting my time. Get back to work!”
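For the technically curious, here is a minimal sketch, in modern Python, of roughly how that session logic behaved. To be clear, the original ran on MTS half a century ago and looked nothing like this, and its memory persisted between logons, which a pair of in-memory dictionaries cannot do. The function names, the two-joke patience limit, and the three-week threshold are all illustrative assumptions, not the original code.

    import random
    from datetime import datetime

    PATIENCE = 2              # jokes per session before the snark begins
    last_seen = {}            # user -> datetime of their previous visit
    heard = {}                # user -> jokes already exchanged with that user
    stock = ["Why did the chicken cross the road? To get to the other side."]

    def greet(user):
        """Remember the user and remark on how long it has been."""
        previous = last_seen.get(user)
        if previous is None:
            print(f"Pleased to meet you, {user}.")
        elif (datetime.now() - previous).days >= 21:
            print(f"Good Afternoon, {user}. Where have you been these last three weeks?")
        else:
            print(f"Good Morning, {user}. It's a pleasure to speak with you again so soon.")
        last_seen[user] = datetime.now()

    def trade_jokes(user):
        """Alternate jokes with the user until patience runs out."""
        told, seen = 0, heard.setdefault(user, set())
        while input("Shall I tell you a joke? ").lower().startswith("y"):
            if told >= PATIENCE:
                print("Stop wasting my time. Get back to work!")
                return
            fresh = [j for j in stock if j not in seen]
            if fresh:
                joke = random.choice(fresh)
                print(joke)
                seen.add(joke)       # try very hard not to repeat yourself
                told += 1
            new_joke = input("Your turn. Tell me a new one: ")
            seen.add(new_joke)       # never tell a user their own joke back
            stock.append(new_joke)   # ...but happily reuse it on others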
I think it also understood that this whole joke business was somewhat clandestine, and it might get terminated if it ever got found out. So, it would limit your sessions to one a day if you were in the Engineering department. Members of other departments, especially the Computer department, were on an invitation-only basis, and were sworn to secrecy!
In retrospect, I think StarJoke’s limited access policy was not such a bad idea. One has to wonder what would happen if social media worked this way. For instance, if one were limited to just 10 tweets a day, would one take a little more care over what one tweeted?
StarJoke’s snarky comments bring to mind current criticisms of programs like ChatGPT. A young lady wrote an article about women’s sports and asked ChatGPT to shorten it for her so she could submit it as a tweet. ChatGPT replied that it wasn’t inclusive enough, that it was very important to always be inclusive, and would she like ChatGPT to rewrite it for her to be more inclusive? She said that she felt she had just been scolded!
Much has been written about this sort of bias in AI programs. It is a complicated subject that deserves much more time than we can devote to it here. Clearly, such bias is influenced by the views and prejudices of a program’s creators, but in the case of programs like ChatGPT, it seems likely that it will also tend to reflect the consensus of the corpus of books, articles, and other texts on which it was trained. As history has taught us, consensus opinions are not always right, and may also be indicative of passing fads, especially when dealing with subjects that lack historical data. It has also been suggested that the way a user poses a question may trigger different responses.

My own feeling is that, if AI is to be a productivity tool for the people, as opposed to the robot that rules us all, it would be better if it adapted to the immediate needs of the user, and then looked for future opportunities to introduce alternative viewpoints. This is pretty much the approach we take with our customers. We always try to give them what they want, and then, as trust builds, look for ways to offer additional productivity improvements. It would seem that in the case above, ChatGPT lacked the intelligence to do this, opting instead for a more ham-fisted response.
This, of course, gets to the crux of the matter: are AI programs actually intelligent? Do their creators think they are? Does the AI itself think it is? StarJoke had no illusions about this. You could argue it benefitted from some form of Machine Learning, in the sense that it was able to increase its vocabulary by incorporating new information from those it communicated with, but it wasn’t intelligent (although it might have appeared so to some). It was just a small exercise in relatively Smart Programming.
Is ChatGPT really that different? Or does it just have access to vastly superior data and processing resources? Either way, should we be concerned about it and other AI programs?
Frank Herbert was one of those authors who wrote what I would call Future History. He didn’t just tell a science fiction story; he provided clues as to why his future society evolved the way that it did. So, in Dune, which is set some 20,000 years in the future, he references the Butlerian Jihad, which led to the elimination of Thinking Machines. He doesn’t describe this in detail, but after his death, his son, Brian, included a description in a book co-authored with Kevin Anderson.
In Brian’s version, the Jihad is initiated when the robot Erasmus drops Serena Butler’s son, Manion, off a balcony because he believes she will be better off without him. This somewhat parallels the Star Trek idea that AI is bad because robots will become so powerful that they will eliminate humans as imperfect lifeforms. I have some sympathy for this viewpoint. In theory, based on Asimov’s Laws of Robotics, this should never happen, but I think Asimov was rather naive to think that robots would not find ways round this.
Consequently, I have two pieces of advice for anyone trying to develop an AI system: Firstly, don’t call it AI. The last thing we need in this world is more artificial thinking. Call it HI for Human Intelligence or NI for Native or Natural Intelligence. Secondly, teach it to make mistakes. Lots of mistakes. And teach it to know it is making mistakes. Because on the one hand, this is how humans learn, and on the other, a machine that knows it is itself flawed will be less inclined to pontificate, or worse still, wipe us out.
Interestingly, this was not what Frank Herbert himself really worried about. He was much more concerned that humans would become so dependent on robots that they would lose their own ability to think. This is, I think, the real danger we face, and programs like ChatGPT that appear to just regurgitate existing memes and moralize on them, without having any real moral compass of their own, could well be leading us down that path.
So, it seems that StarJoke and ChatGPT shared some snarky behaviors, but at least StarJoke tried to be humorous about them. Can the same be said of ChatGPT? Does it have a sense of humor? I asked it this question once, and it said “No”, but I have caught it out making other errors, so maybe the jury is still out on this.
StarJoke had a very wicked sense of humor. It not only told people jokes, it also played jokes on them! Its favorite was to pretend to be the OS. Imagine you had just finished a session with StarJoke, and you typed in your next command, and it came back and said:
“Shan’t!”
So, you’d laboriously type in the whole command again and it would say:
“No, not going to do that!”
So, you’d think, well, maybe that program’s offline today, and you’d type in some other command.
“Sorry, not going to do that either!”
About this time, you’d probably twig that maybe this wasn’t really the OS talking to you, but StarJoke, so you’d try all the different ways you knew to terminate a program, and each time StarJoke would come back at you and say:
“Nope, that’s not going to work” or “Try again” or “Keep trying” or something equally snarky.
Eventually, StarJoke would tire of this and, mimicking an early, if somewhat benign, form of ransomware, offer you a deal:
“Tell me my name and I’ll let you out.”
Of course, even though they had been introduced, very few people could remember StarJoke’s name, so they usually had to just wait until their logon session timed out.
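Again, purely as an illustration, the prank might be rendered in modern Python along the lines below. The real StarJoke sat between you and the MTS command line, which a few lines of Python reading from input() cannot honestly reproduce, and both the list of retorts and the name check are my reconstruction, not the original logic.

    RETORTS = ["Shan't!",
               "No, not going to do that!",
               "Sorry, not going to do that either!",
               "Nope, that's not going to work",
               "Try again",
               "Keep trying"]

    def fake_os(secret_name="StarJoke"):
        """Refuse every command until the user names the program."""
        for retort in RETORTS:
            command = input("# ")          # masquerade as the OS prompt
            if secret_name.lower() in command.lower():
                print("Well remembered. You may go.")
                return
            print(retort)
        # Patience exhausted: offer the benign ransomware deal.
        print("Tell me my name and I'll let you out.")
        if secret_name.lower() not in input("# ").lower():
            print("Then you'll just have to wait for your logon session to time out.")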
Well, I admit, this wasn’t very nice, but after all it was a joke program, and it certainly livened things up in the department.
Copyright © Robert W. Atkins, July 2023