While we’re not quite ready to give the full Systems Approach book-length treatment to AI and machine learning, we like to think we can give our readers a bit of perspective on a field where there is no shortage of hyperbole and strong opinions. We’re not claiming to be AI experts (although Bruce very nearly ended up studying in the legendary AI department at Edinburgh in the 1980s, as discussed in his recent talk), but we’ve read many thousands of words on the topic, and some of the central issues (in our view) are becoming clear.
A couple of questions put to me recently led me to think that perhaps it was time for another post on Large Language Models (LLMs) and the broader topic of AI. (As in an earlier post, I’m going to use “AI” in the way the term is most widely used now: as an umbrella term that includes LLMs and other machine learning systems, in spite of some pushback. Noted roboticist Rodney Brooks has a good piece that touches on the change in meaning of “AI” since the term was coined over 60 years ago.) First, in a recent podcast, I was asked about the biggest challenge at Systems Approach, and I immediately answered “figuring out what’s important”. This goes way beyond AI, but it’s a challenge even to decide what aspects of AI warrant our attention. And when a person in the investing community asked for my opinion on AI, my response (which might have verged on a rant) was this: my biggest concern with AI is that too many people, whether they are journalists, investors, business decision makers or whatever, are focused on the wrong problem. This issue is well captured in a recent Scientific American article by Emily Bender (of Stochastic Parrots fame) and Alex Hanna of DAIR. If you only read one article on AI, I’d suggest that one, even if that means skipping the rest of this post. To summarize, we should focus not on theoretical problems of some future superhuman intelligence, but on the harms that AI is already capable of causing, such as fostering discrimination in areas from housing to health care, or helping the spread of misinformation.
That said, I continue to find the inner workings of AI quite fascinating and it’s worth understanding them well enough to know what AI is and is not capable of. In the last month I’ve read a number of more in-depth articles on LLMs, and these have given me a little more insight into why the question of what is really going on inside these systems remains, for most people, a matter of debate. I mostly agree with the position that LLMs have no idea what they are doing, but as with most topics, there is a bit more to this one than meets the eye.
If you want to go deep into the internals of LLMs such as ChatGPT, Stephen Wolfram has written an excellent (if long) article that is also available in book form. One of the aspects that he drills down on is what it means to have a “model” of something. For example, if we had a set of data showing how long it takes a cannonball to fall to earth from various heights, we could fit a straight line to the data, and extrapolate or interpolate to predict times to fall from other heights not in the data set. But by choosing a straight line, we’ve adopted a model that’s not very accurate, and it will be increasingly inaccurate as we go further outside the range of the original data. Knowing how gravity works (the distance fallen grows with the square of the time, so the data points actually lie on a parabola), we’d be more inclined to fit that curve, but that’s only possible because we already have a model for gravity.
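To make that concrete, here is a minimal sketch (my own, not Wolfram’s, using synthetic data generated from the physics itself) of how the choice of model plays out once you move outside the range of the original observations.

```python
# A sketch of the model-choice point above, using synthetic data generated
# from the physics itself (t = sqrt(2h/g)): fit a straight line and the
# physics-based square-root curve, then compare their predictions outside
# the range of the original data.
import numpy as np

g = 9.8                                              # m/s^2
heights = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # metres ("training" range)
times = np.sqrt(2 * heights / g)                     # synthetic "observed" fall times

# Model 1: a straight line t = a*h + b, fitted by least squares.
a, b = np.polyfit(heights, times, 1)

# Model 2: the form that knowing about gravity suggests, t = c*sqrt(h).
c = np.sum(times * np.sqrt(heights)) / np.sum(heights)   # least-squares fit for c

for h in (30.0, 200.0):   # one height inside the data range, one far outside it
    true_t = np.sqrt(2 * h / g)
    print(f"h={h:6.1f} m  true={true_t:5.2f} s  "
          f"line={a * h + b:5.2f} s  sqrt fit={c * np.sqrt(h):5.2f} s")
```

On these numbers the straight line is within a few percent at a height inside the training range, but overestimates the fall time from 200 metres by around 50 percent, while the square-root fit recovers the true curve exactly (unsurprisingly, since the synthetic data came from it).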
With LLMs, words are modelled in a vector space with hundreds (or, in the largest models, thousands) of dimensions. The impressive feat is that an exceptionally complex model (reportedly over a trillion parameters in GPT-4) can be trained (using vast amounts of input text) to do a pretty good job of mimicking human writing. In effect, GPT builds a model of language that captures a lot of the complexity of how humans string words together. With that model in place, an LLM is then able to generate strings of text that are not in the training data set, which is exactly what we observe when we interact with a system such as ChatGPT. As we know, the generated text often looks pretty authentic. As Timothy Lee and Sean Trott pointed out in another very helpful article at Ars Technica, LLMs deal with issues such as disambiguating the multiple meanings of words depending on context by passing the input text through multiple layers of neural networks. (“Fruit flies like a banana” is an example requiring some serious disambiguation.) Each layer is called a “transformer” (the T in GPT), and you can think of a line of text being passed through successive layers of transformers. Each layer adds metadata to the words: for example, having seen the sentence above, one transformer layer might add metadata to indicate that the word “flies” refers to insects rather than motion through the air.
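To give a feel for what “words as vectors” means, here is a deliberately tiny toy of my own devising, not a real transformer: the three-dimensional vectors below are made up purely for illustration, and the blending step is only a crude stand-in for what attention layers actually do, but it shows how the same ambiguous word can end up with a different, context-dependent representation.

```python
# A toy illustration (not a real transformer) of two ideas above: words as
# vectors, and a layer that refines a word's vector using its context. The
# 3-dimensional vectors are made up for illustration; real models learn
# vectors with hundreds or thousands of dimensions from data.
import numpy as np

# Hand-made vectors along axes we can label: [insect-ness, motion-ness, fruit-ness]
vocab = {
    "fruit":  np.array([0.1, 0.0, 0.9]),
    "time":   np.array([0.0, 0.6, 0.0]),
    "flies":  np.array([0.5, 0.5, 0.0]),   # ambiguous: insects or motion?
    "insect": np.array([0.9, 0.1, 0.1]),
    "soars":  np.array([0.0, 0.9, 0.0]),
}

def cosine(u, v):
    """Similarity of two word vectors (1.0 = pointing the same way)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contextualize(word, neighbor, mix=0.5):
    """Crude stand-in for what a transformer layer does: blend a word's vector
    with a neighbor's, so the same word gets a context-dependent vector."""
    return (1 - mix) * vocab[word] + mix * vocab[neighbor]

for neighbor in ("fruit", "time"):
    v = contextualize("flies", neighbor)
    print(f"'flies' next to '{neighbor}': "
          f"insect-like={cosine(v, vocab['insect']):.2f}, "
          f"motion-like={cosine(v, vocab['soars']):.2f}")
```

Real transformer layers do something far richer, of course: learned attention weights over every word in the input, repeated across dozens of layers. But the basic move of adjusting each word’s representation based on its neighbours is the same.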
There is a lot more in that article and I recommend you read it, but I had a disconcerting feeling as I was reading it that my confident assertion that LLMs have no understanding of the words they are producing was a bit overstated. At this stage, we all know of examples where LLMs have produced laughable results indicating a lack of understanding of the world, but the details of how they work show that they are very good at understanding language. I think the issue is the difference between understanding language (a set of symbols) and understanding the world. If a human understands language, we generally assume that they also understand the world, but making this extrapolation in the case of LLMs is a bridge too far. Here is a quote from the Ars Technica article that gave me pause:
For example, as an LLM “reads through” a short story, it appears to keep track of a variety of information about the story’s characters: sex and age, relationships with other characters, past and current location, personalities and goals, and so forth.
The description here comes awfully close to suggesting that the LLM “understands” what it is reading. Brooks calls out the underlying issue: we mistake performance (producing realistic text) for competence (understanding the world). Since he’s a roboticist (he founded iRobot), I found his prediction that GPT won’t be used for robots, because robots have to understand the real world, very compelling. (Brooks is good at making predictions, and he boosts his credibility by keeping his predictions online for the long haul and reporting back on them.) To quote:
… it will be bad if you try to connect a robot to GPT. GPTs have no understanding of the words they use, no way to connect those words, those symbols, to the real world. A robot needs to be connected to the real world and its commands need to be coherent with the real world. Classically it is known as the “symbol grounding problem”. GPT+robot is only ungrounded symbols.
This is the key takeaway for me: having a model for language is different from having a model of the world. For example, we know that LLMs have a tendency to make up citations. These citations look “correct” because they conform to the model of language (they have authors, realistic titles, journal names, etc.). But they fail a basic test: they are not drawn from real, legitimate publications. So the language model doesn’t understand what makes a citation legitimate, a fact about the world that is pretty basic for a human to grasp.
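To make the citation example concrete, here is a small sketch (the citation and DOI below are fabricated for illustration): a purely textual check is happy with the surface form, while deciding whether the citation refers to anything real requires going outside the text, in this case via a rough, network-dependent heuristic that tries to resolve the DOI at doi.org.

```python
# A sketch of the "plausible vs. real" distinction, using a citation that is
# entirely made up for illustration. The existence check needs network access
# and is only a rough heuristic, not a real verifier.
import re
import urllib.error
import urllib.request

# Looks like a citation: authors, title, journal, year, DOI. All fabricated.
citation = ("Smith, J. and Jones, A. (2021). Emergent Reasoning in Large "
            "Language Models. Journal of Computational Linguistics, 47(3). "
            "doi:10.9999/fake.2021.12345")

# "Understanding language": the surface form checks out.
doi_match = re.search(r"doi:(10\.\d{4,9}/\S+)", citation)
print("well-formed DOI string:", bool(doi_match))            # True

# "Understanding the world": only a lookup outside the text can tell whether
# the DOI points at anything. Real DOIs redirect to a publisher that may still
# refuse automated requests, so treat this as a heuristic, not a verifier.
def doi_resolves(doi: str) -> bool:
    try:
        urllib.request.urlopen(f"https://doi.org/{doi}", timeout=10)
        return True
    except urllib.error.URLError:
        return False                  # e.g. 404 (or no network): nothing found

if doi_match:
    print("resolves to a real publication:", doi_resolves(doi_match.group(1)))  # False
```

The point is not that checking citations is hard; it is that the check requires information about the world that the language model simply does not contain.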
So I remain convinced that we need to be cautious about how LLMs and other AI techniques are put to work. Not because they are going to achieve superhuman intelligence, but because they have serious limitations, and because humans are already using them in ways that cause harm. This is certainly not limited to AI, but the difficulty of understanding what AI systems are actually doing, combined with the human tendency to assume greater competence than they really have, presents some unique challenges.
Another interesting example of the failures of today’s LLMs has been reported in The Register, with ChatGPT proving itself to be quite poor at answering Stack Overflow-style questions. In more positive news, AI does seem to be helping keep track of the flora and fauna recorded around British train tracks. You can go deeper on the risks of AI in a podcast with Timnit Gebru, founder of the Distributed AI Research Institute and another of the co-authors of the Stochastic Parrots paper. Cory Doctorow has another great piece on “openwashing” in the AI field. And don’t forget that you can do your part to decentralize the Internet again by following us on Mastodon.