Helen Toner’s latest post is the latest in a long line of posts to recognize that the terminology we’re using to describe how LLM-based technologies work is profoundly broken:
> If you ask me, “AGI” stopped being a useful term as early as 2022. The original ChatGPT, released that November, was not human level in most ways… but it was clearly “general” in some meaningful way that had not been true of AI systems before the advent of large language models. Rather than ditch “AGI” in 2022, though, instead we just coined a new term, “general-purpose artificial intelligence,” which is obviously a totally different thing from artificial general intelligence.
>
> In 2026, it’s time to move on.
Well, it’s been time to move on for a long time now. My first course on artificial intelligence at NC State, around 2007, started by detailing the A* algorithm, which simply wouldn’t qualify as AI these days. I remember thinking at the time, “Is this really what everyone was talking about as AI?” Reader, it was. And 20 years from now, we’re going to look at LLMs and wonder how people ever thought these were supposed to be AI.
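For a sense of what that era’s coursework called AI, here’s a minimal A* sketch in Python: ordinary graph search plus a heuristic. The grid, function names, and example are my own illustration, not anything from that course.

```python
# Minimal A* search on a 4-connected grid of 0 (open) / 1 (wall) cells.
# "AI," in a 2007 course, often meant exactly this: search guided by a heuristic.
import heapq

def astar(grid, start, goal):
    """Return a shortest path from start to goal as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: an admissible heuristic on a unit-cost grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path so far)
    best_g = {start: 0}

    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```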
The problem is that explaining this to people, whether they have a technical background or not, is very hard. In 2007, I was already pretty technically savvy. I had a BS in Computer Engineering and I had spent years programming device drivers for the Linux kernel. It’s hard to be more technical than that. But I still thought there was something about AI that I just didn’t get. It turns out that there was: I didn’t get that the term was just a bunch of artifice. Since then, I’ve been talking with people about how the term does more to muddy understanding than to clarify things.
I haven’t been all that successful, but I have found a piece that I think explains it better than I have: Emily Tucker’s *Artifice and Intelligence*. It’s worth reading in full because it’s great. Her four key takeaways are worth summarizing here:
- Be as specific as possible about what the technology in question is and how it works. Words like “intelligence” or “learning” aren’t technically specific, and they mask what’s really going on under the hood.
- Identify any obstacles to our own understanding of a technology that result from failures of corporate or government transparency. Accepting the premise that a technology works, without auditing or real-world accountability, is a mistake.
- Name the corporations responsible for creating and spreading the technological product. Letting an organization take either the blame or the credit for something that’s really done by a third-party tech company is always a mistake.
- Attribute agency to the human actors building and using the technology, never to the technology itself. Technology is not ethically neutral; it’s designed by people who make ethical choices.
Emily’s piece was posted in March of 2022, and in the years since I’ve seen other writers come to the same basic set of conclusions at a pretty steady pace. I don’t intend to malign any of these folks. It’s never too late to get to the right answer. If anything, I was late in getting to this in 2007. And I could still be wrong.1 It’s not recognizing that you’re wrong that matters; it’s what you do next. I still think Emily’s takeaways are the best, most succinct articulation of this that I’ve seen, and they’re still things it’s pretty rare to see people actually do in practice.
---

1. This also seems like a problem in the field of human intelligence. Researchers there don’t appear, to me, to have gotten to anything like a fundamental primitive of “intelligence.” They keep splitting it into subcomponents and measuring it in different ways. I’m not an expert in the study of human intelligence, so I could be wrong here; these are just the impressions of a curious outsider.