The Threshold of Intelligence

Does the source matter?

Good and Evil Bots (Image: ChatGPT)

AI is a simulation of intelligence. It’s right there in the name: artificial — not real — intelligence.

AGI, or Artificial General Intelligence, is defined by Wikipedia as “a type of highly autonomous artificial intelligence (AI) intended to match or surpass human capabilities across most or all economically valuable cognitive work.”

A good way to think of it is that AI tends to be single-task or limited in its focus, whereas AGI is, as its very name implies, general purpose, where “general purpose” translates to “anything humans could choose to do”. I’m not sure I concur with Wikipedia’s inclusion of “economically valuable”, though.

But when questioning whether it’s actually intelligent, that pesky “A” is still there.

The real question should be: does it matter?

Look at the progression of intelligence simulation over the years.

It fools a few people occasionally. The classic example is Eliza, created in 1966 by Joseph Weizenbaum: a small program that simulated conversation by responding in relatively generic ways to certain kinds of prompts. Interestingly enough, it actually fooled a (very) few people into thinking it had achieved computer sentience.
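
To give a flavor of how simple that simulation was, here’s a minimal, Eliza-style sketch in Python. The keyword patterns and canned replies are illustrative inventions, not Weizenbaum’s original script, but the trick is the same: match a keyword, echo a fragment of the input back inside a template.

```python
import re
import random

# A tiny, Eliza-style responder: match a keyword pattern and echo part of the
# input back inside a canned template. (Illustrative patterns only; the real
# ELIZA used a much richer script of keywords, ranks, and transformations.)
RULES = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"\bmy (.*)", re.I),
     ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
]

# Generic fallbacks when no pattern matches.
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(user_input: str) -> str:
    """Return a generic, pattern-based reply to the user's input."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            # Substitute the captured fragment into a canned template.
            return random.choice(templates).format(match.group(1).rstrip(".!? "))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))
    # e.g. "Why do you feel nobody listens to me?"
```

Even a handful of rules like these produces a surprisingly convincing illusion of attention, which is roughly what startled some of Eliza’s early users.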

It fools some of the people some of the time. This is perhaps how I’d characterize current LLMs (Large Language Models) like ChatGPT and its ilk. There are those who are holding conversations with these AI chatbots, either believing them to be “intelligent”, or willing to play along with the characterization because the conversations are often very realistic.

It fools most of the people most of the time. This has yet to exist, at least to the best of my knowledge, but it seems to be on the horizon. This type of AGI is, of course, the goal of most of the players in the space: to simulate human interaction and thought to the point where, with few exceptions, we wouldn’t really be able to tell it’s not human. (Or perhaps it already exists, and I’m being fooled?)

It fools all the people all the time. My assumption is that some form of this is the long-term goal of AGI: to be indistinguishable from an actual human. While I suspect it’s possible, I don’t know whether this will really happen in my lifetime (or at all), regardless of the prognostications of the AI proponents.

Let’s assume this “all the people/all the time” AI comes into existence. Is it actually intelligence? Or does it remain just a really really really good simulation? And if it’s that good a simulation, does the distinction really matter? If it quacks like a duck … err … if it talks like a human, and interacts like a human, does it matter that it’s not a human? I suspect it matters, but how, exactly, does it matter? Most arguments against what such an AGI might do are arguments that would apply equally to a human making the same decisions, so … how and where should the difference enter the discussion?

And then finally:

It no longer fools anyone, nor does it need to. This is the one that I think scares most people. At some point AGI — again, emphasis on the “A” — will surpass human capability, becoming an ASI (Artificial Super Intelligence). Used for good, it could solve problems “real” humans would have been unable to solve, and do so in record time. Used for evil … well, “results are unpredictable”, as the old saying goes, no matter how many people predict Skynet.

But it’s still a simulation, isn’t it? Simulating what? It’s still “artificial”, in the sense of having been constructed … or is it? Is there some threshold beyond which those words no longer apply?

My take is that ultimately it doesn’t matter what we call it. Whether it’s genuinely “intelligent” is moot, and a discussion best left to the philosophers. These artificial programs or entities have a variety of distinguishing capabilities and behaviors, and those are what we need to pay attention to, so as to best, and perhaps most safely, leverage what’s been created.

2 thoughts on “The Threshold of Intelligence”

    • If it does such a good job of simulating the appearance and behavior of feelings that we can’t tell it’s an AI … does it matter? And even if it does matter, how would you tell if the simulation makes it indistinguishable from the real thing?
