I took my title from an article by Kevin Drum, because I so wholeheartedly agree with the sentiment. He begins by saying:
"This is a battle that I suppose I have no hope of winning, but it would sure be nice if journalists (and marketing folks) could stop throwing around the term AI for everything that works a little better than it did last year. It’s true that some of the advances in things like machine learning, quantum computing, and conversation bots have been pretty stupendous. My jaw drops at some of this stuff, and yours should too. Nonetheless, none of it is artificial intelligence or, really, even very close to it. It’s like calling the original dynamo of the 1830s an electrical generating network. Those dynamos were important because they made things like telegraphs and telephones possible, but true electrification was still decades away. Likewise, what we’re doing today may end up as the foundation of true AI, but we aren’t there yet."
Kevin Drum
Machine Learning (ML) isn’t there yet, but is it even on the right track? ML is based on “training” huge networks of interconnected “artificial neurons” to recognize patterns in vast databases.
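To make “training” concrete, here is a minimal sketch: a toy two-layer network fitted to the XOR pattern by gradient descent. The task, variable names, and numbers are illustrative choices for this sketch, not taken from any particular system.

```python
import numpy as np

# Toy "pattern recognition": teach a tiny two-layer network the XOR pattern.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):          # "training" = thousands of tiny weight updates
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # the network's current guesses
    err = out - y                   # how wrong each guess is
    d_out = err * out * (1 - out)   # backpropagate the error...
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out         # ...and nudge every weight slightly
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # after training: typically close to [[0], [1], [1], [0]]
```

Nothing in that loop resembles understanding; it is iterative curve fitting that gradually shapes a skill.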
In a recent TED talk, AI researcher Janelle Shane shared the weird, sometimes alarming antics of artificial neural network (ANN) AI algorithms as they try to solve human problems. She points out that the best ANNs we have today are maybe on par with worm brains.
So, if this is the right track, it has a very long way to go. That conclusion is consistent with an annual survey of AI experts, which asked how long until we have a 50/50 chance of achieving human-level intelligence. Their estimates average to 80 years.
So why the reliance on ANNs? From the MIT Technology Review article “Artificial general intelligence: Are we close, and does it even make sense to try?”:
“Deep learning relies on neural networks, which are often described as being brain-like in that their digital neurons are inspired by biological ones. Human intelligence is the best example of general intelligence we have, so it makes sense to look at ourselves for inspiration.”
While it is reasonable, in the pursuit of AGI, to study the human brain, the only naturally occurring example of intelligence we know of, it is gratuitous to suppose that emulating its lowest-level processing architecture is necessary to solve the problem, or even that such an emulation is possible in any meaningful way.
The artificial neurons of ANNs are far simpler than the neurons of the human brain, which can have up to 128 branched dendrites, each of which can have up to 40 synapses.
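For contrast, the following few lines are, in essence, an entire artificial neuron: a weighted sum passed through a squashing function. This is a sketch; real frameworks vectorize the computation, but each unit computes the same thing.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The whole of an ANN 'neuron': a weighted sum and a nonlinearity.

    Compare this handful of multiply-adds with a biological neuron's
    branched dendrites, each carrying many synapses.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

print(artificial_neuron([0.5, 0.2], [1.5, -0.7], 0.1))
```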
In any case, many other species have similar neural architectures without possessing high intelligence as we recognize it in humans. There is a vast difference between directly emulating nature and studying nature to discover its underlying principles.
The Wright Brothers studied the wings of birds, nature’s flying machines, to learn about aerodynamics. Those who tried direct emulation of birds by building machines with flapping wings failed. Aerodynamics is not one thing in birds and something else in aircraft. Why should intelligence be one thing in organisms and something else in machines?
So, what should be called Artificial Intelligence?
Perhaps we should, as Elon Musk likes to say, go back to First Principles. We know what “artificial” means, but if we understood what intelligence is in essence, we would not have to rely on a test like the one Turing proposed.
As it occurs in nature, intelligence is a specific kind of information processing that goes on inside an organic brain. It is the defining characteristic of our species; we are intelligent animals. Humans have that characteristic uniquely among known species, or at least to such a degree that it might as well be a difference in kind. It is the characteristic that gives us our incomparable control (for better or worse) over the natural environment.
Intelligence is characterized by specific information-processing operations, including inference (logic), pattern recognition, and memory utilization. But computers can already do these as well as or better than people. What is missing?
It is knowledge, a special kind of information structure that functions as an internal model of reality. This insight redefines the entire enterprise of Artificial Intelligence: don’t try to build an artificial human brain; focus on the end product of intelligence, knowledge.
Doesn’t Machine Learning produce knowledge? Isn’t that what learning means? It takes but a moment of consideration to realize the answer is no. The word “learning,” whether we are talking about people or computers, has two meanings so distinct that we need different terms.
Neural Learning
This is the process whereby neural pathways, whether in the human brain or in a computer’s artificial neural networks, are trained through many iterations until a skill of some kind is attained; for example, in humans, learning how to ride a bike or swing on a trapeze. Through repetitive training, other animals can be taught skills they do not have in nature. Bears can be taught to ride a bicycle, but we do not consider them intelligent in the same sense as humans because of that.
Cognitive Learning
When we leave behind considerations of the underlying processing architecture of the brain and focus instead on functionality, we find there is a second, qualitatively distinct type of learning. This second kind is the “higher brain function” that distinguishes humans, the “intelligent animals,” from all others.
Cognitive learning is the process that results not in a skill but in knowledge, a network of connected concepts that model reality. This kind of learning creates ever better models by acquiring and integrating new ideas. It is knowledge, the end-product of the cognitive learning process, that is the real goal of AI.
Machine Learning is not cognitive; it results in a skill, not knowledge. It has none of the characteristics we recognize as general intelligence in ourselves. So much confusion could have been avoided if people stuck to the term “data science” and never called it AI.
Bryant Cruse
There are only two justifications for calling ANNs AI: first, they mimic organic neurons, and at least one species of organism is intelligent (quite a stretch when put that way); second, aspiration: some researchers hope ANNs might someday be given the capacity for knowledge. A great deal of confusion has been sown by this loose use of terms.
Are old-fashioned “knowledge representation” approaches AI?
Again, the answer is no. But here the reasons are a bit more complex and have to do with the nature of knowledge itself, as opposed to data and information. While in common usage these terms are sometimes used interchangeably, they have generally accepted meanings. Data is a single assertion, “the sky is blue,” or a symbol, such as a word or a number. Information is data organized in specific ways, such as in a database or an ordered stream or sequence such as this sentence. Knowledge, on the other hand, is a structure that models reality.
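One way to make the distinction concrete in code (the structures below are illustrative choices for this sketch, not a proposed implementation):

```python
# Data: a single assertion or symbol.
datum = ("sky", "color", "blue")

# Information: data organized in a specific way, e.g. an ordered collection.
information = [
    ("sky", "color", "blue"),
    ("grass", "color", "green"),
    ("sky", "located-above", "grass"),
]

# Knowledge (in the author's sense): a connected structure whose shape
# mirrors relationships in reality, so it can be traversed as a model.
knowledge = {
    "sky":   {"color": "blue", "located-above": "grass"},
    "grass": {"color": "green"},
}

# The structure can be walked along its relationships:
print(knowledge["sky"]["located-above"])  # -> "grass"
```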
The relationship between models and their prototypes (the things that they are models of) is not one of representation. Symbols represent things without telling you anything about what is being represented.
Old-fashioned “knowledge representation” was off the mark, as is today’s renewed interest in Symbolic AI. Models resemble reality; they do not represent it. Models are structures, not symbols.
Expert systems and semantic networks never succeeded in creating models of reality that were independent of a reasoning process or linguistic semantics. Their building blocks are individual assertions, connected by either inference or semantic relationships. For that reason, they could never be made to scale to many practical applications: there are simply too many assertions.
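A toy forward-chaining rule system makes the point visible: every piece of coverage is one more hand-written assertion or rule. This is a hedged sketch, not the architecture of Cyc or any real inference engine.

```python
# Toy forward-chaining "expert system": facts plus if-then rules.
facts = {("tweety", "is-a", "bird")}
rules = [
    # Each rule links individual assertions by inference; broad coverage
    # demands enumerating ever more rules like these, one by one.
    (("?x", "is-a", "bird"),   ("?x", "can", "fly")),
    (("?x", "is-a", "canary"), ("?x", "is-a", "bird")),
]

changed = True
while changed:                        # fire rules until no new facts appear
    changed = False
    for (_, p, o), (cs, cp, co) in rules:
        for (fs, fp, fo) in list(facts):
            if fp == p and fo == o:   # naive match on the rule's condition
                new = (fs if cs == "?x" else cs, cp, co)
                if new not in facts:
                    facts.add(new)
                    changed = True

print(sorted(facts))  # [('tweety', 'can', 'fly'), ('tweety', 'is-a', 'bird')]
```

Every nuance of the commonsense world requires another hand-authored assertion, so the rule base grows without bound.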
The practical difficulty of the problem is illustrated by Cyc, an artificial intelligence project that has attempted to assemble a comprehensive ontology and knowledge base of everyday commonsense knowledge, with the goal of enabling AI applications to perform human-like reasoning. It is essentially a huge rule-based “expert” system. The project can hardly be considered a success.
MIT’s Open Mind Common Sense AI project uses a semantic network instead of an expert-system architecture, but it suffers from the same failing: it has over a million individual facts or assertions. These projects bring to mind the plight of the medieval alchemists, whose knowledge of the material world could only be acquired one experiment at a time. The “knowledge representation” approach does not scale.
Building a model of the commonsense world in a machine requires a compact specification for building models, in the same sense that DNA is a compact specification for organisms. To do that, one must first discover the “atomic” core concepts from which all more complex concepts are composed, and the connection relationships that result in valid models, that is, in sense rather than nonsense.
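What such a compact, compositional specification might look like is an open question; the following is a purely hypothetical sketch, with invented atomic concepts and connection rules, offered only to illustrate the idea of composition constrained toward sense.

```python
# Hypothetical: complex concepts composed from a small set of atomic ones,
# with connection rules deciding which compositions yield sense vs. nonsense.
ATOMS = {"object", "liquid", "container", "motion", "inside"}

VALID_LINKS = {                      # invented composition constraints
    ("liquid", "inside", "container"),
    ("object", "motion", "object"),
}

def compose(a, relation, b):
    """Combine two concepts only if the connection rules allow it."""
    if (a, relation, b) in VALID_LINKS:
        return {"parts": (a, b), "relation": relation}
    raise ValueError(f"nonsense: {a} {relation} {b}")

cup_of_water = compose("liquid", "inside", "container")   # sense
# compose("container", "inside", "liquid")                # would raise: nonsense
```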
If it does not produce knowledge it is not intelligence, artificial or natural.
Bryant Cruse