Artificial Intelligence (AI) has once again become a buzzword that is hard to ignore. It's easy to see why: computers are making more and more complex decisions, and deep learning is overtaking human experts in many tasks. However, this doesn't mean that we are close to achieving true AI, or even that we are going in the right direction. One of my favourite quotes on the subject is from Dijkstra, who said:
"The question of whether machines can think...
is about as relevant as the question of
whether submarines can swim."
Submarines are in many ways superior to any fish or animal at moving through water, but we would never call what they do swimming. The question is not relevant because it hinges entirely on what we define as swimming. Conversely, a computer can do many mental tasks, like arithmetic, far better than any human, but if our definition of 'thinking' does not include computers, the question becomes pointless. Therefore, to know how close we are to Artificial Intelligence, we need a good definition of intelligence that can potentially include both humans and computers.
While everybody has some idea of what intelligence is, as scientists we need a well-defined, reductionist definition. There are two requirements for this definition:

1. It should match the general concepts people already have about intelligence.
2. It should be measurable, so that both humans and computers can be scored against it.

The first requirement means that we should have a look at the different general concepts people have about intelligence.
To start the definition: intelligence is a mental ability, as opposed to a physical ability like strength. For us it happens in the mind; for computers, in their software.
One controversial point I would like to argue is that there is no reason why a computer that is directly programmed by another intelligence should be considered less intelligent than one that learns the task itself. Learning is just a different way of programming, with both advantages and disadvantages. This doesn't mean that AI is possible without learning; in fact, I suspect that learning is a crucial ingredient. Instead, I mean that intelligence should be defined in terms of output to the world, regardless of what goes on inside the mind of the intelligent agent. If a directly and exhaustively programmed computer could do well in all the intelligence tests we set, then that would be an AI as well.
In general, the more people find a mental task difficult or even impossible, the more intelligence that task is considered to require. For example, solving algebra equations requires more intelligence than making small talk. Note that computers are far better at the first task than at the second. There is of course the much-debated IQ test, which roughly equates intelligence with the potential to do well in school and university. But the skills required in school have changed a great deal over time, and differ per country and culture, so this is not a directly applicable universal concept. Looking at how school requirements and the idea of intelligence have changed over time, we see the following trend:
Intelligence is what sets Humans apart
Originally, being able to remember a lot of facts was enough to be considered intelligent. As external technology like printing made it easier to store facts outside our heads, the focus shifted towards being able to calculate. Calculation then became the domain of computers, so we started defining intelligence more by the ability to understand and make connections between facts and calculations. Computers are getting better and better at those tasks too, so instead people now talk more about 'emotional intelligence', which is supposedly "far more useful than the old mechanistic intelligence that even stupid machines have".
This is known as the 'AI effect', and it means that we keep moving the goalposts away from computers and closer to ourselves. While this might be nice for our ego, it's not very useful if we want to develop AI. We need to set ourselves a clear goal and aim for that, but I would suggest that we first break the problem down a bit further.
When people talk about intelligence, they often imagine a single line, which goes from simple insects, past more complex animals, up to cats, dogs, then primates, dolphins and finally humans. They then talk about computers advancing along this line, and often fear that they will race past us and leave us second best, or worse.
I'd like to propose that this single line is too simplistic to be useful. Instead, I think in terms of multiple lines, or dimensions, defining an intelligence space in which each animal occupies a point. For example, an elephant might have a better memory than us, while we are far better at logical reasoning. Whether the elephant or the human is more intelligent can then be decided by picking a combination of our scores in each of these dimensions. Choosing between the definitions of intelligence above simply amounts to picking different weights for the dimensions you favour when combining them. As a starting point, I would propose the following dimensions:
Long-term memory

The amount of information that can be stored and later recalled. Here computers, with their vast and essentially lossless storage, clearly beat us.

Direct memory

The amount of information available for direct processing. This is separate from long-term memory, because even if you have some information stored somewhere, you may not be able to use it directly for logic and such. For a computer, this would be the RAM, or arguably the data currently going through the CPU. Humans are usually only able to hold a handful of concepts 'in our consciousness' that we can directly reason about. I would say that computers also beat us in this respect.
Unbiased logic

This is the realm of pure mathematics and logical reasoning: given a set of axioms and a basic rule system, how to reach a logical and unbiased conclusion. For a long time this was considered a very strong test of intelligence, but its status has declined a fair bit, probably because computers are clearly besting us left and right in this dimension, so we don't like to make it too important. There may still be a few logic systems where we come out on top, but in most cases, computers are victorious here.
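To make this dimension concrete, here is a minimal sketch of mechanical, unbiased reasoning: forward chaining over propositional rules. The facts and rules are invented for illustration; the point is that the conclusion follows deterministically from the axioms, with no guesswork and no prior bias, which is exactly the kind of work computers excel at.

```python
# A minimal sketch of unbiased logic: forward chaining over
# propositional rules. All facts and rules here are hypothetical
# examples; the conclusion follows mechanically from the axioms.

def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no new
    facts can be derived, then return every derivable fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

axioms = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]
print(forward_chain(axioms, rules))
```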
Biased logic

Contrary to unbiased logic, this involves drawing conclusions from very limited information, while taking into account a bias from previous experience. It includes common sense and the ability to roughly predict events without needing to understand every single step and detail.
Here humans still beat computers. We excel at common sense, and evolution has shaped us to draw conclusions from the barest minimum of objective information. This is also where machine learning is allowing computers to slowly catch up with us: while they still need far more experience (training data) than we do, at least computers no longer keep making the exact same mistake every time. I think that intuition is an extreme case of this dimension, where we use a lot of bias and very little logic.
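One way to picture biased logic is as Bayesian updating, where a strong prior (the bias from previous experience) lets a single sparse observation carry a conclusion. A rough sketch, with all the numbers invented for illustration:

```python
# A rough sketch of biased logic as Bayesian updating. The prior
# encodes bias from previous experience; a single, sparse
# observation is enough to shift the conclusion substantially.
# All numbers here are invented for illustration.

def posterior(prior, likelihood, likelihood_given_not):
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Prior belief that a rustling bush hides a predator: low, but not
# negligible, because past experience (bias) says it happens.
prior = 0.05
# One noisy observation: a growl-like sound, far more likely if a
# predator is present than if it is just the wind.
p = posterior(prior, likelihood=0.7, likelihood_given_not=0.05)
print(f"P(predator | growl) = {p:.2f}")  # ~0.42 from one data point
```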
Abstraction

Finally, we have the ability to generalize: to take something learned in one situation and apply it in another. Abstraction is where we truly shine; it is what gives us language (how else could 'chair' refer to both a bare wooden frame and a comfy cloth armchair?). It is also what allows us to solve a problem in mathematics (unbiased logic) and apply the solution to real, messy objects in the world, because we are able to abstract away the differences.
This is closely tied to biased logic; in fact, abstraction is what allows us to draw such high-quality conclusions from very sparse data. I would argue that it is also what allows us to make new connections, to 'create something new', though perhaps that is actually a separate dimension in itself.
First of all, it would be useful to have clear and concise tests for each of these dimensions separately that both humans and computers can score on. I have already given a qualitative indication of how we compare on each, but we obviously need a quantitative measurement.
Once we have these scores, we can combine them into a single intelligence score. A simple weighted addition might sound appealing, but perhaps some kind of multiplication would be more reasonable: most real mental tasks require a combination of all or most of these dimensions. For example, no matter how good your memory is, it's not much use without some form of logic to apply to its contents. This is also why we would generally not consider computers more intelligent than humans yet. While we may have been overtaken on over half of these dimensions, we have a nicely well-rounded score on each of them: enough of each to solve most tasks and reach most goals, which is what really sets us apart from machines.
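A small sketch of why multiplication might be the more reasonable combination (the dimension scores and weights below are entirely invented): a weighted sum lets a few towering strengths mask a crippling weakness, while a weighted geometric mean punishes any score near zero, matching the intuition that real tasks need some of every ability.

```python
import math

# Hypothetical per-dimension scores in [0, 1]; the numbers are
# invented purely for illustration, not measured.
human    = {"long_term_memory": 0.5, "direct_memory": 0.3,
            "unbiased_logic": 0.4, "biased_logic": 0.9, "abstraction": 0.9}
computer = {"long_term_memory": 1.0, "direct_memory": 1.0,
            "unbiased_logic": 1.0, "biased_logic": 0.3, "abstraction": 0.05}
weights  = {d: 1.0 for d in human}  # equal weights, also a free choice

def additive(scores, weights):
    """Weighted sum: one strong dimension can mask a weak one."""
    return sum(weights[d] * scores[d] for d in scores)

def multiplicative(scores, weights):
    """Weighted geometric mean: a near-zero dimension sinks the
    total, since real tasks need some of every ability."""
    total_w = sum(weights.values())
    return math.prod(scores[d] ** (weights[d] / total_w) for d in scores)

for name, s in [("human", human), ("computer", computer)]:
    print(name, round(additive(s, weights), 2),
          round(multiplicative(s, weights), 2))
```

With these invented numbers the computer wins the additive score but the human wins the multiplicative one, which is the well-roundedness argument in miniature.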
In this entire story I have so far ignored the much-debated original test of machine intelligence proposed by Alan Turing. To paraphrase one interpretation of this test:
"If you can talk to a computer on any topic, for any length of time, and still you are unable to tell if you are talking to a computer or a human, then you know that computer is intelligent."
While this is a nice idea, both intuitive and easy to test, I think that it is actually a separate goal. I propose to make a distinction between the creation of:

1. an Artificial Intelligence: a machine that matches or surpasses us in the various dimensions of intelligence;
2. an Artificial Human: a machine whose behaviour is indistinguishable from that of a human being.
It is the second kind that can pass the Turing test. Notice also that a computer of the second kind can by definition never surpass us, only come closer and closer to being human. The advantage of defining these two goals separately is that an important new idea immediately comes up: to pass as human, a computer should actually decrease its capacity in any dimension where it currently surpasses us. If my conversation partner in a Turing test could literally remember every word it had ever heard, or calculate pi to a million digits in a few seconds, I would immediately know it was a computer. Decreasing memory, both long-term and direct, as well as unbiased logic, would be a requirement for creating an Artificial Human.
This brings us to my final point. I believe that decreasing some abilities will actually be the key to increasing others. With so much memory and perfect logic, a computer has no need for abstractions and bias, and therefore will not easily develop them. We can already see this in certain deep learning setups, such as auto-encoders and word- or thought-vector spaces: in these, one part of the neural network is intentionally kept small, forcing the training process to develop generalizations. Neural nets in general are also far worse than regular computer programs at using memory and basic arithmetic, bringing them closer to human abilities. I would suggest that there are many ways in which we can intentionally limit these abilities in computers, and in that way encourage development along the other dimensions. This could help us develop the first true Artificial Human.
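As a minimal sketch of that bottleneck idea (assuming PyTorch; all the layer sizes are invented for illustration): the tiny latent layer cannot store its input verbatim, so minimizing the reconstruction error forces the network to find compact, general features.

```python
# A minimal autoencoder sketch (assumes PyTorch; layer sizes are
# invented for illustration). The intentionally small latent layer
# cannot memorize the input, so minimizing reconstruction error
# forces the network to learn compact generalizations -- the
# "limiting memory to encourage abstraction" idea from the text.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=8):
        super().__init__()
        # The encoder squeezes the input through the bottleneck...
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # ...and the decoder must reconstruct from that alone.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)  # stand-in batch of inputs
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
opt.step()
```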
You may wonder what the use of an artificial human is if it has the same limitations we do; we have been able to make new humans for a very long time, after all. The difference, of course, is that it is still artificial, and therefore easier to change and study. Once we have made real progress in the dimensions where computers currently lag, we can begin to carefully bring back some of their current power. That way, machines can become equal to or better than us in all dimensions of intelligence, and at that point we can say we have truly created General Artificial Intelligence.