What is one thing that human intelligence can do but artificial intelligence can’t?
July 25, 2018 / Ask Slater
Let me tackle this from a slightly more practical perspective. The standard problem with this kind of question is that people are very bad at knowing what human intelligence can actually do, and so they focus on philosophical questions whose answers are unknown for humans and AI alike. A good example of this is the answer from David Philips. The nature of consciousness is a massive epistemological unknown: we don’t have a robust definition of consciousness, so arguing about whether or not an AI is conscious is arguing about something we can’t even define.
So instead let me focus on two classes of things:
- Human tasks that AI is surprisingly good at
- Human tasks that AI is surprisingly bad at
Understanding emotions in speech

The robots have this one. I’m starting here because it surprises most people. We like to believe that understanding people’s emotions from their language, intonation, and speaking cadence is uniquely human and very difficult. Humans certainly believe that they are very emotionally intelligent, or at least much more emotionally intelligent than a machine could be. In reality, emotional intelligence follows a normal distribution: some people are very emotionally intelligent, but most are not. Humans also have a particularly hard time working against their own biases across cultures, which is generally easier for an AI to handle. You can read more about this here: Cogito (a company that specializes in building AI that helps humans understand the emotional state of the person on the other end of the phone)
Walking

Humans by a mile. This point and the previous one illustrate a common theme: things that are easy for humans are hard for machines, and vice versa. So, back to walking. Look at this YouTube video:
See that robot? The dinky thing that looks like it would fall over in a stiff breeze requires tens of thousands of dollars of equipment to manage an awkward hobble. That is the state of the art here. Now look at this video:
That’s a machine that cost millions of dollars and still can’t walk like a person, even on a flat, completely uniform surface. And yet it’s so impressive that most people assume these results are demo magic rather than indicative of real performance. AI cannot walk.
Detecting and Measuring Gender and Ethnic Stereotypes
Robots on this one. Again, very surprising. Understanding the complex web of history that has led to the current cultural zeitgeist of stereotypes and archetypes seems, intuitively, like a uniquely human problem. Beyond recognizing that these stereotypes exist, there’s the further step of understanding why, say, assuming that someone of Hispanic origin is a housekeeper is offensive. It turns out that these “subtle” signals aren’t so subtle after all. A human would generally need years of study in gender studies and related fields to get a solid grasp on, say, the last hundred years of evolving Asian-American stereotypes.
Computers learn this by accident. It turns out that you get a remarkably good view of this just by looking at human language and popular writings over time. In the paper linked above, a couple of researchers from Stanford use these writings to not only identify, but quantify, bias over the years.
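The core trick in that line of work is simple: train word embeddings on text from different eras, then measure how close occupation words sit to gendered words in the vector space. Here is a minimal sketch of that measurement. The four-dimensional vectors below are hand-made toys, not real embeddings (real studies train embeddings per decade on large corpora); only the cosine-difference measurement itself reflects the actual technique.

```python
import numpy as np

# Toy stand-ins for word embeddings. In real work these would be
# high-dimensional vectors learned from decades of text.
vectors = {
    "she":      np.array([ 1.0, 0.1, 0.0, 0.2]),
    "he":       np.array([-1.0, 0.1, 0.0, 0.2]),
    "nurse":    np.array([ 0.8, 0.5, 0.1, 0.0]),
    "engineer": np.array([-0.7, 0.6, 0.2, 0.0]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_bias(word):
    """Positive means the word sits closer to 'she' than 'he';
    negative means the reverse. Tracking this score across
    decade-specific embeddings quantifies how a stereotype shifts."""
    return cosine(vectors[word], vectors["she"]) - cosine(vectors[word], vectors["he"])

for occupation in ("nurse", "engineer"):
    print(occupation, round(gender_bias(occupation), 3))
```

With these toy vectors, "nurse" scores positive (closer to "she") and "engineer" negative, mirroring the kind of occupational bias the embeddings absorb from their training text without anyone programming it in.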
Knowing that people have names
Computers are worse at this than you could imagine. To dig into it (and to explain how we even know computers are bad at this), it’s worth watching another video:
That’s a “movie” composed by an AI. It’s very cool and very interesting, but there is one important thing it is not: coherent. Notice that there are no names at all. Some of the lines are poignant, even beautiful, but at best the model retains context across 2–3 lines, and certainly nothing beyond a paragraph. This highlights two critical gaps: machines lack both baseline knowledge (such as knowing that they should give people names) and basic storytelling ability (stringing together three related sentences, or telling a story with a beginning and an end).
They are clearly capable of constructing grammatically correct sentences, and even of flexibly rewording concepts to state a point in a way that’s more dramatically compelling, but on the skills we humans take for granted, they fall flat on their faces.
Being inspired by great painters
Look at that painting (Credit: Exploding Tardis)
It’s beautiful, and uniquely human. Someone has taken a popular cultural icon and infused it with an homage to one of humanity’s greatest painters. A difficult task, to be sure, but one that humans seem well suited for: knowing exactly what it is about Van Gogh’s style that makes it so striking, and applying it to something entirely out of context. Let’s look at a trailer for Loving Vincent:
Here, humans have managed to capture the ethereal sense of Van Gogh and make an entire movie in that style. Except computers are much better at it:
I won’t say much more because I believe that video speaks for itself.
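For the curious, the technique behind these videos is neural style transfer. Its key insight is that "style" can be represented as the correlations between a network's feature channels (the Gram matrix), discarding spatial layout entirely. The sketch below computes that style representation and the loss a style-transfer optimizer minimizes; the random arrays are toy stand-ins for the convolutional activations a real system would pull from a pretrained network such as VGG.

```python
import numpy as np

def gram_matrix(features):
    """Style representation used in neural style transfer: channel-by-channel
    correlations of a feature map, with spatial structure averaged away.
    `features` has shape (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)      # (c, c) correlation matrix

def style_loss(features_a, features_b):
    """Mean squared difference between two Gram matrices. A style-transfer
    optimizer adjusts the generated image to drive this toward zero
    against the painting's features."""
    return float(np.mean((gram_matrix(features_a) - gram_matrix(features_b)) ** 2))

# Toy feature maps standing in for real CNN activations.
rng = np.random.default_rng(0)
painting = rng.normal(size=(8, 16, 16))
photo = rng.normal(size=(8, 16, 16))

print(style_loss(painting, painting))  # identical style: 0.0
print(style_loss(painting, photo))     # different styles: positive
```

Because the Gram matrix throws away *where* things are and keeps only *how* feature channels co-occur, the same brushstroke statistics can be painted onto entirely new content, which is exactly what makes a Van Gogh-style TARDIS possible.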
I could go on and on. There are countless examples of tasks on both sides of this spectrum, but I’ve chosen only a few of the most striking examples of cognitive dissonance to make what I believe is a critical point:
The human concept of difficult, and the machine concept of difficult are entirely distinct
We do not know what’s hard. We have no good intuition for what will be easy or hard for a machine, just as we have no good intuition for which problems are easy or hard for individual humans. People often default to lazy arguments that make sweeping assertions about the nature of intelligence and consciousness, but there’s no need to do so. People also make baseless assertions about which jobs will be “hard” or “easy” to automate (Find Out If a Robot Will Take Your Job), based on nothing but insulting and inaccurate stereotypes.
One of the best examples of this dissonance is computer programming. People all over Quora (and within the Time quiz linked above) assert that computer programming will be relatively easy to automate, without understanding what that would actually mean. Automating programming is so far beyond the edge of what computers are capable of today that it would be easier to automate authors than programmers (if you don’t think that’s accurate, you have never maintained a large application stack).
There are countless tasks that humans can do that AI cannot, and countless tasks that AI can do that humans cannot. Just because some marketing person decided to call something “AI” doesn’t mean it is “intelligent” in the same way a person is.