You’ve read the hype, now read the truth: How close is the AI apocalypse?

Photo Credit: Shutterstock, Cartoon - "Is it harassment if I ask the new guy to make espresso?"

In a Geektime exclusive, Douglas Hofstadter, Jaron Lanier and Gary Marcus weigh in on how far AI has come – and how far it hasn’t

Marketing can sometimes be so persuasive that an entire industry believes its own hype.

Artificial intelligence is a case in point. For 60 years, scientists have been announcing that the great AI breakthrough is just around the corner. Suddenly, many tech journalists and tech business leaders appear convinced that AI has finally come into its own.

Just look at a few recent headlines.

Wired Magazine recently enthused that AI had achieved a “60-years-in-the-making overnight” breakthrough due to the convergence of “parallel computation, bigger data and deeper algorithms.”

Last month, Tesla and SpaceX founder Elon Musk said that artificial intelligence is humanity’s “biggest existential threat.”

And earlier this week, the New York Times announced with great fanfare that “two groups of scientists, working independently, have created artificial intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding.”

Human levels of understanding? Really?

You can see why most readers of technology websites might believe that computers are already smarter than humans, and that Ray Kurzweil’s Singularity is on our doorstep.

To find out whether these claims are founded, Geektime contacted three of the world’s leading thinkers on artificial intelligence. We asked them how much of an AI breakthrough humanity has actually achieved, and how much is wishful thinking.

Gary Marcus, a psychology professor at New York University who writes about artificial intelligence for the New Yorker, was the first to burst the bubble. He told Geektime that while the coalescence of parallel computation and big data has led to some exciting results, so-called ‘deeper algorithms’ aren’t really much different from those of two decades ago.

In fact, several experts concurred that doing neat things with statistics and big data (which accounts for many of the recent AI “breakthroughs”) is no substitute for understanding how the human brain actually works.

“Current models of intelligence are still extremely far away from anything resembling human intelligence,” philosopher and scientist Douglas Hofstadter told Geektime.

But why is everyone so excited about computer systems like IBM’s Watson, which beat the best human players on Jeopardy! and has more recently been put to work diagnosing disease?

“Watson doesn’t understand anything at all,” said Hofstadter.  “It is just good at grammatical parsing and then searching for text strings in a very large database.”
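
To make Hofstadter’s distinction concrete, here is a toy Python sketch of retrieval by string matching: it “answers” a question by ranking stored sentences on keyword overlap, with no model of meaning anywhere. The corpus, the question and the function are invented for illustration.

```python
# Toy retrieval-by-string-matching: the program "answers" by ranking stored
# sentences on keyword overlap with the question. Nothing here models meaning.
# The corpus and the question are invented for illustration.
corpus = [
    "The capital of France is Paris.",
    "Watson was built by IBM to play Jeopardy!",
    "A flash crash is a very rapid fall in securities prices.",
]

def keyword_answer(question: str) -> str:
    """Return the stored sentence sharing the most words with the question."""
    q_words = set(question.lower().rstrip("?").split())
    return max(corpus, key=lambda s: len(q_words & set(s.lower().rstrip(".!").split())))

print(keyword_answer("Who built Watson to play Jeopardy?"))
# -> "Watson was built by IBM to play Jeopardy!" (a lucky string match, not understanding)
```

Ask it something the corpus doesn’t cover and it still returns its best string match, with exactly the same confidence.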

Similarly, Google Translate understands nothing whatsoever of the sentences that it converts from one language to another, “which is why it often makes horrendous messes of them,” said Hofstadter.

Meanwhile, in a recent talk at Ireland’s Web Summit, Marcus also came down hard on Google Translate.

He asked the program to convert the following sentence into Gaelic and back: “Either the translation of sentences with complex sentence structures into Celtic languages remains remarkably difficult, or it doesn’t.”

What did he get back?  “Either continue the translation of sentences with complex sentence structures Celtic languages remarkably difficult, or does it.”

“That’s not even a sentence in English,” Marcus riffed.

“It’s complete garbage. If you do a five-word sentence, Google Translate will probably work, but if you have a 20-word sentence with an unusual syntax it might well not.”
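
Marcus’s round-trip test is easy to automate. The sketch below is a hypothetical harness, not code from Google Translate or any real translation API: it accepts whatever translate(text, source, target) callable you care to plug in and reports how much of the original wording survives an English-to-Gaelic-and-back round trip.

```python
# Hypothetical round-trip harness. `translate` is any machine-translation
# callable supplied by the caller; this sketch deliberately calls no real API.
from typing import Callable

def round_trip_overlap(translate: Callable[[str, str, str], str],
                       sentence: str, pivot: str = "ga") -> float:
    """Translate English -> pivot -> English; report the share of original words that survive."""
    back = translate(translate(sentence, "en", pivot), pivot, "en")
    original = sentence.lower().split()
    returned = set(back.lower().split())
    return sum(1 for word in original if word in returned) / len(original)

# Stand-in "translator" that just echoes its input, so the harness runs as-is:
identity = lambda text, src, dst: text
sentence = ("Either the translation of sentences with complex sentence structures "
            "into Celtic languages remains remarkably difficult, or it doesn't.")
print(round_trip_overlap(identity, sentence))  # 1.0 for the echo; real systems score lower
```

Word overlap is a crude proxy for meaning, which is rather the point: even a sentence that keeps most of its words can come back as garbage.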

First, know thyself

In Marcus’ view, the only route to true machine intelligence is to begin with a better understanding of human intelligence. Big data will only get you so far, because it’s all correlation and no causation.

When children begin learning about the world, they don’t need big data. That’s because their brains are working out why one thing causes another. That process requires only “small data,” says Marcus.

“My 22-month-old is already more sophisticated than the best robots in the world at digging through bad toys and finding something new.”
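
The correlation-versus-causation point is easy to illustrate with a toy example (assuming numpy is available; all numbers are invented): two quantities driven by a hidden common cause correlate strongly, yet no amount of extra data on the pair alone reveals that neither causes the other.

```python
# Toy confounding example: ice-cream sales and drownings both track summer heat,
# so they correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
heat = rng.normal(25, 5, size=10_000)              # hidden common cause (temperature)
ice_cream_sales = 2.0 * heat + rng.normal(0, 2, size=10_000)
drownings = 1.0 * heat + rng.normal(0, 2, size=10_000)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation: {r:.2f}")   # about 0.9, yet banning ice cream would not prevent drownings
```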

Marcus offers several examples of aspects of human intelligence that we need to understand better if we want to build intelligent machines. For instance, a human being who looks at the picture he shows (at 11:14 in the YouTube video of the talk) will be able to guess what happens next.

Marcus gives another example of something humans can do better than software. A sentence like “dogs have four legs” is easy for humans to understand, but because it is a generic statement that admits exceptions (some dogs have three legs), computers have a hard time with it.
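
One way to see why such generic statements are hard for software is that “dogs have four legs” is a default, not a universal rule. Here is a minimal sketch of a default with exceptions; the names and numbers are purely illustrative.

```python
# Minimal default-with-exceptions sketch: the generic rule applies unless
# something more specific is known about the individual.
DEFAULT_LEGS = {"dog": 4}     # generic knowledge: dogs have four legs
EXCEPTIONS = {"rex": 3}       # specific knowledge: Rex lost a leg

def legs(individual: str, kind: str) -> int:
    """Prefer what is known about the individual; otherwise fall back to the default."""
    return EXCEPTIONS.get(individual, DEFAULT_LEGS[kind])

print(legs("fido", "dog"))    # 4 -- the generic statement applies
print(legs("rex", "dog"))     # 3 -- the exception overrides it without falsifying the generic
```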

And what about Google’s latest hoopla over visual recognition?

“It’s an interesting paper,” says Marcus, referring to Google’s announcement that a computer could now semi-accurately describe events in photos.

“It may have some commercial application. But it is not published in a peer-reviewed journal, and I don’t think it is a game changer.”

So then what is this new technology and why is everyone talking about it?

“It’s the visual equivalent of what happens in Google Translate when you translate a complicated sentence from English to Gaelic and back again. Hit or miss approximations, not true comprehension. People have been working in this general direction for thirty years. It’s cute, and it’s interesting, but it’s not bringing us closer to genuinely intelligent systems.”

Be careful what you wish for

According to computer scientist and author Jaron Lanier, one of the problems with all the hype around AI is that programs like Google Translate are not autonomous but have to be continually fed examples of human translations to work. He argues that when we talk about intelligent computer programs taking human jobs, we pretend that the machines are smarter than they actually are. This allows us to “treat algorithms as persons,” and treat actual persons as if their work has no monetary value. In the long term, this results in a society where the wealth and power accrue to the keepers of the algorithms, while the middle class becomes impoverished.

“When a piece of software is deemed autonomous to some degree,” he has written, “the only test of its status is whether users believe it.”

Lanier told Edge.org that Elon Musk’s warning of an existential danger from AI is the flip side of the claim that computers becoming smarter than humans would be a good thing, “because humans had their chance and now it’s the computers’ turn.”

“I propose that the whole basis of the conversation is itself askew; it’s a kind of a non-optimal, silly way of expressing anxiety about where technology is going.”

In response to Geektime’s query about Musk’s comments, Lanier said, “Elon is [in a sense] correct because if people believe in AI enough, then in practice, events can unfold as if AI is real.”

Is humanity in existential danger?

Given the still-immature state of artificial intelligence, Hofstadter told Geektime that Elon Musk’s dystopian predictions are a long way off.

“I would not worry about the near-term future of humanity, but in the long term, Mr. Musk may well be quite correct.”

But even if machines aren’t that advanced, can they still cause harm? Do they have to have a certain level of understanding, or consciousness, to turn against their creators?

“Machines don’t have to have free will, whatever that even means, to cause harm,” Marcus said. “We have already seen flash crashes in the stock market, for example, induced by machines with no will and little ability to think with the flexibility of human cognition. Computers can already cause chaos if they are misprogrammed.”
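
No intelligence is needed for a flash crash, only interacting rules. The toy simulation below (entirely invented numbers, not a model of any real market) shows simple stop-loss programs feeding on one another after a single modest price shock.

```python
# Toy flash-crash cascade: each program has a stop-loss price; one modest shock
# triggers the highest stop, that forced sale pushes the price lower, which
# triggers the next stop, and so on. All numbers are invented.
price = 100.0
stops = [99.5 - 0.4 * i for i in range(30)]   # stop-loss levels: 99.5, 99.1, 98.7, ...
price -= 1.0                                  # a single one-point external shock
sold = 0
while sold < len(stops) and price <= stops[sold]:
    price *= 0.995                            # the triggered program's sale moves the market
    sold += 1

print(f"after a 1-point shock: price {price:.2f}, programs forced to sell: {sold}")
# -> roughly a 15-point collapse, with no "will" or understanding anywhere in the loop
```

The particular numbers don’t matter; the damage comes from a cascade of pre-programmed reactions, none of which understands anything.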

The ideology of AI

If these three scientists are right, and computers don’t actually think in anything like the human sense, then why do the media and high-tech business leaders seem so eager to jump the gun? Why would they have us believe that robots are about to surpass us, like Scarlett Johansson’s character in the movie Her?

Perhaps many of us actually want computers to be smarter than humans because it’s an appealing fantasy. If robots are at parity with humans, then we can define down what it means to be human — we’re just an obsolete computer program — and all the vexing, painful questions (why do we suffer? why do we die? how should we live?) become irrelevant.

It also justifies a world in which we put algorithms on a pedestal and believe they will solve all our problems. Jaron Lanier compares it to a religion:

“In the history of organized religion,” he told Edge.org, “it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.”

“That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else…contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, ‘Well, but they’re helping the AI, it’s not us, they’re helping the AI.’ The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.”


About Simona Weinglass


I’m an old-school journalist who recently decided to pivot into high-tech. I work in high-tech marketing as well as print and broadcast media covering politics, business culture and everything in between.


  • Anonymous

    I wouldn’t call Lanier an “expert” in A.I. And the other two — Marcus and Hofstadter — have reasons to dislike the way A.I. is progressing, as their particular approaches are not currently in the spotlight. And, to be fair, if the “algebraic theory of mind” were ascendant, then you would see criticism being thrown their way from people doing the other approaches.

    This sounds like a story of money — for grants and such — and also of prestige, to me; but I could be wrong. “A.I. is going absolutely nowhere!” Translation: my students or the students of my friends aren’t getting the best jobs, and people aren’t paying enough attention to my theories!

    A.I. will continue to progress as long as large corporations can see that it is making them money. Even if government funding of AI research completely dried up, large corporations would continue to fund it. No amount of criticism will matter (except to confuse the issue, and give people a false impression of where things truly stand).

    I think what is needed to cut through the rhetoric is objective measures of AI progress; not post-hoc-ing with “Well, but it can’t do this over here!” Competitions like ImageNet, various Kaggle competitions, and the upcoming Winograd Schema Challenge, where the measure of progress is decided in advance, are good examples of ways to measure progress.
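
The pre-agreed-measure idea in the comment above can be sketched in a few lines: a fixed item set and a fixed scoring rule, applied identically to any system. The lone Winograd-style item below is the well-known trophy/suitcase example, and the system argument is a hypothetical callable.

```python
# Sketch of a fixed benchmark: items and metric are decided in advance, and any
# system is scored the same way. The `system` argument is a hypothetical callable.
ITEMS = [
    {"sentence": "The trophy doesn't fit in the suitcase because it is too big.",
     "question": "What is too big?",
     "choices": ["the trophy", "the suitcase"],
     "answer": "the trophy"},
]

def score(system) -> float:
    """Accuracy of system(sentence, question, choices) -> chosen answer over the fixed items."""
    correct = sum(system(i["sentence"], i["question"], i["choices"]) == i["answer"] for i in ITEMS)
    return correct / len(ITEMS)

# A baseline that always picks the first choice gets 1.0 on this lone item by luck;
# over a full, balanced item set it would hover near 50%.
print(score(lambda sentence, question, choices: choices[0]))
```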

  • Anonymous

    Saying that the article above is about grants and prestige issues misses the point. Marcus would likely agree that translation and Jeopardy-playing programs represent clear progress in AI. The main point of the article was that these advances are being “oversold” and “over-hyped.” Also, saying that AI research should be all about making money for corporations is a great example of the problem this article is pointing to. Science and research ought to be driven by questions that advance human knowledge as well as address important societal needs. The point was that we may be getting too focused on immediate problem solving, to the exclusion of exploring fundamental and deep advances in AI.

  • Sven

    I think the problem is people are confusing large databases of information with intelligence. When a computer can encounter a new problem, come up with a new way to solve the problem, then write its own code to carry out the solution, then I’ll start being impressed. Until then, asking a computer to search the internet for “who voiced Woody in Toy Story” isn’t intelligence/learning/problem solving, it’s memory. And computers have been very good at storing/searching/sorting data since the beginning.