Prof. Gary Marcus of NYU on today’s limits of artificial intelligence


Interview by Alice Lloyd George, an investor at RRE Ventures and the host of Flux, a series of podcast conversations with leaders in frontier technology.

It’s hard to visit a tech site these days without seeing a headline about deep learning for X, or a claim that AI is on the verge of solving all our problems. Gary Marcus remains skeptical.

Marcus, a best-selling author, entrepreneur and professor of psychology at NYU, has spent decades studying how children learn. He believes that throwing more data at problems won’t necessarily lead to progress in areas such as understanding language, let alone get us to AGI, or artificial general intelligence.

Marcus is the voice of anti-hype at a time when AI is all the hype, and in 2015 he translated his thinking into a startup, Geometric Intelligence, which uses insights from cognitive psychology to build better-performing, less data-hungry machine learning systems. The team was acquired by Uber in December to run Uber’s AI Labs, where his co-founder Zoubin Ghahramani has now been appointed chief scientist. So what did the tech giant see that was so important?

In an interview for Flux, I sat down with Marcus, who discussed why deep learning is the hammer that makes every problem look like a nail, and why his alternative sparse-data approach is so valuable.

We also got into the challenges of being an AI startup competing with the resources of Google, how corporations aren’t focused on what society actually needs from AI, his proposal to revamp the outdated Turing test with a multi-disciplinary AI triathlon, and why programming a robot to understand “harm” is so difficult.

AMLG: Gary, you are well-known as a critic of deep learning; you’ve said that it’s over-hyped. That there’s low-hanging fruit deep learning is good at: specific narrow tasks like perception and categorization, and maybe beating humans at chess. But you felt that this deep learning mania was taking the field of AI in the wrong direction, that we’re not making progress on cognition and strong AI. Or as you’ve put it, “we wanted Rosie the robot, and instead we got the Roomba.” So you’ve advocated for bringing psychology back into the mix, because there are a lot of things that humans do better, and we should be studying humans to understand why they do those things better. Is this still how you feel about the field?

GM: Pretty much. There was probably a little more low-hanging fruit than I anticipated. I saw somebody else say it more concisely: deep learning does not equal AGI (artificial general intelligence). There’s all the stuff you can do with deep learning: it makes your speech recognition better, it makes your object recognition better. But that doesn’t mean it’s intelligence. Intelligence is a multi-dimensional variable. There are lots of things that go into it.

Read the source article at TechCrunch.