The 2010s were a huge decade for artificial intelligence, thanks to advances in deep learning, a branch of AI that became viable due to the growing capacity to collect, store, and process large amounts of data. Today, deep learning is not only a topic of scientific research but also a key component of many everyday applications.
But a decade of research and application has made it clear that, in its current state, deep learning is not the final answer to the ever-elusive challenge of creating human-level AI.
What do we need to take AI to the next level? More data and larger neural networks? New deep learning algorithms? Other approaches besides deep learning?
This is a topic that has been hotly debated in the AI community and was the focus of an online discussion Montreal.AI held last week. Titled “AI Debate 2: Advancing AI: An Interdisciplinary Approach,” the debate brought together scientists from a range of areas and disciplines.
Hybrid artificial intelligence
Cognitive scientist Gary Marcus, who co-hosted the debate, reiterated some of the key shortcomings of deep learning, including excessive data requirements, poor ability to transfer knowledge to other domains, opacity, and lack of reasoning and knowledge representation.
Marcus, a staunch critic of deep learning-only approaches, published a paper in early 2020 in which he proposed a hybrid approach that combines learning algorithms with rule-based software.
Other speakers also pointed to hybrid artificial intelligence as a possible solution to the challenges that deep learning faces.
“One of the key issues is to identify the building blocks of AI and how to make it more reliable, explainable and interpretable,” said computer scientist Luis Lamb.
Lamb, who is co-author of the book Neural-Symbolic Cognitive Reasoning, proposed a foundational approach to neural-symbolic AI based on both logical formalization and machine learning.
“We use logic and knowledge representation to represent the reasoning process [that] is integrated with machine learning systems so that we can effectively reform neural learning using deep learning machinery,” said Lamb.
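To make the idea Lamb describes more concrete, here is a minimal neuro-symbolic sketch (a hypothetical illustration, not code from the debate): a neural model proposes scored candidate labels, and a symbolic rule layer vetoes candidates that contradict explicitly represented knowledge. All functions, rules, and numbers here are invented for illustration.

```python
# Hypothetical neuro-symbolic sketch: neural scoring + symbolic rule filtering.

def neural_scores(image_features):
    # Stand-in for a trained network's softmax output over candidate labels.
    return {"cat": 0.48, "dog": 0.45, "fish": 0.07}

# Symbolic knowledge: if a condition holds, certain labels are impossible.
RULES = [
    (lambda facts: "underwater" in facts, {"cat", "dog"}),
    (lambda facts: "barking" in facts, {"cat", "fish"}),
]

def predict(image_features, facts):
    scores = neural_scores(image_features)
    for condition, forbidden in RULES:
        if condition(facts):
            for label in forbidden:
                scores.pop(label, None)  # rule out logically inconsistent labels
    return max(scores, key=scores.get)

print(predict(None, {"underwater"}))  # -> "fish", overriding the raw neural ranking
```

The division of labor is the point: the network supplies perception, while the explicit rules supply the reasoning and knowledge representation that Lamb argues deep learning alone lacks.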
Inspiration from evolution
Fei-Fei Li, professor of computer science at Stanford University and former chief AI scientist at Google Cloud, pointed out that in the history of evolution, vision was one of the main catalysts for the emergence of intelligence in living beings. Likewise, work on image classification and computer vision helped trigger the deep learning revolution of the past decade. Li is the creator of ImageNet, a dataset of millions of labeled images used to train and evaluate computer vision systems.
“As scientists, we ask ourselves: what is the next north star?” Li said. “There’s more than one. I have been extremely inspired by evolution and development.”
Li pointed out that intelligence in humans and animals emerges from active perception and interaction with the world, a property that is lacking in today’s AI systems, which depend on data curated and labeled by humans.
“There is a fundamentally critical loop between perception and actuation that drives learning, understanding, planning and reasoning. And this loop can be better realized when our AI agent can be embodied, can dial between explorative and exploitative actions, is multimodal, multitask, generalizable and, often, social,” she said.
In her Stanford lab, Li is currently working on building interactive agents that use perception and actuation to understand the world.
OpenAI researcher Ken Stanley also discussed the lessons learned from evolution. “There are properties of evolution in nature that are deeply powerful and are not explained algorithmically yet because we cannot create phenomena like what was created in nature,” said Stanley. “These are properties that we must continue to pursue and understand, and these are properties not only in evolution, but also in ourselves.”
Reinforcement learning
Computer scientist Richard Sutton pointed out that, for the most part, work on AI lacks a “computational theory,” a term coined by neuroscientist David Marr, who is known for his work on vision. Computational theory defines the goal an information processing system seeks and why it seeks that goal.
“In neuroscience, we are missing a high-level understanding of the goal and purposes of the overall mind. It is also true in artificial intelligence – perhaps more surprisingly in AI. There is very little computational theory in Marr’s sense in AI,” said Sutton. He added that textbooks often define AI simply as “getting machines to do what people do” and that most current conversations in AI, including the debate between neural networks and symbolic systems, are “about how you get something, as if we already knew what it is we are trying to do.”
“Reinforcement learning is the first computational theory of intelligence,” said Sutton, referring to the branch of AI in which agents are given the basic rules of an environment and must find ways to maximize their reward. “Reinforcement learning is explicit about the goal, about the whats and the whys. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To do this, the agent must compute a policy, a value function, and a generative model,” said Sutton.
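As a small, concrete illustration of two of the ingredients Sutton names (a value function, and the policy derived from it), here is a minimal tabular Q-learning sketch on a toy corridor environment. Everything here is a hypothetical example, not Sutton’s code, and because the method is model-free, the generative model he mentions is omitted.

```python
import random

# Toy corridor: states 0..4, reward only for reaching the right end (state 4).
N_STATES, ACTIONS = 5, [-1, +1]  # actions: move left (-1) or right (+1)

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

# Q(s, a): the agent's learned value function.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy behavior: mostly exploit the value function, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The greedy policy derived from the value function maximizes the reward signal.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)  # expected: move right (+1) in every non-terminal state
```

The reward signal is the only specification of the goal; both the value function and the policy emerge from interaction, which is exactly the sense in which Sutton calls the theory “explicit about the goal.”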
He added that the field needs to further develop an agreed-upon computational theory of intelligence and said that reinforcement learning is currently the standout candidate, though he acknowledged that other candidates are worth exploring.
Sutton is a pioneer in reinforcement learning and the author of a seminal book on the subject. DeepMind, the AI lab where he works, is deeply invested in “deep reinforcement learning”, a variation of the technique that integrates neural networks into basic reinforcement learning techniques. In recent years, DeepMind has used deep reinforcement learning to master games like Go, chess and StarCraft 2.
While reinforcement learning bears striking similarities to the learning mechanisms in human and animal brains, it also suffers from the same challenges that plague deep learning. Reinforcement learning models require extensive training to learn the simplest things and are strictly confined to the narrow domain they are trained on. For the moment, developing deep reinforcement learning models requires very expensive compute resources, which limits research in the area to deep-pocketed companies such as Google, which owns DeepMind, and Microsoft, the quasi-owner of OpenAI.
Integrating world knowledge and common sense into AI
Turing Award-winning computer scientist Judea Pearl, best known for his work on Bayesian networks and causal inference, stressed that AI systems need world knowledge and common sense to make the most efficient use of the data they are fed.
“I believe we should build systems that combine knowledge of the world with data,” said Pearl, adding that AI systems based solely on the accumulation and blind processing of large volumes of data are doomed to fail.
Knowledge does not come from data, Pearl said. Instead, we use the innate structures in our brains to interact with the world, and we use data to interrogate and learn from the world, as witnessed in newborns, who learn many things without being explicitly instructed.
“This kind of structure must be implemented externally to the data. Even if we manage by some miracle to learn that structure from data, we still need to have it in a form that is communicable to humans,” said Pearl.
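As a tiny, hypothetical illustration of combining explicit world knowledge with data, here is the kind of Bayesian updating that underlies the Bayesian networks Pearl pioneered: the prior belief and the structure of the problem are encoded by hand, and incoming data revises them rather than replacing them. All numbers are invented.

```python
# Hypothetical sketch: explicit prior knowledge updated by data via Bayes' rule.

# Prior world knowledge: the disease is known to be rare (encoded, not learned).
p_disease = 0.01

# Known structure of the world: how the test behaves given the disease state.
p_positive_given_disease = 0.95      # sensitivity
p_positive_given_no_disease = 0.05   # false positive rate

# Data arrives: one positive test. Bayes' rule combines knowledge with evidence.
p_positive = (p_positive_given_disease * p_disease
              + p_positive_given_no_disease * (1 - p_disease))
p_disease_given_positive = p_positive_given_disease * p_disease / p_positive

print(round(p_disease_given_positive, 3))  # ~0.161: evidence shifts the belief,
                                           # but the encoded prior still matters
```

The structure here lives outside the data, in Pearl’s sense: the model’s assumptions are explicit and communicable to humans, and the data only updates them.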
University of Washington professor Yejin Choi also highlighted the importance of common sense and the challenges its absence poses to current AI systems, which focus on mapping input data to outputs.
“Today, with deep learning, we know how to solve a dataset without solving the underlying task,” said Choi. “That is due to the significant difference between AI and human intelligence, especially knowledge of the world. And common sense is one of the fundamental missing pieces.”
Choi also pointed out that the reasoning space is infinite, and that reasoning itself is a generative task, very different from the categorization tasks today’s deep learning algorithms and evaluation benchmarks are suited for. “We never enumerate very much. We just reason on the fly, and this is going to be one of the key fundamental intellectual challenges that we can think about going forward,” said Choi.
But how do we achieve common sense and reasoning in AI? Choi suggested a wide range of parallel research areas, including combining symbolic and neural representations, integrating knowledge into reasoning, and building benchmarks that go beyond categorization.
We still don’t know the full path to common sense, Choi said, adding: “But one thing is certain: we can’t just get there by making the tallest building in the world even taller. Therefore, GPT-4, -5 or -6 may not work.”