Facebook AI Research – Machine Intelligence Roadmap


The development of intelligent machines is one of the biggest unsolved challenges in computer science. In this paper, we propose some fundamental properties these machines should have, focusing in particular on communication and learning. We discuss a simple environment that could be used to incrementally teach a machine the basics of natural-language-based communication, as a prerequisite to more complex interaction with human users. We also present some conjectures on the sort of algorithms the machine should support in order to profitably learn from the environment.

Tomas Mikolov, Armand Joulin, Marco Baroni
Facebook AI Research

1 Introduction

A machine capable of performing complex tasks without requiring laborious programming would be tremendously useful in almost any human endeavour, from performing menial jobs for us to helping the advancement of basic and applied research. Given the current availability of powerful hardware and large amounts of machine-readable data, as well as the widespread interest in sophisticated machine learning methods, the times should be ripe for the development of intelligent machines.

Yet truly intelligent machines are nowhere to be seen. We think that one fundamental reason for this is that, since “solving AI” at once seems too complex a task, the computational community has preferred to focus, in the last decades, on solving relatively narrow empirical problems that are important for specific applications, but do not address the overarching goal of developing general-purpose intelligent machines.

In this article, we propose an alternative approach: we first define the general characteristics we think intelligent machines should possess, and then we present a concrete roadmap to develop them in realistic, small steps, structured incrementally in such a way that, jointly, they should lead us close to the ultimate goal of implementing a powerful AI. We realise that our vision of artificial intelligence and how to create it is just one among many. We focus here on a plan that, we hope, will lead to genuine progress, without implying that there are no other valid approaches to the task.

The article is structured as follows. In Section 2 we indicate the two fundamental characteristics that we consider crucial for developing intelligence (at least the sort of intelligence we are interested in), namely communication and learning. Our goal is to build a machine that can learn new concepts through communication at a similar rate as a human with similar prior knowledge. That is, if one can easily learn how subtraction works after mastering addition, the intelligent machine, after grasping the concept of addition, should not find it difficult to learn subtraction as well.

Since, as we said, achieving the long-term goal of building an intelligent machine equipped with the desired features at once seems too difficult, we need to define intermediate targets that can lead us in the right direction. We specify such targets in terms of simplified but self-contained versions of the final machine we want to develop. Our plan is to “educate” the target machine like a child: at any time in its development, the target machine should act like a stand-alone intelligent system, albeit one that will be initially very limited in what it can do. The bulk of our proposal (Section 3) thus consists in the plan for an interactive learning environment fostering the incremental development of progressively more intelligent behaviour.

Section 4 briefly discusses some of the algorithmic capabilities we think a machine should possess in order to profitably exploit the learning environment. Finally, Section 5 situates our proposal in the broader context of past and current attempts to develop intelligent machines.


Facebook Bolsters AI Research Team

[Announcement from Facebook AI Research]

The Facebook AI Research team is excited to announce new members joining from both academia and industry. The newest additions to this quickly growing team include the award-winning Léon Bottou and Laurens van der Maaten. Their work will focus on several aspects of machine learning, with applications to image, speech, and natural language understanding.

Léon Bottou joins us from Microsoft Research. After his PhD in Paris, he held research positions at AT&T Bell Laboratories, AT&T Labs-Research, and NEC Labs. He is best known for his pioneering work on machine learning, structured prediction, stochastic optimization, and image compression. More recently, he worked on causal inference in learning systems. He is rejoining some of his long-time collaborators, Jason Weston, Ronan Collobert, Antoine Bordes, and Yann LeCun; with the latter he developed the widely used DjVu compression technology and the AT&T check-reading system. Léon is a laureate of the 2007 Blavatnik Award for Young Scientists.

Nicolas Usunier was most recently a professor at Université de Technologie de Compiègne and also held a chair position from the “CNRS-Higher Education Chairs” program. Nicolas earned his PhD in machine learning in 2006, with specific focus areas in theory, ranking, and learning with multiple objectives. At FAIR he will work on text understanding tasks, especially question answering, and on the design of composite objective functions that can define complex learning problems from simpler ones.

Anitha Kannan comes to us from Microsoft Research where she worked on various applications in computer vision, Web and e-Commerce search, linking structured and unstructured data sources and computational education. Anitha received her PhD from the University of Toronto and will continue her research in machine learning and computer vision.

Laurens van der Maaten comes to us with an extensive history working on machine learning and computer vision. Prior to joining Facebook, Laurens was an Assistant Professor at Delft University of Technology, a post-doctoral researcher at UC San Diego and a Ph.D. student at Tilburg University. He will continue his research on learning embeddings for visualization and deep learning, time series classification, regularization, and cost-sensitive learning.

Michael Auli joins FAIR after completing a postdoc at Microsoft Research where he worked on improving language translation quality using recurrent neural networks. He earned a Ph.D. at the University of Edinburgh for his work on syntactic parsing with approximate inference.

Gabriel Synnaève was most recently a postdoctoral fellow at École Normale Supérieure in Paris. Prior to that, he received his PhD from the University of Grenoble in 2012 for work on Bayesian modeling applied to real-time strategy game AI. Gabriel will initially be working on speech recognition and language understanding.

We have now hired more than 40 people across our Menlo Park and New York labs, including some of the top AI researchers and engineers in the world. These new hires underscore our commitment to advancing the field of machine intelligence and developing technologies that give people better ways to communicate.

[End of announcement]