The Beginnings of Artificial Intelligence (AI) Research in the 1950s

The development of the electronic computer in 1941 and the stored-program computer in 1949 created the conditions for research in artificial intelligence (AI). Still, the link between human intelligence and machines was not widely recognised until the late 1950s.

A discovery that influenced much of the early development of AI was made by Norbert Wiener. He was one of the first to theorise that all intelligent behaviour was the result of feedback mechanisms, and that such mechanisms could possibly be simulated by machines. A further step towards the development of modern AI was the creation of The Logic Theorist. Designed by Newell and Simon in 1955, it may be considered the first AI program.

The person who finally coined the term artificial intelligence, and who is regarded as the father of AI, is John McCarthy. In 1956 he organised a conference, “The Dartmouth summer research project on artificial intelligence”, to draw together the talent and expertise of others interested in machine intelligence for a month of brainstorming. In the following years AI research centres began forming at Carnegie Mellon University and the Massachusetts Institute of Technology (MIT), and new challenges were faced: 1) the creation of systems that could efficiently solve problems by limiting the search, and 2) the construction of systems that could learn by themselves.

One of the results of the intensified research in AI was a novel program called The General Problem Solver, developed by Newell and Simon in 1957 (the same people who had created The Logic Theorist). It was an extension of Wiener’s feedback principle and capable of solving a wider range of common-sense problems. While more programs followed, the major breakthrough of this period was the creation of the LISP (LISt Processing) language by John McCarthy in 1958. It was soon adopted by many AI researchers and is still in use today.

An introduction to TensorFlow: Open source machine learning

TensorFlow is an open source software library for numerical computation using data flow graphs. It was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organisation for the purposes of conducting machine learning and deep neural network research.
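The data-flow-graph idea can be illustrated in a few lines of plain Python, without TensorFlow itself. This is a toy sketch of the concept only; the `Node`, `constant`, `add` and `mul` names below are invented for illustration and are not TensorFlow’s API. Each node wraps an operation, edges carry values between nodes, and running a node evaluates its dependencies first.

```python
# Minimal sketch of the data-flow-graph idea behind TensorFlow
# (plain Python, no TensorFlow required).

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function computing this node's value
        self.inputs = inputs  # upstream nodes feeding into this one

    def run(self):
        # Evaluate upstream nodes first, then apply this node's op --
        # the same order in which a data flow graph resolves dependencies.
        return self.op(*(n.run() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

# Build the graph for (2 + 3) * 4, then execute it.
graph = mul(add(constant(2.0), constant(3.0)), constant(4.0))
print(graph.run())  # 20.0
```

TensorFlow applies the same idea at scale: operations become graph nodes, tensors flow along the edges, and the runtime schedules the evaluation, including across GPUs and multiple machines.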

To learn more about TensorFlow, visit

Human Immortality Through AI

Will AI research some day lead to human immortality? There are a few groups and companies that believe that one day, the human race will merge with artificially intelligent machines.

One approach to this would be the one Ray Kurzweil describes in his 2005 book The Singularity Is Near. Kurzweil pens a world where humans transcend biology by implanting AI nano-bots directly into the neural networks of the brain. The futurist and inventor also predicts that humans will develop emotions and characteristics of higher complexity as a result. Kurzweil’s prediction is that this will happen around the year 2030. That’s less than 15 years away!

Another path to human immortality could be uploading your mind’s data, a ‘mind-file’, into a database, to later be downloaded into an AI’s brain that will continue life as you. This is something a Los Angeles-based company called Humai claims to be working on (although I’m still not 100% convinced that the recent blanket media coverage of Humai is not all part of an elaborate PR stunt for a new Hollywood movie!). Humai’s current website meta title reads: ‘Humai Life: Extended | Enhanced | Restored’. Its mission statement sounds very bold and ambitious for our current times, and news headlines like ‘Humai wants to resurrect the dead with artificial intelligence’ do not help, but the AI tech start-up does make a point of saying that the AI technology for the mind-restoration part of the process will not be ready for another 30 years.

But do we really want to live forever, and do non-AI researchers even care about this? In the heart of Silicon Valley, hedge fund manager Joon Yun has been running the numbers on US social security data. Yun says, “the probability of a 25-year-old dying before their 26th birthday is 0.1%”. If we could keep that risk constant throughout life instead of letting it rise with age-related disease, the average person would, statistically speaking, live 1,000 years. In December 2014, Yun announced a $1m prize fund to challenge scientists to “hack the code of life” and push the human lifespan past its apparent maximum of around 120 years.
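Yun’s 1,000-year figure follows from a simple model: if each year of life carries an independent 0.1% risk of death, the number of years lived follows a geometric distribution with mean 1/p. A quick back-of-the-envelope check in Python (a sketch of the statistics only, not anything from Yun’s own analysis):

```python
import math

# With a constant annual death risk p, lifespan follows a geometric
# distribution: the expected number of years lived is 1/p.
p = 0.001                # 0.1% chance of dying in any given year
expected_years = 1 / p
print(expected_years)    # 1000.0

# The median is lower than the mean: half the population survives
# n years when (1 - p) ** n = 0.5, i.e. after about:
median_years = math.log(0.5) / math.log(1 - p)
print(round(median_years))  # 693
```

So “live 1,000 years” is the mean of the distribution; a typical individual under this model would still most likely die somewhat sooner, but centuries later than today.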

Like it or not, in one way or another, the immortality of human beings is probably something that is going to happen – unless the invention of AI and the singularity renders us extinct, that is!

Facebook AI Research – Machine Intelligence Roadmap


The development of intelligent machines is one of the biggest unsolved challenges in computer science. In this paper, we propose some fundamental properties these machines should have, focusing in particular on communication and learning. We discuss a simple environment that could be used to incrementally teach a machine the basics of natural-language-based communication, as a prerequisite to more complex interaction with human users. We also present some conjectures on the sort of algorithms the machine should support in order to profitably learn from the environment.

Tomas Mikolov, Armand Joulin, Marco Baroni
Facebook AI Research

1 Introduction

A machine capable of performing complex tasks without requiring laborious programming would be tremendously useful in almost any human endeavour, from performing menial jobs for us to helping the advancement of basic and applied research. Given the current availability of powerful hardware and large amounts of machine-readable data, as well as the widespread interest in sophisticated machine learning methods, the times should be ripe for the development of intelligent machines.

Yet general-purpose intelligent machines have not materialised. We think that one fundamental reason for this is that, since “solving AI” seems too complex a task to be pursued all at once, the computational community has preferred to focus, in the last decades, on solving relatively narrow empirical problems that are important for specific applications, but do not address the overarching goal of developing general-purpose intelligent machines.

In this article, we propose an alternative approach: we first define the general characteristics we think intelligent machines should possess, and then we present a concrete roadmap to develop them in realistic, small steps, that are however incrementally structured in such a way that, jointly, they should lead us close to the ultimate goal of implementing a powerful AI. We realise that our vision of artificial intelligence and how to create it is just one among many. We focus here on a plan that, we hope, will lead to genuine progress, without by this implying that there are not other valid approaches to the task.

The article is structured as follows. In Section 2 we indicate the two fundamental characteristics that we consider crucial for developing intelligence (at least the sort of intelligence we are interested in), namely communication and learning. Our goal is to build a machine that can learn new concepts through communication at a similar rate as a human with similar prior knowledge. That is, if one can easily learn how subtraction works after mastering addition, the intelligent machine, after grasping the concept of addition, should not find it difficult to learn subtraction as well.

Since, as we said, achieving the long-term goal of building an intelligent machine equipped with the desired features at once seems too difficult, we need to define intermediate targets that can lead us in the right direction. We specify such targets in terms of simplified but self-contained versions of the final machine we want to develop. Our plan is to “educate” the target machine like a child: At any time in its development, the target machine should act like a stand-alone intelligent system, albeit one that will be initially very limited in what it can do. The bulk of our proposal (Section 3) thus consists in the plan for an interactive learning environment fostering the incremental development of progressively more intelligent behaviour.

Section 4 briefly discusses some of the algorithmic capabilities we think a machine should possess in order to profitably exploit the learning environment. Finally, Section 5 situates our proposal in the broader context of past and current attempts to develop intelligent machines.

Download the full paper here

Facebook Bolsters AI Research Team

[Announcement from Facebook AI Research]

The Facebook AI Research team is excited to announce additions joining from both academia and industry. The newest members of this quickly growing team include the award-winning Léon Bottou and Laurens van der Maaten. Their work will focus on several aspects of machine learning, with applications to image, speech, and natural language understanding.

Léon Bottou joins us from Microsoft Research. After his PhD in Paris, he held research positions at AT&T Bell Laboratories, AT&T Labs-Research and NEC Labs. He is best known for his pioneering work on machine learning, structured prediction, stochastic optimization, and image compression. More recently, he worked on causal inference in learning systems. He is rejoining some of his long-time collaborators Jason Weston, Ronan Collobert, Antoine Bordes and Yann LeCun, with the latter of whom he developed the widely used DjVu compression technology and the AT&T check-reading system. Léon is a laureate of the 2007 Blavatnik Award for Young Scientists.

Nicolas Usunier was most recently a professor at Université de Technologie de Compiègne and also held a chair position from the “CNRS-Higher Education Chairs” program. Nicolas earned his PhD in machine learning in 2006, with specific focus areas in theory, ranking, and learning with multiple objectives. At FAIR he will work on text understanding tasks, especially question answering, and on the design of composite objective functions that can define complex learning problems from simpler ones.

Anitha Kannan comes to us from Microsoft Research, where she worked on various applications in computer vision, Web and e-commerce search, linking structured and unstructured data sources, and computational education. Anitha received her PhD from the University of Toronto and will continue her research in machine learning and computer vision.

Laurens van der Maaten comes to us with an extensive history working on machine learning and computer vision. Prior to joining Facebook, Laurens was an Assistant Professor at Delft University of Technology, a post-doctoral researcher at UC San Diego and a Ph.D. student at Tilburg University. He will continue his research on learning embeddings for visualization and deep learning, time series classification, regularization, and cost-sensitive learning.

Michael Auli joins FAIR after completing a postdoc at Microsoft Research where he worked on improving language translation quality using recurrent neural networks. He earned a Ph.D. at the University of Edinburgh for his work on syntactic parsing with approximate inference.

Gabriel Synnaève was most recently a postdoctoral fellow at École Normale Supérieure in Paris. Prior to that, he received his PhD in Bayesian modeling applied to real-time strategy game AI from the University of Grenoble in 2012. Gabriel will initially be working on speech recognition and language understanding.

We have now hired more than 40 people across our Menlo Park and New York labs, including some of the top AI researchers and engineers in the world, and these new hires underscore our commitment to advancing the field of machine intelligence and developing technologies that give people better ways to communicate.

[End of announcement]