The State of Artificial Intelligence – Davos 2016 Talk

How close are technologies to simulating or overtaking human intelligence and what are the implications for industry and society?

This talk took place on 20 January 2016 at the World Economic Forum Annual Meeting and was developed in partnership with Arirang.

Moderated by:
Connyoung Jennifer Moon, Chief Anchor and Editor-in-Chief, Arirang TV & Radio, Republic of Korea

Matthew Grob, Executive Vice-President and Chief Technology Officer, Qualcomm, USA
Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University, USA
Stuart Russell, Professor of Computer Science, University of California, Berkeley, USA
Ya-Qin Zhang, President, Baidu, People’s Republic of China

The Beginnings of Artificial Intelligence (AI) Research – In The 1950s

With the development of the electronic computer in 1941 and of the stored-program computer in 1949, the conditions for research in artificial intelligence (AI) were in place. Even so, the link between human intelligence and machines was not widely explored until the late 1950s.

A discovery that influenced much of the early development of AI was made by Norbert Wiener, one of the first to theorise that all intelligent behaviour is the result of feedback mechanisms, and that such mechanisms could possibly be simulated by machines. A further step towards modern AI was the creation of The Logic Theorist. Designed by Newell and Simon in 1955, it may be considered the first AI program.
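Wiener’s feedback idea is easy to see in miniature. Here is a toy Python sketch of a thermostat-style negative feedback loop; the numbers are arbitrary and chosen purely for illustration, not taken from Wiener’s work:

```python
# A minimal illustration of Wiener's idea that goal-directed behaviour can
# emerge from a negative feedback loop: a thermostat that repeatedly senses
# its error and acts to reduce it.

def run_thermostat(target, current, gain=0.5, steps=20):
    """Correct the temperature by a fraction of the error at each step."""
    for _ in range(steps):
        error = target - current      # sense the deviation from the goal
        current += gain * error       # act to reduce it (the feedback)
    return current

final = run_thermostat(target=21.0, current=15.0)
print(round(final, 3))  # converges to the 21.0 target
```

Each pass shrinks the error by half, so after twenty steps the system has effectively settled on its goal, which is the kind of self-correcting behaviour Wiener had in mind.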

The person who coined the term artificial intelligence, and who is regarded as the father of AI, is John McCarthy. In 1956 he organised a conference, “The Dartmouth Summer Research Project on Artificial Intelligence”, to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. In the following years AI research centres began forming at Carnegie Mellon University and the Massachusetts Institute of Technology (MIT), and new challenges were taken up: 1) the creation of systems that could efficiently solve problems by limiting the search and 2) the construction of systems that could learn by themselves.

One of the results of the intensified research in AI was a novel program called The General Problem Solver, developed in 1957 by Newell and Simon (the same people who had created The Logic Theorist). It was an extension of Wiener’s feedback principle and capable of solving a wider range of common-sense problems. While more programs were being developed, a major breakthrough in AI history was the creation of the LISP (LISt Processing) language by John McCarthy in 1958. It was soon adopted by many AI researchers and is still in use today.

Is This C. Elegans Worm Simulation Alive?

C. elegans, aka Caenorhabditis elegans, is a free-living, transparent nematode, about 1 mm in length, that lives in temperate soil environments. What makes this roundworm so interesting is that the adult hermaphrodite has a total of only 302 neurons. Those 302 neurons belong to two distinct and independent nervous systems: the larger is a somatic nervous system of 282 neurons, and the smaller a pharyngeal nervous system of just 20 neurons. This makes C. elegans a great starting point for those studying the nervous system, as all 7,000 connections, or synapses, between those neurons have been mapped.

In 2011 a project called OpenWorm launched with the goal of giving people access to their own digital worm, called WormSim, to study on their computers. The project produced a complete wireframe of the C. elegans connectome, recreating all 302 neurons and 959 cells of the tiny nematode to virtually simulate the actions of the real-life worm. When simulated inputs are delivered to the nervous system, WormSim performs a highly realistic worm-like motion.
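To see in miniature how a mapped wiring diagram can drive a simulation, here is a toy Python sketch. The four-neuron circuit and its weights below are invented for illustration; they are not part of the real OpenWorm model, which works from the full 302-neuron, ~7,000-synapse map:

```python
# A toy sketch of connectome-driven simulation: neurons are nodes, synapses
# are weighted edges, and activity propagates along them one timestep at a
# time. This hypothetical circuit routes a sensory spike to a motor neuron.

synapses = {                      # presynaptic -> {postsynaptic: weight}
    "sensor": {"inter1": 1.0, "inter2": 0.5},
    "inter1": {"motor": 0.8},
    "inter2": {"motor": 0.4},
    "motor":  {},
}

def step(activity):
    """Propagate one timestep of activity through the wiring diagram."""
    nxt = {n: 0.0 for n in synapses}
    for pre, outs in synapses.items():
        for post, w in outs.items():
            nxt[post] += w * activity.get(pre, 0.0)
    return nxt

state = {"sensor": 1.0, "inter1": 0.0, "inter2": 0.0, "motor": 0.0}
state = step(state)               # the sensor excites the interneurons
state = step(state)               # the interneurons excite the motor neuron
print(round(state["motor"], 2))   # 1.0  (0.8*1.0 + 0.4*0.5)
```

The real simulation adds biophysics (membrane dynamics, muscle models, a fluid environment), but the core idea is the same: because every connection is known, activity can be traced from stimulus to movement.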

Assuming that the behaviour of the virtual C. elegans is in line with that of the real C. elegans, at what stage might it be reasonable to call it a living organism? The standard definition of living organisms is behavioural: they extract usable energy from their environment, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations.

If the simulation exhibits these behaviours, combined with realistic responses to its external environment, should we consider it to be alive?

This could depend on perspective. From the outer-world perspective, the worm is obviously a non-living simulation that mimics life inside a computer. From the inner-world perspective of the simulation, the worm is absolutely alive, as it obeys the laws of physics as presented by the simulation. One could argue that, by the same token, there is nothing that can confirm for us that we too are not living in a world that is a simulation produced by an outer world.

Here is a video of the OpenWorm C. elegans simulation:

You can check out OpenWorm at

How AlphaGo Mastered the Game of Go with Deep Neural Networks

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence due to its enormous search space and the difficulty of evaluating board positions and moves.

Google DeepMind introduced a new approach to computer Go with their program, AlphaGo, which uses value networks to evaluate board positions and policy networks to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte-Carlo tree search programs that simulate thousands of random games of self-play. DeepMind also introduced a new search algorithm that combines Monte-Carlo simulation with the value and policy networks. Using this search algorithm, AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
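The flavour of AlphaGo’s position evaluation, mixing a learned value estimate with Monte-Carlo rollouts, can be sketched on a much simpler game. The snippet below uses a trivial take-away game and a hand-written stand-in for the value network, so it illustrates the mixing idea only, not DeepMind’s actual system:

```python
import random

# AlphaGo-style leaf evaluation in miniature: score a move by blending a
# value estimate with the average outcome of random rollouts,
#   V = (1 - lam) * value_estimate + lam * rollout_mean.
# The game: players alternately take 1 or 2 stones; whoever takes the last
# stone wins. The "value net" is a hand-written stand-in, not learned.

def rollout(stones, to_move):
    """Play random moves to the end; return 1 if player 0 wins, else 0."""
    player = to_move
    while stones > 0:
        stones -= random.choice([1, 2][:min(2, stones)])
        if stones == 0:
            return 1 if player == 0 else 0
        player = 1 - player
    return 0

def value_net(stones):
    """Stand-in value function: a position is winning for the side to move
    exactly when stones % 3 != 0 in this game."""
    return 1.0 if stones % 3 != 0 else 0.0

def score_move(stones, take, lam=0.5, n_rollouts=200):
    """Score a move for player 0 by mixing value estimate and rollouts."""
    left = stones - take
    v = 1.0 - value_net(left)               # opponent is to move after us
    z = sum(rollout(left, 1) for _ in range(n_rollouts)) / n_rollouts
    return (1 - lam) * v + lam * z

random.seed(0)
# From 4 stones, taking 1 leaves 3, a lost position for the opponent.
print(score_move(4, 1) > score_move(4, 2))  # True
```

AlphaGo does exactly this blending at the leaves of its Monte-Carlo tree search, except that its value function is a deep network trained from self-play and its rollouts are guided by a policy network rather than being uniformly random.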

Here you can read DeepMind’s full paper on how AlphaGo works: deepmind-mastering-go.pdf.

In March 2016, AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade.

Here are a few videos about AlphaGo:

Intelligent Machines and Foolish Humans

[This Blog Articles post was written & submitted by J.D.F.]

We will eventually build machines so intelligent that they will be self-aware. When that happens, it will highlight two outstanding human traits: brilliance and foolhardiness. Of course, the kinds of people responsible for creating such machines would be exceptionally clever. The future, however, may show that those geniuses had blinkered vision and didn’t realise quite what they were creating. Many respected scientists believe that nothing threatens human existence more definitively than conscious machines, and that when humanity eventually takes the threat seriously, it may well be too late.

Other experts counter that warning and argue that since we build the machines, we will always be able to control them. That argument seems reasonable, but it doesn’t stand up to close scrutiny. Conscious machines, those with self-awareness, could be a threat to humans for many reasons, but three in particular. First, we won’t be able to control them because we won’t know what they’re thinking. Second, machine intelligence will improve at a much faster rate than human intelligence. Scientists working in this area, and in artificial intelligence (AI) in general, suggest that computers will become conscious and as intelligent as humans sometime this century, maybe even in less than two or three decades. So machines will have achieved in about a century what took humans millions of years. Machine intelligence will continue to improve, and very quickly we will find ourselves sharing the Earth with a form of intelligence far superior to our own. Third, machines can leverage their brainpower hugely by linking together. Humans can’t directly link their brains and must communicate with others by tedious written, visual, or aural messaging.

Some world-famous visionaries have sounded strong warnings about AI. Elon Musk, the billionaire entrepreneur and co-founder of PayPal, Tesla Motors, and SpaceX, described it as potentially “summoning the demon”. The risk is that as scientists relentlessly improve the capabilities of AI systems, at some indeterminate point they may set off an unstoppable chain reaction in which the machines wrest control from their creators. In April 2015, Stephen Hawking, the renowned theoretical physicist, cosmologist, and author, gave a stark warning: “the development of full artificial intelligence could spell the end of the human race.” Luke Muehlhauser, director of MIRI (the Machine Intelligence Research Institute), was quoted in the Financial Times as saying that by building AI “we’re toying with the intelligence of the gods and there is no off switch.” Yet we seem to be willing to take the risk.

Perhaps most people are not too concerned because consciousness is such a nebulous concept. Even scientists working with AI may be working in the dark. We all know humans have consciousness, but nobody, not even the brightest minds, understands what it is. So we can only speculate about how or when machines might get it, if ever. Some scientists believe that when machines acquire the level of thinking power similar to that of the human brain, machines will be conscious and self-aware. In other words, those scientists believe that our consciousness is purely a physical phenomenon – a function of our brain’s complexity.

For millions of years, human beings have dominated the Earth and all other species on it. That didn’t happen because we are the largest, or the strongest, but because we are the most intelligent by far. If machines become more intelligent, we could well end up as their slaves. Worse still, they might regard us as surplus to their needs and annihilate us. That doomsday scenario has been predicted by countless science fiction writers.

Should we heed their prophetic vision as most current advanced technology was once science fiction?
Or do we have nothing to worry about?

For more on this subject, read Nick Bostrom’s highly recommended book, Superintelligence, listed in our books section.

Superintelligence by Nick Bostrom

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity’s cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom’s work nothing less than a reconceptualisation of the essential task of our time.

Stanford’s Open Course on Natural Language Processing (NLP)

If you are interested in doing Stanford’s Open Course on Natural Language Processing (NLP), Coursera has made the full course available on YouTube as 101 video lessons.

The full Stanford NLP Open Course can be found via the following YouTube playlist:

Here is the Course Introduction (1 – 1):

Presented by Professors Dan Jurafsky and Chris Manning, the Natural Language Processing (NLP) course contains the following lessons:

1 – 1 – Course Introduction – Stanford NLP – Professor Dan Jurafsky & Chris Manning
2 – 1 – Regular Expressions – Stanford NLP – Professor Dan Jurafsky & Chris Manning
2 – 2 – Regular Expressions in Practical NLP – Stanford NLP – Professor Dan Jurafsky & Chris Manning
2 – 3 – Word Tokenization- Stanford NLP – Professor Dan Jurafsky & Chris Manning
2 – 4 – Word Normalization and Stemming – Stanford NLP – Professor Dan Jurafsky & Chris Manning
2 – 5 – Sentence Segmentation – Stanford NLP – Professor Dan Jurafsky & Chris Manning
3 – 1 – Defining Minimum Edit Distance – Stanford NLP – Professor Dan Jurafsky & Chris Manning
3 – 2 – Computing Minimum Edit Distance – Stanford NLP – Professor Dan Jurafsky & Chris Manning
3 – 3 – Backtrace for Computing Alignments – Stanford NLP – Professor Dan Jurafsky & Chris Manning
3 – 4 – Weighted Minimum Edit Distance – Stanford NLP – Professor Dan Jurafsky & Chris Manning
3 – 5 – Minimum Edit Distance in Computational Biology-Stanford NLP-Dan Jurafsky & Chris Manning
4 – 1 – Introduction to N-grams- Stanford NLP – Professor Dan Jurafsky & Chris Manning
4 – 2 – Estimating N-gram Probabilities – Stanford NLP – Professor Dan Jurafsky & Chris Manning
4 – 3 – Evaluation and Perplexity – Stanford NLP – Professor Dan Jurafsky & Chris Manning
4 – 4 – Generalization and Zeros – Stanford NLP – Professor Dan Jurafsky & Chris Manning
4 – 5 – Smoothing_ Add-One – Stanford NLP – Professor Dan Jurafsky & Chris Manning
4 – 6 – Interpolation – Stanford NLP – Professor Dan Jurafsky & Chris Manning
4 – 7 – Good-Turing Smoothing – Stanford NLP – Professor Dan Jurafsky & Chris Manning
4 – 8 – Kneser-Ney Smoothing – Stanford NLP – Professor Dan Jurafsky & Chris Manning
5 – 1 – The Spelling Correction Task – Stanford NLP – Professor Dan Jurafsky & Chris Manning
5 – 2 – The Noisy Channel Model of Spelling – Stanford NLP – Professor Dan Jurafsky & Chris Manning
5 – 3 – Real-Word Spelling Correction – Stanford NLP – Professor Dan Jurafsky & Chris Manning
5 – 4 – State of the Art Systems – Stanford NLP – Professor Dan Jurafsky & Chris Manning
6 – 1 – What is Text Classification- Stanford NLP – Professor Dan Jurafsky & Chris Manning
6 – 2 – Naive Bayes – Stanford NLP – Professor Dan Jurafsky & Chris Manning
6 – 3 – Formalizing the Naive Bayes Classifier – Stanford NLP-Dan Jurafsky & Chris Manning
6 – 4 – Naive Bayes_ Learning – Stanford NLP – Professor Dan Jurafsky & Chris Manning
6 – 5 – Naive Bayes_ Relationship to Language Modeling-Stanford NLP-Dan Jurafsky & Chris Manning
6 – 6 – Multinomial Naive Bayes_ A Worked Example – Stanford NLP-Dan Jurafsky & Chris Manning
6 – 7 – Precision, Recall, and the F measure – Stanford NLP – Professor Dan Jurafsky & Chris Manning
6 – 8 – Text Classification_ Evaluation- Stanford NLP – Professor Dan Jurafsky & Chris Manning
6 – 9 – Practical Issues in Text Classification – Stanford NLP-Dan Jurafsky & Chris Manning
7 – 1 – What is Sentiment Analysis- Stanford NLP – Professor Dan Jurafsky & Chris Manning
7 – 2 – Sentiment Analysis_ A baseline algorithm- NLP-Dan Jurafsky & Chris Manning
7 – 3 – Sentiment Lexicons – Stanford NLP – Professor Dan Jurafsky & Chris Manning
7 – 4 – Learning Sentiment Lexicons – Stanford NLP – Professor Dan Jurafsky & Chris Manning
7 – 5 – Other Sentiment Tasks – Stanford NLP – Professor Dan Jurafsky & Chris Manning
8 – 1 – Generative vs. Discriminative Models- Stanford NLP – Professor Dan Jurafsky & Chris Manning
8 – 2 – Making features from text for discriminative NLP models-Dan Jurafsky & Chris Manning
8 – 3 – Feature-Based Linear Classifiers – Stanford NLP – Professor Dan Jurafsky & Chris Manning
8 – 4 – Building a Maxent Model_ The Nuts and Bolts-Dan Jurafsky & Chris Manning
8 – 5 – Generative vs. Discriminative models_ The problem of overcounting evidence- Stanford NLP
8 – 6 – Maximizing the Likelihood- Stanford NLP – Professor Dan Jurafsky & Chris Manning
9 – 1 – Introduction to Information Extraction- Stanford NLP-Dan Jurafsky & Chris Manning
9 – 2 – Evaluation of Named Entity Recognition- Stanford NLP-Dan Jurafsky & Chris Manning
9 – 3 – Sequence Models for Named Entity Recognition-NLP-Professor Dan Jurafsky & Chris Manning
9 – 4 – Maximum Entropy Sequence Models- Stanford NLP – Professor Dan Jurafsky & Chris Manning
10 – 1 – What is Relation Extraction- Stanford NLP – Professor Dan Jurafsky & Chris Manning
10 – 2 – Using Patterns to Extract Relations – Stanford NLP – Professor Dan Jurafsky & Chris Manning
10 – 3 – Supervised Relation Extraction – Stanford NLP – Professor Dan Jurafsky & Chris Manning
10 – 4 – Semi-Supervised and Unsupervised Relation Extraction-Dan Jurafsky & Chris Manning
11 – 1 – The Maximum Entropy Model Presentation-NLP-Dan Jurafsky & Chris Manning
11 – 2 – Feature Overlap_Feature Interaction-Stanford NLP-Professor Dan Jurafsky & Chris Manning
11 – 3 – Conditional Maxent Models for Classification–NLP-Dan Jurafsky & Chris Manning
11 – 4 – Smoothing_Regularization_Priors for Maxent Models-NLP-Dan Jurafsky & Chris Manning
12 – 1 – An Intro to Parts of Speech and POS Tagging -NLP-Dan Jurafsky & Chris Manning
12 – 2 – Some Methods and Results on Sequence Models for POS Tagging -Dan Jurafsky Chris Manning
13 – 1 – Syntactic Structure_ Constituency vs Dependency -NLP-Dan Jurafsky & Chris Manning
13 – 2 – Empirical_Data-Driven Approach to Parsing-NLP-Dan Jurafsky & Chris Manning
14 – 1 – Instructor Chat –NLP-Dan Jurafsky & Chris Manning
15 – 1 – CFGs and PCFGs -Stanford NLP-Professor Dan Jurafsky & Chris Manning
15 – 2 – Grammar Transforms-Stanford NLP-Professor Dan Jurafsky & Chris Manning
15 – 3 – CKY Parsing -Stanford NLP-Professor Dan Jurafsky & Chris Manning
15 – 4 – CKY Example-Stanford NLP-Professor Dan Jurafsky & Chris Manning
15 – 5 – Constituency Parser Evaluation -Stanford NLP-Professor Dan Jurafsky & Chris Manning
16 – 1 – Lexicalization of PCFGs-Stanford NLP-Professor Dan Jurafsky & Chris Manning
16 – 2 – Charniak’s Model-Stanford NLP-Professor Dan Jurafsky & Chris Manning
16 – 3 – PCFG Independence Assumptions-Stanford NLP-Professor Dan Jurafsky & Chris Manning
16 – 4 – The Return of Unlexicalized PCFGs-Stanford NLP-Professor Dan Jurafsky & Chris Manning
16 – 5 – Latent Variable PCFGs-Stanford NLP-Professor Dan Jurafsky & Chris Manning
17 – 1 – Dependency Parsing Introduction-Stanford NLP-Professor Dan Jurafsky & Chris Manning
17 – 2 – Greedy Transition-Based Parsing-Stanford NLP-Professor Dan Jurafsky & Chris Manning
17 – 3 – Dependencies Encode Relational Structure-Stanford NLP-Dan Jurafsky & Chris Manning
18 – 1 – Introduction to Information Retrieval-Stanford NLP-Professor Dan Jurafsky & Chris Manning
18 – 2 – Term-Document Incidence Matrices -Stanford NLP-Professor Dan Jurafsky & Chris Manning
18 – 3 – The Inverted Index-Stanford NLP-Professor Dan Jurafsky & Chris Manning
18 – 4 – Query Processing with the Inverted Index-Stanford NLP-Dan Jurafsky & Chris Manning
18 – 5 – Phrase Queries and Positional Indexes-Stanford NLP-Professor Dan Jurafsky & Chris Manning
19 – 1 – Introducing Ranked Retrieval-Stanford NLP-Professor Dan Jurafsky & Chris Manning
19 – 2 – Scoring with the Jaccard Coefficient-Stanford NLP-Professor Dan Jurafsky & Chris Manning
19 – 3 – Term Frequency Weighting-Stanford NLP-Professor Dan Jurafsky & Chris Manning
19 – 4 – Inverse Document Frequency Weighting-Stanford NLP-Professor Dan Jurafsky & Chris Manning
19 – 5 – TF-IDF Weighting-Stanford NLP-Professor Dan Jurafsky & Chris Manning
19 – 6 – The Vector Space Model -Stanford NLP-Professor Dan Jurafsky & Chris Manning
19 – 7 – Calculating TF-IDF Cosine Scores-Stanford NLP-Professor Dan Jurafsky & Chris Manning
19 – 8 – Evaluating Search Engines -Stanford NLP-Professor Dan Jurafsky & Chris Manning
20 – 1 – Word Senses and Word Relations-NLP-Dan Jurafsky & Chris Manning
20 – 2 – WordNet and Other Online Thesauri -NLP-Dan Jurafsky & Chris Manning
20 – 3 – Word Similarity and Thesaurus Methods -NLP-Dan Jurafsky & Chris Manning
20 – 4 – Word Similarity_ Distributional Similarity I –NLP-Dan Jurafsky & Chris Manning
20 – 5 – Word Similarity_ Distributional Similarity II -NLP-Dan Jurafsky & Chris Manning
21 – 1 – What is Question Answering-NLP-Dan Jurafsky & Chris Manning
21 – 2 – Answer Types and Query Formulation-NLP-Dan Jurafsky & Chris Manning
21 – 3 – Passage Retrieval and Answer Extraction-NLP-Dan Jurafsky & Chris Manning
21 – 4 – Using Knowledge in QA -NLP-Dan Jurafsky & Chris Manning
21 – 5 – Advanced_ Answering Complex Questions-NLP-Dan Jurafsky & Chris Manning
22 – 1 – Introduction to Summarization-NLP-Dan Jurafsky & Chris Manning
22 – 2 – Generating Snippets-NLP-Dan Jurafsky & Chris Manning
22 – 3 – Evaluating Summaries_ ROUGE-NLP-Dan Jurafsky & Chris Manning
22 – 4 – Summarizing Multiple Documents-NLP-Dan Jurafsky & Chris Manning
23 – 1 – Instructor Chat II -Stanford NLP-Professor Dan Jurafsky & Chris Manning
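As a taste of the material, the minimum edit distance algorithm from lessons 3-1 and 3-2 can be sketched in a few lines of Python. This version uses unit costs for insertion, deletion, and substitution (the lectures also discuss a Levenshtein variant where substitution costs 2):

```python
# Minimum edit distance via dynamic programming: d[i][j] is the cheapest way
# to turn the first i characters of src into the first j characters of tgt.

def min_edit_distance(src, tgt):
    m, n = len(src), len(tgt)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                           # delete everything from src
    for j in range(n + 1):
        d[0][j] = j                           # insert everything from tgt
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if src[i - 1] == tgt[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[m][n]

# The course's running example pair:
print(min_edit_distance("intention", "execution"))  # 5
```

Keeping back-pointers through the same table yields the alignment itself, which is what lesson 3-3 on backtrace covers.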


An introduction to TensorFlow: Open source machine learning

TensorFlow is an open source software library for numerical computation using data flow graphs. It was originally developed by researchers and engineers on the Google Brain team within Google’s Machine Intelligence research organisation for the purposes of conducting machine learning and deep neural network research.
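The “data flow graph” idea can be illustrated in plain Python. The sketch below deliberately avoids the TensorFlow API and just shows the concept: operations are nodes, edges carry data between them, and evaluating the output node pulls values through its dependencies:

```python
# A plain-Python sketch of a data flow graph (not the TensorFlow API):
# each node wraps an operation whose inputs are other nodes, so a whole
# computation is described as a graph before any numbers flow through it.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        # Pull values through the graph by evaluating dependencies first.
        return self.op(*(n.eval() for n in self.inputs))

def const(v):
    return Node(lambda: v)

# Build the graph for (a + b) * c, then run it.
a, b, c = const(2.0), const(3.0), const(4.0)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
print(mul.eval())  # 20.0
```

Separating graph construction from execution, as TensorFlow does, is what lets the library optimise the graph and run it on CPUs, GPUs, or distributed clusters without changing the model description.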

To learn more about TensorFlow, visit

Human Immortality Through AI

Will AI research some day lead to human immortality? There are a few groups and companies that believe that one day, the human race will merge with Artificially Intelligent machines.

One approach is the one Ray Kurzweil describes in his 2005 book The Singularity Is Near. Kurzweil pens a world where humans transcend biology by implanting AI nanobots directly into the neural networks of the brain. The futurist and inventor also predicts that, as a result, humans will develop emotions and characteristics of higher complexity. Kurzweil’s prediction is that this will happen around the year 2030. That’s less than 15 years away!

Another path to human immortality could be uploading your mind’s data, a ‘mind-file’, into a database, to later be imported or downloaded into an AI brain that continues life as you. This is what Humai, a company based in Los Angeles, claims to be working on (though I’m still not 100% convinced that the recent blanket media coverage of Humai is not all part of an elaborate PR stunt for a new Hollywood movie!). Humai’s current website meta title reads: ‘Humai Life: Extended | Enhanced | Restored’. Its mission statement sounds very bold and ambitious for our current times, and news headlines like ‘Humai wants to resurrect the dead with artificial intelligence’ do not help, but the AI tech start-up does make a point of saying that the technology for the mind-restoration part of the process will not be ready for another 30 years.

But do we really want to live forever, and do people outside AI research even care? In the heart of Silicon Valley, hedge fund manager Joon Yun has been running calculations on US social security data. Yun says, “the probability of a 25-year-old dying before their 26th birthday is 0.1%”. If we could keep that risk constant throughout life, instead of letting it rise with age-related disease, the average person would, statistically speaking, live 1,000 years. In December 2014, Yun announced a $1m prize fund to challenge scientists to “hack the code of life” and push the human lifespan past its apparent maximum of around 120 years.
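Yun’s 1,000-year figure follows directly from treating death as a constant-hazard process (a geometric distribution), as this quick Python check shows:

```python
# With a constant 0.1% chance of dying each year, the expected lifespan of
# a geometric process is simply 1 / p.

annual_death_risk = 0.001
expected_lifespan = 1 / annual_death_risk
print(expected_lifespan)  # 1000.0

# Equivalently, summing the survival curve (the probability of still being
# alive at the start of each year) gives the same expectation.
survival_sum = sum((1 - annual_death_risk) ** year for year in range(200000))
print(round(survival_sum))  # 1000
```

The striking part is how sensitive the answer is to the hazard rate: halving the annual risk to 0.05% would double the statistical lifespan to 2,000 years.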

Like it or not, in one way or another, human immortality is probably something that is going to happen, unless, that is, the invention of AI and the singularity renders us extinct first!

Facebook AI Research – Machine Intelligence Roadmap


The development of intelligent machines is one of the biggest unsolved challenges in computer science. In this paper, we propose some fundamental properties these machines should have, focusing in particular on communication and learning. We discuss a simple environment that could be used to incrementally teach a machine the basics of natural-language-based communication, as a prerequisite to more complex interaction with human users. We also present some conjectures on the sort of algorithms the machine should support in order to profitably learn from the environment.

Tomas Mikolov, Armand Joulin, Marco Baroni
Facebook AI Research

1 Introduction

A machine capable of performing complex tasks without requiring laborious programming would be tremendously useful in almost any human endeavour, from performing menial jobs for us to helping the advancement of basic and applied research. Given the current availability of powerful hardware and large amounts of machine-readable data, as well as the widespread interest in sophisticated machine learning methods, the times should be ripe for the development of intelligent machines.

Yet progress towards that goal has been limited. We think that one fundamental reason for this is that, since “solving AI” seems too complex a task to be pursued all at once, the computational community has preferred to focus, in recent decades, on solving relatively narrow empirical problems that are important for specific applications but do not address the overarching goal of developing general-purpose intelligent machines.

In this article, we propose an alternative approach: we first define the general characteristics we think intelligent machines should possess, and then we present a concrete roadmap to develop them in realistic, small steps that are incrementally structured so that, jointly, they should lead us close to the ultimate goal of implementing a powerful AI. We realise that our vision of artificial intelligence and how to create it is just one among many. We focus here on a plan that, we hope, will lead to genuine progress, without implying that there are no other valid approaches to the task.

The article is structured as follows. In Section 2 we indicate the two fundamental characteristics that we consider crucial for developing intelligence, at least the sort of intelligence we are interested in: namely, communication and learning. Our goal is to build a machine that can learn new concepts through communication at a similar rate as a human with similar prior knowledge. That is, if one can easily learn how subtraction works after mastering addition, the intelligent machine, after grasping the concept of addition, should not find it difficult to learn subtraction as well.

Since, as we said, achieving the long-term goal of building an intelligent machine equipped with the desired features at once seems too difficult, we need to define intermediate targets that can lead us in the right direction. We specify such targets in terms of simplified but self-contained versions of the final machine we want to develop. Our plan is to “educate” the target machine like a child: At any time in its development, the target machine should act like a stand-alone intelligent system, albeit one that will be initially very limited in what it can do. The bulk of our proposal (Section 3) thus consists in the plan for an interactive learning environment fostering the incremental development of progressively more intelligent behaviour.

Section 4 briefly discusses some of the algorithmic capabilities we think a machine should possess in order to profitably exploit the learning environment. Finally, Section 5 situates our proposal in the broader context of past and current attempts to develop intelligent machines.

Download the full paper here