Define Intelligence

Intelligence – Goal-directed Adaptive Behaviour. This was the definition presented to the audience at the A.I.B.E. summit in Westminster, London [4th Jan 2017] by Dr. Daniel J Hulme. The definition derives from the scholarly articles of Professors Sternberg & Salter. The summit focused on a series of talks about Artificial Intelligence in business and entrepreneurship, and featured twelve speakers from a variety of groups and businesses, ranging from an early-stage AI music start-up called JukeDeck to Microsoft.

For me, the most interesting talks came from Calum Chace (author of The Economic Singularity and Surviving AI), who delivered a concise presentation of the potential risks AI could bring to the economy, and Dr. Hulme (CEO of Satalia, a data-science and technology company), who engaged the audience with a thought-provoking discussion exploring how humans are able to derive contextual understanding from a small amount of ambiguous data, and the difficult challenge of getting computers to do the same.

Dr. Hulme opened with Professors Sternberg and Salter's definition of the term Intelligence, as did I with this post, and their words have resonated with me over the last twenty-four hours. You see, while there is little ambiguity regarding the definition of Artificial, the same cannot be said for that of the cognitive ability we call Intelligence.

The Oxford English Dictionary holds the following meaning: “The ability to acquire and apply knowledge and skills” while the Wikipedia entry for Intelligence reads: “Intelligence has been defined in many different ways including as one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity and problem solving.”

In Sternberg & Salter’s definition, it is the word ‘Adaptive’ that interests me, for one’s ability to adapt in order to move closer to a personal goal demonstrates a perceived understanding of one’s current circumstances and the ability to generate a set of predictions about one’s future circumstances.

Predicting a potential future is the job of the neocortex, and it is this impressive ability that has elevated humans to the highest rank of intelligence among all the species found on Earth. Studies of the brain’s cognitive abilities have often described the organ as a prediction machine, using pattern recognition to predict future outcomes and to select actions that favour better outcomes over worse ones. This adaptive behaviour is what keeps us safe from danger and continuously directs us towards achieving our goals. Predictions such as whether touching fire will cause pain, whether eating food today will keep us alive tomorrow, and which partner will best provide safety are just some of the primeval cognitive processes that drive human decisions and actions. Greater intelligence allows for more accurate predictions, and with that we adapt with greater success to achieve our goals.
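The predict-then-act loop described above can be caricatured in a few lines of code. This is a toy illustration, not a neuroscience model: the action names, outcomes and weights are all invented for the example.

```python
# Toy sketch: an agent that scores each action by its predicted
# outcomes against a set of goal weights, then picks the best one.
predicted_outcomes = {
    "touch_fire": {"pain": 0.9, "warmth": 0.8},
    "eat_food":   {"pain": 0.0, "energy": 0.9},
    "skip_meal":  {"pain": 0.1, "energy": -0.5},
}

def choose_action(goal_weights):
    """Return the action whose predicted outcome best serves the goal."""
    def value(outcome):
        return sum(goal_weights.get(k, 0.0) * v for k, v in outcome.items())
    return max(predicted_outcomes, key=lambda a: value(predicted_outcomes[a]))

goal = {"pain": -1.0, "energy": 1.0, "warmth": 0.2}
print(choose_action(goal))  # "eat_food" scores highest under these weights
```

Better "intelligence" in this caricature simply means more accurate entries in the predictions table, which in turn means actions that serve the goal more reliably.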

There are many notable theories of Intelligence, Charles Spearman’s ‘General Intelligence’ and Louis L. Thurstone’s ‘Primary Mental Abilities’ to name a couple, and without delving deep into low-level theory I’d like to end this post with my own high-level, alternative definition of intelligence in the context of goal-oriented pattern recognition.

Intelligence – Ability to Predict Futures for Optimum Progress.

[by Jason Hadjioannou]

WaveNet: A Generative Model for Raw Audio


This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, Koray Kavukcuoglu
{avdnoord, sedielem, heigazen, simonyan, vinyals, gravesa, nalk, andrewsenior, korayk}
Google DeepMind

Download the full paper here
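The autoregressive idea at the heart of the abstract above, each new sample drawn from a distribution conditioned on all previous samples, can be sketched in a few lines. This is not WaveNet itself; the `model` function below is an invented stand-in for the trained network, and four quantisation levels are used instead of WaveNet's 256.

```python
import random

def model(history):
    """Hypothetical predictive distribution over 4 quantised audio levels,
    conditioned on the samples generated so far (a crude smoothness prior
    that favours staying near the previous level)."""
    last = history[-1] if history else 0
    weights = [1.0 / (1 + abs(level - last)) for level in range(4)]
    total = sum(weights)
    return [w / total for w in weights]

def generate(n_samples, seed=0):
    """Generate a waveform one sample at a time, each conditioned on
    everything generated before it."""
    random.seed(seed)
    samples = []
    for _ in range(n_samples):
        probs = model(samples)
        samples.append(random.choices(range(4), weights=probs)[0])
    return samples

waveform = generate(16)
```

The loop makes the cost of naive generation obvious: one network evaluation per sample, which is why training efficiency on audio with tens of thousands of samples per second is worth highlighting.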

AI XPRIZE – AI competition with IBM Watson

The IBM Watson AI XPRIZE is a $5 million AI and cognitive computing competition challenging teams globally to develop and demonstrate how humans can collaborate with powerful AI technologies to tackle the world’s grand challenges. This prize will focus on creating advanced and scalable applications that benefit consumers and businesses across a multitude of disciplines. The solutions will contribute to the enrichment of available tools and data sets for the usage of innovators everywhere. The goal is also to accelerate the understanding and adoption of AI’s most promising breakthroughs.

Every year leading up to TED 2020, the teams will compete for interim prizes and the opportunity to advance to the next year’s competition. The three finalist teams will take the TED stage in 2020 to deliver jaw-dropping, awe-inspiring TED Talks demonstrating what they have achieved.

Typical of all XPRIZE competitions, the IBM Watson AI XPRIZE will crowdsource solutions from some of the most brilliant thinkers and entrepreneurs around the world, creating true exponential impact.

To compete in the IBM Watson AI XPRIZE you must be a fully registered team. To complete your registration, you must create a Team profile, sign the Competitor’s Agreement and pay the registration fee.

AI Xprize Timeline


Grand Prizes

The $3,000,000 Grand Prize, $1,000,000 2nd Place, and $500,000 3rd Place purses will be awarded at the end of competition at TED2020, for a total of $4.5 million.

Milestone and Special Prizes

Two Milestone Competition prize purses will be awarded at the end of each of the first two rounds of the competition, and the Judges may award additional special prizes to recognize special accomplishments. A total of $500,000 will be available for these prizes, allocated by the Judges.


The progress in AI research and applications in the past 20 years makes it timely to focus attention not only on making AI more capable, but also on maximizing the societal benefit of AI. The democratization of exponential technology enables AI and cognitive computing to put empowerment into the hands of innovators everywhere. Driven by the long-term potential of AI, and to better understand the prospects of human and AI collaboration, the IBM Watson AI XPRIZE provides an interdisciplinary platform for domain experts, developers and innovators to push, through collaboration, the boundaries of AI to new heights. The competition will bring the AI community together and accelerate the development of scalable, hybrid solutions and audacious breakthroughs to address humanity’s grandest challenges.

You can register for the competition at:

Watch AlphaGo take on Lee Sedol, the world’s top Go player

Watch AlphaGo take on Lee Sedol, the world’s top Go player, in the final match of the Google DeepMind challenge.

Match score: AlphaGo 3 – Lee Sedol 1.
[Game five: Seoul, South Korea, 15th March at 13:00 KST; 04:00 GMT; for US at -1 day (14th March) 21:00 PT, 00:00 ET.]

The Game of Go 

The game of Go originated in China more than 2,500 years ago. The rules of the game are simple: players take turns to place black or white stones on a board, trying to capture the opponent’s stones or surround empty space to make points of territory. As simple as the rules are, Go is a game of profound complexity. There are more possible positions in Go than there are atoms in the universe, making Go a googol times more complex than chess. Go is played primarily through intuition and feel, and because of its beauty, subtlety and intellectual depth it has captured the human imagination for centuries. AlphaGo is the first computer program ever to beat a professional human Go player. Read more about the game of Go and how AlphaGo is using machine learning to master this ancient game.
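The scale claims above can be checked with rough, commonly cited figures: each of Go's 361 points can be empty, black or white, giving 3^361 (about 10^172) board configurations as an upper bound (legal positions are usually estimated at around 10^170), versus roughly 10^120 for the game-tree complexity of chess and roughly 10^80 atoms in the observable universe.

```python
from math import log10

# Upper bound on Go board configurations: 3 states per point, 361 points.
go_upper_bound = 3 ** 361

print(round(log10(go_upper_bound)))  # ≈ 172 orders of magnitude
print(log10(go_upper_bound) - 80)    # how far beyond the ~10**80 atom count
```

The gap between ~10^170 positions and ~10^80 atoms is what rules out brute-force search and motivates the learned evaluation approach AlphaGo uses.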

Match Details 

In October 2015, the program AlphaGo won 5-0 in a formal match against the reigning three-time European Champion, Fan Hui, to become the first program to ever beat a professional Go player in an even game. Now AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade, for a $1M prize. For full details, see the press release.

The matches were held at the Four Seasons Hotel, Seoul, South Korea, starting at 13:00 local time (04:00 GMT; day before 20:00 PT, 23:00 ET) on March 9th, 10th, 12th, 13th and 15th.

The matches were livestreamed on DeepMind’s YouTube channel and broadcast on TV throughout Asia through Korea’s Baduk TV, as well as in China, Japan, and elsewhere. Match commentators included Michael Redmond, the only Western professional Go player to achieve 9 dan status. Redmond commentated in English, while Yoo Changhyuk (professional 9 dan), Kim Sungryong (professional 9 dan), Song Taegon (professional 9 dan), and Lee Hyunwook (professional 8 dan) commentated in Korean alternately. The matches were played under Chinese rules with a komi of 7.5 (the compensation points the player who goes second receives at the end of the match). Each player received two hours per match with three lots of 60-second byoyomi (countdown periods after they have finished their allotted time).

Singularity Or Bust [Documentary]

In 2009, film-maker and former AI programmer Raj Dye spent his summer following futurist AI researchers Ben Goertzel and Hugo DeGaris around Hong Kong and Xiamen, documenting their doings and gathering their perspectives. The result, after some work by crack film editor Alex MacKenzie, was the 45 minute documentary Singularity or Bust — a uniquely edgy, experimental Singularitarian road movie, featuring perhaps the most philosophical three-foot-tall humanoid robot ever, a glance at the fast-growing Chinese research scene in the late aughts, and even a bit of a real-life love story. The film was screened in theaters around the world, and won the Best Documentary award at the 2013 LA Cinema Festival of Hollywood and the LA Lift Off Festival. And now it is online, free of charge, for your delectation.

Singularity or Bust is a true story pertaining to events occurring in the year 2009. It captures a fascinating slice of reality, but bear in mind that things move fast these days. For more recent updates on Goertzel and DeGaris’s quest for transhuman AI, you’ll have to consult the Internet, or your imagination.

[Full Documentary]

Machine Learning With Stanford University

Stanford University are holding a Machine Learning course with

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you’ll not only learn about the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you’ll learn about some of Silicon Valley’s best practices in innovation as it pertains to machine learning and AI.

This course provides a broad introduction to machine learning, datamining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI). The course will also draw from numerous case studies and applications, so that you’ll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.
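As a taste of the course's first topic, supervised learning with a parametric model, here is a minimal sketch of fitting y = w·x + b by gradient descent on squared error. The data and hyperparameters are invented for the example; the course itself works through this material in far more depth.

```python
# Training data generated from y = 2x + 1 (so we know the right answer).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b, lr = 0.0, 0.0, 0.02
for _ in range(5000):
    # Mean gradients of the squared error with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges to ≈ 2.0 and 1.0
```

The same loop, with richer models and loss functions, underlies most of the supervised techniques listed above.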

Can I earn a Course Certificate if I completed this course before they were available?
In order to verify one’s identity and maintain academic integrity, learners who completed assignments or quizzes for Machine Learning prior to November 1st will need to redo and resubmit these assessments in order to earn a Course Certificate. To clarify, both quizzes and programming assignments need to be resubmitted. Though your deadlines may have technically passed, please be assured that you may resubmit both types of assessments at any time. We apologise for the inconvenience and appreciate your patience as we strive to ensure the integrity and value of our certificates.

Please note that, in order to earn a Course Certificate, you must complete the course within 180 days of payment, or by May 1, 2016, whichever is earlier.

Enrolment ends February 27

The State of Artificial Intelligence – Davos 2016 Talk

How close are technologies to simulating or overtaking human intelligence and what are the implications for industry and society?

This talk took place on 20th January 2016 – at the World Economic Forum Annual Meeting (and was developed in partnership with Arirang).

Moderated by:
Connyoung Jennifer Moon, Chief Anchor and Editor-in-Chief, Arirang TV & Radio, Republic of Korea

Matthew Grob, Executive Vice-President and Chief Technology Officer, Qualcomm, USA
Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University, USA
Stuart Russell, Professor of Computer Science, University of California, Berkeley, USA
Ya-Qin Zhang, President,, People’s Republic of China

The Beginnings of Artificial Intelligence (AI) Research – In The 1950s

With the development of the electronic computer in 1941 and the stored-program computer in 1949, the conditions for research in artificial intelligence (AI) were in place. Still, the link between human intelligence and machines was not widely recognised until the late 1950s.

A discovery that influenced much of the early development of AI was made by Norbert Wiener, one of the first to theorise that all intelligent behaviour was the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. A further step towards the development of modern AI was the creation of The Logic Theorist. Designed by Newell and Simon in 1955, it may be considered the first AI program.

The person who finally coined the term artificial intelligence, and who is regarded as the father of AI, is John McCarthy. In 1956 he organised a conference, “The Dartmouth Summer Research Project on Artificial Intelligence”, to draw together the talent and expertise of others interested in machine intelligence for a month of brainstorming. In the following years AI research centres began forming at Carnegie Mellon University and the Massachusetts Institute of Technology (MIT), and new challenges were faced: 1) the creation of systems that could efficiently solve problems by limiting the search, and 2) the construction of systems that could learn by themselves.

One of the results of the intensified research in AI was a novel program called The General Problem Solver, developed by Newell and Simon in 1957 (the same people who had created The Logic Theorist). It was an extension of Wiener’s feedback principle and was capable of solving a wider range of common-sense problems. While more programs were being developed, a major breakthrough in AI history was the creation of the LISP (LISt Processing) language by John McCarthy in 1958. It was soon adopted by many AI researchers and is still in use today.

Is This C. Elegans Worm Simulation Alive?

C. elegans, aka Caenorhabditis elegans, is a free-living, transparent nematode about 1 mm in length that lives in temperate soil environments. What makes this roundworm so interesting is that the adult hermaphrodite has a total of only 302 neurons. Those 302 neurons belong to two distinct and independent nervous systems: the larger is a somatic nervous system of 282 neurons, and the smaller a pharyngeal nervous system of just 20 neurons. This makes C. elegans a great starting point for those studying the nervous system, as all 7,000 connections, or synapses, between those neurons have been mapped.

In 2011 a project called OpenWorm launched with the goal of giving people access to their own digital worm, called WormSim, to study on their computers. The project produced a complete wireframe of the C. elegans connectome, recreating all 302 neurons and 959 cells of the tiny nematode to virtually simulate the actions of the real-life worm. When simulated inputs are delivered to the nervous system, WormSim performs a highly realistic worm-like motion.
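The core idea, a fully mapped connectome driving simulated behaviour, can be sketched as a weighted graph with activity propagated one synaptic hop at a time. This is a toy illustration, not the OpenWorm model: the neuron names, connections and weights below are made up, and real simulations use biophysical neuron models rather than simple weighted sums.

```python
# A connectome as a directed, weighted graph: neuron -> [(target, weight)].
connectome = {
    "sensor_A": [("inter_1", 0.9)],
    "inter_1":  [("motor_L", 0.7), ("motor_R", 0.4)],
    "motor_L":  [],
    "motor_R":  [],
}

def step(activity):
    """Propagate activation across one synaptic hop."""
    nxt = {n: 0.0 for n in connectome}
    for neuron, level in activity.items():
        for target, weight in connectome[neuron]:
            nxt[target] += level * weight
    return nxt

state = {n: 0.0 for n in connectome}
state["sensor_A"] = 1.0    # a simulated sensory input
state = step(state)        # interneuron activated
state = step(state)        # motor neurons activated
```

Scaled up to 302 neurons and ~7,000 mapped synapses, this graph-propagation picture is roughly what makes the simulated worm's motion possible.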

Assuming that the behaviour of the virtual C. elegans is in-line with that of the real C. elegans, at what stage might it be reasonable to call it a living organism? The standard definition of living organisms is behavioural; they extract usable energy from their environment, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations.

If the simulation exhibits these behaviours, combined with realistic responses to its external environment, should we consider it to be alive?

This could depend on perspective. From the outer-world perspective, the worm is obviously a non-living simulation that mimics life inside a computer. From the inner-world perspective of the simulation, the worm is absolutely alive, as it obeys the laws of physics as presented by the simulation. One could argue that, in comparison to the world in which we exist, there is nothing that can confirm for us that we too are not living in a world that is a simulation produced by an outer world.

Here is a video of the OpenWorm: C. elegans simulation:

You can check out OpenWorm at

How AlphaGo Mastered the Game of Go with Deep Neural Networks

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence due to its enormous search space and the difficulty of evaluating board positions and moves.

Google DeepMind introduced a new approach to computer Go with their program, AlphaGo, which uses value networks to evaluate board positions and policy networks to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte-Carlo tree search programs that simulate thousands of random games of self-play. DeepMind also introduced a new search algorithm that combines Monte-Carlo simulation with value and policy networks. Using this search algorithm, AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
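The interplay between the two networks during search can be sketched as follows. This is a hedged illustration of the general idea (a PUCT-style selection rule), not DeepMind's exact formula: the policy network supplies move priors, the value network contributes to the accumulated values, and visit counts balance exploration against exploitation. The moves and numbers are invented.

```python
import math

def select_move(stats, c_puct=1.0):
    """Pick a move from a search node.
    stats: move -> (prior, visit_count, total_value), where priors come
    from a policy network and values from evaluations/rollouts."""
    total_visits = sum(n for _, n, _ in stats.values())

    def score(move):
        prior, n, w = stats[move]
        q = w / n if n else 0.0                          # mean value so far
        u = c_puct * prior * math.sqrt(total_visits) / (1 + n)  # exploration
        return q + u

    return max(stats, key=score)

# Invented node statistics: (prior, visits, total value).
stats = {"D4": (0.5, 10, 6.0), "Q16": (0.3, 2, 1.5), "K10": (0.2, 0, 0.0)}
print(select_move(stats))  # "Q16": high mean value, still under-explored
```

The effect is that moves the policy network likes get explored first, but accumulated value evidence from the value network and simulations can override the prior as visits accumulate.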

Here you can read DeepMind’s full paper on how AlphaGo works: deepmind-mastering-go.pdf.

In March 2016, AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade.

Here are a few videos about AlphaGo: