Define Intelligence

Intelligence – Goal-directed Adaptive Behaviour. This was the definition presented to the audience at the A.I.B.E. summit in Westminster, London [4th Jan 2017] by Dr. Daniel J Hulme. The definition derives from the scholarly work of Professors Sternberg and Salter. The summit was focused on a series of talks about Artificial Intelligence in business and entrepreneurship, and the event featured twelve speakers from a variety of groups and businesses, ranging from an early-stage AI music start-up called JukeDeck to Microsoft.

For me, the most interesting talks came from Calum Chace (author of The Economic Singularity and Surviving AI), who delivered a concise presentation on the potential risks AI could bring to the economy, and Dr. Hulme (CEO of Satalia, a data-science technology company), who engaged the audience with a thought-provoking discussion exploring how humans are able to derive contextual understanding from a small amount of ambiguous data, and the difficult challenge of getting computers to do the same.

Dr. Hulme opened with Professors Sternberg and Salter’s definition of the term Intelligence, as did I with this post, and their words have resonated with me over the last twenty-four hours. You see, while there is little ambiguity regarding the definition of Artificial, the same cannot be said for that of the cognitive ability we call Intelligence.

The Oxford English Dictionary holds the following meaning: “The ability to acquire and apply knowledge and skills”, while the Wikipedia entry for Intelligence reads: “Intelligence has been defined in many different ways including as one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity and problem solving.”

In Sternberg and Salter’s definition it is the word ‘Adaptive’ that interests me, for one’s ability to adapt in order to move closer to a personal goal demonstrates a perceived understanding of one’s current circumstances and the ability to generate a set of predictions about one’s future circumstances.

Predicting a potential future is the job of the neocortex, and it is this impressive ability that has elevated humans to the highest hierarchical rank of Intelligence among all the species found on Earth. Studies of the brain’s cognitive abilities have often described the organ as a prediction machine, using pattern recognition to predict future outcomes and select actions that achieve favourable outcomes over less favourable ones. This adaptive behaviour is what keeps us safe from danger and continuously directs us along the progressive journey of achieving our goals as human beings. Predictions such as whether touching fire will cause pain, whether eating food today will keep us alive tomorrow and which partner will successfully provide safety are just some of the primeval cognitive processes that drive human decisions and actions. Greater intelligence allows for more accurate predictions and, with that, we adapt with greater success to achieve our goals.
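The idea of selecting actions by predicted outcome can be sketched in a few lines of code. This is purely illustrative, with toy, made-up outcome and score tables; it is not a model of the neocortex, just the shape of the “predict, then pick the most favourable” loop described above:

```python
# Illustrative sketch (all names are invented for this example):
# an agent scores each available action with a predictive model and
# picks the action whose predicted outcome is most favourable.

def choose_action(actions, predict_outcome, desirability):
    """Return the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: desirability(predict_outcome(a)))

# Toy example: avoid the predicted pain of touching fire.
outcomes = {"touch_fire": "pain", "eat_food": "nourishment"}
scores = {"pain": -10, "nourishment": +5}

best = choose_action(
    actions=["touch_fire", "eat_food"],
    predict_outcome=outcomes.get,
    desirability=scores.get,
)
print(best)  # eat_food
```

A more accurate predictive model (better `outcomes`) directly yields better action choices, which mirrors the claim that greater intelligence means more accurate predictions and more successful adaptation.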

There are many notable theories of Intelligence, Charles Spearman’s ‘General Intelligence’ and Louis L. Thurstone’s ‘Primary Mental Abilities’ to name a couple, and without delving deep into low-level theory I’d like to end this post with my own high-level, alternative definition of intelligence in the context of goal-oriented pattern recognition.

Intelligence – Ability to Predict Futures for Optimum Progress.

[by Jason Hadjioannou]

Professor Stuart Russell – The Long-Term Future of (Artificial) Intelligence

This was published on YouTube on May 22, 2015 by the user CRASSH Cambridge.


The Centre for the Study of Existential Risk is delighted to host Professor Stuart J. Russell (University of California, Berkeley) for a public lecture on Friday 15th May 2015.

AI / Artificial Intelligence

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behaviour. Major AI researchers and textbooks define this field as “the study and design of intelligent agents”, in which an intelligent agent is a system that perceives its environment and takes actions that maximise its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.
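The “intelligent agent” definition above has a simple programmatic shape: a perceive step and an act step chosen to move towards a goal. The class below is a minimal, hypothetical sketch of that idea (a toy thermostat, not code from any AI textbook or library):

```python
# Minimal sketch of the intelligent-agent definition: a system that
# perceives its environment and takes actions towards its goal.
# All names here are illustrative.

class ThermostatAgent:
    """Toy agent: perceives a temperature, acts to reach a target."""

    def __init__(self, target):
        self.target = target

    def perceive(self, environment):
        # Extract the percept relevant to the goal.
        return environment["temperature"]

    def act(self, percept):
        # Choose the action expected to move the environment
        # closer to the goal state.
        if percept < self.target:
            return "heat"
        if percept > self.target:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21)
print(agent.act(agent.perceive({"temperature": 18})))  # heat
```

Even this trivial agent fits the definition: its actions are selected, given what it perceives, to maximise its chances of reaching the goal state.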

AI research is highly technical and specialised, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches, on the use of a particular tool, or on the accomplishment of particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field’s long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are a large number of tools used in AI, including versions of search and mathematical optimisation, logic, methods based on probability and economics, and many others. The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialised fields such as artificial psychology.
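Of the tools named above, search is perhaps the easiest to show concretely. The sketch below is a standard breadth-first search over a toy state graph (the graph and state names are invented for this example); it finds the shortest sequence of states from a start to a goal:

```python
# Hedged illustration of one classic AI tool mentioned above: search.
# Breadth-first search finds a shortest path of states from a start
# state to a goal state in an invented toy graph.
from collections import deque

def bfs_path(graph, start, goal):
    """Return a shortest path of states from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["G"]}
print(bfs_path(graph, "A", "G"))  # ['A', 'B', 'D', 'G']
```

Planning, game playing and many other central AI problems can be framed as exactly this kind of search through a space of states, which is why search sits alongside optimisation, logic and probability in the field's standard toolkit.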

The field was founded on the claim that a central property of humans, human intelligence—the sapience of Homo sapiens—“can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.

[Snippet updated from – 16th September 2015]