Define Intelligence

Intelligence – Goal-directed Adaptive Behaviour. This was the definition presented to the audience at the A.I.B.E. summit in Westminster, London [4th Jan 2017] by Dr. Daniel J. Hulme. The definition derives from the scholarly articles of Professors Sternberg & Salter. The summit was focused on a series of talks about Artificial Intelligence in business and entrepreneurship, and featured twelve speakers from a variety of groups and businesses, ranging from JukeDeck, an early-stage AI music start-up, to Microsoft.

For me, the most interesting talks came from Calum Chace (author of The Economic Singularity and Surviving AI), who delivered a concise presentation on the potential risks AI could bring to the economy, and Dr. Hulme (CEO of Satalia, a data-science and technology company), who engaged the audience with a thought-provoking discussion exploring how humans are able to derive contextual understanding from a small amount of ambiguous data, and the difficult challenge of getting computers to do the same.

Dr. Hulme opened with Sternberg & Salter’s definition of the term Intelligence, as did I with this post, and the Professors’ words have resonated with me over the last twenty-four hours. You see, while there is little ambiguity regarding the definition of Artificial, the same cannot be said for that of the cognitive ability we call Intelligence.

The Oxford English Dictionary holds the following meaning: “The ability to acquire and apply knowledge and skills”, while the Wikipedia entry for Intelligence reads: “Intelligence has been defined in many different ways including as one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity and problem solving.”

In Sternberg & Salter’s quote I find interest in the word ‘Adaptive’, for one’s ability to adapt in order to move closer to a personal goal demonstrates a perceived understanding of one’s current circumstances and the ability to generate a set of predictions about one’s future circumstances.

Predicting a potential future is the job of the neocortex, and it is this impressive ability that has elevated humans to the highest rank of Intelligence among all the species found on Earth. Studies of the brain’s cognitive abilities have often described the organ as a prediction machine, using pattern recognition to predict future outcomes and to select actions that favour better outcomes over worse ones. This adaptive behaviour is what keeps us safe from danger and continuously directs us along the progressive journey of achieving our goals as human beings. Predictions such as whether touching fire will cause pain, whether eating food today will keep us alive tomorrow, and which partner will best provide safety are just some of the primeval cognitive processes that drive human decisions and actions. Greater intelligence allows for more accurate predictions, and with that we adapt with greater success to achieve our goals.

There are many notable theories of Intelligence, Charles Spearman’s ‘General Intelligence’ and Louis L. Thurstone’s ‘Primary Mental Abilities’ to name a couple, and without delving deep into low-level theory I’d like to end this post with my own high-level, alternative definition of intelligence in the context of goal-oriented pattern recognition.

Intelligence – Ability to Predict Futures for Optimum Progress.

[by Jason Hadjioannou]

Watch AlphaGo take on Lee Sedol, the world’s top Go player

Watch AlphaGo take on Lee Sedol, the world’s top Go player, in the final match of the Google DeepMind challenge.

Match score: AlphaGo 3 – Lee Sedol 1.
[Game five: Seoul, South Korea, 15th March at 13:00 KST (04:00 GMT; the previous day, 14th March, at 21:00 PT and 00:00 ET in the US).]

The Game of Go 

The game of Go originated in China more than 2,500 years ago. The rules of the game are simple: players take turns to place black or white stones on a board, trying to capture the opponent’s stones or surround empty space to make points of territory. As simple as the rules are, Go is a game of profound complexity. There are more possible positions in Go than there are atoms in the universe. That makes Go a googol times more complex than chess. Go is played primarily through intuition and feel, and because of its beauty, subtlety and intellectual depth it has captured the human imagination for centuries. AlphaGo is the first computer program ever to beat a professional human player. Read more about the game of Go and how AlphaGo is using machine learning to master this ancient game.
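Those scale claims are easy to sanity-check with a back-of-the-envelope calculation. The sketch below uses rough, commonly cited orders of magnitude (about 10^80 atoms in the observable universe and roughly 10^47 chess positions); these figures are illustrative estimates, not values from this article:

```python
# A loose upper bound on Go board configurations: each of the
# 19 x 19 = 361 points is empty, black, or white (legality rules
# shrink this count, but not by enough to change the conclusion).
go_positions = 3 ** 361

atoms_in_universe = 10 ** 80   # rough, commonly cited estimate
chess_positions = 10 ** 47     # rough order-of-magnitude estimate
googol = 10 ** 100

print(len(str(go_positions)))                   # 173 digits, i.e. ~10^172
print(go_positions > atoms_in_universe)         # True
print(go_positions > googol * chess_positions)  # True
```

Even this crude bound comfortably exceeds both the atom count and a googol times the chess estimate, which is the spirit of the comparison above.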

Match Details 

In October 2015, the program AlphaGo won 5-0 in a formal match against the reigning three-time European Champion, Fan Hui, to become the first program ever to beat a professional Go player in an even game. Now AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade, for a $1M prize. For full details, see the press release.

The matches were held at the Four Seasons Hotel, Seoul, South Korea, starting at 13:00 local time (04:00 GMT; day before 20:00 PT, 23:00 ET) on March 9th, 10th, 12th, 13th and 15th.

The matches were livestreamed on DeepMind’s YouTube channel as well as broadcast on TV throughout Asia via Korea’s Baduk TV, and in China, Japan, and elsewhere.

Match commentators included Michael Redmond, the only Western professional Go player to achieve 9 dan status. Redmond commentated in English, while Yoo Changhyuk (professional 9 dan), Kim Sungryong (professional 9 dan), Song Taegon (professional 9 dan), and Lee Hyunwook (professional 8 dan) took turns commentating in Korean.

The matches were played under Chinese rules with a komi of 7.5 (the compensation points the player who goes second receives at the end of the match). Each player received two hours per match, with three 60-second byoyomi periods (countdown periods available after their allotted time has run out).
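The time control used in the match (two hours of main time, then three 60-second byoyomi periods per player) can be sketched as a small bookkeeping class. This is an illustrative model, not code from the actual match; the class and method names are invented, and overtime rules are reduced to their essentials:

```python
class ByoyomiClock:
    """Main time plus byoyomi overtime, as in the AlphaGo match:
    2 hours main time and three 60-second periods per player."""

    def __init__(self, main_seconds=2 * 60 * 60, periods=3, period_seconds=60):
        self.main = main_seconds
        self.periods = periods
        self.period_seconds = period_seconds

    def spend(self, seconds):
        """Charge one move's thinking time; return False on loss by time."""
        # Burn main time first.
        from_main = min(self.main, seconds)
        self.main -= from_main
        seconds -= from_main
        # Any overflow goes into byoyomi: exceeding a period consumes it,
        # while finishing a move within the period resets it for the next move.
        while seconds >= self.period_seconds:
            seconds -= self.period_seconds
            self.periods -= 1
            if self.periods == 0:
                return False  # last period used up: loss on time
        return True
```

For example, a player who has exhausted their main time can keep answering within 60 seconds indefinitely, but each move that runs a full period long permanently burns one of the three periods.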

The Beginnings of Artificial Intelligence (AI) Research – In The 1950s

With the development of the electronic computer in 1941 and the stored-program computer in 1949, the conditions for research in artificial intelligence (AI) were in place. Still, the link between human intelligence and machines was not widely recognised until the late 1950s.

A discovery that influenced much of the early development of AI was made by Norbert Wiener. He was one of the first to theorise that all intelligent behaviour was the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. A further step towards the development of modern AI was the creation of The Logic Theorist. Designed by Newell and Simon in 1955, it may be considered the first AI program.

The person who finally coined the term artificial intelligence and is regarded as the father of AI is John McCarthy. In 1956 he organised a conference, “The Dartmouth Summer Research Project on Artificial Intelligence”, to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. In the following years AI research centres began forming at Carnegie Mellon University and the Massachusetts Institute of Technology (MIT), and new challenges were faced: 1) the creation of systems that could efficiently solve problems by limiting the search, and 2) the construction of systems that could learn by themselves.

One of the results of the intensified research in AI was a novel program called The General Problem Solver, developed by Newell and Simon in 1957 (the same people who had created The Logic Theorist). It was an extension of Wiener’s feedback principle and capable of solving a wider range of common-sense problems. While more programs were being developed, a major breakthrough in AI history was the creation of the LISP (LISt Processing) language by John McCarthy in 1958. It was soon adopted by many AI researchers and is still in use today.

Intelligent Machines and Foolish Humans

[This Blog Articles post was written & submitted by J.D.F.]

We will eventually build machines so intelligent that they will be self-aware. When that happens, it will highlight two outstanding human traits: brilliance and foolhardiness. Of course, the kinds of people responsible for creating such machines would be exceptionally clever. The future, however, may show that those geniuses had blinkered vision and didn’t realise quite what they were creating. Many respected scientists believe that nothing threatens human existence more definitively than conscious machines, and that when humanity eventually takes the threat seriously, it may well be too late.

Other experts counter that warning and argue that since we build the machines, we will always be able to control them. That argument seems reasonable, but it doesn’t stand up to close scrutiny. Conscious machines, those with self-awareness, could be a threat to humans for many reasons, but three in particular. First, we won’t be able to control them because we won’t know what they’re thinking. Second, machine intelligence will improve at a much faster rate than human intelligence. Scientists working in this area, and in artificial intelligence (AI) in general, suggest that computers will become conscious and as intelligent as humans sometime this century, maybe even in less than two or three decades. So machines will have achieved in about a century what took humans millions of years. Machine intelligence will continue to improve, and very quickly we will find ourselves sharing the Earth with a form of intelligence far superior to our own. Third, machines can leverage their brainpower hugely by linking together. Humans can’t directly link their brains and must communicate with others by tedious written, visual, or aural messaging.

Some world-famous visionaries have sounded strong warnings about AI. Elon Musk, the billionaire entrepreneur and co-founder of PayPal, Tesla Motors, and SpaceX, warned that with AI we could be “summoning the demon.” The risk is that as scientists relentlessly improve the capabilities of AI systems, at some indeterminate point they may set off an unstoppable chain reaction in which the machines wrest control from their creators. In April 2015, Stephen Hawking, the renowned theoretical physicist, cosmologist, and author, gave a stark warning: “the development of full artificial intelligence could spell the end of the human race.” Luke Muehlhauser, director of MIRI (the Machine Intelligence Research Institute), was quoted in the Financial Times as saying that by building AI “we’re toying with the intelligence of the gods and there is no off switch.” Yet we seem to be willing to take the risk.

Perhaps most people are not too concerned because consciousness is such a nebulous concept. Even scientists working with AI may be working in the dark. We all know humans have consciousness, but nobody, not even the brightest minds, understands what it is. So we can only speculate about how or when machines might get it, if ever. Some scientists believe that when machines acquire the level of thinking power similar to that of the human brain, machines will be conscious and self-aware. In other words, those scientists believe that our consciousness is purely a physical phenomenon – a function of our brain’s complexity.

For millions of years, human beings have dominated the Earth and all other species on it. That didn’t happen because we are the largest, or the strongest, but because we are the most intelligent by far. If machines become more intelligent, we could well end up as their slaves. Worse still, they might regard us as surplus to their needs and annihilate us. That doomsday scenario has been predicted by countless science fiction writers.

Should we heed their prophetic vision as most current advanced technology was once science fiction?
Or do we have nothing to worry about?

For more on this subject, read Nick Bostrom’s highly recommended book, Superintelligence, listed in our books section.

Human Immortality Through AI

Will AI research some day lead to human immortality? There are a few groups and companies that believe that one day, the human race will merge with Artificially Intelligent machines.

One approach would be the one Ray Kurzweil describes in his 2005 book The Singularity Is Near. Kurzweil pens a world where humans transcend biology by implanting AI nano-bots directly into the neural networks of the brain. The futurist and inventor also predicts that, as a result, humans will develop emotions and characteristics of higher complexity. Kurzweil’s prediction is that this will happen around the year 2030. That’s less than 15 years away!

Another path to human immortality could be uploading your mind’s data, a ‘mind-file’, into a database, to later be imported or downloaded into an AI’s brain that will continue life as you. This is something a Los Angeles-based company called Humai claims to be working on (however, I’m still not 100% convinced that the recent blanket media coverage of Humai is not all part of an elaborate PR stunt for a new Hollywood movie!). Humai’s current website meta title reads: ‘Humai Life: Extended | Enhanced | Restored’. Their mission statement sounds very bold and ambitious for our current times, and news headlines like ‘Humai wants to resurrect the dead with artificial intelligence’ do not help, but the AI tech start-up does make a point of saying that the AI technology for the mind-restoration part of the process will not be ready for another 30 years.

But do we really want to live forever, and do people outside AI research even care about this? In the heart of Silicon Valley, Joon Yun is a hedge fund manager who has been running calculations on US social security data. Yun says, “the probability of a 25-year-old dying before their 26th birthday is 0.1%”. If we could keep that risk constant throughout life instead of letting it rise with age-related disease, the average person would – statistically speaking – live 1,000 years. In December 2014, Yun announced a $1m prize fund to challenge scientists to “hack the code of life” and push the human lifespan past its apparent maximum of around 120 years.
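Yun’s statistic checks out as simple probability: if the annual chance of dying stays fixed at p = 0.1%, the number of further years lived follows a geometric distribution, whose mean is 1/p. A quick sketch (the 0.1% figure is the only input taken from the article):

```python
p = 0.001  # constant annual probability of dying (Yun's 0.1% figure)

# Mean of a geometric distribution: expected years lived is 1/p.
print(1 / p)  # 1000.0

# Same answer from the survival curve: expected lifetime equals the sum,
# over years, of the probability of still being alive at that year.
expected_years = sum((1 - p) ** year for year in range(200_000))
print(round(expected_years))  # 1000
```

Both routes give the 1,000-year figure quoted above; the whole difference between that and today’s lifespans is the age-related rise in p.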

Like it or not, in one way or another, human immortality is probably something that is going to happen – unless the invention of AI and the singularity renders us extinct, that is!

Facebook AI Research – Machine Intelligence Roadmap


The development of intelligent machines is one of the biggest unsolved challenges in computer science. In this paper, we propose some fundamental properties these machines should have, focusing in particular on communication and learning. We discuss a simple environment that could be used to incrementally teach a machine the basics of natural-language-based communication, as a prerequisite to more complex interaction with human users. We also present some conjectures on the sort of algorithms the machine should support in order to profitably learn from the environment.

Tomas Mikolov, Armand Joulin, Marco Baroni
Facebook AI Research

1 Introduction

A machine capable of performing complex tasks without requiring laborious programming would be tremendously useful in almost any human endeavour, from performing menial jobs for us to helping the advancement of basic and applied research. Given the current availability of powerful hardware and large amounts of machine-readable data, as well as the widespread interest in sophisticated machine learning methods, the times should be ripe for the development of intelligent machines.

Yet intelligent machines remain out of reach. We think that one fundamental reason for this is that, since “solving AI” seems too complex a task to be pursued all at once, the computational community has preferred to focus, in the last decades, on solving relatively narrow empirical problems that are important for specific applications but do not address the overarching goal of developing general-purpose intelligent machines.

In this article, we propose an alternative approach: we first define the general characteristics we think intelligent machines should possess, and then we present a concrete roadmap to develop them in realistic, small steps that are, however, incrementally structured in such a way that, jointly, they should lead us close to the ultimate goal of implementing a powerful AI. We realise that our vision of artificial intelligence and how to create it is just one among many. We focus here on a plan that, we hope, will lead to genuine progress, without implying that there are no other valid approaches to the task.

The article is structured as follows. In Section 2 we indicate the two fundamental characteristics that we consider crucial for developing intelligence (at least the sort of intelligence we are interested in): communication and learning. Our goal is to build a machine that can learn new concepts through communication at a rate similar to that of a human with similar prior knowledge. That is, if one can easily learn how subtraction works after mastering addition, the intelligent machine, after grasping the concept of addition, should not find it difficult to learn subtraction as well.

Since, as we said, achieving the long-term goal of building an intelligent machine equipped with the desired features at once seems too difficult, we need to define intermediate targets that can lead us in the right direction. We specify such targets in terms of simplified but self-contained versions of the final machine we want to develop. Our plan is to “educate” the target machine like a child: At any time in its development, the target machine should act like a stand-alone intelligent system, albeit one that will be initially very limited in what it can do. The bulk of our proposal (Section 3) thus consists in the plan for an interactive learning environment fostering the incremental development of progressively more intelligent behaviour.

Section 4 briefly discusses some of the algorithmic capabilities we think a machine should possess in order to profitably exploit the learning environment. Finally, Section 5 situates our proposal in the broader context of past and current attempts to develop intelligent machines.

Download the full paper here

Deep Grammar will correct your grammatical errors using AI

Deep Grammar is a grammar checker built on top of deep learning. Deep Grammar uses deep learning to learn a model of language, and it then uses this model to check text for errors in three steps:

  1. Compute the likelihood that someone would have intended to write the text.
  2. Attempt to generate text that is close to the written text but is more likely.
  3. If such text is found, show it to the user as a possible correction.

To see Deep Grammar in action, consider the sentence “I will tell he the truth.” Deep Grammar calculates that this sentence is unlikely, and it tries to come up with a sentence that is close to it but is likely. It finds the sentence “I will tell him the truth.” Since this sentence is both likely and close to the original sentence, it suggests it to the user as a correction.
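Deep Grammar’s actual model is a deep learning language model, but the three-step recipe itself can be illustrated with a deliberately tiny stand-in: a hand-built bigram table scores candidate sentences, a confusion set proposes nearby one-word rewrites, and the most likely nearby sentence wins. Everything here (the scores, the confusion pairs, the function names) is invented for illustration and is not Deep Grammar’s code:

```python
# Toy "language model": hand-assigned bigram scores standing in for the
# probabilities a real deep learning model would supply.
BIGRAM_SCORE = {
    ("i", "will"): 0.9, ("will", "tell"): 0.9,
    ("tell", "him"): 0.9, ("tell", "he"): 0.01,
    ("him", "the"): 0.8, ("he", "the"): 0.05,
    ("the", "truth"): 0.9,
}

# Confusable word pairs used to generate candidates "close to" the
# original text (step 2 of the recipe).
CONFUSION = {"he": ["him"], "him": ["he"], "your": ["you"], "our": ["are"]}

def likelihood(words):
    """Step 1: score the text (product of bigram scores, default 0.1)."""
    score = 1.0
    for a, b in zip(words, words[1:]):
        score *= BIGRAM_SCORE.get((a, b), 0.1)
    return score

def correct(sentence):
    """Steps 2-3: try one-word substitutions and return the best
    candidate if it is more likely than the original."""
    words = sentence.lower().split()
    best, best_score = words, likelihood(words)
    for i, w in enumerate(words):
        for alt in CONFUSION.get(w, []):
            cand = words[:i] + [alt] + words[i + 1:]
            if likelihood(cand) > best_score:
                best, best_score = cand, likelihood(cand)
    return " ".join(best)

print(correct("I will tell he the truth"))  # -> "i will tell him the truth"
```

Swapping “he” for “him” raises the sentence’s score sharply (0.9 × 0.8 beats 0.01 × 0.05 for the affected bigrams), so the rewrite is offered as the correction, exactly as in the example above.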

Here are some other examples of sentences with the corrections found by Deep Grammar:

  • We know that our brains our not perfect. –> We know that our brains are not perfect.
  • Have your ever wondered about it? –> Have you ever wondered about it?
  • To bad the development has stopped. –> Too bad the development has stopped.

You can find a quantitative evaluation of deep grammar here.

Facebook Bolsters AI Research Team

[Announcement from Facebook AI Research]

The Facebook AI Research team is excited to announce new additions joining from both academia and industry. The newest members of this quickly growing team include the award-winning Léon Bottou and Laurens van der Maaten. Their work will focus on several aspects of machine learning, with applications to image, speech, and natural language understanding.

Léon Bottou joins us from Microsoft Research. After his PhD in Paris, he held research positions at AT&T Bell Laboratories, AT&T Labs-Research and NEC Labs. He is best known for his pioneering work on machine learning, structured prediction, stochastic optimization, and image compression. More recently, he worked on causal inference in learning systems. He is rejoining some of his long-time collaborators: Jason Weston, Ronan Collobert, Antoine Bordes and Yann LeCun, with whom he developed the widely used DjVu compression technology and the AT&T check reading system. Léon is a laureate of the 2007 Blavatnik Award for Young Scientists.

Nicolas Usunier was most recently a professor at Université de Technologie de Compiègne and also held a chair position from the “CNRS-Higher Education Chairs” program. Nicolas earned his PhD in machine learning in 2006 with specific focus areas in theory, ranking, and learning with multiples objectives. At FAIR he will work on text understanding tasks, especially question answering, and on the design of composite objective functions that can define complex learning problems from simpler ones.

Anitha Kannan comes to us from Microsoft Research where she worked on various applications in computer vision, Web and e-Commerce search, linking structured and unstructured data sources and computational education. Anitha received her PhD from the University of Toronto and will continue her research in machine learning and computer vision.

Laurens van der Maaten comes to us with an extensive history working on machine learning and computer vision. Prior to joining Facebook, Laurens was an Assistant Professor at Delft University of Technology, a post-doctoral researcher at UC San Diego and a Ph.D. student at Tilburg University. He will continue his research on learning embeddings for visualization and deep learning, time series classification, regularization, and cost-sensitive learning.

Michael Auli joins FAIR after completing a postdoc at Microsoft Research where he worked on improving language translation quality using recurrent neural networks. He earned a Ph.D. at the University of Edinburgh for his work on syntactic parsing with approximate inference.

Gabriel Synnaève was most recently a postdoctoral fellow at Ecole Normale Supérieure in Paris. Prior to that, he received his PhD in Bayesian modeling applied to real-time strategy games AI from University of Grenoble in 2012. Gabriel will initially be working on speech recognition and language understanding.

We have hired more than 40 people across our Menlo Park and New York labs, including some of the top AI researchers and engineers in the world, and these new hires underscore our commitment to advancing the field of machine intelligence and developing technologies that give people better ways to communicate.

[End of announcement]

The People Behind Britain’s AI Research & Development

The UK has a great heritage in AI, stemming back to pioneers such as Alan Turing, one of the undisputed fathers of the field. Britain has some of the best AI research groups in the world, including Cambridge, Imperial and University College London (UCL), and is a growing centre for tech entrepreneurship. But companies specialising in AI are few and far between, and those that do exist tend to be focused in one particular area.

Google’s acquisition of DeepMind has shone a light on this relatively nascent commercial sector, and Ben Medlock, co-founder of AI firm SwiftKey, believes that the UK is capable of building sustainable AI businesses to rival the giants of the West Coast.

Some experts have warned that artificial intelligence could lead to mass unemployment. Dr Stuart Armstrong, from the Future of Humanity Institute at the University of Oxford, said computers had the potential to take over people’s jobs at a faster rate than new roles could be created.

He cited logistics, administration and insurance underwriting as professions that were particularly vulnerable to the development of artificial intelligence. However, Andrew Anderson, CEO of Celaton, said AI is not all about “hacking the workforce to pieces”. Rather, it is about making individuals more productive, and making sure that “processes get applied, stuff is accurate, errors are eliminated, and compliance is met”. Analyst firm Gartner predicts that ‘smart machines’ will have a widespread impact on businesses by 2020.

Here are some British AI companies making headlines in the field:

Founded by Blaise Thomson, Martin Szummer and Steve Young, VocalIQ is a Cambridge-based startup, formed to exploit technology developed by the Spoken Dialogue Systems Group at the University of Cambridge, UK. With expertise and toolkits covering speech recognition, spoken language understanding, dialog design, and speech synthesis the company provides customised spoken language interfaces to any device, and for any application including smartphones, robots, cars, call-centres, and games. In September 2015 VocalIQ was acquired by Apple.

SwiftKey uses artificial intelligence to make personalised mobile apps. It is best known for the SwiftKey keyboard, which learns from each individual user to accurately predict their next word and improve autocorrect. Its machine learning and natural language processing technology understands the context of language and how words fit together. SwiftKey products were embedded on more than 100 million devices last year, and the company has just launched an app for iPhones and iPads called SwiftKey Note. The company behind SwiftKey was founded in 2008 by Jon Reynolds and Dr Ben Medlock.

Bloom AI
Bloom develops consumer artificial intelligence software that attempts to befriend humans. Its lead developer, Jason Hadjioannou, showcased a companion AI app for iOS at 2015’s TechCrunch Disrupt in London. The app uses artificial intelligence to learn about and bond with the user, proactively striking up conversations and remembering the personal interests of the individual. Bloom’s companion app is interacted with via natural spoken language and presents one of the most realistic speech synthesis engines currently available. Bloom AI is based in England and was founded by Jason Hadjioannou.

Celaton’s inSTREAM software applies artificial intelligence to labour-intensive clerical tasks and decision making. Every day, businesses receive mountains of information via email and paper. InSTREAM learns to recognise different types of information and process it accordingly. It never forgets, and handles huge volumes of information at high-speed. Like a real person, it asks questions when it is not sure what to do. Andrew Anderson is the founder and CEO of Celaton.

Lincor provides hospital bedside computers to entertain patients and engage them with relevant information and advice. This virtual personal doctor will constantly analyse live personal health data to enable preventative medicine and tailored lifestyle advice. During a hospital visit, the data will be further analysed by hospital AI, giving doctors a more complete and detailed picture. Enda Murphy is the founder and CTO of Lincor Solutions.

Featurespace has developed and sells two software products based on its predictive analytics platform. One is for fraud detection and the other for marketing analytics. Its products use advanced proprietary algorithms to exploit the vast amounts of customer interaction data that many companies collect, delivering insights that can help to detect and prevent fraud and prevent customer churn. Featurespace’s team is led by CEO Martina King (former Managing Director of Aurasma and Yahoo! Europe) and CTO David Excell, who co-founded the company with Professor Bill Fitzgerald, alongside Matt Mills (Commercial Director) and Simon Rodgers (Director of Engineering).

Darktrace uses advanced mathematics to automatically detect abnormal behaviour in organisations in order to manage risks from cyber-attacks. Unlike software that reads log files or puts locks on the technology, Darktrace’s approach allows businesses to protect their information and intellectual property from state-sponsored, criminal groups or malicious employees that many believe are already inside the networks of every critical infrastructure company. Darktrace was founded in Cambridge, UK, in 2013 by mathematicians and machine learning specialists from the University of Cambridge, together with intelligence experts from MI5 and GCHQ.

AI startups & companies in the landscape of Machine Intelligence

The following piece on AI startups & companies was created by Shivon Zilis in late 2014 and could be missing some information at the time of posting.

The original article can be found on Shivon’s website here and the full resolution version of the landscape image can be viewed here.

—————- Start —————-

I spent the last three months learning about every artificial intelligence, machine learning, or data related startup I could find — my current list has 2,529 of them to be exact. Yes, I should find better things to do with my evenings and weekends but until then…

Why do this?

A few years ago, investors and startups were chasing “big data” (I helped put together a landscape on that industry). Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or somesuch — collectively I call these “machine intelligence” (I’ll get into the definitions in a second). Our fund, Bloomberg Beta, which is focused on the future of work, has been investing in these approaches. I created this landscape to start to put startups into context. I’m a thesis-oriented investor and it’s much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy.

What is “machine intelligence” anyway?

I mean “machine intelligence” as a unifying term for what others call machine learning and artificial intelligence. (Some others have used the term before, without quite describing it or understanding how laden this field has been with debates over descriptions.)

I would have preferred to avoid a different label, but when I tried either “artificial intelligence” or “machine learning” both proved too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” approaches like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are. ☺

Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been less of a focus). Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.

What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).

Which companies are on the landscape?

I considered thousands of companies, so while the chart is crowded it’s still a small subset of the overall ecosystem. “Admissions rates” to the chart were fairly in line with those of Yale or Harvard, and perhaps equally arbitrary. ☺

I tried to pick companies that used machine intelligence methods as a defining part of their technology. Many of these companies clearly belong in multiple areas but for the sake of simplicity I tried to keep companies in their primary area and categorized them by the language they use to describe themselves (instead of quibbling over whether a company used “NLP” accurately in its self-description).

If you want to get a sense for innovations at the heart of machine intelligence, focus on the core technologies layer. Some of these companies have APIs that power other applications, some sell their platforms directly into enterprise, some are at the stage of cryptic demos, and some are so stealthy that all we have is a few sentences to describe them.

The most exciting part for me was seeing how much is happening in the application space. These companies separated nicely into those that reinvent the enterprise, industries, and ourselves.

If I were looking to build a company right now, I’d use this landscape to help figure out what core and supporting technologies I could package into a novel industry application. Everyone likes solving the sexy problems, but there are an incredible number of ‘unsexy’ industry use cases that have massive market opportunities and powerful enabling technologies that are begging to be used for creative applications (e.g., Watson Developer Cloud, AlchemyAPI).

Reflections on the landscape:

We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. (Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.) I focused on understanding the ecosystem on a company-by-company level and drawing implications from that.

Yes, it’s true, machine intelligence is transforming the enterprise, industries and humans alike.

On a high level it’s easy to understand why machine intelligence is important, but it wasn’t until I laid out what many of these companies are actually doing that I started to grok how much it is already transforming everything around us. As Kevin Kelly more provocatively put it, “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI”. In many cases you don’t even need the X — machine intelligence will certainly transform existing industries, but will also likely create entirely new ones.

Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying 80’s classic video games.

Many companies will be acquired.

I was surprised to find that over 10% of the eligible (non-public) companies on the slide have been acquired. It was in stark contrast to the big data landscape we created, which had very few acquisitions at the time. No jaw will drop when I reveal that Google is the number one acquirer, though there were more than 15 different acquirers just for the companies on this chart. My guess is that by the end of 2015 almost another 10% will be acquired. For thoughts on which specific ones will get snapped up in the next year you’ll have to twist my arm…

Big companies have a disproportionate advantage, especially those that build consumer products.

The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.

Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom). Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state of the art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).

Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements so are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.

The talent’s in the New (AI)vy League.

In the last 20 years, most of the best minds in machine intelligence (especially the ‘hardcore AI’ types) worked in academia. They developed new machine intelligence methods, but there were few real world applications that could drive business value.

Now that real world applications of more complex machine intelligence methods like deep belief nets and hierarchical neural networks are starting to solve real world problems, we’re seeing academic talent move to corporate settings. Facebook recruited NYU professors Yann LeCun and Rob Fergus to their AI Lab, Google hired University of Toronto’s Geoffrey Hinton, Baidu wooed Andrew Ng. It’s important to note that they all still give back significantly to the academic community (one of LeCun’s lab mandates is to work on core research to give back to the community, Hinton spends half of his time teaching, Ng has made machine intelligence more accessible through Coursera) but it is clear that a lot of the intellectual horsepower is moving away from academia.

For aspiring minds in the space, these corporate labs offer not only lucrative salaries and access to the “godfathers” of the industry, but also the most important ingredient: data. These labs offer talent access to datasets they could never get otherwise (the ImageNet dataset is fantastic, but can’t compare to what Facebook, Google, and Baidu have in house).

As a result, we’ll likely see corporations become the home of many of the most important innovations in machine intelligence and recruit many of the graduate students and postdocs that would have otherwise stayed in academia.

There will be a peace dividend.

Big companies have an inherent advantage and it’s likely that the ones who will win the machine intelligence race will be even more powerful than they are today. However, the good news for the rest of the world is that the core technology they develop will rapidly spill into other areas, both via departing talent and published research.

Similar to the big data revolution, which was sparked by the release of Google’s MapReduce and BigTable papers, we will see corporations release equally groundbreaking new technologies into the community. Those innovations will be adapted to new industries and use cases that the Googles of the world don’t have the DNA or desire to tackle.

Opportunities for entrepreneurs:

“My company does deep learning for X”

Few words will make you more popular in 2015. That is, if you can credibly say them.

Deep learning is a particularly popular method in the machine intelligence field that has been getting a lot of attention. Google, Facebook, and Baidu have achieved excellent results with the method for vision and language based tasks and startups like Enlitic have shown promising results as well.

Yes, it will be an overused buzzword with excitement ahead of results and business models, but unlike the hundreds of companies that say they do “big data”, it’s much easier to cut to the chase in terms of verifying credibility here if you’re paying attention.

The most exciting part about the deep learning method is that when applied with the appropriate levels of care and feeding, it can replace some of the intuition that comes from domain expertise with automatically-learned features. The hope is that, in many cases, it will allow us to fundamentally rethink what a best-in-class solution is.

As an investor who is curious about the quirkier applications of data and machine intelligence, I can’t wait to see what creative problems deep learning practitioners try to solve. I completely agree with Jeff Hawkins when he says a lot of the killer applications of these types of technologies will sneak up on us. I fully intend to keep an open mind.

“Acquihire as a business model”

People say that data scientists are unicorns in short supply, but the talent crunch in machine intelligence will make it look like we had a glut of data scientists. While many people in the data field gained industry experience over the past decade, most hardcore machine intelligence work has lived only in academia. We won’t be able to grow this talent overnight.

This shortage of talent is a boon for founders who actually understand machine intelligence. A lot of companies in the space will get seed funding because there are early signs that the acquihire price for a machine intelligence expert is north of 5x that of a normal technical acquihire (take, for example, DeepMind, where price per technical head was somewhere between $5–10M, if we choose to consider it in the acquihire category). I’ve had multiple friends ask me, only semi-jokingly, “Shivon, should I just round up all of my smartest friends in the AI world and call it a company?” To be honest, I’m not sure what to tell them. (At Bloomberg Beta, we’d rather back companies building for the long term, but that doesn’t mean this won’t be a lucrative strategy for many enterprising founders.)

A good demo is disproportionately valuable in machine intelligence

I remember watching Watson play Jeopardy. When it struggled at the beginning I felt really sad for it. When it started trouncing its competitors I remember cheering it on as if it were the Toronto Maple Leafs in the Stanley Cup finals (disclaimers: (1) I was an IBMer at the time so was biased towards my team (2) the Maple Leafs have not made the finals during my lifetime — yet — so that was purely a hypothetical).

Why do these awe-inspiring demos matter? The last wave of technology companies to IPO didn’t have demos that most of us would watch, so why should machine intelligence companies? The last wave of companies were very computer-like: database companies, enterprise applications, and the like. Sure, I’d like to see a 10x more performant database, but most people wouldn’t care. Machine intelligence wins and loses on demos because 1) the technology is very human, enough to inspire shock and awe, 2) business models tend to take a while to form, so they need more funding for a longer period of time to get them there, 3) they are fantastic acquisition bait.

Watson beat the world’s best humans at trivia, even if it thought Toronto was a US city. DeepMind blew people away by beating video games. Vicarious took on CAPTCHA. There are a few companies still in stealth that promise to impress beyond that, and I can’t wait to see if they get there.

Demo or not, I’d love to talk to anyone using machine intelligence to change the world. There’s no industry too unsexy, no problem too geeky. I’d love to be there to help so don’t be shy.

I hope this landscape chart sparks a conversation. The goal is to make this a living document and I want to know if there are companies or categories missing. I welcome feedback and would like to put together a dynamic visualization where I can add more companies and dimensions to the data (methods used, data types, end users, investment to date, location, etc.) so that folks can interact with it to better explore the space.

Questions and comments: Please email me. Thank you to Andrew Paprocki, Aria Haghighi, Beau Cronin, Ben Lorica, Doug Fulop, David Andrzejewski, Eric Berlow, Eric Jonas, Gary Kazantsev, Gideon Mann, Greg Smithies, Heidi Skinner, Jack Clark, Jon Lehr, Kurt Keutzer, Lauren Barless, Pete Skomoroch, Pete Warden, Roger Magoulas, Sean Gourley, Stephen Purpura, Wes McKinney, Zach Bogue, the Quid team, and the Bloomberg Beta team for your ever-helpful perspectives!

Disclaimer: Bloomberg Beta is an investor in Adatao, Alation, Aviso, Context Relevant, Mavrx, Newsle, Orbital Insights, Pop Up Archive, and two others on the chart that are still undisclosed. We’re also investors in a few other machine intelligence companies that aren’t focusing on areas that were a fit for this landscape, so we left them off.

For the full resolution version of the landscape please click here.

—————- End —————-

Shivon Zilis is an Investor at Bloomberg Beta. She is currently based in San Francisco.

Her website is