What is Offline AI?

[Originally posted by Jason Hadjioannou on Medium – 30th June 2017]

Offline AI refers to Artificial Intelligence programs that run on-device, as opposed to server-side APIs that perform AI tasks remotely. Why is this a thing? Well, there are three big benefits to using Offline AI.

Speed

The first is operation speed. If a device has all the data it needs and possesses the ability to perform intelligent tasks such as image recognition and natural language processing without needing to send data to and receive results from a remote server somewhere, then the speed of the operation is greatly improved because it no longer depends on network connectivity or server hardware performance.

An on-device AI program can run trained Machine Learning models and Neural Networks using nothing but the device’s own hardware and software. Not having to rely on network connectivity greatly improves the speed of operation and has a positive impact on user experience. (Core ML is a framework by Apple for macOS and iOS that allows such programs to run on a Mac or an iOS device. I’ll be talking more about Core ML in future posts).
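
As a taste of what this looks like in practice, here is a minimal sketch of converting a trained network into a Core ML model file that a Mac or iOS app can run locally. The coremltools package and its convert API are real; the choice of model, input shape and file name are illustrative assumptions:

    # Hedged sketch: converting a trained PyTorch network into a Core ML
    # model that an app can run entirely on-device. The model choice,
    # input shape and file name below are illustrative assumptions.
    import coremltools as ct
    import torch
    import torchvision

    # Any trained network will do here; a stock ResNet-18 stands in.
    model = torchvision.models.resnet18(weights="DEFAULT").eval()
    example_input = torch.rand(1, 3, 224, 224)
    traced = torch.jit.trace(model, example_input)

    # Convert to a Core ML model and save it; the app bundles this file
    # and runs inference locally, with no network round-trip.
    mlmodel = ct.convert(
        traced,
        convert_to="mlprogram",
        inputs=[ct.TensorType(name="image", shape=(1, 3, 224, 224))],
    )
    mlmodel.save("ImageClassifier.mlpackage")

Once the saved model ships inside the app bundle, every prediction happens on the device itself, which is exactly the property the rest of this post relies on.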

Cost

The second benefit of Offline AI goes hand in hand with the first. If Online is False then Network costs are Zero! To give you an example of how much of an impact this can have on an AI business:

The company I work for is an Artificial Intelligence and Augmented Reality company with a consumer-facing mobile App, and it’s not uncommon for even medium-sized tech companies to spend millions of dollars per month on the server-side technology needed to perform the AI tasks that make an App’s feature-set possible.

The majority of this huge cost goes towards the server hosting and data bandwidth fees that occur whenever the App sends image data from a user’s device camera up to our online Neural Nets for processing. If you want really fast image-recognition performance, for example, you’ll need to send up a lot of image data, multiple times per second. Offline AI promises to eliminate this process altogether.
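
To make the scale concrete, here is a back-of-envelope sketch; every figure in it is an assumption chosen for the sake of the arithmetic, not our company’s actual numbers:

    # Back-of-envelope sketch of cloud vision bandwidth. All figures are
    # assumed for illustration, not any company's actual numbers.
    users = 100_000              # daily active users (assumed)
    fps = 5                      # frames uploaded per second of camera use (assumed)
    kb_per_frame = 100           # compressed image size in KB (assumed)
    seconds_per_day = 120        # camera time per user per day (assumed)

    bytes_per_user = fps * seconds_per_day * kb_per_frame * 1024
    tb_per_day = users * bytes_per_user / 1024**4
    print(f"~{tb_per_day:.1f} TB uploaded per day")  # ~5.6 TB/day at these rates

Even with these modest assumptions the pipeline moves terabytes a day; on-device inference makes that entire line item disappear.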

Privacy

The third benefit is one of increasing importance to society: as consumer technology and the social media industry mature, so does the responsibility to protect people’s data. User data privacy is an ethically important practice made possible by Offline AI.

Processing all data on-device means that it is sandboxed and better protected against data abuse and server hacking. Yes, the device could still be hacked, or stolen for that matter, but the risk of user data abuse is greatly reduced because the data is never sent to a remote network or stored server-side. User data can be processed, used for the task at hand and then purged, leaving no digital breadcrumbs once the data is no longer needed.

In time, as AI applications become more entwined with our daily lives, the need for this type of responsibility will increase, and the onus is on us as program developers, software engineers and computer scientists to build applications that behave respectfully towards the personal security of the people who use them.

For more talks on Offline AI and specifically the use of Core ML in iOS mobile Apps, check out my posts on Medium: https://medium.com/@jason.io

Define Intelligence

Intelligence – Goal-directed Adaptive Behaviour. This was the definition presented to the audience at the A.I.B.E. summit in Westminster, London [4th Jan 2017] by Dr. Daniel J Hulme. The definition derives from the scholarly articles of Professors Sternberg & Salter. The summit was focused on a series of talks about Artificial Intelligence in business and entrepreneurship, and featured twelve speakers from a variety of groups and businesses, ranging from an early-stage AI music start-up called JukeDeck to Microsoft.

For me, the most interesting talks came from Calum Chace (author of The Economic Singularity and Surviving AI), who delivered a concise presentation of the potential risks AI could bring to the economy, and Dr. Hulme (CEO of Satalia, a data-science and technology company), who engaged the audience with a thought-provoking discussion exploring how humans are able to perceive contextual understanding from a small amount of ambiguous data, and the difficult challenge of getting computers to do the same.

Dr. Hulme opened with Professors Sternberg & Salter’s definition of the term Intelligence, as did I with this post, and their words have resonated with me over the last twenty-four hours. You see, while there is little ambiguity regarding the definition of Artificial, the same cannot be said for the cognitive ability we call Intelligence.

The Oxford English Dictionary holds the following meaning: “The ability to acquire and apply knowledge and skills”, while the Wikipedia entry for Intelligence reads: “Intelligence has been defined in many different ways including as one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity and problem solving.”

In Sternberg & Salter’s definition, the word that interests me is ‘Adaptive’, for one’s ability to adapt in order to move closer to a personal goal demonstrates a perceived understanding of one’s current circumstances and the ability to generate a set of predictions about one’s future circumstances.

Predicting a potential future is the job of the neocortex, and it is this impressive ability that has elevated humans to the highest hierarchical rank of Intelligence among all the species found on Earth. Studies of the brain’s cognitive abilities have often described the organ as a prediction machine, using pattern recognition to predict future outcomes and select actions that favour better outcomes over worse ones. This adaptive behaviour is what keeps us safe from danger and continuously directs us towards achieving our goals as human beings. Predictions such as whether touching fire will cause pain, whether eating food today will keep us alive tomorrow and which partner will best provide safety are just some of the primeval cognitive processes that drive human decisions and actions. Greater intelligence allows for more accurate predictions, and with that we adapt with greater success to achieve our goals.

There are many notable theories of Intelligence, Charles Spearman’s ‘General Intelligence’ and Louis L. Thurstone’s ‘Primary Mental Abilities’ to name a couple, and without delving deep into low-level theory I’d like to end this post with my own high-level, alternative definition of intelligence in the context of goal-oriented pattern recognition.

Intelligence – Ability to Predict Futures for Optimum Progress.

[by Jason Hadjioannou]

Is This C. Elegans Worm Simulation Alive?

C. elegans, aka Caenorhabditis elegans, is a free-living, transparent nematode about 1 mm in length that lives in temperate soil environments. What makes this roundworm so interesting is that the adult hermaphrodite has a total of only 302 neurons. Those 302 neurons belong to two distinct and independent nervous systems: the larger, a somatic nervous system of 282 neurons, and the smaller, a pharyngeal nervous system of just 20 neurons. This makes C. elegans a great starting point for those studying the nervous system, as all 7,000 connections, or synapses, between those neurons have been mapped.

In 2011 a project called OpenWorm launched with the goal of giving people their own digital worm, called WormSim, to study on their computers. The project produced a complete wireframe of the C. elegans connectome, recreating all 302 neurons and 959 cells of the tiny nematode to virtually simulate the actions of the real-life worm. When simulated inputs are delivered to the nervous system, the worm sim performs a highly realistic worm-like motion.
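
To give a flavour of what delivering simulated inputs to a nervous system means in code, here is a toy sketch that steps activity through a connectome-like graph. This is not OpenWorm’s actual model: the random wiring, the weights and the integrate-and-fire update rule are all illustrative stand-ins:

    # Toy sketch of driving a connectome as a graph. NOT the OpenWorm
    # codebase: wiring, weights and update rule are stand-ins.
    import random

    NUM_NEURONS = 302       # adult hermaphrodite C. elegans
    NUM_SYNAPSES = 7000     # roughly the number of mapped connections

    random.seed(0)
    out_edges = {n: [] for n in range(NUM_NEURONS)}
    for _ in range(NUM_SYNAPSES):
        pre = random.randrange(NUM_NEURONS)
        post = random.randrange(NUM_NEURONS)
        out_edges[pre].append((post, random.uniform(-1.0, 1.0)))

    activation = [0.0] * NUM_NEURONS
    THRESHOLD = 1.0

    def step(stimulus):
        """One leaky integrate-and-fire update over the whole network."""
        global activation
        for neuron, amount in stimulus.items():
            activation[neuron] += amount
        fired = [n for n, a in enumerate(activation) if a >= THRESHOLD]
        nxt = [a * 0.9 for a in activation]      # leak
        for n in fired:
            nxt[n] = 0.0                         # reset after spiking
            for post, weight in out_edges[n]:    # propagate along synapses
                nxt[post] += weight
        activation = nxt
        return fired

    # "Deliver simulated inputs": poke two sensory neurons each tick.
    for t in range(5):
        print(t, "neurons fired:", len(step({0: 1.5, 1: 1.5})))

OpenWorm’s real simulation adds biophysically detailed neuron and muscle models, but the basic loop, stimulate, integrate, propagate, is the same shape as this sketch.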

Assuming that the behaviour of the virtual C. elegans is in-line with that of the real C. elegans, at what stage might it be reasonable to call it a living organism? The standard definition of living organisms is behavioural; they extract usable energy from their environment, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations.

If the simulation exhibits these behaviours, combined with realistic responses to its external environment, should we consider it to be alive?

This could depend on perspective. From the outer-world perspective, the worm is obviously a non-living simulation that mimics life inside a computer. From the inner-world perspective of the simulation, the worm is absolutely alive, as it is obeying the laws of physics as presented by the simulation. One could argue that, by the same comparison, there is nothing that can confirm for us that we too are not living in a world that is a simulation produced by an outer world.


You can check out OpenWorm at http://openworm.org

Intelligent Machines and Foolish Humans

[This Blog Articles post was written & submitted by J.D.F.]

We will eventually build machines so intelligent that they will be self-aware. When that happens, it will highlight two outstanding human traits: brilliance and foolhardiness. Of course, the kinds of people responsible for creating such machines would be exceptionally clever. The future, however, may show that those geniuses had blinkered vision and didn’t realise quite what they were creating. Many respected scientists believe that nothing threatens human existence more definitively than conscious machines, and that when humanity eventually takes the threat seriously, it may well be too late.

Other experts counter that warning and argue that since we build the machines, we will always be able to control them. That argument seems reasonable, but it doesn’t stand up to close scrutiny. Conscious machines, those with self-awareness, could be a threat to humans for many reasons, but three in particular. First, we won’t be able to control them because we won’t know what they’re thinking. Second, machine intelligence will improve at a much faster rate than human intelligence. Scientists working in this area, and in artificial intelligence (AI) in general, suggest that computers will become conscious and as intelligent as humans sometime this century, maybe even in less than two or three decades. So, machines will have achieved in about a century what took humans millions of years. Machine intelligence will continue to improve, and very quickly we will find ourselves sharing the Earth with a form of intelligence far superior to our own. Third, machines can leverage their brainpower hugely by linking together. Humans can’t directly link their brains and must communicate with others by tedious written, visual, or aural messaging.

Some world-famous visionaries have sounded strong warnings about AI. Elon Musk, the billionaire entrepreneur and co-founder of PayPal, Tesla Motors, and SpaceX, warned that with AI we could be “summoning the demon.” The risk is that as scientists relentlessly improve the capabilities of AI systems, at some indeterminate point they may set off an unstoppable chain reaction in which the machines wrest control from their creators. In April 2015, Stephen Hawking, the renowned theoretical physicist, cosmologist, and author, gave a stark warning: “the development of full artificial intelligence could spell the end of the human race.” Luke Muehlhauser, director of MIRI (the Machine Intelligence Research Institute), was quoted in the Financial Times as saying that by building AI “we’re toying with the intelligence of the gods and there is no off switch.” Yet we seem to be willing to take the risk.

Perhaps most people are not too concerned because consciousness is such a nebulous concept. Even scientists working with AI may be working in the dark. We all know humans have consciousness, but nobody, not even the brightest minds, understands what it is. So we can only speculate about how or when machines might get it, if ever. Some scientists believe that when machines acquire a level of thinking power similar to that of the human brain, they will be conscious and self-aware. In other words, those scientists believe that our consciousness is purely a physical phenomenon – a function of our brain’s complexity.

For millennia, human beings have dominated the Earth and every other species on it. That happened not because we are the largest or the strongest, but because we are by far the most intelligent. If machines become more intelligent, we could well end up as their slaves. Worse still, they might regard us as surplus to their needs and annihilate us. That doomsday scenario has been predicted by countless science fiction writers.

Should we heed their prophetic vision, given that most of today’s advanced technology was once science fiction?
Or do we have nothing to worry about?

For more on this subject, read Nick Bostrom’s highly recommended book, Superintelligence, listed in our books section.

Human Immortality Through AI

Will AI research some day lead to human immortality? There are a few groups and companies that believe that one day, the human race will merge with Artificially Intelligent machines.

One approach would be the one Ray Kurzweil describes in his 2005 book The Singularity Is Near. Kurzweil pens a world where humans transcend biology by implanting AI nano-bots directly into the neural networks of the brain. The futurist and inventor also predicts that humans will develop emotions and characteristics of higher complexity as a result. Kurzweil’s prediction is that this will happen around the year 2030. That’s less than 15 years away!

Another path to human immortality could be uploading your mind’s data, or ‘mind-file’, into a database, to later be imported or downloaded into an AI’s brain that will continue life as you. This is something a Los Angeles-based company called Humai claims to be working on (however, I’m still not 100% convinced that the recent blanket media coverage of Humai is not all part of an elaborate PR stunt for a new Hollywood movie!). Humai’s current website meta title reads: ‘Humai Life: Extended | Enhanced | Restored’. Their mission statement sounds very bold and ambitious for our current times, and news headlines like ‘Humai wants to resurrect the dead with artificial intelligence’ do not help, but the AI tech start-up does make a point of saying that the AI technology for the mind-restoration part of the process will not be ready for another 30 years.

But do we really want to live forever, and does anyone outside AI research even care? In the heart of Silicon Valley, Joon Yun is a hedge fund manager who has been running calculations on US social security data. Yun says, “the probability of a 25-year-old dying before their 26th birthday is 0.1%”. If we could keep that risk constant throughout life instead of letting it rise with age-related disease, the average person would – statistically speaking – live 1,000 years. In December 2014, Yun announced a $1m prize fund to challenge scientists to “hack the code of life” and push the human lifespan past its apparent maximum of around 120 years.
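
That 1,000-year figure checks out as simple arithmetic: if the annual probability of dying is held constant at p = 0.001, the number of years lived follows a geometric distribution, whose mean is 1/p (a deliberate simplification, since real mortality hazards climb with age):

    % Worked check of the 1,000-year figure, assuming a constant annual
    % mortality hazard p (real hazards rise with age).
    \[
      \mathbb{E}[T] \;=\; \sum_{t=1}^{\infty} t \, p \, (1-p)^{t-1}
      \;=\; \frac{1}{p} \;=\; \frac{1}{0.001} \;=\; 1000 \text{ years}
    \]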

Like it or not, in one way or another, the immortality of human beings is probably something that is going to happen – unless the invention of AI and the singularity renders us extinct first!

Deep Grammar will correct your grammatical errors using AI

Deep Grammar is a grammar checker built on top of deep learning. It uses deep learning to learn a model of language, and it then uses this model to check text for errors in three steps (a toy sketch of the loop follows the list):

  1. Compute the likelihood that someone would have intended to write the text.
  2. Attempt to generate text that is close to the written text but is more likely.
  3. If such text is found, show it to the user as a possible correction.
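
Here is a toy sketch of that three-step loop, with a tiny bigram counter standing in for the learned language model and a confusable-word list standing in for the candidate generator. Both stand-ins are illustrative assumptions, not Deep Grammar’s actual method or API:

    # Toy sketch of the three-step loop above. Deep Grammar's real model
    # is a deep network; this bigram scorer and confusable-word list are
    # illustrative stand-ins only.
    CONFUSABLE = {"he": ["him"], "him": ["he"], "our": ["are"],
                  "are": ["our"], "your": ["you"], "to": ["too"],
                  "too": ["to"]}

    # Tiny corpus standing in for the learned model of language.
    CORPUS = ("i will tell him the truth . "
              "we know that our brains are not perfect .")
    TOKENS = CORPUS.split()
    COUNTS = {}
    for a, b in zip(TOKENS, TOKENS[1:]):
        COUNTS[(a, b)] = COUNTS.get((a, b), 0) + 1

    def score(sentence):
        """Step 1: how likely is it that someone intended this text?"""
        toks = sentence.lower().split()
        return sum(COUNTS.get((a, b), 0) for a, b in zip(toks, toks[1:]))

    def correct(sentence):
        """Steps 2-3: search nearby sentences; suggest a likelier one."""
        toks = sentence.split()
        best, best_score = sentence, score(sentence)
        for i, tok in enumerate(toks):
            for alt in CONFUSABLE.get(tok.lower(), []):
                cand = " ".join(toks[:i] + [alt] + toks[i + 1:])
                if score(cand) > best_score:
                    best, best_score = cand, score(cand)
        return best

    print(correct("I will tell he the truth"))
    # -> "I will tell him the truth"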

To see Deep Grammar in action, consider the sentence “I will tell he the truth.” Deep Grammar calculates that this sentence is unlikely, and it tries to come up with a sentence that is close to it but more likely. It finds the sentence “I will tell him the truth.” Since this sentence is both likely and close to the original, it is suggested to the user as a correction.

Here are some other examples of sentences with the corrections found by Deep Grammar:

  • We know that our brains our not perfect. –> We know that our brains are not perfect.
  • Have your ever wondered about it? –> Have you ever wondered about it?
  • To bad the development has stopped. –> Too bad the development has stopped.

You can find a quantitative evaluation of Deep Grammar here.

AI startups & companies in the landscape of Machine Intelligence

The following piece on AI startups & companies was created by Shivon Zilis in late 2014 and could be missing some information at the time of posting.

The original article can be found on Shivon’s website here and the full resolution version of the landscape image can be viewed here.

—————- Start —————-

I spent the last three months learning about every artificial intelligence, machine learning, or data related startup I could find — my current list has 2,529 of them to be exact. Yes, I should find better things to do with my evenings and weekends but until then…

Why do this?

A few years ago, investors and startups were chasing “big data” (I helped put together a landscape on that industry). Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or somesuch — collectively I call these “machine intelligence” (I’ll get into the definitions in a second). Our fund, Bloomberg Beta, which is focused on the future of work, has been investing in these approaches. I created this landscape to start to put startups into context. I’m a thesis-oriented investor and it’s much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy.

What is “machine intelligence” anyway?

I mean “machine intelligence” as a unifying term for what others call machine learning and artificial intelligence. (Some others have used the term before, without quite describing it or understanding how laden this field has been with debates over descriptions.)

I would have preferred to avoid a different label but when I tried either “artificial intelligence” or “machine learning” both proved too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” methods like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are. ☺

Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus). Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.

What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).

Which companies are on the landscape?

I considered thousands of companies, so while the chart is crowded it’s still a small subset of the overall ecosystem. “Admissions rates” to the chart were fairly in line with those of Yale or Harvard, and perhaps equally arbitrary. ☺

I tried to pick companies that used machine intelligence methods as a defining part of their technology. Many of these companies clearly belong in multiple areas but for the sake of simplicity I tried to keep companies in their primary area and categorized them by the language they use to describe themselves (instead of quibbling over whether a company used “NLP” accurately in its self-description).

If you want to get a sense for innovations at the heart of machine intelligence, focus on the core technologies layer. Some of these companies have APIs that power other applications, some sell their platforms directly into enterprise, some are at the stage of cryptic demos, and some are so stealthy that all we have is a few sentences to describe them.

The most exciting part for me was seeing how much is happening in the application space. These companies separated nicely into those that reinvent the enterprise, industries, and ourselves.

If I were looking to build a company right now, I’d use this landscape to help figure out what core and supporting technologies I could package into a novel industry application. Everyone likes solving the sexy problems but there are an incredible number of ‘unsexy’ industry use cases that have massive market opportunities and powerful enabling technologies that are begging to be used for creative applications (e.g., Watson Developer Cloud, AlchemyAPI).

Reflections on the landscape:

We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. (Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.) I focused on understanding the ecosystem on a company-by-company level and drawing implications from that.

Yes, it’s true, machine intelligence is transforming the enterprise, industries and humans alike.

On a high level it’s easy to understand why machine intelligence is important, but it wasn’t until I laid out what many of these companies are actually doing that I started to grok how much it is already transforming everything around us. As Kevin Kelly more provocatively put it, “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI”. In many cases you don’t even need the X — machine intelligence will certainly transform existing industries, but will also likely create entirely new ones.

Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying 80’s classic video games.

Many companies will be acquired.

I was surprised to find that over 10% of the eligible (non-public) companies on the slide have been acquired. It was in stark contrast to the big data landscape we created, which had very few acquisitions at the time. No jaw will drop when I reveal that Google is the number one acquirer, though there were more than 15 different acquirers just for the companies on this chart. My guess is that by the end of 2015 almost another 10% will be acquired. For thoughts on which specific ones will get snapped up in the next year you’ll have to twist my arm…

Big companies have a disproportionate advantage, especially those that build consumer products.

The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.

Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom). Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state of the art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).

Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements so are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.

The talent’s in the New (AI)vy League.

In the last 20 years, most of the best minds in machine intelligence (especially the ‘hardcore AI’ types) worked in academia. They developed new machine intelligence methods, but there were few real world applications that could drive business value.

Now that real world applications of more complex machine intelligence methods like deep belief nets and hierarchical neural networks are starting to solve real world problems, we’re seeing academic talent move to corporate settings. Facebook recruited NYU professors Yann LeCun and Rob Fergus to their AI Lab, Google hired University of Toronto’s Geoffrey Hinton, Baidu wooed Andrew Ng. It’s important to note that they all still give back significantly to the academic community (one of LeCun’s lab mandates is to work on core research to give back to the community, Hinton spends half of his time teaching, Ng has made machine intelligence more accessible through Coursera) but it is clear that a lot of the intellectual horsepower is moving away from academia.

For aspiring minds in the space, these corporate labs not only offer lucrative salaries and access to the “godfathers” of the industry, but the most important ingredient: data. These labs offer talent access to datasets they could never get otherwise (the ImageNet dataset is fantastic, but can’t compare to what Facebook, Google, and Baidu have in house).

As a result, we’ll likely see corporations become the home of many of the most important innovations in machine intelligence and recruit many of the graduate students and postdocs that would have otherwise stayed in academia.

There will be a peace dividend.

Big companies have an inherent advantage and it’s likely that the ones who will win the machine intelligence race will be even more powerful than they are today. However, the good news for the rest of the world is that the core technology they develop will rapidly spill into other areas, both via departing talent and published research.

Similar to the big data revolution, which was sparked by the release of Google’s BigTable and MapReduce papers, we will see corporations release equally groundbreaking new technologies into the community. Those innovations will be adapted to new industries and use cases that the Googles of the world don’t have the DNA or desire to tackle.

Opportunities for entrepreneurs:

“My company does deep learning for X”

Few words will make you more popular in 2015. That is, if you can credibly say them. Deep learning is a particularly popular method in the machine intelligence field that has been getting a lot of attention. Google, Facebook, and Baidu have achieved excellent results with the method for vision and language based tasks and startups like Enlitic have shown promising results as well.

Yes, it will be an overused buzzword with excitement ahead of results and business models, but unlike the hundreds of companies that say they do “big data”, it’s much easier to cut to the chase in terms of verifying credibility here if you’re paying attention. The most exciting part about the deep learning method is that when applied with the appropriate levels of care and feeding, it can replace some of the intuition that comes from domain expertise with automatically-learned features. The hope is that, in many cases, it will allow us to fundamentally rethink what a best-in-class solution is.

As an investor who is curious about the quirkier applications of data and machine intelligence, I can’t wait to see what creative problems deep learning practitioners try to solve. I completely agree with Jeff Hawkins when he says a lot of the killer applications of these types of technologies will sneak up on us. I fully intend to keep an open mind.

“Acquihire as a business model”

People say that data scientists are unicorns in short supply. The talent crunch in machine intelligence will make it look like we had a glut of data scientists. In the data field, many people had industry experience over the past decade. Most hardcore machine intelligence work has only been in academia. We won’t be able to grow this talent overnight.

This shortage of talent is a boon for founders who actually understand machine intelligence. A lot of companies in the space will get seed funding because there are early signs that the acquihire price for a machine intelligence expert is north of 5x that of a normal technical acquihire (take, for example, DeepMind, where price per technical head was somewhere between $5–10M, if we choose to consider it in the acquihire category). I’ve had multiple friends ask me, only semi-jokingly, “Shivon, should I just round up all of my smartest friends in the AI world and call it a company?” To be honest, I’m not sure what to tell them. (At Bloomberg Beta, we’d rather back companies building for the long term, but that doesn’t mean this won’t be a lucrative strategy for many enterprising founders.)

A good demo is disproportionately valuable in machine intelligence

I remember watching Watson play Jeopardy. When it struggled at the beginning I felt really sad for it. When it started trouncing its competitors I remember cheering it on as if it were the Toronto Maple Leafs in the Stanley Cup finals (disclaimers: (1) I was an IBMer at the time so was biased towards my team (2) the Maple Leafs have not made the finals during my lifetime — yet — so that was purely a hypothetical).

Why do these awe-inspiring demos matter? The last wave of technology companies to IPO didn’t have demos that most of us would watch, so why should machine intelligence companies? The last wave of companies were very computer-like: database companies, enterprise applications, and the like. Sure, I’d like to see a 10x more performant database, but most people wouldn’t care. Machine intelligence wins and loses on demos because 1) the technology is very human, enough to inspire shock and awe, 2) business models tend to take a while to form, so they need more funding for a longer period of time to get them there, 3) they are fantastic acquisition bait.

Watson beat the world’s best humans at trivia, even if it thought Toronto was a US city. DeepMind blew people away by beating video games. Vicarious took on CAPTCHA. There are a few companies still in stealth that promise to impress beyond that, and I can’t wait to see if they get there.

Demo or not, I’d love to talk to anyone using machine intelligence to change the world. There’s no industry too unsexy, no problem too geeky. I’d love to be there to help so don’t be shy.

I hope this landscape chart sparks a conversation. The goal is to make this a living document and I want to know if there are companies or categories missing. I welcome feedback and would like to put together a dynamic visualization where I can add more companies and dimensions to the data (methods used, data types, end users, investment to date, location, etc.) so that folks can interact with it to better explore the space.

Questions and comments: Please email me. Thank you to Andrew Paprocki, Aria Haghighi, Beau Cronin, Ben Lorica, Doug Fulop, David Andrzejewski, Eric Berlow, Eric Jonas, Gary Kazantsev, Gideon Mann, Greg Smithies, Heidi Skinner, Jack Clark, Jon Lehr, Kurt Keutzer, Lauren Barless, Pete Skomoroch, Pete Warden, Roger Magoulas, Sean Gourley, Stephen Purpura, Wes McKinney, Zach Bogue, the Quid team, and the Bloomberg Beta team for your ever-helpful perspectives!

Disclaimer: Bloomberg Beta is an investor in Adatao, Alation, Aviso, Context Relevant, Mavrx, Newsle, Orbital Insights, Pop Up Archive, and two others on the chart that are still undisclosed. We’re also investors in a few other machine intelligence companies that aren’t focusing on areas that were a fit for this landscape, so we left them off.

For the full resolution version of the landscape please click here.

—————- End —————-

Shivon Zilis is an Investor at Bloomberg Beta. She is currently based in San Francisco.

Her website is www.shivonzilis.com

The Singularity: A Philosophical Analysis

David Chalmers is a leading philosopher of mind, and the first to publish a major philosophy journal article on the singularity:

Chalmers, D. (2010). “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17:7-65.

Chalmers’ article is a “survey” article in that it doesn’t cover any arguments in depth, but quickly surveys a large number of positions and arguments in order to give the reader a “lay of the land.” Because of this, Chalmers’ paper is a remarkably broad and clear introduction to the singularity.

Singularitarian authors will also be pleased that they can now cite a peer-reviewed article by a leading philosopher of mind who takes the singularity seriously.

Below is a CliffsNotes-style summary of the paper for those who don’t have time to read all 58 pages of it.

The Singularity: Is It Likely?

Chalmers focuses on the “intelligence explosion” kind of singularity, and his first project is to formalise and defend I.J. Good’s 1965 argument. Defining AI as AI “of human level intelligence,” AI+ as AI “of greater than human level” and AI++ as AI “of far greater than human level” (superintelligence), Chalmers updates Good’s argument to the following:

  1. There will be AI (before long, absent defeaters).
  2. If there is AI, there will be AI+ (soon after, absent defeaters).
  3. If there is AI+, there will be AI++ (soon after, absent defeaters).
  4. Therefore, there will be AI++ (before too long, absent defeaters).
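
Stripped of the timing and “absent defeaters” qualifiers, the skeleton of the argument is just two applications of modus ponens. Here is that skeleton, purely as an illustration, in Lean:

    -- Logical skeleton of Chalmers' updated argument, ignoring the timing
    -- and "absent defeaters" qualifiers: two applications of modus ponens.
    variable (AI AIPlus AIPlusPlus : Prop)

    example (p1 : AI) (p2 : AI → AIPlus) (p3 : AIPlus → AIPlusPlus) :
        AIPlusPlus :=
      p3 (p2 p1)

The argument is therefore valid; all of the philosophical action is in whether the three premises hold.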

By “defeaters,” Chalmers means global catastrophes like nuclear war or a major asteroid impact. One way to satisfy premise (1) is to achieve AI through brain emulation (Sandberg & Bostrom, 2008). Against this suggestion, Lucas (1961), Dreyfus (1972), and Penrose (1994) argue that human cognition is not the sort of thing that could be emulated. Chalmers (1995; 1996, chapter 9) has responded to these criticisms at length. Briefly, Chalmers notes that even if the brain is not a rule-following algorithmic symbol system, we can still emulate it if it is mechanical. (Some say the brain is not mechanical, but Chalmers dismisses this as being discordant with the evidence.)
Searle (1980) and Block (1981) argue instead that even if we can emulate the human brain, it doesn’t follow that the emulation is intelligent or has a mind. Chalmers says we can set these concerns aside by stipulating that when discussing the singularity, AI need only be measured in terms of behavior. The conclusion that there will be AI++ at least in this sense would still be massively important.

Another consideration in favor of premise (1) is that evolution produced human-level intelligence, so we should be able to build it, too. Perhaps we will even achieve human-level AI by evolving a population of dumber AIs through variation and selection in virtual worlds. We might also achieve human-level AI by direct programming or, more likely, systems of machine learning.

Premise (2) is plausible because AI will probably be produced by an extendible method, and so extending that method will yield AI+. Brain emulation might turn out not to be extendible, but the other methods are. Even if human-level AI is first created by a non-extendible method, this method itself would soon lead to an extendible method, and in turn enable AI+. AI+ could also be achieved by direct brain enhancement.

Premise (3) is the amplification argument from Good: an AI+ would be better than we are at designing intelligent machines, and could thus improve its own intelligence. Having done that, it would be even better at improving its intelligence. And so on, in a rapid explosion of intelligence.

In section 3 of his paper, Chalmers argues that there could be an intelligence explosion without there being such a thing as “general intelligence” that could be measured, but I won’t cover that here.

In section 4, Chalmers lists several possible obstacles to the singularity.

Constraining AI

Next, Chalmers considers how we might design an AI+ that helps to create a desirable future and not a horrifying one. If we achieve AI+ by extending the method of human brain emulation, the AI+ will at least begin with something like our values. Directly programming friendly values into an AI+ (Yudkowsky, 2004) might also be feasible, though an AI+ arrived at by evolutionary algorithms is worrying.

Most of this assumes that values are independent of intelligence, as Hume argued. But if Hume was wrong and Kant was right, then we will be less able to constrain the values of a superintelligent machine; on the other hand, the more rational the machine is, the better values it will have.

Another way to constrain an AI is not internal but external. For example, we could lock it in a virtual world from which it could not escape, and in this way create a leakproof singularity. But there is a problem. For the AI to be of use to us, some information must leak out of the virtual world for us to observe it. But then the singularity is not leakproof. And if the AI can communicate with us, it could reverse-engineer human psychology from within its virtual world and persuade us to let it out of its box – onto the internet, for example.

Our Place in a Post-Singularity World

Chalmers says there are four options for us in a post-singularity world: extinction, isolation, inferiority, and integration.

The first option is undesirable. The second option would keep us isolated from the AI, a kind of technological isolationism in which one world is blind to progress in the other. The third option may be infeasible because an AI++ would operate so much faster than us that inferiority is only a blink of time on the way to extinction.

For the fourth option to work, we would need to become superintelligent machines ourselves. One path to this might be mind uploading, which comes in several varieties and has implications for our notions of consciousness and personal identity that Chalmers discusses but I will not. (Short story: Chalmers prefers gradual uploading, and considers it a form of survival.)

Conclusion

Chalmers concludes:

Will there be a singularity? I think that it is certainly not out of the question, and that the main obstacles are likely to be obstacles of motivation rather than obstacles of capacity.

How should we negotiate the singularity? Very carefully, by building appropriate values into machines, and by building the first AI and AI+ systems in virtual worlds.

How can we integrate into a post-singularity world? By gradual uploading followed by enhancement if we are still around then, and by reconstructive uploading followed by enhancement if we are not.

References

Block (1981). “Psychologism and behaviorism.” Philosophical Review 90:5-43.

Chalmers (1995). “Minds, machines, and mathematics.” Psyche 2:11-20.

Chalmers (1996). The Conscious Mind. Oxford University Press.

Dreyfus (1972). What Computers Can’t Do. Harper & Row.

Lucas (1961). “Minds, machines, and Gödel.” Philosophy 36:112-27.

Penrose (1994). Shadows of the Mind. Oxford University Press.

Sandberg & Bostrom (2008). “Whole brain emulation: A roadmap.” Technical report 2008-3, Future of Humanity Institute, Oxford University.

Searle (1980). “Minds, brains, and programs.” Behavioral and Brain Sciences 3:417-57.

Yudkowsky (2004). “Coherent Extrapolated Volition.”

AI / Artificial Intelligence

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behaviour. Major AI researchers and textbooks define this field as “the study and design of intelligent agents”, in which an intelligent agent is a system that perceives its environment and takes actions that maximise its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.
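
That agent definition is easy to put into code. Below is a minimal, purely illustrative sketch: the agent perceives a one-dimensional world and takes the action that moves it toward a goal, a trivial instance of acting to maximise its chances of success:

    # Minimal sketch of the intelligent-agent abstraction: perceive the
    # environment, then act to maximise the chance of success. The 1-D
    # world and greedy policy here are illustrative, not canonical.
    GOAL = 7

    def perceive(state):
        """The agent's percept: its own position and the goal's."""
        return state, GOAL

    def act(percept):
        """Greedy policy: step toward the goal."""
        pos, goal = percept
        if pos < goal:
            return +1
        if pos > goal:
            return -1
        return 0

    state = 0
    for _ in range(10):
        state += act(perceive(state))
    print("reached goal:", state == GOAL)  # True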

AI research is highly technical and specialised, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field’s long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are a large number of tools used in AI, including versions of search and mathematical optimisation, logic, methods based on probability and economics, and many others. The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialised fields such as artificial psychology.

The field was founded on the claim that a central property of humans, human intelligence—the sapience of Homo sapiens—”can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.

[Snippet updated from wikipedia.org – 16th September 2015: https://en.wikipedia.org/wiki/Artificial_intelligence]