AI XPRIZE – AI competition with IBM Watson

The IBM Watson AI XPRIZE is a $5 million AI and cognitive computing competition challenging teams around the globe to develop and demonstrate how humans can collaborate with powerful AI technologies to tackle the world’s grand challenges. The prize focuses on creating advanced, scalable applications that benefit consumers and businesses across a multitude of disciplines. The solutions will enrich the tools and data sets available to innovators everywhere, and the competition also aims to accelerate the understanding and adoption of AI’s most promising breakthroughs.

Every year leading up to TED 2020, the teams will compete for interim prizes and the opportunity to advance to the next year’s competition. The three finalist teams will take the TED stage in 2020 to deliver jaw-dropping, awe-inspiring TED Talks demonstrating what they have achieved.

Typical of all XPRIZE competitions, the IBM Watson AI XPRIZE will crowdsource solutions from some of the most brilliant thinkers and entrepreneurs around the world, creating true exponential impact.

To compete in the IBM Watson AI XPRIZE you must be a fully registered team. To complete your registration, you must create a Team profile, sign the Competitor’s Agreement and pay the registration fee.

[AI XPRIZE Timeline]

PRIZE PURSE

Grand Prizes

The $3,000,000 Grand Prize, $1,000,000 2nd Place, and $500,000 3rd Place purses will be awarded at the end of competition at TED2020, for a total of $4.5 million.

Milestone and Special Prizes

Two Milestone Competition prize purses will be awarded at the end of each of the first two rounds of the competition, and the Judges may award additional special prizes to recognize notable achievements. A total of $500,000 is available for these prizes, to be allocated at the Judges’ discretion.

THE NEED FOR THE PRIZE

The progress in AI research and applications over the past 20 years makes it timely to focus not only on making AI more capable, but also on maximizing its societal benefit. The democratization of exponential technology puts AI and cognitive computing into the hands of innovators everywhere. Motivated by the long-term impact of AI, and to better understand the prospects of human-AI collaboration, the IBM Watson AI XPRIZE provides an interdisciplinary platform for domain experts, developers, and innovators to push the boundaries of AI through collaboration. The competition will bring the AI community together and accelerate the development of scalable, hybrid solutions and audacious breakthroughs to address humanity’s grandest challenges.

You can register for the competition at: https://aiportal.xprize.org/en/registration

Watch AlphaGo take on Lee Sedol, the world’s top Go player

Watch AlphaGo take on Lee Sedol, the world’s top Go player, in the final match of the Google DeepMind challenge.

Match score: AlphaGo 3 – Lee Sedol 1.
[Game five: Seoul, South Korea, 15th March at 13:00 KST (04:00 GMT; in the US, 14th March 21:00 PT / 15th March 00:00 ET).]

The Game of Go 

The game of Go originated in China more than 2,500 years ago. The rules are simple: players take turns placing black or white stones on a board, trying to capture the opponent’s stones or surround empty space to make points of territory. As simple as the rules are, Go is a game of profound complexity: there are more possible positions in Go than there are atoms in the universe, making Go a googol times more complex than chess. Go is played primarily through intuition and feel, and because of its beauty, subtlety, and intellectual depth it has captured the human imagination for centuries. AlphaGo is the first computer program ever to beat a professional human player. Read more about the game of Go and how AlphaGo is using machine learning to master this ancient game.
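The scale of that claim is easy to check with a little arithmetic. The sketch below (plain Python; the figure of roughly 10^47 chess positions is a commonly cited estimate taken as an assumption) bounds the number of Go board configurations by noting that each of the 361 points on a 19x19 board can be empty, black, or white:

```python
# Each of the 361 points on a 19x19 Go board is empty, black, or white,
# giving 3^361 configurations as an upper bound (not all of them are legal).
board_points = 19 * 19
go_upper_bound = 3 ** board_points

# A commonly cited estimate of the number of chess positions is ~10^47.
chess_positions = 10 ** 47

print(f"Go upper bound: {float(go_upper_bound):.2e}")              # about 1.7e172
print(f"Times larger than chess: {go_upper_bound / chess_positions:.2e}")
```

The bound of roughly 1.7 x 10^172 comfortably exceeds the ~10^80 atoms usually estimated for the observable universe, which is the comparison made above.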

Match Details 

In October 2015, the program AlphaGo won 5-0 in a formal match against the reigning three-time European Champion, Fan Hui, becoming the first program ever to beat a professional Go player in an even game. Now AlphaGo faces its ultimate challenge: a five-game match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade, for a $1M prize. For full details, see the press release.

The matches were held at the Four Seasons Hotel, Seoul, South Korea, starting at 13:00 local time (04:00 GMT; day before 20:00 PT, 23:00 ET) on March 9th, 10th, 12th, 13th and 15th.

The matches were livestreamed on DeepMind’s YouTube channel and broadcast on TV throughout Asia via Korea’s Baduk TV, as well as in China, Japan, and elsewhere.

Match commentators included Michael Redmond, the only Western professional Go player to achieve 9-dan status. Redmond commentated in English; Yoo Changhyuk (professional 9 dan), Kim Sungryong (professional 9 dan), Song Taegon (professional 9 dan), and Lee Hyunwook (professional 8 dan) took turns commentating in Korean.

The matches were played under Chinese rules with a komi of 7.5 (the compensation points the player who goes second receives at the end of the match). Each player received two hours of main time per match, plus three 60-second byo-yomi periods (countdown periods used once their allotted time has run out).
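The time-control format used in the match (two hours of main time plus three 60-second byo-yomi periods) can be modelled in a few lines. This is an illustrative sketch with simplified rules, not any official implementation; the class name is my own:

```python
class ByoyomiClock:
    """Simplified Go clock: main time followed by fixed byo-yomi periods.

    A move that finishes within a byo-yomi period resets that period;
    a move that exceeds it consumes the period entirely.
    """

    def __init__(self, main_seconds=2 * 3600, periods=3, period_seconds=60):
        self.main = main_seconds
        self.periods = periods
        self.period_seconds = period_seconds

    def spend(self, seconds):
        """Charge one move's thinking time; return False if the player times out."""
        # Main time is consumed first.
        used = min(self.main, seconds)
        self.main -= used
        seconds -= used
        # Any remainder is charged against the byo-yomi periods.
        while seconds > 0 and self.periods > 0:
            if seconds < self.period_seconds:
                return True  # finished inside the period, which then resets
            seconds -= self.period_seconds
            self.periods -= 1
        return seconds <= 0
```

For example, a player who has exhausted their main time and thinks for 140 seconds burns through two 60-second periods and finishes inside the third, which then resets for the next move.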

Singularity Or Bust [Documentary]

In 2009, film-maker and former AI programmer Raj Dye spent his summer following futurist AI researchers Ben Goertzel and Hugo de Garis around Hong Kong and Xiamen, documenting their doings and gathering their perspectives. The result, after some work by crack film editor Alex MacKenzie, was the 45-minute documentary Singularity or Bust: a uniquely edgy, experimental Singularitarian road movie, featuring perhaps the most philosophical three-foot-tall humanoid robot ever, a glance at the fast-growing Chinese research scene in the late aughts, and even a bit of a real-life love story. The film was screened in theaters around the world, and won the Best Documentary award at the 2013 LA Cinema Festival of Hollywood and the LA Lift Off Festival. And now it is online, free of charge, for your delectation.

Singularity or Bust is a true story pertaining to events occurring in the year 2009. It captures a fascinating slice of reality, but bear in mind that things move fast these days. For more recent updates on Goertzel and de Garis’s quest for transhuman AI, you’ll have to consult the Internet, or your imagination.

[Full Documentary]

Machine Learning With Stanford University

Stanford University is offering a Machine Learning course through coursera.org.

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you’ll learn about not only the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you’ll learn about some of Silicon Valley’s best practices in innovation as it pertains to machine learning and AI.

This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI). The course will also draw from numerous case studies and applications, so that you’ll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.
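As a taste of the supervised-learning material in syllabus item (i), here is a minimal, self-contained sketch (plain Python; a toy example of my own, not course code) that fits a line y = w*x + b by batch gradient descent on mean squared error:

```python
# Minimal supervised learning: fit y = w*x + b by batch gradient descent.
def fit_linear(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Data generated from y = 3x + 1, so the fit should land near w=3, b=1.
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = fit_linear(xs, ys)
```

The course develops this idea much further, including vectorised implementations, multivariate features, and feature scaling.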

Can I earn a Course Certificate if I completed this course before they were available?
In order to verify one’s identity and maintain academic integrity, learners who completed assignments or quizzes for Machine Learning prior to November 1st will need to redo and resubmit these assessments in order to earn a Course Certificate. To clarify, both quizzes and programming assignments need to be resubmitted. Though your deadlines may have technically passed, please be assured that you may resubmit both types of assessments at any time. We apologise for the inconvenience and appreciate your patience as we strive to ensure the integrity and value of our certificates.

Please note that, in order to earn a Course Certificate, you must complete the course within 180 days of payment, or by May 1, 2016, whichever is earlier.

Enrolment ends February 27

The State of Artificial Intelligence – Davos 2016 Talk

How close are technologies to simulating or overtaking human intelligence and what are the implications for industry and society?

This talk took place on 20th January 2016 – at the World Economic Forum Annual Meeting (and was developed in partnership with Arirang).

Moderated by:
Connyoung Jennifer Moon, Chief Anchor and Editor-in-Chief, Arirang TV & Radio, Republic of Korea

Matthew Grob, Executive Vice-President and Chief Technology Officer, Qualcomm, USA
Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University, USA
Stuart Russell, Professor of Computer Science, University of California, Berkeley, USA
Ya-Qin Zhang, President, Baidu.com, People’s Republic of China

The Beginnings of Artificial Intelligence (AI) Research – In The 1950s

With the development of the electronic computer in 1941 and the stored-program computer in 1949, the technological preconditions for research in artificial intelligence (AI) were in place. Even so, the link between human intelligence and machines was not widely explored until the late 1950s.

A discovery that influenced much of the early development of AI was made by Norbert Wiener, one of the first to theorise that all intelligent behaviour was the result of feedback mechanisms, mechanisms that machines could conceivably simulate. A further step towards modern AI was the creation of the Logic Theorist. Designed by Newell and Simon in 1955, it may be considered the first AI program.

The person who finally coined the term artificial intelligence, and who is regarded as the father of AI, is John McCarthy. In 1956 he organised the conference “The Dartmouth Summer Research Project on Artificial Intelligence” to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. In the following years, AI research centres began forming at Carnegie Mellon University and the Massachusetts Institute of Technology (MIT), and two new challenges were faced: 1) the creation of systems that could efficiently solve problems by limiting the search, and 2) the construction of systems that could learn by themselves.

One of the results of the intensified research in AI was a novel program called The General Problem Solver, developed in 1957 by Newell and Simon (the same people who had created The Logic Theorist). It extended Wiener’s feedback principle and was capable of solving a wider range of common-sense problems. While more programs were being developed, a major breakthrough in AI history was John McCarthy’s creation of the LISP (LISt Processing) language in 1958. It was soon adopted by many AI researchers and is still in use today.

Is This C. Elegans Worm Simulation Alive?

C. elegans, aka Caenorhabditis elegans, is a free-living, transparent nematode about 1 mm in length that lives in temperate soil environments. What makes this roundworm so interesting is that the adult hermaphrodite has a total of only 302 neurons. Those 302 neurons belong to two distinct and independent nervous systems: the larger, a somatic nervous system of 282 neurons, and the smaller, a pharyngeal nervous system of just 20 neurons. This makes C. elegans a great starting point for studying the nervous system, as all 7,000 connections, or synapses, between those neurons have been mapped.

In 2011, the OpenWorm project launched with the goal of giving people access to a digital worm, WormSim, that they can study on their own computers. The project produced a complete wireframe of the C. elegans connectome, recreating all 302 neurons and 959 cells of the tiny nematode to virtually simulate the actions of the real-life worm. When simulated inputs are delivered to the nervous system, WormSim performs a highly realistic worm-like motion.
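At its core, a connectome like the one OpenWorm maps is a weighted directed graph of neurons. The toy sketch below (plain Python; the four neuron names and weights are invented for illustration, not taken from the real C. elegans wiring) shows the data structure and a naive synchronous activity-propagation step:

```python
# Toy connectome: a weighted adjacency map from each neuron to its
# downstream targets. The real C. elegans wiring diagram has 302
# neurons and ~7,000 synapses; this hand-made 4-neuron graph only
# illustrates the data structure.
connectome = {
    "sensor": {"inter1": 0.8, "inter2": 0.4},
    "inter1": {"motor": 1.0},
    "inter2": {"motor": 0.5},
    "motor": {},
}

def step(activity, connectome, decay=0.5):
    """One synchronous update: each neuron decays, then passes weighted
    activity to its downstream targets."""
    nxt = {neuron: decay * level for neuron, level in activity.items()}
    for src, targets in connectome.items():
        for dst, weight in targets.items():
            nxt[dst] += weight * activity[src]
    return nxt

state = {neuron: 0.0 for neuron in connectome}
state["sensor"] = 1.0          # a simulated stimulus
for _ in range(3):
    state = step(state, connectome)
```

After three steps, the stimulus injected at the sensor node has propagated through both interneurons and accumulated at the motor node. OpenWorm’s actual simulation is far richer, modelling membrane dynamics and body physics rather than this simple weighted sum.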

Assuming that the behaviour of the virtual C. elegans is in-line with that of the real C. elegans, at what stage might it be reasonable to call it a living organism? The standard definition of living organisms is behavioural; they extract usable energy from their environment, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations.

If the simulation exhibits these behaviours, combined with realistic responses to its external environment, should we consider it to be alive?

This could depend on perspective. From the outer-world perspective, the worm is obviously a non-living simulation that mimics life inside a computer. From the inner-world perspective of the simulation, the worm is alive in that it obeys the laws of physics as presented by the simulation. One could argue that, by the same token, nothing can confirm for us that our own world is not a simulation produced by some outer world.

Here is a video of the OpenWorm: C. elegans simulation:

You can check out OpenWorm at http://openworm.org

How AlphaGo Mastered the Game of Go with Deep Neural Networks

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence due to its enormous search space and the difficulty of evaluating board positions and moves.

Google DeepMind introduced a new approach to computer Go with their program AlphaGo, which uses value networks to evaluate board positions and policy networks to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. DeepMind also introduced a new search algorithm that combines Monte Carlo simulation with the value and policy networks. Using this search algorithm, AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the European Go champion by 5 games to 0. This is the first time a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
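Two of the paper’s key ideas are easy to sketch. At a leaf of the search tree, AlphaGo mixes the value network’s estimate with the outcome of a fast rollout; during selection, it picks moves by an upper-confidence rule whose exploration bonus is weighted by the policy network’s prior. The snippet below is a hedged illustration of those two formulas (the function names and toy numbers are mine, not DeepMind’s code):

```python
import math

def leaf_value(value_net, rollout, lam=0.5):
    """Leaf evaluation: mix the value-network estimate with a rollout result.
    The paper uses a mixing parameter lambda (0.5 in their experiments)."""
    return (1 - lam) * value_net + lam * rollout

def select_move(stats, prior, c_puct=1.0):
    """PUCT-style selection: maximise Q(s,a) + u(s,a), where the bonus
        u(s,a) = c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))
    is proportional to the policy prior and decays with visit count.
    `stats` maps each move to (visit_count, total_value)."""
    total_visits = sum(n for n, _ in stats.values())
    best_move, best_score = None, -math.inf
    for move, (n, total_value) in stats.items():
        q = total_value / n if n else 0.0
        u = c_puct * prior[move] * math.sqrt(total_visits) / (1 + n)
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move

# With equal Q values, the move with the higher policy prior is tried first.
stats = {"a": (1, 0.5), "b": (1, 0.5)}
prior = {"a": 0.7, "b": 0.3}
```

In the full system these pieces sit inside a Monte Carlo tree search loop: descend to a leaf by repeatedly applying the selection rule, evaluate the leaf with the mixed estimate, then back the result up the visited path.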

Here you can read DeepMind’s full paper on how AlphaGo works: deepmind-mastering-go.pdf.

In March 2016, AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade.

Here are a few videos about AlphaGo:

Intelligent Machines and Foolish Humans

[This Blog Articles post was written & submitted by J.D.F.]

We will eventually build machines so intelligent that they will be self-aware. When that happens, it will highlight two outstanding human traits: brilliance and foolhardiness. Of course, the kinds of people responsible for creating such machines would be exceptionally clever. The future, however, may show that those geniuses had blinkered vision and didn’t realise quite what they were creating. Many respected scientists believe that nothing threatens human existence more definitively than conscious machines, and that when humanity eventually takes the threat seriously, it may well be too late.

Other experts counter that warning, arguing that since we build the machines, we will always be able to control them. That argument seems reasonable, but it doesn’t stand up to close scrutiny. Conscious machines, those with self-awareness, could threaten humans for many reasons, but three in particular. First, we won’t be able to control them because we won’t know what they’re thinking. Second, machine intelligence will improve at a much faster rate than human intelligence. Scientists working in this area, and in artificial intelligence (AI) in general, suggest that computers will become conscious and as intelligent as humans sometime this century, maybe even within two or three decades. So machines will have achieved in about a century what took humans millions of years, and as machine intelligence continues to improve, we will very quickly find ourselves sharing the Earth with a form of intelligence far superior to our own. Third, machines can leverage their brainpower hugely by linking together, while humans cannot directly link their brains and must communicate by tedious written, visual, or aural messaging.

Some world-famous visionaries have sounded strong warnings about AI. Elon Musk, the billionaire entrepreneur and co-founder of PayPal, Tesla Motors, and SpaceX, has described developing it as “summoning the demon.” The risk is that as scientists relentlessly improve the capabilities of AI systems, at some indeterminate point they may set off an unstoppable chain reaction in which the machines wrest control from their creators. In April 2015, Stephen Hawking, the renowned theoretical physicist, cosmologist, and author, gave a stark warning: “the development of full artificial intelligence could spell the end of the human race.” Luke Muehlhauser, director of MIRI (the Machine Intelligence Research Institute), was quoted in the Financial Times as saying that by building AI “we’re toying with the intelligence of the gods and there is no off switch.” Yet we seem to be willing to take the risk.

Perhaps most people are not too concerned because consciousness is such a nebulous concept. Even scientists working with AI may be working in the dark. We all know humans have consciousness, but nobody, not even the brightest minds, understands what it is. So we can only speculate about how or when machines might get it, if ever. Some scientists believe that when machines acquire the level of thinking power similar to that of the human brain, machines will be conscious and self-aware. In other words, those scientists believe that our consciousness is purely a physical phenomenon – a function of our brain’s complexity.

For millions of years, human beings have dominated the Earth and all other species on it. That didn’t happen because we are the largest, or the strongest, but because we are the most intelligent by far. If machines become more intelligent, we could well end up as their slaves. Worse still, they might regard us as surplus to their needs and annihilate us. That doomsday scenario has been predicted by countless science fiction writers.

Should we heed their prophetic vision as most current advanced technology was once science fiction?
Or do we have nothing to worry about?

For more on this subject, read Nick Bostrom’s highly recommended book, Superintelligence, listed in our books section.

Superintelligence by Nick Bostrom

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity’s cosmic endowment and differential technological development; about indirect normativity, instrumental convergence, whole brain emulation, and technology couplings; about Malthusian economics and dystopian evolution; and about artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom’s work nothing less than a reconceptualisation of the essential task of our time.