How AlphaGo Mastered the Game of Go with Deep Neural Networks

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence due to its enormous search space and the difficulty of evaluating board positions and moves.

Google DeepMind introduced a new approach to computer Go with their program, AlphaGo, which uses value networks to evaluate board positions and policy networks to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. DeepMind also introduced a new search algorithm that combines Monte Carlo simulation with the value and policy networks. Using this search algorithm, AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the European Go champion by 5 games to 0. This is the first time a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
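The two roles of the networks inside the search can be sketched in a few lines. This is a simplified, illustrative version of the ideas described in the paper, not DeepMind's implementation: the policy network's output acts as a prior that biases which moves the tree search explores, and the value network's prediction is blended with a fast rollout result when evaluating a leaf. The constants `c_puct` and `lam` and the helper names are assumptions chosen for the sketch.

```python
import math

def puct_select(node_stats, priors, c_puct=5.0):
    """Pick the move maximizing Q(s,a) + u(s,a): the exploration
    bonus u is proportional to the policy network's prior P(s,a)
    and shrinks as the move's visit count N(s,a) grows."""
    total_visits = sum(n for n, _ in node_stats.values())
    best_move, best_score = None, -float("inf")
    for move, (n, w) in node_stats.items():
        q = w / n if n > 0 else 0.0  # mean action value Q(s,a)
        u = c_puct * priors[move] * math.sqrt(total_visits) / (1 + n)
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move

def evaluate_leaf(value_net_estimate, rollout_outcome, lam=0.5):
    """Mixed leaf evaluation: blend the value network's prediction
    of the position with the outcome of a fast rollout."""
    return (1 - lam) * value_net_estimate + lam * rollout_outcome

# A rarely visited move with a strong prior ('b') can outrank a
# well-explored move with a higher mean value ('a').
stats = {"a": (10, 6.0), "b": (2, 1.5)}   # move -> (visits, total value)
priors = {"a": 0.3, "b": 0.7}             # policy network output
print(puct_select(stats, priors))          # -> b
print(evaluate_leaf(0.5, 1.0))             # -> 0.75
```

The key design point is that lookahead search and learned evaluation reinforce each other: the prior focuses the search on plausible moves, while the mixed leaf value replaces the thousands of purely random rollouts that earlier Monte Carlo programs needed.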

Here you can read DeepMind’s full paper on how AlphaGo works: deepmind-mastering-go.pdf.

In March 2016, AlphaGo will face its ultimate challenge: a five-game match in Seoul against the legendary Lee Sedol, the world’s top Go player over the past decade.


An introduction to TensorFlow: Open source machine learning

TensorFlow is an open source software library for numerical computation using data flow graphs. It was originally developed by researchers and engineers working on the Google Brain team within Google’s Machine Intelligence research organisation for the purposes of conducting machine learning and deep neural network research.
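The data flow graph idea can be illustrated without TensorFlow itself. The toy sketch below is an assumption-laden, minimal stand-in: each node represents an operation, edges carry values between operations, and nothing is computed until the graph is explicitly run, which mirrors the deferred-execution model TensorFlow's graphs use.

```python
class Node:
    """One operation in a toy data flow graph: it consumes the
    outputs of its input nodes and produces a single value."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        # Evaluate inputs first, then apply this node's operation.
        # Building the graph and running it are separate steps --
        # the core idea behind TensorFlow's deferred execution.
        return self.op(*(node.run() for node in self.inputs))

def constant(value):
    """A leaf node that always produces the same value."""
    return Node(lambda: value)

# Build a graph for (3 * 4) + 2, then run it.
a, b, c = constant(3), constant(4), constant(2)
product = Node(lambda x, y: x * y, a, b)
result = Node(lambda x, y: x + y, product, c)
print(result.run())  # -> 14
```

Separating graph construction from execution is what lets a framework like TensorFlow optimise the whole computation and place different operations on CPUs or GPUs before any numbers flow through it.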

To learn more about TensorFlow, visit the project’s website.

How to Create a Mind by Ray Kurzweil

How does the brain recognise images? Could computers drive? How is it possible for man-made programmes to beat the world’s best chess players?

Google’s Director of Engineering, Ray Kurzweil, delivers an interesting look at the subject and offers a fascinating discussion of how a computer can (or can’t) replicate the human mind.