Games have long been a great means of testing whether a computer system can perform a task as well as a human, a capability known as Artificial Intelligence (A.I.). They are a clever way of assessing whether algorithms are working to their full potential. Noughts and Crosses (a.k.a. tic-tac-toe) was the first game in which a player lost to A.I., in 1952; then came Checkers in 1994, Chess in 1997, Jeopardy in 2011, and most recently the game of Go. Last month (March 2016) Google DeepMind's program AlphaGo defeated Lee Sedol, a top-ranked professional Go player.
At first glance Go appears to be a simple game, but it is actually extremely complex. It originated in China over 2,500 years ago. Two players take turns placing black or white stones on a 19 x 19 board; the goal is to claim territory without having your stones captured. After each player's first move there are 129,960 possible board positions, compared with 400 in Chess. On any given turn there are around 250 legal moves in Go and only about 35 in Chess. The Google Research blog explains that there are more possible board positions than there are atoms in the universe, making Go a googol times more complex than Chess. As recently as last year computers could only play Go at an amateur level, and it was expected to take at least 10 years before one could beat a professional. However, Google DeepMind made it possible ahead of schedule with a relatively new A.I. technique: deep learning.
AlphaGo was not programmed with rules about which moves are 'good' or 'bad'; instead its algorithms absorbed a database of online Go matches, learning from every move in each match and gaining the equivalent of roughly 80 years of continuous Go practice. Previously, even the best A.I. systems were programmed to deal with specific problems and needed to be thoroughly tweaked to function successfully. Deep learning, by contrast, operates in a way loosely similar to the human brain: it is 'learning' all the time. It relies on simulating multilayered networks of virtual neurons, which enable the computer to learn and distinguish conceptual patterns. In other words, deep learning can become a master of almost any craft, as long as it has access to enough data.
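To make the idea of "multilayered networks of virtual neurons" concrete, here is a minimal sketch in Python of a tiny two-layer network trained by backpropagation to learn the XOR pattern from examples, rather than from hand-written rules. This is purely illustrative: the network size, learning rate, and task are arbitrary choices of ours, and AlphaGo's actual networks are vastly larger and trained very differently.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A toy 2-input, 4-hidden-neuron, 1-output network. Each weight row
# carries one extra entry for the neuron's bias term.
INPUTS, HIDDEN = 2, 4
w1 = [[random.uniform(-1, 1) for _ in range(INPUTS + 1)] for _ in range(HIDDEN)]
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]

# The "database" the network learns from: XOR input/target pairs.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.1, 1.0], 0.0)]

def forward(x):
    """Pass an input through both layers of virtual neurons."""
    h = [sigmoid(sum(w * v for w, v in zip(ws, x + [1.0]))) for ws in w1]
    y = sigmoid(sum(w * v for w, v in zip(w2, h + [1.0])))
    return h, y

def loss():
    """Squared error over the whole dataset."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train_step(lr=0.5):
    """One pass of gradient descent via backpropagation."""
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)           # error signal at the output
        for j in range(HIDDEN):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # error pushed back to hidden layer
            w2[j] -= lr * dy * h[j]
            for i in range(INPUTS):
                w1[j][i] -= lr * dh * x[i]
            w1[j][INPUTS] -= lr * dh         # hidden bias
        w2[HIDDEN] -= lr * dy                # output bias

before = loss()
for _ in range(2000):
    train_step()
after = loss()
```

The point of the sketch is simply that nothing in the code encodes what XOR "is": the network starts from random weights and discovers the pattern only because it sees enough data, which is the same principle that let AlphaGo learn Go moves from recorded matches.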
At ObEN we've created our own deep learning speech technology. We've built a unique, proprietary dataset that enables us to better train our algorithms, just as DeepMind did with AlphaGo. As our dataset grows, the quality of our product improves, and so does the program's ability to pinpoint the problems to solve. To help us perfect this technology and grow our dataset, share your voiceprint with us!
About the author: Georgina Bunn is a Corporate Communications Associate at ObEN.