The researchers at DeepMind keep advancing the state of the art in using deep learning to master ever more complex games. After recently reporting a system that learns to play a number of different and very complex board games, including Go and chess, the company announced a system that is able to beat the best players in the world at a complex strategy game, StarCraft.
AlphaStar, the system designed to learn StarCraft, one of the most challenging real-time strategy (RTS) games, largely by playing against other versions of itself, represents a significant advance in the application of machine learning. In StarCraft, a significant amount of information is hidden from the players, and each player has to balance short-term and long-term objectives, just as in the real world. Players have to master fast-paced battle tactics while, at the same time, developing their own armies and economies.
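The core self-play idea can be illustrated with a toy example. The sketch below is not AlphaStar's actual training setup (which uses deep neural networks and a large league of agents); it is a minimal, hypothetical version of the same loop, using rock-paper-scissors: the current agent repeatedly plays matches against frozen snapshots of its past selves and nudges its policy toward actions that won.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    # +1 if action a beats b, -1 if it loses, 0 on a draw
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def normalise(weights):
    # turn raw action weights into a probability distribution
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

def sample(policy):
    # draw one action from a dict mapping action -> probability
    r = random.random()
    cum = 0.0
    for action, p in policy.items():
        cum += p
        if r <= cum:
            return action
    return action  # guard against floating-point rounding

def self_play_train(episodes=20000, lr=0.01, seed=0):
    random.seed(seed)
    weights = {a: 1.0 for a in ACTIONS}
    league = [normalise(weights)]  # frozen past versions of the agent
    for step in range(episodes):
        policy = normalise(weights)
        opponent = random.choice(league)  # play against a sampled past self
        a, b = sample(policy), sample(opponent)
        # crude reinforcement update: strengthen winning actions,
        # weaken losing ones (floored so weights stay positive)
        weights[a] = max(1e-3, weights[a] + lr * payoff(a, b))
        if step % 1000 == 0:
            league.append(policy)  # periodically freeze a snapshot

    return normalise(weights)

policy = self_play_train()
```

Playing against a whole league of past snapshots, rather than only the latest version, is what keeps training stable: it prevents the agent from "forgetting" how to beat strategies it has already moved past.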
This result is important because it shows that deep reinforcement learning, which has already produced remarkable results in board games, can scale up to complex environments with multiple time scales and hidden information. It opens the way to applying machine learning to real-world problems until now deemed too difficult for it to tackle.