
Professional StarCraft II players crushed by an artificial intelligence program

The program, developed by DeepMind, the company behind the machine’s victory at the game of go, won ten straight games of a title considered a major challenge for AI. And lost one.

By Morgane Tual and Corentin Lamy, published on 25 January 2019 at 15:50 – updated on 28 January 2019 at 08:36

“StarCraft II” is a strategy game that pits three peoples against one another in an interplanetary war. Blizzard Entertainment

After the game of go, is DeepMind’s technology about to defeat humans at StarCraft II? This video game, released in 2010, represents a major challenge for artificial intelligence (AI), one that several specialist companies, including DeepMind, are working on. The London-based company, bought by Google in 2014, rose to fame in 2016, when its AlphaGo program defeated Lee Sedol, one of the world’s top players, at the game of go, a milestone AI researchers had not expected to reach for another decade or two.

After this feat, DeepMind announced at the end of 2016 its intention to take on StarCraft II. Two years later, the company appears to be on track to repeat its success. On Thursday, January 24, it announced that its AlphaStar program had managed to beat professional players in ten consecutive games. These games, played in December 2018 against Grzegorz “MaNa” Komincz and Dario “TLO” Wünsch, two high-level professional players, were released on Thursday. A final game pitting the machine against MaNa was then played and broadcast live; this time, the human won.

Complex challenges for an AI

Developed by Blizzard, StarCraft II is one of the most played titles in competitive video gaming, particularly in South Korea. It is a strategy game in which armies of alien races must extract resources such as minerals and gas in order to construct military buildings and attack their enemies. It requires players to manage tens or even hundreds of units simultaneously, in real time.

An AI system must therefore handle a large amount of data, establish a strategy against its opponent, and do it all in real time. What is more, unlike in chess or the game of go, the player does not see the entire playing area: they do not know, for example, the positions of all enemy units, and must send units out to scout. That is an additional difficulty for an AI.
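To make this notion of imperfect information concrete, here is a minimal sketch of “fog of war” in Python. It is purely illustrative, not DeepMind’s code, and every name in it (Unit, visible_enemies, the coordinates) is a hypothetical stand-in: the agent’s observation contains only the enemy units that at least one of its own units can see.

```python
from dataclasses import dataclass

# Minimal illustration of "fog of war" (all names hypothetical, not
# DeepMind's code): the agent observes an enemy unit only if at least
# one of its own units has that enemy within sight range.

@dataclass
class Unit:
    x: int
    y: int
    owner: str           # "agent" or "enemy"
    sight_radius: int = 5

def visible_enemies(all_units: list) -> list:
    """Return the subset of enemy units the agent is allowed to see."""
    own = [u for u in all_units if u.owner == "agent"]
    enemies = [u for u in all_units if u.owner == "enemy"]
    return [
        e for e in enemies
        if any((e.x - o.x) ** 2 + (e.y - o.y) ** 2 <= o.sight_radius ** 2
               for o in own)
    ]

# The true state holds three enemies, but the observation shows only two:
# the enemy at (20, 20) is outside every friendly unit's sight radius.
state = [Unit(0, 0, "agent"), Unit(3, 4, "enemy"),
         Unit(2, 1, "enemy"), Unit(20, 20, "enemy")]
print(len(visible_enemies(state)))  # -> 2
```

Everything outside that visible subset has to be inferred or discovered by scouting, which is precisely what makes the game harder for an AI than a perfect-information board game.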

Built on an artificial neural network, the AlphaStar program “trained” by observing games played by top players, then by playing against itself in order to keep improving. In two weeks, the program thus accumulated the equivalent of 200 years of play, according to the blog post in which DeepMind explains how the system works.
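The two phases DeepMind describes, imitation of human games followed by self-play, can be sketched schematically. The Python code below is a toy illustration under loose assumptions, not DeepMind’s pipeline; Policy, its skill attribute, and the update rule are all hypothetical placeholders.

```python
import random

# Schematic sketch of the two training phases described above. Every
# name here is a hypothetical stand-in, not DeepMind's code: a real
# system would train a neural network, not bump a single "skill" number.

class Policy:
    """Toy placeholder for a neural-network policy."""
    def __init__(self):
        self.skill = 0.0

    def update(self, step: float):
        self.skill += step  # placeholder for a gradient update

def train(human_games: list, self_play_steps: int) -> Policy:
    policy = Policy()
    # Phase 1: imitation learning, mimicking recorded human games.
    for _game in human_games:
        policy.update(0.01)
    # Phase 2: self-play, improving by playing against copies of itself.
    for _ in range(self_play_steps):
        opponent = policy.skill * random.uniform(0.9, 1.1)  # a recent copy
        won = policy.skill >= opponent
        policy.update(0.05 if won else 0.02)  # learn from the outcome
    return policy

agent = train(human_games=[None] * 1_000, self_play_steps=10_000)
```

For scale, 200 years of play in two weeks is roughly 200 × 365 / 14 ≈ 5,200 times real time, which implies many accelerated games running in parallel rather than a single continuous session.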

Some advantages for the AI

In the matches broadcast on YouTube, AlphaStar nevertheless started with some advantages. For one thing, the games were all played between two Protoss armies, one of the three alien races in the StarCraft universe, each with its own characteristics and requiring different strategies. And while AlphaStar was trained specifically for this match-up, its first opponent, TLO, admits that he does not know how to play “at a professional level” with a Protoss army. MaNa, AlphaStar’s second opponent, is on the other hand one of the best Protoss players in the world.

Then, unlike a human player, who has to move the camera to observe the battlefield one piece at a time, AlphaStar can in theory “see” the entire battlefield at a glance. It does not cheat, however, and has access to no more information than a human: enemy units outside the field of view of its own troops remain invisible.

“You might think AlphaStar has an advantage because, unlike a player, it doesn’t have to worry about moving the camera,” DeepMind explains. “But according to our analyses, AlphaStar focuses its attention successively on different parts of the battlefield. We found that it shifts its point of focus about 30 times a minute, which is very similar to what a human does with the camera.”

And, the DeepMind developers say, AlphaStar does not take advantage of its computing power to multiply orders and clicks. Because it learned by analyzing human behavior, it actually replicates the way humans play, performing “only” 277 actions per minute on average, about half as many as a professional human player. Its reaction time (350 milliseconds) is also slightly longer than a human’s.
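According to DeepMind, these limits emerged from imitating humans rather than from a hard-coded cap. Still, as a thought experiment, such constraints could be enforced at an agent’s interface by throttling its action queue. The following Python sketch is purely illustrative; ThrottledInterface and everything in it are hypothetical, merely sized to the figures quoted above.

```python
from collections import deque

# Illustrative throttle, not DeepMind's method: AlphaStar's limits came
# from imitating humans, but an agent's interface could also enforce
# them explicitly. The defaults match the figures quoted in the article
# (277 actions per minute, 350 ms reaction time).

class ThrottledInterface:
    def __init__(self, max_apm: int = 277, reaction_delay_s: float = 0.350):
        self.min_gap = 60.0 / max_apm           # seconds between actions
        self.reaction_delay = reaction_delay_s  # observation-to-action lag
        self.last_emit = float("-inf")
        self.pending = deque()                  # (earliest_time, action)

    def decide(self, now: float, action: str):
        # An action decided at `now` cannot fire before the reaction delay.
        self.pending.append((now + self.reaction_delay, action))

    def poll(self, now: float):
        """Emit the next action if the delay and the APM gap both allow it."""
        if (self.pending
                and now >= self.pending[0][0]
                and now - self.last_emit >= self.min_gap):
            self.last_emit = now
            return self.pending.popleft()[1]
        return None
```

At 277 APM, the throttle allows one action roughly every 0.22 seconds, the gap the sketch computes as 60 / max_apm.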