In New Scientist (22 December 2018, paywall), Douglas Heaven suggests the machines just about have us beat when it comes to games:
Gamers everywhere were watching as OpenAI, an artificial intelligence lab co-founded by Elon Musk, pitted a team of bots against some of the world’s best Dota 2 players at an annual tournament back in June.
Machines had been on a winning streak. In 2016, DeepMind’s AI mastered Go. In 2017, a poker-playing bot called Libratus, developed by a team at Carnegie Mellon University in Pennsylvania, won a professional Heads-Up No-Limit Texas Hold ‘Em tournament. Dota 2, a popular online battle game, looked to be next in line.
In the end, the bots beat amateur players and lost to pros – but that probably won’t be the case next time. “I think OpenAI’s chances are pretty high,” says Julian Togelius at New York University.
But this caught my eye as a vital clue to the irrelevance of the claim:
After playing thousands of years’ worth of the game, OpenAI’s bots managed to beat a team of amateurs, largely by dominating skirmishes. The bots have a reaction time of 0.2 seconds – roughly that of humans – but in that instant they can take in the entire state of the game, including details that human players have to click on or switch screens to read.
This makes the bots formidable in battle because they know the exact effect of any action at all times. The bots are also ruthless. Human players often get killed trying to save their buddies. Bots aren’t so stupid.
Or are they?
Let’s divide games into two categories: those that are fundamentally single-player, such as chess, fencing[1], or wrestling, and those that are fundamentally team events, such as football (either variety), rugby, or Dota 2. I want to set the former category aside, because it lacks the facet I find interesting here, focus on the latter category, and concentrate on that statement: Bots aren’t so stupid.
Let’s consider the differences between the players in Dota 2. On one side, you have human players. They have lives before and after the game. These lives may include further interactions with their team members, often in another game. Their activities during this game are focused not only on winning this game, but on being invited to play the next one. Humans are social creatures who value interaction and group membership, and to keep those, they must show that they will work for the good of the group as well as for the win. Think about the “hot-dogging” member of a football or basketball squad. They may be supremely talented and skilled, even considered the key member of the team – but if they aren’t willing to do the down ‘n dirty stuff that such team competitions often require, and insist on hogging the limelight, they will find themselves ousted from the team, despite their value. By being willing to take a risk to rescue a teammate, a team member isn’t just playing to win today, but to be part of tomorrow’s game.
On the other side, you have “bots,” computer programs which run autonomously. Perhaps cleverly programmed, or perhaps trained (as in this case) on thousands of renditions of a game. But bots? Bots are not social creatures. That’s not yet part of any AI I’ve ever heard of. They do not plan for anything beyond the end of the game.
They literally have no future.
So there are two strategies going on here. One side plays this game, with no concept of anything outside it, while the other side is playing not only for this game, but for future games – and for social membership.
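If you want that difference in more concrete terms, here’s a toy simulation. The numbers are entirely made up and it has nothing to do with how OpenAI actually trains its bots; it just sketches a player who always abandons a teammate in trouble (the one-shot strategy) against one who takes the risky rescue, in a world where being invited to future games depends on how you treated your teammates.

```python
# Toy sketch – assumed, illustrative numbers only, not measured from Dota 2
# or OpenAI Five. Compares a "one-shot" player who always abandons a
# teammate with a "social" player who attempts the risky rescue, when
# future invitations depend on how teammates were treated.

import random

GAMES_IN_SEASON = 1000
P_WIN_ABANDON = 0.55          # assumed: abandoning maximises this game's win chance
P_WIN_RESCUE = 0.45           # assumed: the rescue attempt costs some win probability
P_DROPPED_IF_ABANDON = 0.10   # assumed: chance the team stops inviting you afterwards

def season(rescues: bool) -> int:
    """Total wins over a season for a player with the given habit."""
    wins = 0
    invited = True
    for _ in range(GAMES_IN_SEASON):
        if not invited:
            break  # no more games for a player nobody wants on the team
        if rescues:
            wins += random.random() < P_WIN_RESCUE
        else:
            wins += random.random() < P_WIN_ABANDON
            if random.random() < P_DROPPED_IF_ABANDON:
                invited = False
    return wins

random.seed(0)
print("one-shot player:", season(rescues=False), "wins")
print("social player:  ", season(rescues=True), "wins")
```

With these (made-up) numbers, the one-shot player wins a bit more often per game but is dropped from the roster within a dozen games or so, while the social player keeps getting invited back and ends the season far ahead. The bots are playing the first strategy; the humans are playing the second.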
The subtle differences in tactics appear to tell an important tale.
1 Yes, there are fencing team events of various formats, such as relay, but in the end each constituent bout is still a one-on-one encounter.