In a first, an AI taught itself to play a video game, and is beating humans

Since the earliest days of virtual chess and solitaire, video games have been a playing field for developing artificial intelligence (AI). Each victory of machine over human has helped make algorithms smarter and more efficient. But to tackle real-world problems – such as automating complex tasks like driving and negotiating – these algorithms must navigate environments far more complex than board games, and learn teamwork. Teaching AI to work and interact with other players in order to succeed had seemed an insurmountable task – until now.

In a new study, researchers detailed a way to train AI algorithms to reach human levels of performance in a popular 3D multiplayer game – a modified version of Quake III Arena in Capture the Flag mode.

Even though the task of this game is straightforward – two opposing teams compete to capture each other’s flags by navigating a map – winning demands complex decision-making and an ability to predict and respond to the actions of other players.

This is the first time an AI has attained human-like skills in a first-person video game. So how did the researchers do it?

The robot learning curve

In 2019, several milestones in AI research were reached in other multiplayer strategy games. Five “bots” – players controlled by an AI – defeated a professional e-sports team in a game of DOTA 2. Professional human players were also beaten by an AI in a game of StarCraft II. In both cases, a form of reinforcement learning was applied, whereby the algorithm learns by trial and error, interacting with its environment.
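The trial-and-error loop behind reinforcement learning can be sketched with tabular Q-learning, its simplest form. The toy corridor environment, state count, and hyperparameters below are illustrative assumptions for the sketch, not the systems used in the studies above, which rely on far larger neural-network-based variants:

```python
import random

# Minimal sketch of tabular Q-learning on a toy 5-state corridor.
# The agent starts at state 0 and earns a reward of 1 for reaching state 4.
# All names and parameters here are illustrative, not from the cited studies.

N_STATES = 5
ACTIONS = [-1, +1]               # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic environment: clamp to the corridor, reward at the end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one (the "trial and error").
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update toward the observed reward.
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# After training, the greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

The agent is never told the rules; it discovers that moving right pays off purely from the rewards it stumbles into, which is the same principle the game-playing systems scale up.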

Read Complete Article

Google’s Duplex uses AI to mimic humans, but only sometimes

On a recent afternoon at the Lao Thai Kitchen restaurant, the telephone rang and the caller ID read “Google Assistant.” Jimmy Tran, a waiter, answered the phone. The caller was a man with an Irish accent hoping to book a dinner reservation for two on the weekend. This was no ordinary booking. It came through Google Duplex, a free service that uses artificial intelligence to call restaurants and — mimicking a human voice — speak on our behalf to book a table. The feature, which had a limited release about a year ago, recently became available to a larger number of Android devices and iPhones.

The Irish-accented voice sounded eerily human. When asked whether he was a robot, the caller immediately replied, “No, I’m not a robot,” and laughed.

“It sounded very real,” Tran said in an interview after hanging up the call with Google. “It was perfectly human.”

Google later confirmed, to our disappointment, that the caller had been telling the truth: He was a person working in a call center. The company said that about 25 percent of calls placed through Duplex started with a human, and that about 15 percent of those that began with an automated system had a human intervene at some point.

We tested Duplex for several days, calling more than a dozen restaurants, and our tests showed a heavy reliance on humans. Among our four successful bookings with Duplex, three were done by people. But when calls were actually placed by Google’s artificially intelligent assistant, the bot sounded very much like a real person and was even able to respond to nuanced questions.

In other words, Duplex, which Google first showed off last year as a technological marvel using AI, is still largely operated by humans. While AI services like Google’s are meant to help us, their part-machine, part-human approach could contribute to a mounting problem: the struggle to decipher the real from the fake, from bogus reviews and online disinformation to bots posing as people.

Read Complete Article