Facebook, Tesla want to read your mind: here’s why you should be worried

Not content with monitoring almost everything you do online, Facebook now wants to read your mind as well. The social media giant recently announced a breakthrough in its plan to create a device that reads people’s brainwaves to allow them to type just by thinking. And Elon Musk wants to go even further. One of the Tesla boss’s other companies, Neuralink, is developing a brain implant to connect people’s minds directly to a computer. Musk admits that he takes inspiration from science fiction and that he wants to make sure humans can “keep up” with artificial intelligence. He seems, however, to have missed the part of sci-fi that serves as a warning about the implications of technology.

These mind-reading systems could affect our privacy, security, identity, equality and personal safety. Do we really want all that left to companies with philosophies such as that of Facebook’s former mantra, “move fast and break things”?

Though they sound futuristic, the technologies needed to make brainwave-reading devices are not that dissimilar to the standard MRI (magnetic resonance imaging) and EEG (electroencephalography) neuroscience tools used in hospitals all over the world. You can already buy a kit to control a drone with your mind, so using one to type out words is, in some ways, not that much of a leap. The advance will likely come from using machine learning to sift through huge quantities of data collected from our brains and find the patterns in neuron activity that link thoughts to specific words.

Read Complete Article
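To make that last idea concrete, here is a minimal, purely illustrative sketch of the pattern-finding approach the article describes. The vocabulary, feature count, and simulated signals are all assumptions for the example; real systems would train far more sophisticated models on actual EEG or implant recordings.

```python
# Hypothetical sketch: mapping brain-signal features to words.
# Real systems use recorded neural data; here we simulate feature
# vectors so the example is self-contained.
import numpy as np

rng = np.random.default_rng(0)
WORDS = ["yes", "no", "hello"]   # assumed tiny vocabulary
DIM = 16                         # assumed number of signal features

# Simulate training data: each word has a characteristic activity
# pattern plus noise, standing in for recorded neural features.
centers = rng.normal(size=(len(WORDS), DIM))
X = np.vstack([c + 0.3 * rng.normal(size=(50, DIM)) for c in centers])
y = np.repeat(np.arange(len(WORDS)), 50)

# "Training": learn the mean pattern per word (a nearest-centroid
# model, one of the simplest pattern-finding techniques).
learned = np.vstack([X[y == k].mean(axis=0) for k in range(len(WORDS))])

def decode(signal):
    """Return the word whose learned pattern is closest to the signal."""
    dists = np.linalg.norm(learned - signal, axis=1)
    return WORDS[int(np.argmin(dists))]

# Decode a fresh noisy "thought" of the word "no".
sample = centers[1] + 0.3 * rng.normal(size=DIM)
print(decode(sample))
```

The real engineering challenge is that genuine neural signals are far noisier and higher-dimensional than this toy setup, which is why the huge quantities of data the article mentions matter.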

Google’s Duplex uses AI to mimic humans, but only sometimes

On a recent afternoon at the Lao Thai Kitchen restaurant, the telephone rang and the caller ID read “Google Assistant.” Jimmy Tran, a waiter, answered the phone. The caller was a man with an Irish accent hoping to book a dinner reservation for two on the weekend. This was no ordinary booking. It came through Google Duplex, a free service that uses artificial intelligence to call restaurants and — mimicking a human voice — speak on our behalf to book a table. The feature, which had a limited release about a year ago, recently became available to a larger number of Android devices and iPhones.

The voice of the Irish man sounded eerily human. When asked whether he was a robot, the caller immediately replied, “No, I’m not a robot,” and laughed.

“It sounded very real,” Tran said in an interview after hanging up the call with Google. “It was perfectly human.”

Google later confirmed, to our disappointment, that the caller had been telling the truth: He was a person working in a call center. The company said that about 25 percent of calls placed through Duplex started with a human, and that about 15 percent of those that began with an automated system had a human intervene at some point.

We tested Duplex for several days, calling more than a dozen restaurants, and our tests showed a heavy reliance on humans. Among our four successful bookings with Duplex, three were done by people. But when calls were actually placed by Google’s artificially intelligent assistant, the bot sounded very much like a real person and was even able to respond to nuanced questions.

In other words, Duplex, which Google first showed off last year as a technological marvel using AI, is still largely operated by humans. While AI services like Google’s are meant to help us, their part-machine, part-human approach could contribute to a mounting problem: the struggle to decipher the real from the fake, from bogus reviews and online disinformation to bots posing as people.

Read Complete Article