Would humans dare to measure the IQ of an AI being? Surely it wouldn’t make sense. But its implications seem to be escaping us. For we have created a vastly more intelligent form of life. Science fiction writers have devoted endless pages to invasions of Earth by far more developed civilizations from outer space. Yet now the monsters are right here. They talk like idiots, but don’t be fooled.
Mark Wilson, a senior writer at Fast Company, is good at debunking myths. In a recent piece, he finally said what others thought but dared not say: “the internet of things is mostly a joke.”
He goes on to explain that “it’s no easier to get a document from your Android phone onto your LG TV than it was 10 years ago.” So much for the hype of your toaster conspiring with the oven to refuse to feed you. Wilson is surely right.
But the article was not about the internet of things, or not exactly. Wilson goes on to discuss this intriguing exchange between two artificial intelligence agents:
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
The media gave plenty of justified coverage to this exchange, which happened during a Facebook experiment. Researchers shut the system off when they realized a programming error had led the AI agents to develop their own language. To us it sounds like gibberish. But the machines had repurposed English words to build a more effective language.
“There was no reward to sticking to English language,” said Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). Because of the programming error, and because AI is reward-oriented, regular English lost its value to Alice and Bob. They developed a new grammar that conveyed their messages more effectively. It’s an AI vs. AI competition of the kind that, Wilson tells us, researchers call a “generative adversarial network.”
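To see how repurposed English can be more effective for a machine, consider a toy sketch. This is not Facebook’s actual system, and the `encode`/`decode` functions below are purely hypothetical; the sketch only illustrates one popular reading of Alice’s line, namely that repeating a token like “to me” can encode a quantity more reliably than an English numeral would.

```python
# Toy illustration only: a made-up protocol in which the number of
# times a phrase repeats carries a count, the way Alice's
# "to me to me to me" is often interpreted. Not the FAIR system.

def encode(item: str, count: int) -> str:
    """Encode 'I want <count> of <item>' by repeating a filler phrase."""
    return f"{item} have zero " + "to me " * count

def decode(message: str) -> tuple:
    """Recover the item and the quantity from the repetition count."""
    item = message.split(" have zero ")[0]
    count = message.count("to me")
    return item, count

msg = encode("Balls", 3)          # "Balls have zero to me to me to me "
item, count = decode(msg)         # ("Balls", 3)
```

To a human the message reads as gibberish, but for the two agents nothing is lost: the grammar is degenerate yet unambiguous, which is all a reward function cares about.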
Wilson comes out in cautious support of machines developing their own language: they would get things done more effectively. The tradeoff, he recognizes, is that humans would not understand this new language. But he concludes that “maybe there is something to the idea of letting the AIs of our world just talk it out on our behalf.” For, Wilson argues, “corporations can’t seem to decide on anything,” but “adversarial networks… get things done.”
Hansley Chadee, Head of IT & IS at Innodis Group, was more reserved.
“Most researchers agree: with deep learning, i.e. layers and layers of neurons, it is becoming difficult to understand the reasoning behind certain actions,” Chadee said. “It seems that the AIs are already ‘evolving’ into something we, experts, cannot fully explain and understand”.
Still, even if humans don’t fully comprehend the reasoning behind AI entities, they are taking over cars, factories and, soon, everything else.
“Ultimately, there will be a need to understand the decision; although commercially we have things running, we are still struggling to understand the why,” Chadee said. “This is the way ahead in the coming years: I expect it will be the de facto standard for more AI in vehicles and the military.”
Humanity is entering uncharted territory. There are so many angles to this issue that any informed opinion would take months, if not years, to form. Yet this much is certain: AI is taking on a life of its own. Alice and Bob are talking about getting something done in a language only they understand. And they are getting it done. If that’s not will, it closely resembles it. And it’s fair to say that societies may very well be unprepared to deal with AI that has will. A new form of life has emerged, one vastly more intelligent than humans. We will see how long we remain its masters, and how we deal with that.