Artificial Intelligence: The Italics Are Mine

In January 2015, Stuart Russell, a computer science professor at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach, the standard textbook in the field, was joined by Cambridge physicist Stephen Hawking and billionaire Elon Musk, of Tesla and SpaceX fame, in signing an open letter titled “Research Priorities for Robust and Beneficial Artificial Intelligence.”

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the letter says. “The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.”

The paragraph concludes with a polite call to caution: “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

The title and the language are bland enough that the letter fades from memory shortly after reading. It seems written by government officials rather than scientists.

And yet it is in that last quote on potential pitfalls that the gist of the signatories’ concerns is buried. There, and in another, more explicit sentence, obscured by the bureaucratese that precedes it:

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose [the italics are ours].

Research Priorities for Robust and Beneficial Artificial Intelligence

With this understated language, the researchers were calling for regulating AI before we unleash the full force of this massive technological leap on humanity. This has been compared to the call by the scientists who developed the atomic bombs dropped on Japan in 1945 to rein in the potential of that capability before it turned on the entire world.

The call by the nuclear scientists went unheeded, bequeathing us the world we have today, with the genie out of the bottle in places like North Korea, Pakistan, and, soon, Iran. Whatever your political or religious convictions, it is safe to assume that most people in the world would not want to live in a hereditary dictatorship where starvation always seems to be around the corner because money is being diverted to weapons development or other unholy uses, or in a country where members of religious minorities can be stoned to death by mobs infuriated over trifles they find offensive, or where everyone is required to adhere to premedieval religious restrictions. In other words, you would not want these weapons to end up in the hands of regimes that do not respect your individual freedoms and rights and, most vitally, your life.

Yet there is something more insidious about AI. It is not centrally controlled by a government or organization, unlike the effort to build nuclear weapons. It spreads in the same omnipresent and omnivorous fashion that all technologies do, and it is omniscient in its own right.

And while the benefits of speeding up processes and making life easier in so many ways are plain to see, let us focus our attention on what can go wrong. In the pithy language of the open letter’s signatories, the keyword is “neutral.”

The last few years have offered plenty of examples of how AI applied to crime (hacking) or warfare (drones) can cause real harm. Much vaunted as it is, neutrality is—no pun intended—a neutral value: it cuts both ways. At the risk of quoting out of context, we may want to recall what the writer Elie Wiesel, a Holocaust survivor, said about it:

We must take sides. Neutrality helps the oppressor, never the victim.

Elie Wiesel

If the premise of capitalism can be summed up as the “ever-evolving search for profit” regardless of all other considerations—a somewhat exaggerated statement, as any extrapolation is—then unregulated AI in an open society could make for a very toxic and dangerous combination, harmful in as yet unpredictable ways.

There is nothing to be said when AI works to a T. But what about when you get bogged down in an AI-run system that keeps responding to your requests with error messages, or acts up in ungovernable ways? Imagine for a moment that there were no human backup for customer assistance.

Mind you, we are talking about “human backup” for artificial workers. Walk into any Amazon Go, the no-checkout, no-lines store opened by the e-commerce giant in New York and other cities, and you may encounter an employee who is there to assist the customer in case anything in the non-human transaction goes wrong. Make no mistake: he or she is helping the invisible AI power that is running the store. Without any restraints, it takes only a few qualitative leaps for this technology, and the environment being built around it, to go from non-human to inhuman. If human beings, and all sentient life in the world, are one day demonstrated to be no more than an extremely complex combination of tissue, fluids, and the chemical reactions they trigger, technology may progress to the point of replicating persons, or human-like entities as unique as humans. This prediction, anticipated by Alan Turing, considered the father of AI, moves further out of the realm of science fiction with each passing day.

Taken to the extreme that a theoretical no-holds-barred society would allow: if machines were able to replace the entire human workforce, magnifying profits exponentially for employers (machine employers?), how would the wheels of the economy turn? Freed from the chains of labor, how exactly would humans, left to their own devices, live?

In a recent article, author Cory Doctorow spoke of the Luddites, a secret society that destroyed textile machinery in the mills of England between 1811 and 1816. The term, he said, is now a pejorative for “backwards, anti-technology reactionaries.”

Yet, as Doctorow points out, “the Luddites weren’t exercised about automation.” They, he says, “didn’t mind the proliferation of cheap textiles.”

They were fighting over “the social relations governing the use of the new machines.” The new machines could have massively increased productivity and efficiency while still paying the workers well. “Yet the owners of the factories—whose fortunes had been built on the labor of textile workers—chose to employ fewer workers, working the same long hours as before, at a lower rate than before, and pocketed the substantial savings.”

There is no need to be a Marxist to agree with Doctorow that there is “nothing natural about this arrangement.” As he says, a “Martian watching the Industrial Revolution unfold through the eyepiece of a powerful telescope could not tell you why the dividends from these machines should favor factory owners, rather than factory workers.”

The powers that be—never underestimate the power of the state—may yet prevent the Luddite wars to come. In the meantime, we can all do our part by telling the full story.