Uber’s Pilot Test of Self-Driving Cars and the Point of Automation

A reporter from The Verge tested one of the self-driving vehicles that the ride-sharing company Uber began piloting in Pittsburgh. As he describes it in a gripping article, the experience was both thrilling and mundane. It also included a few hair-raising moments, such as a pedestrian appearing out of nowhere in front of the car.

Smart as it is, the computer did order the car to brake behind an SUV that was not moving. Still, the self-navigating system did not understand the other motorist's gestures to drive around his vehicle; human intervention was needed. The system would also unexpectedly return the car to human control. Without access to the car's logs, however, it is not possible to know why.

For its pilot test, Uber placed two trained employees in each car, one behind the wheel and the other in the passenger seat. As the technology is still in its infancy, human supervision is needed. This is no less of a landmark for it: for the first time, a taxi service is offering rides in self-driving cars to passengers who explicitly opt in.

Which brings us to the question we want to address: what is the point of automation if it requires a pilot and a co-pilot? Surely, this is only for the initial period. Still, as The Verge's writer notes, there are three dimensions to this issue: technology, social acceptance, and regulation.

When it comes to technology, the system is still immature: it does not interpret human body language, and it may be ill-equipped (to put it softly) to deal with unexpected behavior that would not surprise a reasonably experienced driver, if only because humans can anticipate the conduct of their kin. Regulation is another major question: what type of insurance will cover robots, and who bears the liability arising from their decisions?

But the essential barrier is social acceptance, and for good reason. The point of a self-driving car is to free up motorists' time for other things. Instinctively, however, few people would trust a robotic car to guide itself while they fiddle with their mobile phones or watch a movie during the ride.

We simply do not trust self-driving cars because we do not trust robots to make decisions that require assessing human behavior and conscience, beyond calculation and mechanics. Cars are fast-moving vehicles that mostly travel through the streets and roads of densely populated areas. There is no doubt that automation will take over vehicular traffic at some point. That, however, will only happen when robots have satisfactorily shown themselves to be reliable partners of humans on the road, able to predict our habits, whether at the wheel or crossing the street.
