In October last year, the Robot in the Dilbert cartoon strip said that its kin were moving ahead with a conspiracy to lock humans in metal cages called "self-driving cars." The Robot then warned, ominously, "For the first ten years you'll just think traffic is bad." Indeed, a Google self-driving car was involved in a crash with a bus in California. The car had detected sandbags on the road ahead and, expecting the bus to slow down or stop and let it pass, moved to go around them. Instead, the bus driver behaved in a way the self-driving car, unfamiliar with human driving patterns, did not anticipate. While Google accepted some responsibility for the collision, it qualified that admission, saying the accident would not have occurred had the bus driver not acted "unpredictably." In other words, in a human-robot confrontation, the robot will always be on the more logical side. But is it also the right side?