A Tesla Model S crashed last May while on Autopilot, killing its driver. While the car’s self-driving feature has earned favorable reviews, it does have its glitches. In this case, Autopilot failed to spot a white tractor trailer against “a brightly lit sky.” If past experience is any guide, a long legal fight will probably ensue to determine liability. The novelty here, however, is that one of the parties to the incident is a piece of software.

As Patrick Lin discusses in his article for Forbes, disclaimers may not exempt Tesla from responsibility for the crash. The self-driving feature is still in beta testing, and drivers agreed to use it at their discretion while monitoring traffic. Is it wise to activate a feature still in beta testing on real roads? Is that a sign of inexperience from a very new carmaker such as Tesla? General Motors, for its part, has refused to beta-test an autopilot feature in real driving. This would not be the first case in which a fatality resulting from a product malfunction opens the door to debate. Simply put, no matter what the fine print says, the product failed. That is so regardless of the arguments lawyers for the plaintiff and the defendant will put forth if the case reaches court.

But a new legal dimension is opening up. Who is responsible when a device or system that lacks human consciousness and will makes life-or-death decisions? Would drivers have to configure an “ethics setting” profile that determines the autopilot’s responses to the universe of potential hazards? Most importantly, the advantages of autopilot in emergency situations are easy to imagine. But if autopilot is used to free drivers up to engage in distractions, the peril will fall not only on them but also on other motorists and pedestrians who never signed up for it.
