Originally Posted by
Darth Lefty
I still want to know what any of you think about my idea that the self-driving car's programmer has an obligation to save his customers, rather than the strangers in their path.
Seems clear. The supposition is that every individual, including the AI driving the Google-Uber, will try to keep its passengers alive. One cannot make decisions for the other vehicles, and one cannot assume the operators of those other vehicles would do anything except try to survive ... so the AI car needs to use the same rationale.