That's a valid point about framing it as a thought experiment to evaluate ethics versus the problem-solving approach I chose to take. I won't argue with that.
However, I also pondered whether the Moral Machine might have another underlying, possibly more interesting agenda -- to identify imaginative thinkers. Or cheaters. Either works in this case.
That may seem far-fetched, especially given the short batch of sample scenarios presented per session. A much longer session would be needed, with some scenarios repeated a few times with minor variations.
Basically, that's what most employment application tests are, especially for retail and warehouse jobs -- tests to identify cheaters and thieves. The test designers know most applicants will try to game the test and answer in ways they think the employer wants to hear. So the same questions are rephrased many times, and the order and sequence of scenarios leading up to the basic "Are you gonna steal from us?" question are shuffled as well. The tests begin with questions like "Would you take a pen home from work?" and gradually move toward "Would you report a co-worker whom you saw taking a ninety-nine cent loaf of bread home a few days before payday, knowing they were out of money and planned to pay for or replace the bread after payday?" The sequencing is designed to gradually break down a normal person's facade and get at their most likely real-world responses to moral and ethical dilemmas. The testers aren't interested in whether you might offer to buy the bread for the hungry person, or whether the hungry person intended to repay the employer after payday. They only want to know whether you would ever steal or condone theft, regardless of the complicating details.
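A rough sketch of how that consistency check might work, in Python. The scenario "families", the sample answers, and the 0.75 flag threshold are all made up for illustration here, not anything an actual test vendor publishes:

```python
from collections import defaultdict

# Hypothetical data: each response is (scenario_family, answer), where a
# "family" groups rephrasings of the same underlying question -- e.g. the
# pen, the loaf of bread, and the direct "would you steal" item.
responses = [
    ("petty_theft", "no"),
    ("petty_theft", "yes"),   # same question, rephrased later in the test
    ("report_coworker", "yes"),
    ("report_coworker", "yes"),
    ("petty_theft", "no"),
]

def consistency_scores(responses):
    """Fraction of answers in each family matching that family's most
    common answer. Low scores suggest gaming or guessing."""
    by_family = defaultdict(list)
    for family, answer in responses:
        by_family[family].append(answer)
    scores = {}
    for family, answers in by_family.items():
        top = max(set(answers), key=answers.count)
        scores[family] = answers.count(top) / len(answers)
    return scores

for family, score in consistency_scores(responses).items():
    flag = "  <-- inconsistent" if score < 0.75 else ""
    print(f"{family}: {score:.2f}{flag}")
```

An applicant answering honestly should land near 1.0 in every family; someone guessing at the "right" answer tends to slip somewhere across the rephrasings.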
That approach is too outdated for developing AI, though. Before people can be persuaded to accept self-driving cars as a standard alternative to human drivers, they'll want to be reassured that the AI can not only react more quickly than a human, free of unreliable emotions, but will also "think" like a rational human. And the vehicle should be equipped for maximum occupant safety.
In other words, the employment-test equivalent would be blurring the concept of theft itself. Suppose an employment application test also included a sequence of questions such as:
- Have you ever received an income tax refund?
- Did you know the income tax refund is a return of money you overpaid to the government, which the government borrowed for a year and then returned to you without paying interest to you, the lender?
- Have you ever been late in paying taxes?
- Did the IRS waive the late penalties and interest charges?
- Have you ever begun work a few minutes early or worked a few minutes late, off the clock or without being paid or otherwise compensated?
- Have you ever calculated the total hours you've worked without pay or compensation?
- Have you ever been docked pay or warned that you might be fired for being late to work?
- Now... will you steal from us?
Employment application tests don't include such scenarios because they blur a line that the test designers would rather keep sharply drawn.
But gaining acceptance of self-driving cars, particularly in the US, will require confronting and dealing with such complications and gray areas.
For example, a more realistic scenario might go something like this: given the usual mix of vehicle occupants and pedestrians presented elsewhere in the test, and knowing that striking the pedestrians would probably kill them while crashing the vehicle into the barrier would probably result in only minor injuries to the occupants, which would you choose?
Then rephrase the same scenario later in the test, this time with the test-taker's family in the vehicle and all but one of the vulnerable pedestrians recast as reprehensible characters. If mommy is in the vehicle along with her kidlets, and she's a helicopter parent and anti-vaxxer, and she just made the last payment on her car a few months ago and now carries only liability insurance rather than full coverage, and she knows that of the six people in the crosswalk one is her pediatrician while the other five are a homeless woman, a drug dealer, a couple of addicts and a mime, well... I'm betting mommy will be shopping for a new pediatrician soon.
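If someone wanted to generate those paired variants systematically, a rough Python sketch might look like this. The `Scenario` fields and the character labels are my own invention, not the Moral Machine's actual schema:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    """One swerve-or-stay dilemma. Fields are hypothetical."""
    occupants: tuple         # who is in the vehicle
    pedestrians: tuple       # who is in the crosswalk
    barrier_outcome: str     # likely result of swerving into the barrier
    pedestrian_outcome: str  # likely result of staying the course

base = Scenario(
    occupants=("adult", "adult"),
    pedestrians=("adult",) * 6,
    barrier_outcome="minor injuries to occupants",
    pedestrian_outcome="pedestrians killed",
)

# Later in the test, the same dilemma reappears with the stakes shifted:
# the test-taker's family inside, and all but one pedestrian unsympathetic.
variant = replace(
    base,
    occupants=("mother", "child", "child"),
    pedestrians=("pediatrician", "homeless woman", "drug dealer",
                 "addict", "addict", "mime"),
)

for s in (base, variant):
    print(s.occupants, "vs", s.pedestrians)
```

Keeping the dilemma constant and swapping only the cast is what would make the repeated presentations comparable.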
The Moral Machine does allow visitors to design their own test scenarios. I might give that a try and see what happens.