RISK: How do you program ethics into a driverless car?
MIT Media Lab's Moral Machine forces us to consider whose lives we would prioritize
But before driverless cars take to the roads in large numbers, automakers and insurers need to figure out what they will do when an accident is inevitable. Will they prioritize passengers’ lives? Pedestrians? Animals? Babies? The elderly? Pregnant women? What if the car must choose between killing a cyclist and injuring all of its passengers?
And then there’s the matter of individual choice. Will each automaker pre-program all their driverless cars with the same set of ethics and priorities, or will owners get to choose?
As executives, insurers and philosophers debate these questions, and as we wait for the technology to evolve, we can test our own moral fibre with MIT Media Lab’s Moral Machine.
“The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb,” the Moral Machine website states. “This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.”
Users are given 13 situations in which they decide who is saved and who is killed when a driverless car’s brakes fail: passengers, pedestrians, cats, dogs, women, men, doctors, executives, homeless people, athletes and large people.
At the end of the “judgement” section, the Moral Machine compares your preferences to those of the average user. “Our goal is not to judge anyone, but to help the public reflect on important and difficult decisions,” the website states in a disclaimer.
So give it a go, and let us know what you think.
Image: MIT Media Lab’s Moral Machine