The Moral Machine & Autonomous Vehicles

http://moralmachine.mit.edu/

I thought this was a really interesting exercise, and it definitely demonstrated the importance of and need for the social sciences, philosophy, and the humanities when implementing autonomous vehicles and trying to understand the possible implications this new technology has on society. I started off by reading the instructions and browsing the different scenarios involved, and I decided that I was going to take a Utilitarian approach: I would always choose to save as many lives as possible and would value all human lives equally (e.g., I wouldn't change the direction of the car if the same number of people were affected, or based on gender, age, etc.).
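For what it's worth, the rule I tried to follow can be written down as a tiny decision procedure. Below is a minimal sketch in Python of that rule only; the scenario representation (reducing each outcome to a count of lives lost for "stay" vs. "swerve") and the function name are my own illustrative assumptions, not anything taken from the Moral Machine itself.

```python
# Minimal sketch of the utilitarian rule I tried to apply (illustrative only).
# Assumption: each outcome is reduced to a single number, the count of lives lost.

def choose_action(lives_lost_if_stay: int, lives_lost_if_swerve: int) -> str:
    """Pick the action that loses fewer lives; on a tie, avoid intervening."""
    if lives_lost_if_swerve < lives_lost_if_stay:
        return "swerve"  # intervening saves more lives
    return "stay"        # equal or worse outcome from swerving, so don't intervene

# Example: 2 pedestrians ahead vs. 1 in the other lane -> swerve
print(choose_action(lives_lost_if_stay=2, lives_lost_if_swerve=1))  # "swerve"
# Example: same number of people on both sides -> stay (no intervention)
print(choose_action(lives_lost_if_stay=1, lives_lost_if_swerve=1))  # "stay"
```

Of course, the whole point of the exercise is that real scenarios can't be reduced to a single number this cleanly, which is exactly where it got uncomfortable.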

This project definitely made me feel uncomfortable, and it was difficult to make a decision even with a Utilitarian approach and mindset going into this exercise. Compared to others who have taken this quiz, saving more lives, upholding the law, a preference for higher social value, avoiding intervention, and a preference for humans over other species were the most important aspects of my decision making. I think this is in line with my Utilitarian approach, but I was often tempted to change my decision based on whether or not avoiding intervention was really the "best" approach. Even though this exercise made me feel uncomfortable, I thought it was very interesting because it emphasizes just how important it is to understand and appreciate the ethical implications of this new technology, and how extremely difficult it can be to design how autonomous vehicles should act.

Brian Donnelly