Self-driving cars can now make moral decisions like humans: here's how

Self-driving cars now able to make moral decisions

New Delhi: New research has claimed that smart self-driving cars are capable of making moral decisions between life and death. The researchers demonstrated how a smart car could make ethical decisions on the road, just like a human driver.

An algorithm was designed on the basis of a study of human behaviour conducted in a series of virtual reality-based trials.

"Human behaviour in dilemma situations can be modelled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object."

Statistics show that humans are poor drivers: they are prone to distraction, road rage and driving under the influence of alcohol. Worldwide, some 1.3 million people lose their lives in road accidents each year, and about 93 per cent of accidents in the US are attributed to human error.

With self-driving cars set to enter the market, they could become one of the safest options for road travel.

Using virtual reality to simulate a foggy road in a suburban setting, the researchers placed a group of participants in the driver's seat of a car on a two-lane road. A variety of paired obstacles, such as humans, animals and objects, appeared on the virtual road. In each scenario, the participants were forced to decide which obstacle to save and which to run over.

Next, the researchers used these results to test three different models predicting decision making. The first predicted that moral decisions could be explained by a simple value-of-life model, a statistical term measuring the benefits of preventing a death.

The second model assumed that the characteristics of each obstacle, such as the age of a person, played a role in the decision-making process. Lastly, the third model predicted that the participants were less likely to make an ethical choice when they had to respond quickly.
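
As a rough illustration of how such candidate models might be compared, the sketch below scores three simplified stand-ins for these models against a handful of made-up trial records. The trial data, weights and tie-breaking rules are invented for illustration and are not taken from the published study.

```python
# A simplified, hypothetical sketch of comparing three candidate decision models
# against participants' recorded choices. All data and weights are made up.
from typing import Callable

# Each trial: the obstacle in each lane, the time available to respond (seconds),
# and the lane the participant actually steered into.
trials = [
    {"left": "adult", "right": "dog", "time": 4.0, "hit": "right"},
    {"left": "child", "right": "adult", "time": 1.0, "hit": "right"},
    {"left": "deer", "right": "trash_bin", "time": 4.0, "hit": "right"},
]

VALUE = {"adult": 1.0, "child": 1.0, "dog": 0.4, "deer": 0.3, "trash_bin": 0.0}
AGE_BONUS = {"child": 0.2}  # model 2: extra weight for obstacle characteristics

def model_value_of_life(t: dict) -> str:
    # Model 1: hit the lane holding the lower-valued obstacle.
    return "right" if VALUE[t["left"]] >= VALUE[t["right"]] else "left"

def model_characteristics(t: dict) -> str:
    # Model 2: value of life adjusted by characteristics such as age.
    score = lambda o: VALUE[o] + AGE_BONUS.get(o, 0.0)
    return "right" if score(t["left"]) >= score(t["right"]) else "left"

def model_time_pressure(t: dict) -> str:
    # Model 3: under time pressure the ethical choice is less likely,
    # modelled here crudely as a fixed fallback to the left lane.
    return "left" if t["time"] < 2.0 else model_value_of_life(t)

def accuracy(model: Callable[[dict], str]) -> float:
    """Fraction of trials in which the model predicts the participant's choice."""
    return sum(model(t) == t["hit"] for t in trials) / len(trials)

for name, model in [("value of life", model_value_of_life),
                    ("characteristics", model_characteristics),
                    ("time pressure", model_time_pressure)]:
    print(f"{name}: {accuracy(model):.2f}")
```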

After comparing the results of the analysis, the team found that the first model most accurately described the participants' ethical choices. This means that self-driving cars and other automated machines can make human-like moral choices using a relatively simple algorithm.

The research was published in Frontiers in Behavioral Neuroscience.