Researchers develop self-driving car model: Here's how it works

Representational Image

New Delhi: Good news! A team of Indian-American researchers has come up with a novel model that uses human input to uncover Artificial Intelligence (AI) "blind spots" in self-driving cars so that vehicles can avoid accidents on the road.

Researchers at MIT, working in collaboration with Microsoft, developed the model to identify instances in which autonomous systems have "learned" from training examples that don't match what is actually happening in the real world.

With the help of this model, engineers could redesign their systems to improve the safety of AI applications such as driverless vehicles and autonomous robots.

“The model helps autonomous systems better know what they don’t know,” said first author Ramya Ramakrishnan of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

"Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors," explained Ramakrishnan.

The AI systems powering driverless cars are trained extensively in virtual simulations to prepare the vehicle for nearly every event on the road. But sometimes the car makes an unexpected error in the real world because an event occurs that should alter the car's behaviour but doesn't.

The researchers have verified their methods using video games, with a simulated human correcting the learned path of an on-screen character.
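The idea can be pictured with a toy example. The sketch below is a rough illustration only, not the researchers' actual method: it rolls out a policy trained in a simulator that never contained sirens, records where a simulated human corrects it, estimates a blind-spot probability for each state from those corrections, and falls back to cautious behaviour in the flagged states. Every state feature, policy and threshold here is hypothetical.

```python
# Illustrative sketch (not the authors' implementation): gather human corrections
# during simulated rollouts, label corrected states as potential "blind spots",
# and use the estimated risk to fall back to cautious behaviour at deployment.
import random
from collections import defaultdict

random.seed(0)

# Hypothetical discrete state features: (obstacle_ahead, siren_audible)
STATES = [(o, s) for o in (0, 1) for s in (0, 1)]

def trained_policy(state):
    """Policy learned in simulation: slows only for visible obstacles,
    because the simulator never contained sirens."""
    obstacle_ahead, _siren = state
    return "slow" if obstacle_ahead else "drive"

def human_feedback(state, action):
    """Simulated human monitor: corrects the action whenever a siren is
    audible but the policy keeps driving."""
    _obstacle, siren = state
    if siren and action == "drive":
        return "slow"          # correction -> evidence of a blind spot
    return action              # acceptable action -> no correction

# 1) Roll out the trained policy and record which states drew corrections.
correction_counts = defaultdict(lambda: [0, 0])   # state -> [corrections, visits]
for _ in range(1000):
    state = random.choice(STATES)
    action = trained_policy(state)
    corrected = human_feedback(state, action) != action
    correction_counts[state][0] += int(corrected)
    correction_counts[state][1] += 1

# 2) Estimate a per-state blind-spot probability from the feedback.
blind_spot_prob = {
    s: corrections / visits
    for s, (corrections, visits) in correction_counts.items()
}

# 3) At deployment, act cautiously in states flagged as likely blind spots.
def deployed_policy(state, threshold=0.5):
    if blind_spot_prob.get(state, 0.0) > threshold:
        return "slow"          # cautious fallback in a predicted blind spot
    return trained_policy(state)

for s in STATES:
    print(s, "blind-spot prob:", round(blind_spot_prob.get(s, 0.0), 2),
          "-> action:", deployed_policy(s))
```

In this sketch the threshold simply controls how conservative the deployed system is: lower values trade speed for safety by treating more states as blind spots.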

They now plan to incorporate the model, along with human feedback, into traditional training and testing approaches for autonomous cars and robots.

Julie Shah, an associate professor in MIT's Department of Aeronautics and Astronautics and head of CSAIL's Interactive Robotics Group, is a co-author of the paper, along with Ece Kamar, Debadeepta Dey and Eric Horvitz of Microsoft.

"When the system is deployed into the real world, it can use learned model to act more cautiously and intelligently," said Ramakrishnan.