
Many of us wouldn't fully trust a completely autonomous, self-driving car to make an emergency stop, avoid a collision, or do whatever else it's supposed to do when our own safety is on the line. Scientists want to prove us wrong and convince us that AI can be trusted. They have developed a much better understanding of deep neural networks, a branch of artificial intelligence, and can now use that knowledge to improve autonomous systems that "perceive, learn, decide and act on their own," according to researchers in Oregon.
Now, eight computer science professors in Oregon State University's College of Engineering will use a $6.5 million grant from the Defense Advanced Research Projects Agency (DARPA) to make artificial-intelligence-based systems, including autonomous vehicles and robots, something people can trust. As the scientists explain it, "in deep learning the computer program learns on its own from many examples. Potential dangers arise from depending on a system that not even the system developers fully understand." Their goal now is to get the AI program to explain to humans why and how it reached a specific decision.
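To make the idea of an "explanation capability" concrete, here is a minimal, hypothetical sketch, not the OSU/DARPA system, of a decision routine that reports how much each input feature pushed it toward its final choice. The feature names, weights, and braking scenario are invented for illustration only.

```python
# Hypothetical sketch of a decision that explains itself: a tiny linear
# classifier reports each feature's contribution to the final score.
import numpy as np

# Invented feature names for a toy "should the car brake?" decision.
FEATURES = ["distance_to_object", "relative_speed", "object_size"]

# Fixed example weights; a real system would learn these from data.
weights = np.array([-1.5, 2.0, 0.8])
bias = -0.2

def decide_and_explain(x):
    """Return a brake/continue decision plus per-feature contributions."""
    contributions = weights * x              # how much each feature adds to the score
    score = contributions.sum() + bias
    decision = "brake" if score > 0 else "continue"
    # Rank features by how strongly they influenced the decision.
    explanation = sorted(
        zip(FEATURES, contributions), key=lambda kv: abs(kv[1]), reverse=True
    )
    return decision, explanation

decision, explanation = decide_and_explain(np.array([0.4, 1.2, 0.5]))
print("decision:", decision)
for name, value in explanation:
    print(f"  {name}: {value:+.2f}")
```

A deep neural network's decision process is far harder to unpack than this linear example, which is precisely the problem the Oregon State researchers aim to address.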
Alan Fern, principal investigator for the grant and associate director of the College of Engineering's recently established Collaborative Robotics and Intelligent Systems Institute, said the research is crucial to the advancement of autonomous and semi-autonomous intelligent systems. "Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust, and having an explanation capability is one important way of building trust," he said.