How to Qualitatively Improve AI Algorithms in Autonomous Vehicles?

In spite of the considerable time, energy, and computational power spent on improving Autonomous Vehicles (AVs), we are still far away from a truly self-driving car, i.e. level 5 autonomy. If you follow the news about AVs, you have most probably seen quite a few examples of AVs from famous brands giving up and returning control to a human driver, in particular in the busy streets of populated cities, where traffic laws are not followed very closely and there is a huge amount of uncertainty in predicting how other drivers might behave at any given moment.
The current approaches to the AV problem generally rely on data-driven methods and machine learning, as well as methods rooted in control theory and dynamical systems. These are all relevant and important sets of tools that have been essential in bringing AVs to the state they are in now. But there is one area of science whose results have been conspicuously absent from the current discourse about AVs, and that, of course, is neuroscience.

Insights from Neuroscience to Improve Self-Driving Cars
The exact mechanisms of decision-making in our brains are an active area of research, and in spite of all the advances that have been made, we are still far away from a clear understanding of them. Yet there is no doubt that our brains have a remarkable ability to filter out unnecessary information from the incoming stream of sensory input we receive from the environment, which in turn allows us to outperform the existing AI algorithms used in AVs. To better understand the point, you can look at this video:
This is the outcome of one of the recordings I did in my lab at the Max Planck Institute for Biological Cybernetics. The participant was asked to look at this recording on the screen while his gaze behaviour was recorded with an eye-tracking system and his brain activity was recorded with an EEG system. The yellow circles you see are the fixation points. We can see that even on such a busy and chaotic street, we still find time to read the billboards. We should keep in mind, though, that this is the gaze behaviour of somebody watching prerecorded dash-cam footage, not the gaze behaviour of the actual driver. I will come back to that shortly.
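For readers curious how fixation points like the yellow circles are obtained, below is a minimal sketch of dispersion-based fixation detection (I-DT), a standard way of turning raw eye-tracker samples into fixations. The dispersion and duration thresholds here are illustrative assumptions, not the parameters used in my recordings.

```python
import numpy as np

def detect_fixations(t, x, y, max_dispersion=30.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    t    : sample timestamps in seconds (1-D numpy array)
    x, y : gaze position in pixels (1-D numpy arrays)
    max_dispersion : max (x-range + y-range) in pixels within a fixation
    min_duration   : minimum fixation duration in seconds
    Returns a list of (t_start, t_end, centroid_x, centroid_y).
    """
    fixations = []
    start, n = 0, len(t)
    while start < n:
        # Grow the window until it spans at least min_duration.
        end = start
        while end < n and t[end] - t[start] < min_duration:
            end += 1
        if end >= n:
            break
        window = slice(start, end + 1)
        disp = (x[window].max() - x[window].min()) + (y[window].max() - y[window].min())
        if disp <= max_dispersion:
            # Extend the window while the samples stay spatially compact.
            while end + 1 < n:
                w = slice(start, end + 2)
                d = (x[w].max() - x[w].min()) + (y[w].max() - y[w].min())
                if d > max_dispersion:
                    break
                end += 1
            window = slice(start, end + 1)
            fixations.append((t[start], t[end], x[window].mean(), y[window].mean()))
            start = end + 1
        else:
            # First sample is not part of a fixation; move on.
            start += 1
    return fixations
```

The idea is that gaze samples which stay within a small spatial window for long enough are grouped into a single fixation, and it is the centroid of each such group that gets drawn on the video.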
Hopefully, one day we will have a thorough understanding of the processes in the brain that allow us to focus our attention on what is truly important for performing a task, even if the task is as complicated as driving a vehicle. But what neuroscientists have already discovered, including in my own work, is mature enough to be used in practice, as an addition to existing methods and as a means of improving them qualitatively. In order to do so, we can perform a series of relatively simple experiments.
These experiments would involve different recording modalities; justifications for each can be found in the descriptions of my other work in neuroscience. We need to set up a virtual environment in which a human driver will drive a vehicle. We should measure pupil diameter (pupillometry), which will serve as a proxy for locus coeruleus noradrenergic (LCNE) activity; on this page, I have explained how LCNE activity modulates our brain connections when we face uncertainty in the environment. We also need to measure gaze behaviour using an eye-tracking system; you can read more about that on the page describing my work on visual perception. And we need to record EEG signals to capture changes in cortical activity, in particular in the occipital, somatosensory, and prefrontal cortices.
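Since the three modalities are sampled at different rates, a practical first step in any such analysis is to bring them onto a common timeline. Below is a minimal sketch of that alignment using linear interpolation; the sampling rates and the placeholder signals are assumptions for illustration, not the actual acquisition setup.

```python
import numpy as np

# Hypothetical recordings: each modality arrives with its own timestamps.
eeg_t = np.arange(0.0, 60.0, 1.0 / 500.0)           # EEG sampled at 500 Hz
eeg = np.random.randn(len(eeg_t), 64)                # 64 channels (placeholder data)

pupil_t = np.arange(0.0, 60.0, 1.0 / 120.0)          # eye tracker at 120 Hz
pupil = 3.0 + 0.2 * np.random.randn(len(pupil_t))    # pupil diameter in mm

gaze_t = pupil_t                                     # gaze shares the eye-tracker clock
gaze_x = 960 + 200 * np.random.randn(len(gaze_t))    # horizontal gaze in pixels

# Resample the slower signals onto the EEG timeline so that every sample
# of cortical activity has a matching pupil and gaze value.
pupil_on_eeg = np.interp(eeg_t, pupil_t, pupil)
gaze_x_on_eeg = np.interp(eeg_t, gaze_t, gaze_x)

aligned = np.column_stack([eeg_t, pupil_on_eeg, gaze_x_on_eeg])
print(aligned.shape)  # (30000, 3): one row per EEG sample
```

With the signals on one clock, one can then ask, for example, how pupil-linked LCNE activity co-varies with cortical activity around each fixation.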
These experiments would be a first step towards combining the existing AV algorithms with the results of many decades of research by neuroscientists, and I do believe they will lead to a qualitative improvement in the performance of AVs.
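To make the kind of combination I have in mind a little more concrete, here is one deliberately simplified, hypothetical sketch: recorded human fixations are turned into a spatial attention map that re-weights a camera frame before it reaches an existing perception network. This illustrates the general idea only; it is not an algorithm from any specific AV stack.

```python
import numpy as np

def gaze_attention_map(fixations, height, width, sigma=50.0):
    """Build a 2-D attention map from fixation centroids.

    fixations : iterable of (x, y) pixel coordinates
    sigma     : spatial spread of each fixation in pixels (assumed value)
    Returns an array in [0, 1] that is high near fixated regions.
    """
    yy, xx = np.mgrid[0:height, 0:width]
    attention = np.zeros((height, width))
    for fx, fy in fixations:
        attention += np.exp(-((xx - fx) ** 2 + (yy - fy) ** 2) / (2 * sigma ** 2))
    return attention / attention.max()

# Hypothetical usage: emphasize human-fixated regions in a camera frame
# before handing it to an existing perception network.
frame = np.random.rand(720, 1280, 3)        # placeholder camera frame
fixations = [(640, 360), (900, 300)]        # fixation points from the eye tracker
attn = gaze_attention_map(fixations, 720, 1280)
weighted_frame = frame * attn[..., None]    # down-weight unattended pixels
```

The design choice here is simply to let human gaze act as a prior over what is worth processing, which is exactly the filtering ability of the brain that the experiments above are meant to characterize.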