A recent study at the University of Tokyo suggests that robotic eyes on autonomous vehicles could improve pedestrian safety. In a virtual reality (VR) experiment, participants had to decide whether or not to cross a road in front of a moving vehicle. When that vehicle was equipped with robotic eyes that either looked at the pedestrian (registering their presence) or away (not registering them), the participants were able to make safer or more efficient decisions.
Autonomous vehicles appear to be coming soon. Whether they will be used for delivering packages, preparing fields for planting, or transporting children to school, a great deal of research is underway to bring this once-futuristic idea to life.
While the practical aspects of building vehicles that can autonomously navigate the world are the main focus for many, researchers at the University of Tokyo have turned to a more “human” concern of self-driving technology. The interaction between self-driving cars and their surroundings, such as pedestrians, has not been adequately studied. According to Professor Takeo Igarashi of the Graduate School of Information Science and Technology, more research into this interaction is needed in order to give society safety and assurance regarding self-driving cars.
One significant difference with self-driving cars is that either no one is behind the wheel at all, or the driver has become more of a passenger and may not be paying full attention to the road. This makes it difficult for pedestrians to determine whether a vehicle has noticed them, because there may be no eye contact or other cues from those inside.
So how might pedestrians be alerted when a self-driving car has seen them and is going to stop? To find out, the researchers outfitted a self-driving golf cart with two large, remote-controlled robotic eyes, like a character from the Pixar film Cars, and dubbed it the “gazing car.” They wanted to test whether placing moving eyes on the cart would affect whether people, when pressed for time, would still cross the road in front of a moving vehicle.
The group created four scenarios: two in which the cart had eyes and two in which it did not. In each pair, the cart had either noticed the pedestrian and was going to stop, or had not noticed them and was going to keep driving. When the cart had eyes, they would either be looking at the pedestrian (about to stop) or looking away (not going to stop).
Since it would obviously be dangerous to ask participants to decide whether or not to walk in front of a moving vehicle in real life (though a hidden driver controlled the cart during filming), the team recorded the scenarios using 360-degree video cameras, and the 18 participants (nine women and nine men, aged 18 to 49, all Japanese) experienced the experiment in virtual reality. After going through the scenarios multiple times in random order, each participant was given three seconds to decide whether or not they would cross the road in front of the cart. The researchers recorded their choices and calculated the error rates: crossing when they should have waited, and waiting when they could have crossed.
Project Lecturer Chia-Ming Chang, a member of the research team, said that the findings “suggested a clear difference between genders, which was very surprising and unexpected. While other factors like age and background might have also influenced the participants’ reactions, we believe this is an important point, as it shows that different road users may have different behaviors and needs that require different communication methods in our future self-driving world.”
“In this study, the male participants frequently chose to cross the road in the risky situation (i.e., when the car was not stopping), but these mistakes were reduced by the cart’s eye gaze. For them, the safe situations did not differ significantly,” said Chang. On the other hand, the cart’s eye gaze helped the female participants make fewer inefficient decisions (e.g., choosing not to cross when the car was about to stop), while for them the unsafe situations were not very different. In the end, the experiment demonstrated that the eyes made crossing smoother or safer for everyone.
But how did the participants feel about the eyes? Some found them cute, while others found them spooky or frightening. Many male participants said the situation felt more dangerous when the cart’s eyes were turned away, and many female participants reported feeling safer when the eyes were fixed on them. “Due to financial constraints, we only built the simplest version possible,” explained Igarashi. “It would be preferable in the future to have a qualified product designer find the best design, but it would probably still be difficult to please everyone. Personally, I like it. It’s kind of adorable.”
The team acknowledges that this study is limited by its small number of participants acting out just one scenario, and that decisions made in virtual reality may differ from those made in the real world. But switching from manual to automatic driving is a significant change, and if eyes can actually improve safety and reduce traffic accidents, the researchers argue, we should seriously consider adding them. According to Igarashi, the robotic eyes will eventually be controlled automatically by the self-driving AI rather than manually, which will allow them to adapt to various situations. “I hope that this research will encourage other groups to test similar ideas, anything that makes it easier for autonomous vehicles and pedestrians to get along, which will save lives in the long run.”