Researchers at Cornell University have developed a way for self-driving cars to “remember” past experiences and use them to guide future driving, especially in bad weather, when sensor readings can’t be trusted.
No matter how many times they’ve driven down a certain road, cars that rely on artificial neural networks have no memory of the past. They see the world for the first time, every time.
To get around this restriction, the researchers have produced three concurrent papers, two of which are being presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), taking place June 19–24 in New Orleans.
The key question, according to Kilian Weinberger, senior author and professor of computer science, is whether we can learn from repeated traversals. For instance, a car’s laser scanner might at first mistake a strangely shaped tree for a pedestrian while it is far away, but once the car is close enough, the object’s category becomes obvious. The hope is that the second time the car drives past that same tree, even in snow or fog, it will recognize it correctly.
To create the dataset, a team led by doctoral student Carlos Diaz-Ruiz drove a car equipped with LiDAR (Light Detection and Ranging) sensors repeatedly along a 15-kilometer loop in and around Ithaca, 40 times over an 18-month period. The traversals capture varied environments (highway, urban, campus), weather conditions (sunny, rainy, snowy), and times of day. The resulting dataset contains more than 600,000 scenes.
According to Diaz-Ruiz, the dataset deliberately highlights one of the major challenges for self-driving cars: bad weather. When the street is covered in snow, people can rely on their memories, but neural networks, lacking any such memory, struggle.
HINDSIGHT is a method that uses neural networks to compute descriptors of objects as the car drives past them. It then compresses these descriptions, which the team calls SQuaSH (Spatial-Quantized Sparse History) features, and stores them on a virtual map, much like a “memory” kept in a human brain.
The next time the self-driving car traverses the same location, it can query the local SQuaSH database of all the LiDAR points along the route and “remember” what it learned the previous time. The database is continually updated and shared across vehicles, enriching the information available for recognition.
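The write-on-pass, query-on-return idea can be sketched in a few lines. The snippet below is a toy illustration, not the published HINDSIGHT code: the voxel size, the `SpatialMemory` class, and the two-number descriptors are all hypothetical stand-ins for the learned SQuaSH features.

```python
from collections import defaultdict

VOXEL_SIZE = 2.0  # meters; hypothetical quantization resolution


def voxel_key(point, size=VOXEL_SIZE):
    """Map a 3D world coordinate to a sparse voxel index."""
    x, y, z = point
    return (int(x // size), int(y // size), int(z // size))


class SpatialMemory:
    """Toy sparse spatial memory: stores descriptors per voxel,
    returns their average on lookup."""

    def __init__(self):
        self._store = defaultdict(list)  # voxel key -> list of descriptors

    def write(self, point, descriptor):
        """Record a descriptor computed at this location on a past drive."""
        self._store[voxel_key(point)].append(descriptor)

    def query(self, point):
        """Return the averaged past descriptor near this location, or None."""
        descs = self._store.get(voxel_key(point))
        if not descs:
            return None
        dim = len(descs[0])
        return [sum(d[i] for d in descs) / len(descs) for i in range(dim)]


# Past traversal in clear weather: store descriptors for a roadside object.
memory = SpatialMemory()
memory.write((10.3, 4.1, 0.5), [0.9, 0.1])  # e.g. "looks like a tree"
memory.write((10.8, 4.6, 0.7), [0.7, 0.3])

# Current traversal, same spot in fog: query the memory for a hint.
hint = memory.query((10.5, 4.4, 0.6))
```

A real system would replace the averaged vectors with learned, compressed features and the dictionary with a map shared between vehicles, but the lookup pattern is the same: quantize the position, fetch what was seen there before.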
Any LiDAR-based 3D object detector can consume this information as features, said PhD student Yurong You. The detector and the SQuaSH representation can be trained jointly, without additional supervision or labor- and time-intensive human annotation.
The team is currently working on MODEST (Mobile Object Detection with Ephemerality and Self-Training), a project that takes HINDSIGHT a step further by letting the vehicle learn the entire perception pipeline from scratch.
Whereas HINDSIGHT assumes the car’s artificial neural network is already trained to detect objects and merely augments it with the ability to build memories, MODEST assumes the network has never been exposed to any objects or streets at all. By repeatedly traversing the same routes, it can work out which parts of the environment are stationary and which are moving, gradually learning what constitutes other traffic participants and what can safely be ignored.
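The core cue, that things seen in the same place on every drive are background while things that come and go are likely traffic participants, can be sketched as a simple persistence check over repeated traversals. This is an illustrative sketch of the idea only; the voxel size, threshold, and function names below are assumptions, not the MODEST implementation.

```python
from collections import Counter

VOXEL = 1.0  # meters; hypothetical grid resolution
PERSISTENCE_THRESHOLD = 0.8  # hypothetical: fraction of drives a voxel must appear in


def to_voxels(points, size=VOXEL):
    """Quantize a LiDAR scan (list of 3D points) into a set of voxel indices."""
    return {(int(x // size), int(y // size), int(z // size)) for x, y, z in points}


def split_static_ephemeral(traversals):
    """Label voxels occupied in most traversals as static background;
    the rest are ephemeral, i.e. candidate mobile objects."""
    counts = Counter()
    for scan in traversals:
        counts.update(to_voxels(scan))
    n = len(traversals)
    static = {v for v, c in counts.items() if c / n >= PERSISTENCE_THRESHOLD}
    ephemeral = set(counts) - static
    return static, ephemeral


# Three drives past the same block (toy data):
drive1 = [(0.2, 0.2, 0.0), (5.1, 3.0, 0.5)]  # building corner + parked car
drive2 = [(0.3, 0.1, 0.0)]                   # building again; the car is gone
drive3 = [(0.1, 0.3, 0.0), (9.5, 2.0, 0.4)]  # building + a passing cyclist

static, ephemeral = split_static_ephemeral([drive1, drive2, drive3])
```

The building corner lands in the same voxel on every drive and is classified as static; the parked car and the cyclist each appear only once and fall into the ephemeral set. In a self-training pipeline, such ephemeral clusters would serve as free pseudo-labels for mobile objects, with no human annotation required.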
The self-driving system can then detect these objects reliably, even on roads that were not part of the initial traversals. The researchers say the approaches could dramatically reduce the development costs of autonomous vehicles, which currently still depend heavily on costly human-annotated data, and make such vehicles more efficient by letting them learn to navigate the locations in which they are used most.