The basics of self-driving — Distance estimation
There is no such thing as a self-driving car. At least, not yet. Our new series of blogs The basics of self-driving examines the fundamental challenges to tackle before we reach autonomy.
Not hitting anything
The path to self-driving is paved by ever-increasing driving automation. While current automated driving solutions target completely different goals, no matter the use case they must be safe.
The core of safety while in motion? Not hitting anything. Thus, one of the most fundamental issues of automating any driving task is perception, and within it, the ability to judge the distance of other objects and actors around the vehicle.
The whole problem may seem trivial. Several traditional solutions can be used to measure distance: the history of radars, ultrasonic sensors, and cameras can be measured in decades. Not only are the resolution and information density of these sensors ever-increasing, but new technologies have also been developed to face the challenge. Driving automation has become a poster child for LiDAR sensors.
However, this doesn’t mean that we’ve solved the problem. In road transportation, things change quickly: vehicles move close to each other at high speeds, and adverse lighting and weather conditions must all be considered. Each sensor has its own strengths and limitations, as Peter Kozma, from our Sensors and Data Team, discussed at length in his blog.
The solution is a precise and robust sensor setup based on redundant and varied software and hardware solutions. Such a setup includes multiple sensors with different operating principles and relies on as few a priori assumptions as possible.
Old meets new — Active sensors
Radars, LiDARs, and ultrasonic sensors all rely on the same physical principle to detect distance and speed: electromagnetic waves, sound waves, or laser light are emitted, and the backscattered signal is compared to the original. The limitations of these sensor types are tied to the signals they emit. To mention only one for each: radars have difficulty detecting objects with low reflectivity; snow, rain, and even fog can cause false alarms for LiDARs; and ultrasonic sensors are limited by the speed of sound, and thus offer exact measurements only at lower speeds.
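The shared principle is time-of-flight: the emitted signal travels to the target and back, so distance is half the round-trip time multiplied by the propagation speed. A minimal sketch (the function name and example timings are illustrative, not from any particular sensor):

```python
# Time-of-flight distance: d = (wave speed * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0   # m/s -- radar, LiDAR
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 degrees C -- ultrasonics

def tof_distance(round_trip_s: float, wave_speed: float) -> float:
    """Distance to a target from the round-trip time of an emitted pulse."""
    return wave_speed * round_trip_s / 2.0

# A LiDAR return after 200 ns corresponds to roughly 30 m.
print(round(tof_distance(200e-9, SPEED_OF_LIGHT), 2))  # 29.98
# An ultrasonic echo after 20 ms corresponds to roughly 3.4 m.
print(round(tof_distance(20e-3, SPEED_OF_SOUND), 2))   # 3.43
```

The speed-of-sound figure also shows why ultrasonics are confined to low-speed, short-range use: a 30 m echo would take about 175 ms to return, during which a highway-speed vehicle travels several meters.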
Just a cursory glance at these limitations underlines the need for a varied sensor setup. At AImotive we’ve put a lot of effort into developing a range of solutions for camera-based distance estimation. Naturally, each of these has its respective weaknesses, as all solutions do, but they can be effectively used alongside other sensors to increase the safety of automated driving solutions.
Another option — Cameras
Stereo camera setups, for example, are an accepted method for distance calculation based on triangulation. This limits the need for a priori assumptions, as triangulation is based purely on geometric principles. However, the solution is extremely sensitive to calibration errors. Another approach relies on a single camera and artificial intelligence to estimate depth. However, this is limited by the difficulty of validating AI for automotive use cases and by the need for large amounts of carefully compiled, representative data, not only for training but also for validation.
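For a rectified stereo pair, triangulation reduces to depth = focal length × baseline / disparity. The sketch below (rig parameters are hypothetical) also illustrates the calibration sensitivity mentioned above: at long range, a single pixel of disparity error shifts the estimate by more than a meter.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, 30 cm baseline, 15 px disparity.
print(stereo_depth(1000.0, 0.3, 15.0))           # 20.0 (meters)
# A one-pixel calibration error (14 px instead of 15) moves the
# estimate to about 21.4 m -- a 7% error at this range.
print(round(stereo_depth(1000.0, 0.3, 14.0), 1))  # 21.4
```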
Other methods rely on some form of a priori assumption or knowledge. If an object is recognized and its measurements are known, this data can be used to calculate its distance. This method corresponds to a virtual stereo setup with a baseline equal to the object’s size. Another option is to create a virtual stereo rig with a baseline equal to the camera’s height above the road. While this solution can be extremely precise, it requires exact road geometry data and is extremely sensitive to camera pitch estimation errors, especially at long range.
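The known-size method follows directly from the pinhole camera model: the object's apparent width in pixels shrinks in proportion to its distance. A minimal sketch, with an assumed focal length and a typical car width standing in for the a priori knowledge:

```python
def distance_from_known_size(focal_px: float,
                             real_width_m: float,
                             image_width_px: float) -> float:
    """Pinhole model: Z = f * W / w, where W is the object's known
    real-world width and w is its measured width in the image."""
    if image_width_px <= 0.0:
        raise ValueError("image width must be positive")
    return focal_px * real_width_m / image_width_px

# A car assumed to be 1.8 m wide, spanning 90 px at 1000 px focal length.
print(distance_from_known_size(1000.0, 1.8, 90.0))  # 20.0 (meters)
```

The weakness is visible in the formula itself: if the assumed width is off (a narrow city car versus a wide SUV), the distance estimate is off by the same proportion.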
A third set of solutions relies on changes in object size, such as proportionality or scale change. For certain use cases these are extremely precise, but proportionality only works for objects leaving the sensor range, as it is based on measurements completed by the sensors themselves. Scale change is only useful at short range but can, for example, be usefully integrated into collision avoidance systems.
Finally, the laws of physics offer solutions for camera-based distance estimation as well. The intensity of light decreases quadratically with distance; thus, if intensity is measured at two distinct points, the exact distance can be calculated. This approach requires no a priori knowledge and is based entirely on the laws of physics, making it simple to integrate. However, it is sensitive to viewing angles and lens characteristics, so certain dynamic characteristics have to be adjusted on the fly.
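One way to see how two intensity readings yield a distance: if the same light source is measured from two points a known baseline apart along the line of sight, the inverse-square law gives a ratio of intensities that depends only on distance and baseline. This is a simplified sketch of the principle (idealized point source, no lens or angle effects; names and numbers are illustrative):

```python
import math

def distance_from_intensity(i_near: float, i_far: float, baseline_m: float) -> float:
    """Distance to a point source from two intensity readings taken
    baseline_m apart along the line of sight. Inverse-square law:
    i_near / i_far = ((d + b) / d)^2, so d = b / (sqrt(i_near / i_far) - 1)."""
    ratio = math.sqrt(i_near / i_far)
    if ratio <= 1.0:
        raise ValueError("near reading must be brighter than far reading")
    return baseline_m / (ratio - 1.0)

# Intensity drops by a factor of 1.21 over a 1 m baseline:
# sqrt(1.21) = 1.1, so the source is 1 / 0.1 = 10 m away.
print(round(distance_from_intensity(1.21, 1.0, 1.0), 6))  # 10.0
```

In practice, as the text notes, viewing angle and lens vignetting modulate the measured intensity, which is why those characteristics must be compensated dynamically.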
Getting the measurement right
Naturally, there are still more camera-based solutions, such as SLAM or distance measurement with two active light sources. The descriptions above are meant only to showcase the range of options engineers have to make distance estimations based on camera images. However, these are all optimized for different use cases, and different combinations of these methods, with different sensor types, should be deployed for different operational domains.
No one has found the killer self-driving sensor setup for all use cases yet. The solution lies in combining different sensor types and methodologies, supported by robust fusion that accounts for the strengths and weaknesses of each sensor. Even then, validation remains a challenge. It may be enough for an ADAS system to reach 99% performance in its existence hypotheses and other measurements. For autonomy, we need 99.9 (and your choice of how many further nines) percent accuracy. And that’s a lot harder to achieve…