What Is Sensor Fusion?

Sensor fusion is the ability to bring together inputs from multiple radars, lidars and cameras to form a single model or image of the environment around a vehicle. The resulting model is more accurate because it balances the strengths of the different sensors. Vehicle systems can then use the information provided through sensor fusion to support more-intelligent actions.

Each sensor type, or “modality,” has inherent strengths and weaknesses. Radars are very good at accurately determining distance and speed, even in challenging weather conditions, but they can’t read street signs or “see” the color of a stoplight. Cameras do very well at reading signs and classifying objects, such as pedestrians, bicyclists or other vehicles, but they can easily be blinded by dirt, sun, rain, snow or darkness. Lidars can accurately detect objects, but they can’t match the range or affordability of cameras and radars.

Sensor fusion brings the data from each of these sensor types together, using software algorithms to provide the most comprehensive, and therefore accurate, environmental model possible. It can also correlate data pulled from inside the cabin, through a process known as interior and exterior sensor fusion.
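To make the idea concrete, here is a minimal sketch of one classic fusion technique: inverse-variance weighting, which combines measurements of the same quantity from different sensors according to how uncertain each one is. The sensor values, uncertainties and the fuse_estimates helper below are hypothetical illustrations, not Aptiv's production algorithms.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Inverse-variance weighted fusion of independent 1-D estimates.

    Each sensor reports a measurement (mean) and its uncertainty
    (variance). The fused estimate weights each sensor by how much it
    can be trusted, so the combined result is at least as certain as
    the best single sensor.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_mean = np.sum(weights * means) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_mean, fused_variance

# Hypothetical numbers: radar is precise on range, camera depth less so.
# Radar reports the object at 50.2 m (sigma = 0.3 m); the camera's
# depth estimate says 48.5 m (sigma = 2.0 m).
dist, var = fuse_estimates([50.2, 48.5], [0.3**2, 2.0**2])
print(f"fused distance: {dist:.2f} m (sigma = {var**0.5:.2f} m)")
```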

Sensor fusion can also combine information from multiple sensors of the same type, such as radar. Doing so improves perception by taking advantage of partially overlapping fields of view: as multiple radars observe the environment around a vehicle, more than one sensor will often detect the same object at the same time. Interpreted through global 360° perception software, detections from those sensors can be overlapped, or fused, which increases the probability and reliability with which objects around the vehicle are detected and yields a more accurate representation of the environment.
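Under the simplifying assumption that each radar misses a target independently, the benefit of overlapping fields of view can be quantified: an object goes undetected only if every sensor misses it. The sketch below illustrates that arithmetic with hypothetical per-sensor detection probabilities.

```python
def fused_detection_probability(per_sensor_probs):
    """Probability that at least one sensor detects the object,
    assuming each sensor's misses are independent.

    An object is missed only if every sensor misses it, so
    p_fused = 1 - product(1 - p_i).
    """
    p_miss_all = 1.0
    for p in per_sensor_probs:
        p_miss_all *= (1.0 - p)
    return 1.0 - p_miss_all

# Hypothetical example: two radars with overlapping fields of view,
# each detecting a given object 90% of the time on a single scan.
# Either radar alone misses 10% of the time; together they miss ~1%.
print(fused_detection_probability([0.9, 0.9]))  # ~0.99
```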

Low-level sensor fusion

Of course, the more sensors on a vehicle, the more challenging fusion becomes, but the more opportunity there is to improve performance. To tap into these benefits, Aptiv uses a technique called low-level sensor fusion.

In the past, the processing power to analyze sensor data and detect and track objects has been packaged with the cameras or radars themselves. With Aptiv’s Satellite Architecture approach, that processing power is centralized in a more powerful active safety domain controller, allowing low-level sensor data to be collected from each sensor and fused in the domain controller.
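As a rough illustration of that division of labor, the sketch below models satellite sensors streaming raw detections to a central controller that fuses them by timestamp. The RawDetection and DomainController names, fields and 50 ms association window are hypothetical, chosen for illustration; they are not Aptiv's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class RawDetection:
    """One low-level detection forwarded by a satellite sensor,
    before any object-level tracking has been done on it."""
    sensor_id: str      # which radar or camera produced it
    timestamp: float    # capture time, in seconds
    x: float            # position in the vehicle frame, in meters
    y: float
    snr: float          # signal quality, usable as a fusion weight

class DomainController:
    """Central point where low-level data from every sensor lands."""

    def __init__(self) -> None:
        self.buffer: list[RawDetection] = []

    def ingest(self, det: RawDetection) -> None:
        # Satellite sensors stream raw detections here instead of
        # tracking objects locally on their own hardware.
        self.buffer.append(det)

    def fuse_frame(self, t: float, window: float = 0.05) -> list[RawDetection]:
        # Gather all detections captured close to time t; a real
        # system would then cluster and track them centrally.
        return [d for d in self.buffer if abs(d.timestamp - t) <= window]
```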

Moving the processing to a domain controller results in sensors that take up less volume and less mass — up to 30 percent less. For comparison, the footprint of a camera is reduced from the size of a deck of playing cards to the size of a pack of chewing gum. By keeping sensors as small as possible, OEMs have more options in vehicle packaging.

Another benefit is increased data sharing. With traditional systems, smart sensors process environmental inputs independently, which means any decision made with that information is only as good as what the individual sensor can see. With Satellite Architecture, however, all of the data coming from the sensors is shared centrally, giving active safety applications in the domain controller more opportunity to make use of it. Aptiv can even apply artificial intelligence (AI) tools to extract useful information that would otherwise be discarded; the right AI can learn from that data, helping to solve the challenging corner cases our customers face.

A third benefit of low-level sensor fusion is reduced latency. The domain controller doesn’t have to wait for the sensor to process data before acting upon it. This can help speed performance in situations where even fractions of a second count.

More data leads to better decisions. By embracing a vehicle architecture that allows for a high number of sensors and then synthesizes the data through sensor fusion, vehicles can become smarter, faster.