Delphi’s Autonomous Driving: Playing by the Rules with AI

The development of autonomous driving is changing the transportation technology landscape in a manner that has been compared to the “Wild West” and the “next frontier.” 

Pick your analogy, but one thing is clear: the opportunities in autonomous driving are wide open, even in a crowded field. Many companies are taking different approaches to creating the first automated-driving system capable of being mass-produced. In the meantime, the rules are being written as technological concepts become reality.

The bottom line: there aren’t any established rules in the race to fully autonomous driving. 

Nevertheless, the Delphi Centralized Sensing, Localization and Planning (CSLP) platform relies on pre-determined, generalized rules for basic safe operation and uses artificial intelligence (AI) to solve for the optimal path. That means instructions have been coded into an algorithm – a set of rules the car follows – thereby creating a vehicle capable of making decisions using AI.

There isn’t a rule for every situation, at least for now, because all the testing hasn’t been completed. By anyone. That is also why fleets of autonomous vehicles are scattered around the globe: they are collecting data. That data is then used to catalog the myriad situations a vehicle may encounter – and, subsequently, how it should safely react.

How a vehicle reacts to a situation depends on what it sees and hears through its sensors. There are three types of sensors – radar, vision (cameras) and LiDAR. Some companies use only one type, but Delphi’s CSLP relies on all three.

By fusing each sensor’s inputs, the Delphi autonomous driving system gains the highest confidence in the vehicle’s surroundings. Why receive and fuse inputs from every sensor? Because each has its strengths:

  • Radar is largely unaffected by weather;
  • LiDAR provides highly accurate range information; and
  • Vision provides accurate object classification.

By combining all three, the system can generate the most comprehensive view of what’s around the vehicle, adding redundancy, safety and confidence.
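
Delphi hasn’t published the internals of its fusion pipeline, so the following is only a minimal sketch of the general idea, with every name, weight and number invented for illustration: detections of the same object from each sensor are combined, trusting LiDAR most for range and vision most for classification, with radar as a weather-robust backstop.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "radar", "lidar", or "vision"
    range_m: float     # estimated distance to the object, in meters
    label: str         # the sensor's best guess at the object type
    confidence: float  # the sensor's self-reported confidence, 0..1

# Hypothetical trust weights reflecting each sensor's strength: LiDAR is
# trusted most for range, vision most for classification.
RANGE_WEIGHT = {"lidar": 0.6, "radar": 0.3, "vision": 0.1}
LABEL_WEIGHT = {"vision": 0.7, "lidar": 0.2, "radar": 0.1}

def fuse(detections: list[Detection]) -> tuple[float, str, float]:
    """Fuse detections of one object into (range, label, confidence)."""
    # Range: weighted average, favoring the sensors best at ranging.
    range_norm = sum(RANGE_WEIGHT[d.sensor] * d.confidence for d in detections)
    fused_range = sum(
        RANGE_WEIGHT[d.sensor] * d.confidence * d.range_m for d in detections
    ) / range_norm

    # Label: a vote weighted by classification trust and confidence.
    votes: dict[str, float] = {}
    for d in detections:
        votes[d.label] = votes.get(d.label, 0.0) + LABEL_WEIGHT[d.sensor] * d.confidence
    fused_label = max(votes, key=votes.get)

    # Agreement across independent sensors raises overall confidence.
    fused_confidence = votes[fused_label] / sum(votes.values())
    return fused_range, fused_label, fused_confidence

print(fuse([
    Detection("radar", 42.5, "vehicle", 0.80),
    Detection("lidar", 41.9, "vehicle", 0.95),
    Detection("vision", 43.0, "vehicle", 0.90),
]))
```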

AI is used primarily in the vision space, while machine learning is used to improve object classification and recognition. What’s machine learning? This concept derives from the idea that a computer can learn without being programmed with precise instructions on how to react.

In order to do so, algorithms must be “trained” to recognize what’s around the vehicle. Machine learning relies on what’s called a “neural net” – so named because it’s designed to behave like the human brain – which lives onboard the vehicle and classifies objects in real time. From there, the vehicle can “follow” specific rules. It’s complicated stuff, and that’s part of the reason a hybrid approach – combining rule-based AI with machine learning – helps autonomous vehicles “drive” more like a human. In certain cases, such as a red light or another vehicle stopped in the roadway, it is important that the vehicle always stops. But in others, such as when a plastic bag is blowing across the street, it is better for the vehicle to recognize that the object is not an obstruction and that it’s safe to continue forward.
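
As a loose illustration of that “learning” idea (this is not Delphi’s system; the features, labels and numbers are all invented for the example), the toy perceptron below is never given a rule for what counts as an obstruction. It infers one from labeled examples alone:

```python
import random

random.seed(0)

# Invented training data: ([apparent_height_m, motion_jitter], label),
# where 1 = obstruction (e.g., a stopped car) and 0 = ignorable (e.g., a bag).
training_data = [
    ([1.5, 0.1], 1), ([1.4, 0.0], 1), ([1.6, 0.2], 1),   # car-like objects
    ([0.3, 0.9], 0), ([0.2, 1.0], 0), ([0.4, 0.8], 0),   # bag-like objects
]

weights = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
bias = 0.0
lr = 0.1  # learning rate

# Training loop: the only feedback is "right" or "wrong"; no rule about
# height or jitter is ever written down.
for _ in range(20):
    for features, label in training_data:
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        error = label - prediction
        weights = [w + lr * error * x for w, x in zip(weights, features)]
        bias += lr * error

def classify(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "obstruction" if score > 0 else "safe to ignore"

print(classify([1.5, 0.1]))   # car-like  -> obstruction
print(classify([0.25, 0.95])) # bag-like  -> safe to ignore
```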

For the scenarios a rule set doesn’t (yet) cover, AI makes decisions by fusing the vision inputs.

“A neural net only knows what to do if it’s been trained what to do,” explains Glen De Vos, Delphi’s chief technology officer. “And not every scenario a vehicle will encounter can be accounted for. You can’t always predict how a neural net will fill the gaps or the situations it hasn’t seen before. For cases when the vehicle doesn’t recognize something, the neural rules tell it to come to a safe stop. Using a combination of neural rules and AI, we can basically cover everything.”
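
Putting the pieces together, a deliberately simplified sketch of such a hybrid policy might look like the following (the category names and confidence threshold are hypothetical): hard-coded rules handle the must-stop cases, the learned classifier handles judgment calls like the plastic bag, and anything unrecognized or uncertain defaults to a safe stop.

```python
from enum import Enum

class Action(Enum):
    STOP = "come to a safe stop"
    PROCEED = "continue forward"

# Hard rules: situations where the vehicle must always stop, no matter how
# confident the learned classifier is.
MUST_STOP = {"red_light", "stopped_vehicle", "pedestrian"}

# Object classes the neural net has been trained to treat as non-obstructions.
SAFE_TO_IGNORE = {"plastic_bag", "leaves"}

def decide(perceived_object: str, classifier_confidence: float) -> Action:
    """Rules first, learned judgment second, and a safe stop by default."""
    if perceived_object in MUST_STOP:
        return Action.STOP       # rule: always stop
    if perceived_object in SAFE_TO_IGNORE and classifier_confidence > 0.9:
        return Action.PROCEED    # learned: confidently not an obstruction
    return Action.STOP           # unrecognized or uncertain: the safe default

print(decide("red_light", 0.99))      # Action.STOP
print(decide("plastic_bag", 0.95))    # Action.PROCEED
print(decide("unknown_debris", 0.40)) # Action.STOP
```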
