How Interior Sensing Will Evolve

We are still at an early stage of the interior sensing journey, but the steps ahead are clear, from today’s basic driver sensing to cabin sensing and beyond.

Basic driver sensing

The current roadmap for interior sensing systems starts with a low-cost implementation that meets basic regulatory requirements and can be deployed widely across an entire fleet. At this level, a single camera mounted on the steering column, instrument cluster or central display can detect drowsiness and distraction. If the driver takes their eyes off the road for more than two seconds, for instance, the system could sound an audio alert or flash a red light on the dashboard. To determine whether the driver is drowsy, the software behind the camera can measure head position, eye movements, blink rate and how wide the driver’s eyes are open. If the driver is nodding off, the system could vibrate the seat or sound an audio alert.
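
As a rough illustration, the alert logic described above fits in a few dozen lines. The sketch below is a simplified Python model, not a production implementation: the per-frame inputs (gaze state, eye openness) are assumed to come from an upstream vision model, and the thresholds, including the two-second gaze limit and a PERCLOS-style eye-closure ratio, are illustrative values.

    from dataclasses import dataclass

    GAZE_TIMEOUT_S = 2.0   # eyes-off-road limit before alerting
    PERCLOS_LIMIT = 0.30   # fraction of frames with mostly closed eyes
    WINDOW_FRAMES = 100    # frames per drowsiness evaluation window

    @dataclass
    class Frame:
        timestamp: float     # seconds
        eyes_on_road: bool   # from upstream gaze estimation
        eye_openness: float  # 0.0 (closed) to 1.0 (fully open)

    class BasicDriverMonitor:
        def __init__(self):
            self.gaze_off_since = None
            self.closed_frames = 0
            self.total_frames = 0

        def update(self, frame):
            alerts = []
            # Distraction: continuous eyes-off-road time exceeds the limit.
            if frame.eyes_on_road:
                self.gaze_off_since = None
            else:
                if self.gaze_off_since is None:
                    self.gaze_off_since = frame.timestamp
                elif frame.timestamp - self.gaze_off_since > GAZE_TIMEOUT_S:
                    alerts.append("distraction: sound alert, flash dashboard light")
            # Drowsiness: PERCLOS-style share of frames with eyes mostly closed.
            self.total_frames += 1
            if frame.eye_openness < 0.2:
                self.closed_frames += 1
            if self.total_frames >= WINDOW_FRAMES:
                if self.closed_frames / self.total_frames > PERCLOS_LIMIT:
                    alerts.append("drowsiness: vibrate seat, sound alert")
                self.closed_frames = self.total_frames = 0
            return alerts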

Advanced driver sensing

Systems can build on basic drowsiness and distraction recognition with additional capabilities. They can recognize voices and accurately identify drivers through cameras and biometrics such as fingerprints. They can also determine whether the driver is intoxicated, stressed, lost in thought, or even trying to spoof the autonomous driving system by holding up a picture.
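
To make the identification step concrete, here is a minimal Python sketch. It assumes a face-embedding vector and a liveness score produced by hypothetical upstream models; the 0.5 liveness gate and 0.8 match threshold are placeholders, and the function names are invented for illustration.

    import math

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def identify_driver(embedding, enrolled, liveness, threshold=0.8):
        # Reject spoofing attempts (e.g., a held-up photo) before matching.
        # 'liveness' is assumed to come from depth or blink analysis.
        if liveness < 0.5:
            return None
        # 'enrolled' maps a driver name to a stored reference embedding.
        best = max(enrolled, key=lambda name: cosine(embedding, enrolled[name]))
        return best if cosine(embedding, enrolled[best]) >= threshold else None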

Cabin sensing

An evolution of driver sensing is cabin sensing, in which a wide-angle camera covers a larger field of view within the vehicle, often including the passenger seat and rear seats. With this larger visible area, a system could tell whether the driver has their hands on the steering wheel. It could identify the front-seat passenger, adjust the seat to that passenger’s preferences, and make sure passengers are wearing their seat belts properly.
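
A minimal sketch of how those cabin checks might be combined, assuming hypothetical per-frame inputs from occupant detection; the seat names and action strings are purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class Seat:
        occupied: bool
        belt_fastened: bool

    def cabin_checks(hands_on_wheel, seats, recognized_passenger=None):
        # 'seats' maps a seat name (e.g., "front passenger") to a Seat.
        actions = []
        if not hands_on_wheel:
            actions.append("warn: hands off steering wheel")
        for name, seat in seats.items():
            if seat.occupied and not seat.belt_fastened:
                actions.append(f"warn: unfastened seat belt in {name} seat")
        if recognized_passenger is not None:
            actions.append(f"apply stored seat profile for {recognized_passenger}")
        return actions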

With full cabin presence detection, the system could determine how many people are in the vehicle and estimate their mood and emotional state. It could also detect whether the driver has been incapacitated by a sudden illness, which could trigger automated systems that safely pull the vehicle over to the side of the road and notify emergency services.
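
The escalation from an unresponsive driver to a safe stop is essentially a small state machine. The sketch below assumes hypothetical timings (alert after 3 seconds, pull over after 8) chosen purely for illustration.

    from enum import Enum, auto

    class State(Enum):
        MONITORING = auto()
        ALERTING = auto()
        PULLING_OVER = auto()

    class IllnessResponder:
        def __init__(self, alert_after_s=3.0, pull_over_after_s=8.0):
            self.state = State.MONITORING
            self.unresponsive_since = None
            self.alert_after_s = alert_after_s
            self.pull_over_after_s = pull_over_after_s

        def update(self, now, driver_responsive):
            # Once a safe stop has started, finish it regardless of new input.
            if self.state is State.PULLING_OVER:
                return "continue safe stop"
            if driver_responsive:
                self.state = State.MONITORING
                self.unresponsive_since = None
                return None
            if self.unresponsive_since is None:
                self.unresponsive_since = now
            elapsed = now - self.unresponsive_since
            if elapsed >= self.pull_over_after_s:
                self.state = State.PULLING_OVER
                return "pull over safely and notify emergency services"
            if elapsed >= self.alert_after_s:
                self.state = State.ALERTING
                return "escalate in-cabin alerts"
            return None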

In addition, a wide-angle 3D camera can be mounted on the interior roof of the vehicle and directed downward, providing a view of the front seats. This enables passengers to control aspects of the vehicle with hand gestures and hand poses, in-air writing, and a point-to-search function, in which someone inside the vehicle could point at a landmark to get information about it, or point at a restaurant to reserve a table. This gesture recognition capability continues to evolve and is on a technology path separate from driver state sensing.
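
Point-to-search ultimately reduces to matching a pointing direction against nearby map features. Here is a toy version, assuming the pointing bearing has already been estimated from the 3D hand pose and using invented map data.

    LANDMARKS = {  # hypothetical map data: name -> (bearing_deg, distance_m)
        "art museum": (40.0, 350.0),
        "trattoria": (95.0, 120.0),
    }

    def point_to_search(pointing_bearing_deg, tolerance_deg=10.0):
        # Return the landmark whose bearing best matches the pointing
        # direction, if the angular error is within the tolerance.
        best, best_err = None, tolerance_deg
        for name, (bearing, _distance) in LANDMARKS.items():
            # Signed angular difference wrapped into [-180, 180).
            err = abs((bearing - pointing_bearing_deg + 180) % 360 - 180)
            if err <= best_err:
                best, best_err = name, err
        return best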

Future innovations

As machine-learning systems become more intelligent and more powerful, they will be able not only to understand what is happening with the driver but also to take action accordingly. For example, if the vehicle is drifting out of its lane while the driver’s eyes are off the road, the system might engage lane keeping for a few seconds, even if that feature has been manually turned off. Or if the car ahead stops while the driver is distracted, the system could slow the vehicle or begin braking early, rather than waiting for the automatic emergency braking feature to engage suddenly.
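
In code, that kind of arbitration could be a simple fusion of driver state with exterior perception. The conditions and actions below are illustrative assumptions, not a description of any shipping system.

    def arbitrate(distracted, lane_departing, lead_car_stopping, lka_enabled):
        # Combine interior (driver state) and exterior (perception) signals
        # to choose precautionary actions.
        actions = []
        if lane_departing and distracted and not lka_enabled:
            actions.append("engage lane keeping briefly")
        if lead_car_stopping and distracted:
            actions.append("begin gentle braking ahead of AEB")
        return actions

    # Example: a distracted driver drifting out of lane with lane keeping off.
    print(arbitrate(distracted=True, lane_departing=True,
                    lead_car_stopping=False, lka_enabled=False))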

As interior sensing algorithms improve, other safety applications become possible, such as tracking body positions to adjust airbag deployment.
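
One way to picture this is a staged deployment decision keyed to the occupant’s tracked torso position. The distances and stages below are invented for illustration, not drawn from any real restraint-system calibration.

    def airbag_stage(torso_distance_m, leaning_forward):
        # Hypothetical staged deployment: occupants who are very close to
        # the airbag module or out of position get reduced-force deployment.
        if torso_distance_m < 0.25:
            return "suppress"
        if leaning_forward or torso_distance_m < 0.45:
            return "low-force"
        return "full-force"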

Long term, full cabin sensing is key to enabling reconfigurable interior designs and other nonstandard interior concepts for fully autonomous vehicles, such as leisure cars where people can recline and sleep or watch videos, autonomous medical clinics for health telepresence, or autonomous shopping boutiques.

Learn more about how these innovations are possible in our white paper, Scalable Interior Sensing Platforms.

Authors

Doug Welk
Global Advanced DSM Lead

Poorab Sarmah
Global Product Manager, User Experience
