Jork Stapel: Measure and monitor driver perception in the wild
While vehicles are becoming increasingly automated, it will be a long time before manual driving disappears completely from our streets. During manual driving, the automation may still serve as a supplementary safety system by making it easier to perceive the relevant road users, for example through some form of visual or auditory augmentation.
Driving, however, is characterized by sensory overload, which humans only overcome through a keen ability to be selective in what they attend to. Augmentation of this process can only complement the driver effectively if it is equally selective. To achieve this, the system has to identify discrepancies between what should be attended and what is actually attended.
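As a rough illustration of this selection step, the sketch below flags road users that warrant attention but do not appear among the objects the driver has attended to. The object representation, relevance scores, and the source of the attended-object IDs (e.g. gaze-to-object matching) are hypothetical assumptions for the example, not the system described here.

```python
# Minimal sketch (hypothetical data structures, not the paper's implementation):
# flag road users that warrant attention but have not been attended to.

from dataclasses import dataclass


@dataclass(frozen=True)
class RoadUser:
    track_id: int      # identity from an (assumed) object tracker
    kind: str          # e.g. "pedestrian", "cyclist", "car"
    relevance: float   # 0..1, how much attention the situation demands


def unattended_relevant(road_users, attended_ids, relevance_threshold=0.5):
    """Return road users that should be attended but were not.

    `attended_ids` is assumed to come from gaze-to-object matching,
    e.g. fixations mapped onto tracked objects.
    """
    return [
        u for u in road_users
        if u.relevance >= relevance_threshold and u.track_id not in attended_ids
    ]


if __name__ == "__main__":
    scene = [
        RoadUser(1, "pedestrian", 0.9),
        RoadUser(2, "car", 0.3),
        RoadUser(3, "cyclist", 0.7),
    ]
    # Suppose gaze analysis indicates the driver looked at objects 1 and 2.
    print(unattended_relevant(scene, attended_ids={1, 2}))
    # -> [RoadUser(track_id=3, kind='cyclist', relevance=0.7)]
```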
However, measuring which elements the driver is aware of remains very challenging. Existing alert systems rely on heuristics such as "only alert when dangerous, rare, or in conflict with common expectation", which generally limits them to immediate hazards. Support for developing hazards or non-critical attentional lapses can only be provided when the driver's awareness of all individual objects can be monitored. Unfortunately, the measurement techniques available today are either limited to the simulator (e.g. freeze-probe) or scale poorly to complex scenarios in which multiple events have to be evaluated concurrently (e.g. real-time probe or think-aloud).
In our current research, we investigate whether it is possible to monitor a driver's situation awareness at the per-object level using eye tracking, and we develop a new recognition-based method for on-road labelling of the driver's situation awareness of all encountered road users in unstructured scenarios. We test this method in an on-road experiment with 14 participants making left turns at urban intersections.
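To make the idea of per-object, gaze-based labelling concrete, a minimal sketch is given below. It assumes road-user positions and gaze samples are already expressed as angles in the driver's field of view and that a simple angular threshold decides whether an object was looked at; the names, threshold value, and data formats are illustrative assumptions, not the labelling method developed in the study.

```python
# Illustrative sketch (assumed data formats, not the study's actual pipeline):
# label each road user as "looked at" when any gaze fixation falls within an
# angular threshold of the object's direction in the driver's field of view.

import math


def angular_distance_deg(gaze, obj):
    """Approximate angular separation in degrees between a gaze direction and
    an object direction, both given as (azimuth, elevation) in degrees.
    Uses a planar approximation, adequate for small angles."""
    d_az = gaze[0] - obj[0]
    d_el = gaze[1] - obj[1]
    return math.hypot(d_az, d_el)


def label_awareness(objects, fixations, threshold_deg=3.0):
    """objects:   {object_id: (azimuth, elevation)} for each road user
    fixations: list of (azimuth, elevation) gaze samples during the approach
    Returns {object_id: bool}, a crude gaze-based awareness label per object."""
    return {
        oid: any(angular_distance_deg(fix, pos) <= threshold_deg for fix in fixations)
        for oid, pos in objects.items()
    }


if __name__ == "__main__":
    objects = {"cyclist_A": (12.0, -1.0), "car_B": (-25.0, 0.5)}
    fixations = [(11.2, -0.4), (3.0, 0.0), (-5.0, 1.0)]
    print(label_awareness(objects, fixations))
    # -> {'cyclist_A': True, 'car_B': False}
```

Note that looking at an object is only a proxy for awareness of it; relating such gaze-based labels to the driver's actual recognition of road users is exactly what the on-road experiment is designed to examine.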