Comparison of Effective Field of View for Vehicle Operators in a Virtual Metropolitan Environment
Carruth, D. W., Garrison, T. M., King, M. D., Jinkerson, J. L., Irby, D., & Sween, R. (2016). Comparison of Effective Field of View for Vehicle Operators in a Virtual Metropolitan Environment. AHFE 2016. Orlando, FL.
The ability to see the environment around a vehicle is a key requirement for vehicle operators. Many techniques have been developed for assessing field of view. For example, ground plane intersections can reveal areas of limited vision around a vehicle. Spherical projections have also been used to generate maps of occluded field of view, using photos and laser scans of physical vehicles or simulations of virtual vehicle designs. While these methods allow a designer to evaluate a general field of view, most do not provide automated assessments of whether an operator can see task-relevant visual information in a given scenario. In our case, we required analysis of the availability of task-relevant visual information for prototype vehicle designs in a specific environment.
Although physical prototypes would allow collection of real-world field of view data, the unique characteristics of operational environments make such an approach impractical, primarily because of cost and limited experimental control. A computational analysis tool, by contrast, allows carefully controlled evaluation of simulated fields of view in a virtual environment at relatively low cost. A significant obstacle to modeling accurate perception of visual information in a virtual environment is the difficulty of developing sophisticated computer vision algorithms to perform object recognition on rendered images of the environment. Rather than operating on rendered imagery, we created a database of 3D objects associated with semantic tags that would be recognized by visual sensing. A scene is perceived as a collection of tagged objects that are either visible or occluded by vehicle geometry. The collection within an operator's field of view is then scored to determine how much task-relevant information is occluded.
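The tagged-object scoring described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `TaggedObject` type, its fields, and the equal weighting of all objects are assumptions, and the occlusion test (a ray cast against vehicle geometry in practice) is abstracted into a precomputed boolean.

```python
from dataclasses import dataclass

@dataclass
class TaggedObject:
    # Semantic tag recognized by the simulated visual sensing
    # (e.g. "street_sign", "pedestrian_area") -- hypothetical tag names
    tag: str
    # Whether vehicle geometry occludes the line of sight to this object
    # at the current sample point (result of an occlusion test, abstracted here)
    occluded: bool

def score_scene(objects):
    """Fraction of task-relevant tagged objects in the field of view
    that are visible (i.e., not occluded by vehicle geometry)."""
    if not objects:
        return 1.0  # nothing task-relevant to see, so nothing is occluded
    visible = sum(1 for obj in objects if not obj.occluded)
    return visible / len(objects)

# One sample point's scene: four tagged objects, one blocked by the A-pillar
scene = [
    TaggedObject("street_sign", occluded=False),
    TaggedObject("pedestrian_area", occluded=True),
    TaggedObject("vehicle", occluded=False),
    TaggedObject("roadway", occluded=False),
]
print(score_scene(scene))  # 3 of 4 objects visible -> 0.75
```

A per-tag weighting (e.g. weighting pedestrians above roadway surfaces) would be a natural extension, but the abstract does not specify one, so the sketch scores all tags equally.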
The effective field of view for three occupants (5th-, 50th-, and 95th-percentile males) of two vehicles (two consumer truck models) was compared for a simulated drive in a virtual city environment. Based on SAE standards, the eye point was defined for each of the three occupants in the two virtual vehicles. A drive path through the virtual city was defined. For the current study, natural driving behavior was not simulated: movement of the simulated vehicle along the drive path was fixed at 35 miles per hour (15.6 meters per second). The operator's ability to effectively perceive objects outside the vehicle was calculated every 15.6 meters (i.e., once per second). The task-relevant information for the operators included roadway surfaces, pedestrian areas, street signs, and other vehicles.
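The fixed-speed sampling scheme reduces to simple arithmetic: at 35 mph the vehicle covers about 15.6 m per second, so sampling once per second spaces the evaluation points 15.6 m apart. A short sketch, with the drive-path length chosen as a hypothetical value for illustration:

```python
MPH_TO_MPS = 0.44704  # exact miles-per-hour to meters-per-second factor

speed_mps = 35 * MPH_TO_MPS        # ~15.65 m/s, reported as 15.6 m/s
sample_interval_s = 1.0            # one field-of-view evaluation per second
sample_spacing_m = speed_mps * sample_interval_s

# Hypothetical drive-path length; the actual path length is not given
path_length_m = 1560.0
n_samples = int(path_length_m // sample_spacing_m) + 1  # include the start point
sample_distances = [i * sample_spacing_m for i in range(n_samples)]

print(round(speed_mps, 1))  # 15.6
print(n_samples)            # 100 samples along this hypothetical path
```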
Simulation provided a second-by-second analysis of the field of view for the simulated operators in the virtual vehicles along the entire drive path. At each point, we were able to analyze differences in visible and occluded virtual objects. In addition, we were able to combine all of the individual data points into an overall effective field of view for each vehicle for the drive path. Leveraging simulation will allow designers to design vehicles more effectively and mission planners to select appropriate vehicle designs for specific tasks.
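Combining the per-second data points into an overall summary could look like the sketch below. The aggregation functions and data layout are assumptions (the abstract does not specify how the combination was performed): one plausible summary is the mean visibility score over the drive path, plus a per-tag occlusion rate showing which kinds of task-relevant objects each vehicle design tends to hide.

```python
from collections import defaultdict

def overall_effective_fov(per_sample_scores):
    """Mean of the second-by-second visibility scores for one vehicle:
    a single summary number for the whole drive path."""
    return sum(per_sample_scores) / len(per_sample_scores)

def occlusion_rate_by_tag(samples):
    """For each semantic tag, the fraction of samples in which an object
    with that tag was occluded. Each sample is a list of (tag, occluded)
    pairs -- a hypothetical layout for the per-second scene data."""
    counts = defaultdict(lambda: [0, 0])  # tag -> [occluded count, total count]
    for sample in samples:
        for tag, occluded in sample:
            counts[tag][1] += 1
            if occluded:
                counts[tag][0] += 1
    return {tag: occ / total for tag, (occ, total) in counts.items()}

# Two seconds of hypothetical drive data for one vehicle
samples = [
    [("street_sign", False), ("vehicle", True)],
    [("street_sign", True), ("vehicle", True)],
]
print(overall_effective_fov([0.5, 1.0]))  # 0.75
print(occlusion_rate_by_tag(samples))     # street_sign half the time, vehicle always
```

A per-tag breakdown like this is what would let a mission planner see, for example, that one truck model systematically occludes pedestrian areas while another occludes street signs.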