Automotive driver face tracking using range imaging cameras (Master's thesis)
25.11.2014, 16:15, room 3999
This thesis describes an image processing framework that measures the driver's eye position via face pose estimation in driving experiments. The camera system used in the experiment is the ASUS Xtion Pro Live, which incorporates a VGA-resolution structured-light depth sensor and an SXGA-resolution color sensor, both calibrated by the manufacturer. The eye position is derived from a face model using solely three-dimensional data. This makes the proposed method robust to variations in environmental conditions such as lighting and shadows, and allows it to estimate the face pose over a large range of positions and rotations. Another key advantage of the proposed method is that it requires no training data, yet can be automatically adjusted to an individual face to improve tracking precision. The algorithms used in the face tracking pipeline are also evaluated in terms of their real-time performance and computational demand. The eye position measurements are transformed into the vehicle coordinate system using an extrinsic calibration based on the known vehicle interior geometry. This eliminates the need for additional manual extrinsic calibration, e.g. measuring the camera position with a measuring arm or using a checkerboard pattern placed at a known position.
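The transformation of eye positions from the camera frame into the vehicle coordinate system amounts to applying a rigid transform obtained from the extrinsic calibration. A minimal sketch, assuming a rotation matrix R and translation vector t with purely illustrative placeholder values (the thesis derives the actual transform from the vehicle interior geometry):

```python
import numpy as np

# Hypothetical extrinsic calibration result: rotation R and translation t
# mapping camera coordinates to the vehicle coordinate system.
# The numeric values below are illustrative placeholders, not thesis results.
R = np.array([[0.0,  0.0, 1.0],
              [-1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0]])   # example axis permutation
t = np.array([1.2, 0.35, 0.9])     # translation in meters, illustrative

def camera_to_vehicle(points_cam):
    """Apply the rigid transform p_veh = R @ p_cam + t to an Nx3 point array."""
    points_cam = np.asarray(points_cam, dtype=float)
    return points_cam @ R.T + t

# Example: a single measured eye position in the camera frame (meters).
eye_veh = camera_to_vehicle([[0.05, -0.02, 0.65]])
```

The same transform applies to every measured eye position, so one calibration per camera mounting suffices for the whole experiment.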
The experiment was conducted at the BMW Ergonomics and Comfort department in Munich with three different vehicles and ten to fifteen drivers per vehicle. The test route was about twenty-two kilometers long, with the distance almost equally split between city road, rural road and freeway. The drivers were chosen to cover a wide span of the height distribution. The eye position measurements are then used to construct the eyellipse, a three-dimensional elliptical model according to SAE J941, as well as the eye-box: a rectangular parallelepiped whose sides lie on planes perpendicular to the vehicle axes, crossing them at the tenth and ninetieth percentiles of the eye position distribution for the corresponding coordinate. The eye-box derived from the measurements is then compared to the eye-box derived from RAMSIS manikin positioning in a virtual car model using CAD software.
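The eye-box construction described above reduces to a per-axis percentile computation over the measured eye positions. A minimal sketch, assuming the measurements are available as an Nx3 array in vehicle coordinates (the sample data below is synthetic, not a measurement result from the thesis):

```python
import numpy as np

def eye_box(eye_positions):
    """Compute the eye-box bounds: the 10th and 90th percentiles of the
    eye position distribution along each vehicle axis. Input is an Nx3
    array of eye positions in vehicle coordinates; output is a (2, 3)
    array of [lower, upper] bounds along x, y, z."""
    pts = np.asarray(eye_positions, dtype=float)
    lower = np.percentile(pts, 10, axis=0)
    upper = np.percentile(pts, 90, axis=0)
    return np.vstack([lower, upper])

# Illustrative synthetic eye positions (meters, vehicle coordinates).
rng = np.random.default_rng(0)
samples = rng.normal(loc=[2.0, 0.4, 1.1], scale=0.05, size=(1000, 3))
box = eye_box(samples)
```

The six returned values define the six bounding planes of the parallelepiped; each plane is perpendicular to one vehicle axis and crosses it at the corresponding percentile.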