Natural Gaze Data-Driven Wheelchair
Abstract

Natural eye movements during navigation have long been considered to reflect planning processes and to be linked to the user's future action intentions. We investigate here whether natural eye movements during joystick-based navigation of wheelchairs follow identifiable patterns that are predictive of joystick actions. To place eye movements in the context of driving intentions, we combine eye tracking with a 3D depth camera system, which allows us to identify which eye movements have the floor as their gaze target and to distinguish these from other, non-navigation-related eye movements. In all subjects, we find consistent patterns of eye movements on the floor that are predictive of the steering commands issued by the driver. Based on these empirical data, we developed two gaze decoders using supervised machine learning techniques and enabled each of these drivers to steer the wheelchair by imagining they were using a joystick, thereby triggering the appropriate natural eye movements via motor imagery. We show that all subjects are able to navigate their wheelchair "by eye", learning to do so within minutes. Our work shows that simple gaze-based decoding, without the need for artificial user interfaces, suffices to restore mobility and increase participation in daily life.