Eye Tracking-based LSTM for Locomotion Prediction in VR

Authors: Stein, Niklas
Bremer, Gianni
Lappe, Markus
Department/Institution: FB 07: Psychology and Sport Science
Document type: Article
Media type: Text
Publication date: 2022
Published in MIAMI: 29.04.2022
Date of last change: 29.04.2022
Edition details: [Electronic ed.]
Source: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022, 493-503
Keywords: Virtual Reality; Eye Tracking; Locomotion; LSTM; Path Prediction; Machine Learning; Gaze
Subject (DDC): 006: Special computer methods
License: InC 1.0
Language: English
Funding: Funder: Deutsche Forschungsgemeinschaft / Project number: 274361309
Funder: European Commission / Project number: 951910
Format: PDF document
URN: urn:nbn:de:hbz:6-74019498937
Other identifiers: DOI: 10.17879/74019503057
Permalink: https://nbn-resolving.de/urn:nbn:de:hbz:6-74019498937
Online access: 10.1109_VR51125.2022.00069.pdf

Virtual Reality (VR) allows users to perform natural movements such as hand movements, head turns, and natural walking in virtual environments. While such movements enable seamless natural interaction, they come with the need for a large tracking space, particularly in the case of walking. To optimize use of the available physical space, prediction models for upcoming behavior are helpful. In this study, we examined whether a user’s eye movements, tracked by current VR hardware, can improve such predictions. Eighteen participants walked through a virtual environment while performing different tasks, including walking in curved paths, avoiding or approaching objects, and conducting a search. The recorded position, orientation, and eye-tracking features from 2.5 s segments of the data were used to train an LSTM model to predict the user’s position 2.5 s into the future. We found that future positions can be predicted with an average error of 65 cm. The benefit of eye movement data depended on the task and environment. In particular, situations with changes in walking speed benefited from the inclusion of eye data. We conclude that a model utilizing eye tracking data can improve VR applications in which path predictions are helpful.
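
The article itself contains no code; as a rough illustration of the setup described in the abstract, the following is a minimal PyTorch sketch of an LSTM that maps a 2.5 s window of pose and gaze features to a predicted future position. All names, feature counts, the 90 Hz sampling rate, and the hidden size are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class LocomotionLSTM(nn.Module):
    """Sketch of the abstract's pipeline: a 2.5 s sequence of position,
    head-orientation, and eye-tracking features -> LSTM -> the user's
    2D floor position 2.5 s into the future. Sizes are assumptions."""

    def __init__(self, n_features: int = 10, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # predicted (x, z) position

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, timesteps, n_features) — per-sample pose/gaze features
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # predict from the final hidden state

# Usage example: a batch of 8 windows of 225 samples
# (assuming ~90 Hz tracking * 2.5 s) with 10 features each.
model = LocomotionLSTM()
windows = torch.randn(8, 225, 10)
future_xy = model(windows)  # shape: (8, 2)
```

Such a model would typically be trained with a mean-squared-error loss between the predicted and actual future positions, which is consistent with the average positional error (65 cm) the abstract reports as the evaluation measure.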