Dataset

Comprehensive VR Dataset for Machine Learning: Head- and Eye-Centred Video and Positional Data

Description

We present a dataset of head- and eye-centred video recordings from human participants performing a search task in a variety of Virtual Reality (VR) environments. Using a VR motion platform, participants navigated the environments freely while their eye movements and positional data were captured and stored in CSV format. The dataset spans six distinct environments, including one dedicated to calibrating the motion platform, and provides a cumulative playtime of over 10 hours for each of the head- and eye-centred perspectives. Data collection took place in naturalistic VR settings in which participants collected virtual coins scattered across diverse landscapes, such as grassy fields, dense forests, and an abandoned urban area, each characterized by unique ecological features.

This structured and richly detailed dataset offers substantial reuse potential, particularly for machine learning. It is well suited for training models on tasks such as the prediction and analysis of visual search behaviour, eye movements, and navigation strategies within VR environments, and researchers can use it to develop and refine algorithms that require comprehensive, annotated video and positional data. Its organization and level of detail make it a valuable resource for advancing machine learning research in VR and fostering the development of innovative VR technologies.
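Because the positional and eye-movement data are distributed as CSV files, they can be loaded with standard data-science tooling. The sketch below uses pandas; the file name and the column names (head_x, head_y, head_z) are illustrative assumptions rather than the dataset's documented schema, so the first step is always to inspect the columns that are actually present.

    import pandas as pd

    # Hypothetical file name -- consult the dataset documentation for the
    # actual file layout and naming scheme.
    df = pd.read_csv("positional_data_participant01.csv")

    # With an unknown schema, first inspect what was actually recorded.
    print(df.columns.tolist())
    print(df.head())

    # Example analysis, assuming 3D head-position columns with these
    # (hypothetical) names: total path length travelled by the participant.
    if {"head_x", "head_y", "head_z"}.issubset(df.columns):
        path_length = (
            df[["head_x", "head_y", "head_z"]]
            .diff()              # frame-to-frame displacement
            .pow(2).sum(axis=1)  # squared Euclidean norm per frame
            .pow(0.5).sum()      # total distance along the trajectory
        )
        print(f"Total head path length: {path_length:.1f} units")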

Citation
Kreß, A., Bremmer, F., & Lappe, M. (2024, August 15). Comprehensive VR Dataset for Machine Learning: Head- and Eye-Centred Video and Positional Data [Data set]. https://doi.org/10.60834/tam-datahub-6