
TAM DataHub

The TAM DataHub is the shared research infrastructure of the consortium The Adaptive Mind (TAM).

TAM-GitLab (VPN)   Jupyter Hub (VPN)   User Manual   Support

The DataHub Repository lets researchers publish various kinds of work:

  • Research Datasets
  • Code & Software
  • Text Publications of any kind
 

Communities in DSpace

Select a community to browse its contents.


Recent Submissions

Dataset
Comprehensive VR Dataset for Machine Learning: Head- and Eye-Centred Video and Positional Data
Alexander Kreß; Frank Bremmer; Markus Lappe
We present a comprehensive dataset comprising head- and eye-centred video recordings from human participants performing a search task in a variety of Virtual Reality (VR) environments. Using a VR motion platform, participants navigated these environments freely while their eye movements and positional data were captured and stored in CSV format. The dataset spans six distinct environments, including one specifically for calibrating the motion platform, and provides a cumulative playtime of over 10 hours for both head- and eye-centred perspectives. The data collection was conducted in naturalistic VR settings, where participants collected virtual coins scattered across diverse landscapes such as grassy fields, dense forests, and an abandoned urban area, each characterized by unique ecological features. This structured and detailed dataset offers substantial reuse potential, particularly for machine learning applications. The richness of the dataset makes it an ideal resource for training models on various tasks, including the prediction and analysis of visual search behaviour, eye movement and navigation strategies within VR environments. Researchers can leverage this extensive dataset to develop and refine algorithms requiring comprehensive and annotated video and positional data. By providing a well-organized and detailed dataset, it serves as an invaluable resource for advancing machine learning research in VR and fostering the development of innovative VR technologies.
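Since the abstract states that eye movements and positional data are stored in CSV format, a minimal sketch of loading such a file for a machine-learning pipeline might look as follows. The file name and schema are placeholders, not documented in this listing; consult the dataset itself for the real layout.

```python
import pandas as pd

# Hypothetical file name: the listing only states that eye movements and
# positional data are stored as CSV, so check the dataset's documentation
# for the real paths and column schema.
positions = pd.read_csv("sub01_forest_positional.csv")

print(positions.columns.tolist())   # discover the recorded channels
print(positions.describe())         # quick sanity check of the samples
```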
Dataset
Eyetracking data during observation of movement primitive generated motion
Knopp, Benjamin; Auras, Daniel; Schütz, Alexander C.; Endres, Dominik
We investigated gaze direction during movement observation. The eye movement data were collected during an experiment in which different models of movement production (based on movement primitives, MPs) were compared in a two-alternative forced-choice (2AFC) task. In each trial, participants observed a side-by-side presentation of two naturalistic 3D-rendered human movement videos, where one video was based on a motion-captured gait sequence and the other was generated by recombining the machine-learned MPs to approximate the same movement. The participants' task was to discriminate between these movements while their eye movements were recorded. We complement previous analyses of the binary decision data with eye-tracking data, investigating the role of gaze direction during task execution. We computed how much information is shared between gaze features extracted from the eye-tracking data and the decisions of the participants, and between gaze features and the correct answers. We found that eye movements reflect the decision of participants during the 2AFC task, but not the correct answer. This result is important for future experiments (possibly in virtual reality), which should take advantage of eye tracking to complement binary decision data.
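The core quantity in this analysis is the shared (mutual) information between gaze features and decisions. A toy sketch of that computation, using scikit-learn's mutual_info_score on synthetic stand-in data, is shown below; the gaze feature extraction itself is not part of this listing and is replaced here by a simulated binary feature.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Toy stand-ins: a binarised gaze feature (e.g. which video was fixated
# longer) and the participant's 2AFC decision per trial. Real features
# would be extracted from the eye-tracking recordings.
gaze_feature = rng.integers(0, 2, size=200)
decision = np.where(rng.random(200) < 0.8, gaze_feature, 1 - gaze_feature)

# Shared information (in nats) between gaze feature and decision
mi = mutual_info_score(gaze_feature, decision)
print(f"I(gaze; decision) = {mi:.3f} nats")
```

A value near zero would indicate that the gaze feature carries no information about the decision; the paper's finding corresponds to a positive value for decisions but not for correct answers.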
Publication
TAM DataHub User Manual
Lenze, Stefan; Pfarr, Julia-Katharina; Berger, Christian; Pietsch, Andre; Brand, Ortrun; Endres, Dominik
The DataHub is an infrastructure project of the "The Adaptive Mind" (TAM) cluster initiative, in which research groups in the fields of psychology and neuroscience based at multiple Hessian universities work together. Guided by the FAIR principles, the DataHub offers resources, services, and support to affiliated researchers on various levels to ease collaborative work: central storage and compute resources; services such as JupyterHub, TAM GitLab, and the TAM DataHub Repository, which allow efficient usage of these resources; and support on how to use the services in a way that fits the needs of the individual project. In this manual, you will find an overview of the DataHub, introductions to the individual resources and services, and workflows to help you use the DataHub.
Dataset
example_BIDSeyetracking
Szinte, Martin (Institut de Neurosciences de la Timone); Masson, Guillaume; Samonds, Jason; Priebe, Nicholas; Pfarr, Julia-Katharina
Most vertebrates use head and eye movements to quickly change gaze orientation and sample different portions of the environment with periods of stable fixation. Visual information must be integrated across fixations to construct a complete perspective of the visual environment. In concert with this sampling strategy, neurons adapt to unchanging input to conserve energy and ensure that only novel information from each fixation is processed. We demonstrate how adaptation recovery times and saccade properties interact and thus shape spatiotemporal tradeoffs observed in the motor and visual systems of mice, cats, marmosets, macaques, and humans. These tradeoffs predict that in order to achieve similar visual coverage over time, animals with smaller receptive field sizes require faster saccade rates. Indeed, we find comparable sampling of the visual environment by neuronal populations across mammals when integrating measurements of saccadic behavior with receptive field sizes and V1 neuronal density. We propose that these mammals share a common statistically driven strategy of maintaining coverage of their visual environment over time calibrated to their respective visual system characteristics.
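As the item title suggests, this example dataset follows the BIDS convention for eye-tracking recordings. A hedged sketch of reading such a recording with pandas is given below, assuming the samples are stored as a BIDS physio-style .tsv.gz whose column names and sampling rate live in the JSON sidecar; the file names are placeholders.

```python
import json
import pandas as pd

# Placeholder BIDS-style file names: consult the dataset for actual paths.
with open("sub-01_task-search_recording-eyetracking_physio.json") as f:
    sidecar = json.load(f)

# BIDS physio .tsv.gz files have no header row; column names and the
# sampling rate are defined in the JSON sidecar.
samples = pd.read_csv(
    "sub-01_task-search_recording-eyetracking_physio.tsv.gz",
    sep="\t",
    header=None,
    names=sidecar["Columns"],
)
print(sidecar.get("SamplingFrequency"), "Hz")
print(samples.head())
```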
Person
Lenze, Stefan
Data Steward