TAM DataHub
Permanent URI for this community: https://tam-dspace-prod.hrz.uni-marburg.de/handle/tam/8
Dataset: example_BIDSeyetracking (Institut de Neurosciences de la Timone)
Szinte, Martin; Masson, Guillaume; Samonds, Jason; Priebe, Nicholas; Pfarr, Julia-Katharina

Most vertebrates use head and eye movements to quickly change gaze orientation and sample different portions of the environment with periods of stable fixation. Visual information must be integrated across fixations to construct a complete perspective of the visual environment. In concert with this sampling strategy, neurons adapt to unchanging input to conserve energy and ensure that only novel information from each fixation is processed. We demonstrate how adaptation recovery times and saccade properties interact and thus shape the spatiotemporal tradeoffs observed in the motor and visual systems of mice, cats, marmosets, macaques, and humans. These tradeoffs predict that, to achieve similar visual coverage over time, animals with smaller receptive fields require faster saccade rates (see the coverage sketch below the listings). Indeed, we find comparable sampling of the visual environment by neuronal populations across mammals when integrating measurements of saccadic behavior with receptive field sizes and V1 neuronal density. We propose that these mammals share a common, statistically driven strategy of maintaining coverage of their visual environment over time, calibrated to their respective visual system characteristics.

Dataset: Eyetracking data during observation of movement primitive generated motion
Knopp, Benjamin; Auras, Daniel; Schütz, Alexander C.; Endres, Dominik

We investigated gaze direction during movement observation. The eye movement data were collected during an experiment in which different models of movement production, based on movement primitives (MPs), were compared in a two-alternative forced-choice (2AFC) task. In each trial, participants observed a side-by-side presentation of two naturalistic 3D-rendered human movement videos: one video was based on a motion-captured gait sequence, while the other was generated by recombining machine-learned MPs to approximate the same movement. The participants' task was to discriminate between these movements while their eye movements were recorded. We complement previous analyses of the binary decision data with eye-tracking data, investigating the role of gaze direction during task execution. We computed how much information is shared between gaze features extracted from the eye-tracking data and the participants' decisions, and between gaze features and the correct answers (a minimal estimator sketch follows the listings). We found that eye movements reflect the participants' decisions during the 2AFC task, but not the correct answer. This result is important for future experiments (possibly in virtual reality), which should take advantage of eye tracking to complement binary decision data.
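The coverage tradeoff claimed in the first abstract can be illustrated with a back-of-the-envelope calculation. This is a hedged sketch with invented numbers, not values from the dataset or the associated paper: assuming the visual area sampled per second scales roughly with saccade rate times receptive field area, equal coverage forces animals with smaller receptive fields to saccade faster.

import math

# Back-of-the-envelope sketch of the coverage tradeoff described in the
# abstract. All numbers are invented for illustration and are not taken
# from the dataset.
rf_diameter_deg = {"mouse": 10.0, "macaque": 1.0}  # hypothetical RF diameters
target_coverage_deg2_per_s = 2.5                   # hypothetical coverage goal

for animal, diameter in rf_diameter_deg.items():
    rf_area = math.pi * (diameter / 2.0) ** 2             # RF area in deg^2
    required_rate = target_coverage_deg2_per_s / rf_area  # saccades per second
    print(f"{animal}: RF {diameter:g} deg -> ~{required_rate:.2f} saccades/s")

Under these invented numbers the larger-field animal needs far fewer saccades per second to sample the same area, which is the qualitative relationship the abstract reports; the real analysis additionally folds in adaptation recovery times and V1 neuronal density.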
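For the second dataset, the abstract's "how much information is shared" analysis is naturally framed as a mutual information estimate between a discrete gaze feature and the binary 2AFC decision. The following is a minimal sketch under stated assumptions, not the authors' pipeline: the gaze feature (which side of the screen was fixated longer) and the synthetic data are hypothetical placeholders.

import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information estimate (in bits) for two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    xs, ys = np.unique(x), np.unique(y)
    # Joint probability table estimated from co-occurrence frequencies.
    joint = np.array([[np.mean((x == xv) & (y == yv)) for yv in ys] for xv in xs])
    px = joint.sum(axis=1, keepdims=True)  # marginal over the gaze feature
    py = joint.sum(axis=0, keepdims=True)  # marginal over the decision
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (px * py))
    return np.nansum(terms)  # zero-probability cells contribute nothing

rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, size=200)  # simulated left/right 2AFC choices
# Hypothetical gaze feature: side fixated longer, weakly coupled to the
# decision so the estimate comes out positive.
gaze_side = np.where(rng.random(200) < 0.8, decisions, 1 - decisions)
print(f"I(gaze; decision) ~ {mutual_information(gaze_side, decisions):.3f} bits")

Passing the trials' correct answers instead of the decisions to the same function would give the second comparison the abstract reports, where the authors found no shared information.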