TAM DataHub

The TAM DataHub is the common research infrastructure of The Adaptive Mind (TAM) consortium.


The DataHub Repository lets researchers publish various kinds of work:

  • Research Datasets
  • Code & Software
  • Text Publications of any kind
 

Communities in DSpace

Select a community to browse its collections.


Recent Submissions

Dataset
Eyetracking data during observation of movement primitive generated motion
(2024-08-25) Knopp, Benjamin; Auras, Daniel; Schütz, Alexander C.; Endres, Dominik
We investigated gaze direction during movement observation. The eye movement data were collected during an experiment in which different models of movement production (based on movement primitives, MPs) were compared in a two-alternative forced-choice (2AFC) task. In each trial, participants observed a side-by-side presentation of two naturalistic 3D-rendered human movement videos: one video was based on a motion-captured gait sequence, the other was generated by recombining the machine-learned MPs to approximate the same movement. The participants' task was to discriminate between these movements while their eye movements were recorded. We complement previous analyses of the binary decision data with eye tracking data and investigate the role of gaze direction during task execution. We computed how much information is shared between gaze features extracted from the eye tracking data and the participants' decisions, and between gaze features and the correct answers. We found that eye movements reflect the participants' decisions during the 2AFC task, but not the correct answer. This result is important for future experiments (possibly in virtual reality), which should take advantage of eye tracking to complement binary decision data.
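
The shared-information analysis described in this abstract can be illustrated with a minimal sketch. The snippet below is not the authors' pipeline; it is a hypothetical example that estimates mutual information between a discretized gaze feature and binary 2AFC decisions from their empirical joint distribution. All variable names and the synthetic data are assumptions for illustration.

```python
import numpy as np

def mutual_information(x, y):
    """Estimate mutual information (in bits) between two discrete
    variables from their empirical joint distribution."""
    x_vals, x_idx = np.unique(x, return_inverse=True)
    y_vals, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(x_vals), len(y_vals)))
    for i, j in zip(x_idx, y_idx):
        joint[i, j] += 1
    joint /= joint.sum()                   # joint probabilities p(x, y)
    px = joint.sum(axis=1, keepdims=True)  # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = joint > 0                         # avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Synthetic example: a binary gaze feature (e.g., "looked longer at the
# left video") that agrees with the participant's 2AFC decision on 80%
# of trials, by construction.
rng = np.random.default_rng(0)
decision = rng.integers(0, 2, size=1000)
gaze_feature = np.where(rng.random(1000) < 0.8, decision, 1 - decision)
print(f"I(gaze; decision) = {mutual_information(gaze_feature, decision):.3f} bits")
```

With 80% agreement between feature and decision, the estimate should come out near 1 - H(0.8) ≈ 0.28 bits; a feature carrying no information about the decision would give an estimate near zero.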
Dataset
Coding of egocentric distance in the macaque ventral intraparietal area
Caziot, Baptiste; Fathkhani, Sadra; Bremmer, Frank
The encoding of three-dimensional visual spatial information is of utmost importance in everyday life, in particular for successful navigation toward targets or threat avoidance. Eye movements challenge this spatial encoding: 2-3 times per second, they shift the image of the outside world across the retina. The macaque ventral intraparietal area (VIP) stands out from other areas of the dorsal ‘where’ pathway of the primate visual cortical system: many neurons encode visual information irrespective of horizontal and vertical eye position. But does this gaze invariance of spatial encoding at the single-neuron level also apply to egocentric distance? Such an invariance for egocentric distances would correspond to a shift of disparity-tuning curves by vergence angle. Here, concurrent with recordings from area VIP, monkeys fixated a central target at one of three distances (vergence) while a visual stimulus was shown at one of seven distances (disparity). Most neurons’ activity was modulated independently by both disparity and eye vergence, and we did not observe the shifts of disparity-tuning curves expected from encoding of egocentric distances at the single-cell level, demonstrating a different type of invariance than for visual directions. Using population activity, however, we were able to decode the egocentric distance of a stimulus, which demonstrates that egocentric distances are nonetheless represented within the neuronal population. Our results provide further strong evidence for a role of area VIP in 3D space encoding.
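
The population-decoding result can likewise be illustrated with a minimal sketch, assuming synthetic data rather than the recorded VIP activity: a cross-validated linear classifier predicts fixation distance (one of three vergence conditions) from simulated firing rates of a neuronal population. All names and numbers below are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_neurons, n_trials_per_dist = 60, 100
distances = np.repeat([0, 1, 2], n_trials_per_dist)  # three fixation distances

# Simulated population activity: each neuron gets a random tuning to
# distance plus Poisson noise; no single neuron is a clean distance detector.
tuning = rng.normal(0.0, 1.0, size=(3, n_neurons))
rates = rng.poisson(lam=np.exp(1.5 + tuning[distances]))

# Decode distance from the population with cross-validated logistic
# regression; chance level is 1/3.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, rates, distances, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f} (chance 0.33)")
```

Because each simulated neuron's tuning differs across the three distances, the population read-out is well above chance even though no individual tuning curve needs to shift with vergence, mirroring the abstract's point that distance can be represented at the population level without single-cell invariance.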