Please use this identifier to cite or link to this item:
https://hdl.handle.net/2445/148559
Title: | An object-tracking model that combines position and speed explains spatial and temporal responses in a timing task
Author: | Aguilar Lleyda, David; Tubau Sala, Elisabet; López-Moliner, Joan
Keywords: | Kalman filtering; Visual perception; Decision making; Space and time
Issue Date: | 1-Dec-2018 |
Publisher: | Association for Research in Vision and Ophthalmology |
Abstract: | Many tasks require synchronizing our actions with particular moments along the path of moving targets. However, it is controversial whether we base these actions on spatial or temporal information, and whether using either can enhance our performance. We addressed these questions with a coincidence timing task. A target varying in speed and motion duration approached a goal. Participants stopped the target and were rewarded according to its proximity to the goal. Results showed larger reward for responses temporally (rather than spatially) equidistant to the goal across speeds, and this pattern was promoted by longer motion durations. We used a Kalman filter to simulate time- and space-based responses, where modeled speed uncertainty depended on motion duration and positional uncertainty on target speed. The comparison between simulated and observed responses revealed that a single position-tracking mechanism could account for both spatial and temporal patterns, providing a unified computational explanation.
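The abstract describes the model only at a high level, so below is a minimal, illustrative sketch of a constant-velocity Kalman filter that jointly tracks position and speed from noisy position samples, in the spirit of the model summarized above. The function name (`kalman_track`), the noise parameters, and the example values are assumptions for illustration; the paper's actual parameterization (how speed uncertainty scales with motion duration and positional uncertainty with target speed) is specified in the full text at the DOI below, not here.

```python
import numpy as np

def kalman_track(observations, dt, meas_noise_var, accel_noise_var=1e-3):
    """Constant-velocity Kalman filter over noisy position observations.

    State x = [position, speed]. Returns the filtered state estimates,
    from which a simulated observer could read out either a spatial
    quantity (estimated position) or a temporal one (remaining distance
    divided by estimated speed). Illustrative sketch, not the paper's code.
    """
    # State transition: position advances by speed * dt, speed stays constant.
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    # Only position is observed.
    H = np.array([[1.0, 0.0]])
    # Process noise (random acceleration) and measurement noise (assumed values).
    Q = accel_noise_var * np.array([[dt**4 / 4, dt**3 / 2],
                                    [dt**3 / 2, dt**2]])
    R = np.array([[meas_noise_var]])

    x = np.array([[observations[0]], [0.0]])  # initial state guess
    P = np.eye(2)                             # initial uncertainty
    estimates = []
    for z in observations:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new position measurement.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.ravel().copy())
    return np.array(estimates)

# Example: a target moving at 5 deg/s observed with positional noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dt, speed, n = 0.01, 5.0, 100
    true_pos = speed * dt * np.arange(n)
    noisy_pos = true_pos + rng.normal(0.0, 0.2, n)
    est = kalman_track(noisy_pos, dt, meas_noise_var=0.2**2)
    print("final position/speed estimate:", est[-1])
```

Under these assumptions, a space-based response could be simulated by stopping when the estimated position reaches the goal, and a time-based one by stopping when the estimated remaining distance divided by the estimated speed reaches zero.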
Note: | Reproduction of the document published at: https://doi.org/10.1167/18.12.12
It is part of: | Journal of Vision, 2018, vol. 18, num. 12, p. 1-19 |
URI: | https://hdl.handle.net/2445/148559 |
Related resource: | https://doi.org/10.1167/18.12.12 |
ISSN: | 1534-7362 |
Appears in Collections: | Articles publicats en revistes (Cognició, Desenvolupament i Psicologia de l'Educació) |
Files in This Item:
File | Description | Size | Format
---|---|---|---
683781.pdf | | 7.33 MB | Adobe PDF
This item is licensed under a Creative Commons License.