Studying Perceptual Tasks in Different Domains using Eye-Movements

May 9, 2014 at 12:00 pm

Place: Large Lecture Room
Affiliation: German Institute of Artificial Intelligence / University of Kaiserslautern, Germany


Perception is the process of gleaning information about the world in order to interact with it effectively. It encompasses, for example, deciding where to look, how we read, or how we edit photos. Scientific investigation of perceptual processes typically relies on procedures such as explicit interviews for recording phenomenological experiences, or on manual response measures such as reaction time and percentage of correct responses as a function of some change in a stimulus property. These are coarse measures based on the final percept and do not provide a detailed sampling of the cognitive processes involved in acquiring information from the stimulus. Eye-movement measures provide a richer sampling of how information collection depends on changing stimulus properties. I will present eye-movement-based studies in domains that are crucial for bridging the gap between human and computer vision. First, I will present a study on the development of an implicit measure, based on saccadic metrics, of the strength of factors that bias perceptual grouping. Second, I will discuss an eye-movement study investigating which contextual-probability definition of information content best predicts the reading of sentences. Finally, I will present preliminary data on capturing the implicit knowledge of photo-editing experts. This technique supplements information from explicit interviews, which do not provide enough detail about features that can be directly used to train novices or to develop image-processing software that models expert behavior. Such studies on extracting implicit features will help develop better technologies for human-computer interaction.


http://www.sowi.uni-kl.de/psychology-of-perception


Watch the Video Presentation
