J Vision
-
When deciding whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. ⋯ These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
-
When two fields of dots with different directions of movement are presented in tandem, the perceived direction of one is biased by the presence of the other. Although this "direction illusion" typically involves repulsion, with an exaggeration of the perceived angular difference in direction between the dot fields, attraction effects, where the perceived difference is reduced, have also been found under certain presentation conditions. ⋯ The magnitude and sign of the direction illusion differed substantially from those reported in earlier research. Furthermore, there was significant interindividual variability, with dichoptic presentation producing an attractive rather than repulsive direction illusion in some participants.
-
The sound-induced flash illusion (SIFI) is a multisensory perceptual phenomenon in which the number of brief visual stimuli perceived by an observer is influenced by the number of concurrently presented sounds. While the strength of this illusion has been shown to be modulated by the temporal congruence of the stimuli from each modality, there is conflicting evidence regarding its dependence upon their spatial congruence. We addressed this question by examining SIFIs under conditions in which the spatial reliability of the visual stimuli was degraded and different sound localization cues were presented using either free-field or closed-field stimulation. ⋯ SIFIs were more common for small flashes than for large flashes, and for small flashes at peripheral locations, subjects experienced a greater number of illusory fusion events than fission events. However, the SIFI was not dependent on the spatial proximity of the audiovisual stimuli, but was instead determined primarily by differences in subjects' underlying sensitivity across the visual field to the number of flashes presented. Our findings indicate that the influence of auditory stimulation on visual numerosity judgments can occur independently of the spatial relationship between the stimuli.
-
Comparative Study
Color-detection thresholds in rhesus macaque monkeys and humans.
Macaque monkeys are a model of human color vision. To facilitate linking physiology in monkeys with psychophysics in humans, we directly compared color-detection thresholds in humans and rhesus monkeys. Colors were defined by an equiluminant plane of cone-opponent color space. ⋯ These asymmetries may reflect differences in retinal circuitry for the S-ON and S-OFF pathways. At plateau performance, the two species also had similar detection thresholds for all colors, although monkeys had shorter reaction times than humans and slightly lower thresholds for colors that modulated L/M cones. We discuss whether these observations, together with previous work showing that monkeys have lower spatial acuity than humans, could be accounted for by selective pressures driving higher chromatic sensitivity at the cost of spatial acuity amongst monkeys, specifically for the more recently evolved L − M mechanism.
-
Does viewing task influence gaze during dynamic scene viewing?
Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing, we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). ⋯ In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene, but that endogenous control is slow to take hold, as initial saccades default toward the screen center, areas of high motion, and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic scenes than of static scenes, but that this may be due to a natural correlation between regions of interest (e.g., people) and motion.