Eye data quality varies greatly with the eyetracking system, the individual characteristics of participants, the recording environment, and the operator. The quality of the signal has important repercussions for which eye movement measures are valid, and what conclusions can be drawn from them. Reading research often has high demands in terms of spatial precision and accuracy. Reliably detecting events (e.g., fixations, saccades, pursuit, blinks) depends strongly on intrinsic instrument noise and the correct application of parsing algorithms. We define standard measures of data quality across commercial systems and screen area, as well as system robustness to individual variation, from a large data collection. We show differences in the results from parsing algorithms with varying eye data quality, and infer predictive models of data quality as a function of employed systems, operator, and characteristics of the recorded eye across 12 tower mounted and remote eyetracking systems, an...
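For concreteness, the two data-quality measures most often reported, accuracy (offset of the mean gaze position from a known target) and precision (RMS sample-to-sample deviation), can be computed as in this minimal Python sketch; the function and variable names are illustrative, not taken from the paper:

```python
import math

def accuracy_and_precision(gaze, target):
    """Two standard eye-data-quality measures for fixation on a known target.

    accuracy:  distance (deg) from the mean gaze position to the target
    precision: RMS of sample-to-sample displacements (deg)
    gaze is a list of (x, y) samples; target is an (x, y) position."""
    n = len(gaze)
    mx = sum(p[0] for p in gaze) / n
    my = sum(p[1] for p in gaze) / n
    accuracy = math.hypot(mx - target[0], my - target[1])
    steps = [(gaze[i + 1][0] - gaze[i][0]) ** 2 +
             (gaze[i + 1][1] - gaze[i][1]) ** 2 for i in range(n - 1)]
    precision = math.sqrt(sum(steps) / len(steps))
    return accuracy, precision

# toy fixation data in degrees of visual angle
samples = [(1.0, 0.0), (1.1, 0.0), (1.0, 0.1), (1.1, 0.1)]
acc, prec = accuracy_and_precision(samples, target=(1.05, 0.05))
```

Both quantities are conventionally reported in degrees of visual angle, which is why event-detection thresholds (for fixations and saccades) must be matched to the measured noise level.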
Over the last two decades much has been learned about human ability to detect and discriminate information within local patches of image motion. Indeed, a model consisting of spatio-temporal motion energy detectors is an excellent descriptor of both human low-level motion perception and visual processing within the primary visual cortex of the primate brain. This simple quasilinear view, however, does not adequately describe human performance in motion tasks which require either global integration across the image or segmentation of multiple overlapping motions presented simultaneously, yet these two abilities are essential components of many real-world aerospace tasks. The overall goal of this project is to measure human performance in motion integration and segmentation tasks and to develop and test computational models of human performance. The specific aim is to examine the mechanisms (abilities and limitations) of the human brain 1) to perform the global spatio-temporal integrat...
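The motion-energy idea can be illustrated with a stripped-down sketch: correlate a sampled space-time signal with quadrature pairs of rightward- and leftward-oriented space-time sinusoids and compare the resulting energies. This is a toy version with a single frequency-matched detector pair, not the full Gabor filter bank of the standard model:

```python
import math

def motion_energy(stim, k, w, nx, nt):
    """Opponent motion energy for a sampled space-time signal stim[x][t].

    Correlates the signal with quadrature pairs of rightward (k*x - w*t)
    and leftward (k*x + w*t) space-time sinusoids and returns the two
    energies (sum of squared quadrature responses)."""
    er_c = er_s = el_c = el_s = 0.0
    for ix in range(nx):
        for it in range(nt):
            s = stim[ix][it]
            er_c += s * math.cos(k * ix - w * it)
            er_s += s * math.sin(k * ix - w * it)
            el_c += s * math.cos(k * ix + w * it)
            el_s += s * math.sin(k * ix + w * it)
    return er_c ** 2 + er_s ** 2, el_c ** 2 + el_s ** 2

# a rightward-drifting grating yields strong rightward, ~zero leftward energy
nx, nt = 32, 32
k, w = 2 * math.pi / 16, 2 * math.pi / 8
stim = [[math.sin(k * ix - w * it) for it in range(nt)] for ix in range(nx)]
E_R, E_L = motion_energy(stim, k, w, nx, nt)
```

The global integration and segmentation tasks studied in the project go beyond this local scheme: they require combining or separating the outputs of many such detectors across space and direction.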
When small, closely spaced dots are moved closer together, they appear brighter. The effect is observed for dot spacings of half a degree of visual angle or less and can be cancelled either by dimming the dots themselves or adding a compensating modulation to the background. The effects are easily observed and challenge current models of brightness perception. This illusion demonstrates that we are unable to judge the brightness of a small dot independently of the total amount of light falling in its local neighborhood. All of the dots are rendered with the same intensity, but the dots appear brighter in the regions where the density is higher.
Objective: Our goals were to compare three techniques for performing a psychomotor vigilance task (PVT) on a touch screen device (fifth-generation iPod) and to determine the device latency. Background: The PVT is a reaction-time test that is sensitive to sleep loss and circadian misalignment. Several PVT tests have been developed for touch screen devices, but unlike the standard PVT developed for laboratory use, these tests allow for touch responses to be recorded at any location on the device, with contact from any finger. In addition, touch screen devices exhibit latency in processing time between the touch response and the time registered by the device. Method: Thirteen participants completed a 5-min PVT on a touch screen device held in three positions (on a table with index finger, handheld portrait with index finger, handheld landscape with thumb). We compared reaction-time outcomes in each orientation condition using paired t tests. We recorded the first session using a high-s...
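The paired t tests used to compare reaction-time outcomes between orientation conditions reduce to a simple computation on the per-participant differences; a minimal sketch (the data here are made-up placeholders, not the study's measurements):

```python
import math

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for matched samples,
    e.g. per-participant reaction-time outcomes in two conditions."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# placeholder per-participant outcomes in two conditions (arbitrary units)
cond_a = [1.1, 2.0, 3.2, 4.1, 5.3]
cond_b = [1.0, 2.1, 3.0, 4.0, 5.0]
t, df = paired_t(cond_a, cond_b)
```

The resulting t value is then looked up against the t distribution with df degrees of freedom to obtain a p-value.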
Progress on Flight Video Data Analyses for Assessment of PVFR Routes and SNI Operations for Rotorcraft. Jeffrey B. Mulligan, NASA Ames Research Center, MS 262-2, Moffett Field, CA 94035. ...
ABSTRACT Space operations present the human visual system with a wide dynamic range of images from faint stars and starlit shadows to un-attenuated sunlight. Lunar operations near the poles will result in low sun angles, exacerbating visual problems associated with shadowing and glare. We discuss the perceptual challenges these conditions will present to the human explorers, and consider some possible mitigations and countermeasures. We also discuss the problems of simulating these conditions for realistic training.
In this paper, we will assume that the palette is fixed, which is the case for many printers and liquid crystal displays. When the palette is one which is "separable" in the red, green and blue components (i.e., a given level of one phosphor may be displayed regardless of the states of the other phosphors), then a simple approach is to apply our favorite achromatic dithering algorithm to the red, green and blue component images. We shall refer to this as the "independent component" method, since the resulting dither image for the red component does not depend on the values in the green or blue component images. (This method may not be suitable for printers, since the inks in general will not combine additively.) A weakness of the independent component method (as well as most other standard methods) is that it does not exploit the fact that the human visual system has relatively poor acuity for chromatic signals which do not vary in luminance. Humans can ...
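The independent-component method can be sketched directly: run an ordered dither on each plane in isolation. Here a 2x2 Bayer matrix stands in for the achromatic dithering algorithm; the matrix and function names are illustrative:

```python
BAYER2 = [[0.0, 0.5],    # 2x2 ordered-dither thresholds in [0, 1)
          [0.75, 0.25]]

def dither_channel(img):
    """Ordered (Bayer) dithering of one component image (values in
    [0, 1]); returns a binary image of the same size."""
    return [[1 if img[y][x] > BAYER2[y % 2][x % 2] else 0
             for x in range(len(img[0]))]
            for y in range(len(img))]

def dither_rgb(r, g, b):
    """'Independent component' dithering: each plane is dithered
    without reference to the other two."""
    return dither_channel(r), dither_channel(g), dither_channel(b)

# a uniform mid-gray plane dithers to a 50% checker-like pattern
gray = [[0.5, 0.5], [0.5, 0.5]]
out = dither_channel(gray)
```

Because each plane is quantized independently, the dither noise in the three planes is uncorrelated, which is exactly the property the passage criticizes: it ignores the visual system's lower acuity for purely chromatic error.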
ABSTRACT A network learning algorithm that computes interpolation functions for translation invariance is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights to output units affected by the loss.
Abstract. In some computer vision applications, we may need to analyze large numbers of similar frames depicting various aspects of an event. In this situation, the appearance may change significantly within the sequence, hampering efforts to track particular features. Active shape models ...
Digital technique for generation of slowly moving video image of sinusoidal grating avoids difficulty of transferring full image data from disk storage to image memory at conventional frame rates. Depends partly on trigonometric identity by which moving sinusoidal grating decomposed into two stationary patterns spatially and temporally modulated in quadrature. Makes motion appear smooth, even at speeds much less than one-tenth picture element per frame period. Applicable to digital video system in which image memory consists of at least 2 bits per picture element, and final brightness of picture element determined by contents of "lookup-table" memory programmed anew each frame period and indexed by coordinates of each picture element.
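The underlying identity is sin(kx - wt) = sin(kx)cos(wt) - cos(kx)sin(wt): the two stationary quadrature patterns are stored once, and only two per-frame weights change, mimicking the reprogrammed lookup table. A minimal Python sketch (names are illustrative):

```python
import math

def grating_frame(nx, k, omega, t):
    """One frame of a drifting sinusoid built from two stationary
    quadrature patterns and two per-frame weights:
      sin(k*x - omega*t) = sin(k*x)*cos(omega*t) - cos(k*x)*sin(omega*t)
    pat_s and pat_c play the role of the stored image data; a and b
    play the role of the per-frame lookup-table reprogramming."""
    pat_s = [math.sin(k * x) for x in range(nx)]  # stored once
    pat_c = [math.cos(k * x) for x in range(nx)]  # stored once
    a, b = math.cos(omega * t), -math.sin(omega * t)
    return [a * s + b * c for s, c in zip(pat_s, pat_c)]

frame = grating_frame(8, 0.5, 0.3, 2.0)
```

Because only the two weights are updated each frame, the phase (and hence the apparent position) of the grating can advance by arbitrarily small amounts without transferring any new image data.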
We have shown that moving a plaid in an asymmetric window biases the perceived direction of motion (Beutter, Mulligan & Stone, ARVO 1994). We now explore whether these biased motion signals might also drive the smooth eye-movement response by comparing the perceived and tracked directions. The human smooth oculomotor response to moving plaids appears to be driven by the perceived rather than the veridical direction of motion. This suggests that human motion perception and smooth eye movements share underlying neural motion-processing substrates as has already been shown to be true for monkeys.
The barber pole illusion is a well-known example of how the perceived direction of motion of an inherently ambiguous one-dimensional pattern is influenced by the shape of the area covered by the pattern. Similar effects may be observed for a stimulus which is restricted to a narrow band of spatial frequencies: when a sinusoidal grating is drifted behind a two-dimensional Gaussian contrast window having unequal standard deviations, the direction of perceived motion is biased in the direction of the major axis of the elliptical window (Mulligan, ARVO 1991). We have extended these results to provide insight into possible mechanisms responsible for the effect.
Although human motion sensing mechanisms can be stimulated by rapid changes in the visual input, the response of the motion mechanisms themselves is rather sluggish, i.e., they cannot track rapid changes in the speed or direction of a moving stimulus when there are no positional cues to the variation. We tested the ability of human observers to detect sinusoidal perturbations in the velocity of a moving stimulus at a variety of temporal frequencies and drift velocities; the results show a low-pass temporal characteristic, and performance is degraded at high velocities.
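A stimulus of this kind can be described by a velocity v(t) = v0 + A sin(2*pi*f*t); integrating analytically gives the position trace, as in this illustrative sketch (parameter names are assumptions, not the paper's notation):

```python
import math

def target_position(t, v0, amp, freq):
    """Position at time t of a target moving at base velocity v0 with a
    sinusoidal velocity perturbation amp*sin(2*pi*freq*t), integrated
    analytically from x(0) = 0."""
    w = 2.0 * math.pi * freq
    return v0 * t + (amp / w) * (1.0 - math.cos(w * t))

# sanity check: the numerical derivative recovers v(t) = v0 + amp*sin(w*t)
t0, v0, amp, freq = 0.3, 6.6, 1.0, 2.0
h = 1e-6
v_num = (target_position(t0 + h, v0, amp, freq) -
         target_position(t0 - h, v0, amp, freq)) / (2 * h)
```

Detection thresholds are then measured as a function of the perturbation frequency `freq` and the base velocity `v0`.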
Fundus images provide high optical gain for eye movement tracking, i.e. large image displacements occur as a result of small eye rotations. Subpixel registration techniques can provide resolution better than 1 arc minute using images acquired with a CCD camera. Ocular torsion may also be estimated, with a precision of approximately 0.1 degree. This talk will discuss the software algorithms used to attain this performance.
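One common way to achieve subpixel registration is to locate the integer cross-correlation peak and refine it with a parabolic fit through the peak and its two neighbors. The 1-D sketch below illustrates the idea; the actual algorithms discussed in the talk may differ:

```python
import math

def subpixel_shift(a, b, max_shift=3):
    """Shift of signal b relative to a, from the integer cross-correlation
    peak refined by a parabolic fit through the peak and its neighbors."""
    shifts = list(range(-max_shift, max_shift + 1))
    cc = [sum(a[i] * b[i + s] for i in range(len(a)) if 0 <= i + s < len(b))
          for s in shifts]
    k = cc.index(max(cc))
    frac = 0.0
    if 0 < k < len(cc) - 1:
        ym, y0, yp = cc[k - 1], cc[k], cc[k + 1]
        frac = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)  # parabola vertex
    return shifts[k] + frac

# two Gaussian blobs displaced by 1.5 samples
a = [math.exp(-(i - 12.0) ** 2 / 8.0) for i in range(25)]
b = [math.exp(-(i - 13.5) ** 2 / 8.0) for i in range(25)]
shift = subpixel_shift(a, b)
```

In 2-D the same refinement is applied along both axes of the correlation surface, which is how displacements well below one pixel (and hence below 1 arc minute, given the fundus image's optical gain) become measurable.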
In the analysis of visual motion, local features such as orientation are analyzed early in the cortical processing stream (V1), while integration across orientation and space is thought to occur in higher cortical areas such as MT, MST, etc. If all areas provide inputs to eye movement control centers, we would expect that local properties would drive eye movements with relatively short latencies, while global properties would require longer latencies. When such latencies are observed, they can provide information about when (and where?) various stimulus properties are analyzed. To this end, a stimulus was employed in which local and global properties determining perceived direction-of-motion could be manipulated independently: an elliptical Gabor patch with a drifting carrier, with variable orientation of the carrier grating and the contrast window. We have previously demonstrated that the directional percepts evoked by this stimulus vary between the "grating direction" (t...
The temporal dynamics of eye movement response to a change in direction of stimulus motion has been used to compare the processing speeds of different types of stimuli (Mulligan, ARVO '97). In this study, the pursuit response to colored targets was measured to test the hypothesis that the slow response of the chromatic system (as measured using traditional temporal sensitivity measures such as contrast sensitivity) results in increased eye movement latencies. Subjects viewed a small (0.4 deg) Gaussian spot which moved downward at a speed of 6.6 deg/sec. At a variable time during the trajectory, the dot's direction of motion changed by 30 degrees, either to the right or left. Subjects were instructed to pursue the spot. Eye movements were measured using a video ophthalmoscope with an angular resolution of approximately 1 arc min and a temporal sampling rate of 60 Hz. Stimuli were modulated in chrominance for a variety of hue directions, combined with a range of small luminanc...
Purpose: In the analysis of visual motion, local features such as orientation are analyzed early in the cortical processing stream (V1), while integration across orientation and space is thought to occur in higher cortical areas such as MT, MST, etc. If all areas provide inputs to eye movement control centers, we would expect that local properties would drive eye movements with relatively short latencies, while global properties would require longer latencies. When such latencies are observed, they can provide information about when (and where?) various stimulus properties are analyzed. Methods: The stimulus employed was an elliptical Gabor patch with a drifting carrier, in which the orientations of the carrier grating and the contrast window were varied independently. We have previously demonstrated that the directional percepts evoked by this stimulus vary between the "grating direction" (the normal to the grating's orientation) and the "window direction", ...
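A stimulus of this kind, with carrier and window orientations set independently, can be generated as follows; this is an illustrative reconstruction with assumed parameter names, not the authors' code:

```python
import math

def elliptical_gabor(n, sx, sy, win_theta, grating_theta, freq, phase):
    """One frame of an elliptical Gabor whose carrier orientation and
    window orientation are set independently; drifting the carrier
    means advancing `phase` between frames.

    sx, sy: Gaussian standard deviations along the window's axes (px)
    win_theta, grating_theta: window and carrier-drift orientations (rad)
    freq: carrier spatial frequency (cycles/px)"""
    img = [[0.0] * n for _ in range(n)]
    c = n / 2.0
    cw, sw = math.cos(win_theta), math.sin(win_theta)
    cg, sg = math.cos(grating_theta), math.sin(grating_theta)
    for y in range(n):
        for x in range(n):
            dx, dy = x - c, y - c
            u = cw * dx + sw * dy          # along window major axis
            v = -sw * dx + cw * dy         # along window minor axis
            env = math.exp(-0.5 * ((u / sx) ** 2 + (v / sy) ** 2))
            carrier = math.sin(2 * math.pi * freq * (cg * dx + sg * dy) + phase)
            img[y][x] = env * carrier
    return img

g = elliptical_gabor(n=32, sx=8.0, sy=2.0, win_theta=0.0,
                     grating_theta=0.0, freq=0.125, phase=math.pi / 2)
```

Rotating `grating_theta` while holding `win_theta` fixed (or vice versa) dissociates the local "grating direction" cue from the global "window direction" cue.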
We have previously shown that windowing a drifting plaid with an asymmetric spatial Gaussian produces systematic biases of similar magnitude in both the perceived direction of motion and the direction of the eye movement response. To further investigate this, we simultaneously measured the psychophysical and eye-movement responses (with an ISCAN RK426 IR tracker) to drifting plaids in a direction discrimination task. Three observers were instructed to track a plaid (Type I, 90 deg.; TF = 4 Hz, SF = 0.6 c/d; windowed by a circular spatial Gaussian) and to respond whether the motion was to the right or left of pure vertical. Three plaid directions were presented (-2, 0, 2 deg. with respect to straight down), so there was uncertainty in the perceptual judgements. This enabled us to examine the trial-by-trial relationship between the eye movements and psychophysical responses. The eye-movement direction was computed as the slope of the best fitting line to the initial 300 ms of saccade...
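The direction estimate described, the slope of a best-fitting line over an initial time window, amounts to least-squares velocity estimates for the horizontal and vertical position traces; a sketch with illustrative names:

```python
import math

def pursuit_direction(t, x, y):
    """Eye-movement direction (deg, CCW from rightward) from the slopes
    of least-squares lines fitted to the horizontal and vertical
    eye-position traces over a time window (e.g., the initial 300 ms)."""
    n = len(t)
    tm = sum(t) / n
    denom = sum((ti - tm) ** 2 for ti in t)
    vx = sum((ti - tm) * xi for ti, xi in zip(t, x)) / denom
    vy = sum((ti - tm) * yi for ti, yi in zip(t, y)) / denom
    return math.degrees(math.atan2(vy, vx))

# synthetic 45-degree trajectory sampled at 60 Hz over 300 ms
ts = [i / 60.0 for i in range(18)]
direction = pursuit_direction(ts, [v for v in ts], [v for v in ts])
```

Comparing this per-trial direction estimate with the observer's left/right report is what enables the trial-by-trial analysis described above.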
The classic "barber-pole illusion" demonstrates how the perceived motion of an ambiguous stimulus is influenced by the shape of the viewing aperture: the motion is preferentially seen in the direction of the long axis of the aperture. We have previously reported (ARVO 94) that we are able to continuously vary the strength of this effect by varying the aspect ratio of the window. In the present study we examined the motion of the eyes while subjects viewed such stimuli in an attempt to discover whether the eye movement control system performs the same computation underlying the perceptual judgments.
Our results show that the perceived direction of motion of plaids windowed by asymmetric spatial Gaussians is biased toward the long axis of the window. The bias increases as the relative angle between the plaid motion and the window increases, peaks at a relative angle of about 40 degrees, and then decreases. The peak bias was 14 degrees for a spatial frequency of 0.6 cpd and a window aspect ratio of 4.0. The biases increase as the window is elongated and decrease as the component spatial frequency increases. We tested the predictions of several models of human motion processing (cross correlation, motion energy, intersection of constraints, and vector sum), and show that none of these can predict our data. These results suggest that spatial integration of motion signals plays a crucial role in the perception of plaid motion.
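Two of the tested predictions are easy to state exactly: the intersection-of-constraints (IOC) velocity is the unique vector consistent with each component's normal velocity, while the vector-sum rule simply adds the component velocity vectors. A sketch comparing the two (the angles and speeds in the example are arbitrary, not the paper's stimulus values):

```python
import math

def plaid_predictions(th1, s1, th2, s2):
    """Predicted plaid direction (deg, CCW from rightward) under the
    intersection-of-constraints rule vs the vector sum of the two
    component normal velocities. th* in radians, s* are speeds."""
    n1 = (math.cos(th1), math.sin(th1))
    n2 = (math.cos(th2), math.sin(th2))
    # IOC: the unique v with v.n1 = s1 and v.n2 = s2
    det = n1[0] * n2[1] - n1[1] * n2[0]
    vx = (s1 * n2[1] - s2 * n1[1]) / det
    vy = (n1[0] * s2 - n2[0] * s1) / det
    ioc = math.degrees(math.atan2(vy, vx))
    # vector sum of the component velocities s1*n1 + s2*n2
    vs = math.degrees(math.atan2(s1 * n1[1] + s2 * n2[1],
                                 s1 * n1[0] + s2 * n2[0]))
    return ioc, vs

# components moving at 0 and 45 deg with unequal speeds:
# the two rules now disagree about the plaid's direction
ioc, vs = plaid_predictions(0.0, 1.0, math.radians(45.0), 0.5)
```

Neither prediction depends on the window shape, which is one reason such models cannot account for the window-induced biases reported above.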
Eye movements can provide a wealth of information about how human operators perceive and process visual information. Video-based measurement systems offer considerable advantages over competing approaches for use in applied contexts outside the laboratory: first, they do not require physical contact with the eye, and second, they do not restrict the subject's movement. This research investigates image processing methods in an effort to obtain greater accuracy from video images of the eye. Currently available commercial systems use special hardware to compute eye position in real time from images of the eye's anterior structures (first figure). To attain real-time performance, these systems must use relatively simple processing algorithms. This project is concerned with maximizing the final accuracy by the application of more sophisticated image processing, even when the calculations cannot be performed in real time on present-day microcomputers.
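A representative example of the simple real-time algorithms in question is estimating the pupil center as the centroid of below-threshold (dark) pixels; the sketch below is generic, not any specific commercial system's method:

```python
def pupil_centroid(img, threshold):
    """Pupil-center estimate: centroid (x, y) of pixels darker than
    threshold in a grayscale eye image (nested lists, values in [0, 1])."""
    sx = sy = n = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v < threshold:
                sx += x
                sy += y
                n += 1
    return (sx / n, sy / n) if n else None

# toy image: a 2x2 dark 'pupil' inside a bright field
eye = [[1, 1, 1, 1],
       [1, 0, 0, 1],
       [1, 0, 0, 1],
       [1, 1, 1, 1]]
center = pupil_centroid(eye, 0.5)
```

More sophisticated offline processing, e.g. fitting the pupil boundary or registering iris texture, can improve on this centroid estimate at the cost of computation time.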
Visual display systems provide critical information to pilots, astronauts, and air traffic controllers. The goal of this research project is to develop precise and reliable quantitative metrics of human performance based on nonintrusive eye-movement monitoring that can be used in applied settings. The specific aims are (1) to refine the hardware, optics, and software of eye-trackers to allow the nonintrusive acquisition of high-temporal and high-spatial precision eye-position data; (2) to measure quantitatively the links between eye-movement data and perceptual-performance data during tracking and search tasks; and (3) to develop biologically based computational models of human perceptual and eye-movement performance. Validated quantitative models of human visual perception and eye-movement performance will assist in designing computer and other display systems optimized for specific human tasks, in the development of eye-movement-controlled machine interfaces, and in the evolution...
When two moving patterns are combined additively, observers often perceive two transparent surfaces, even when there are no cues supporting this segmentation in a frozen snapshot. We examined the ability of observers to make quantitative judgments about the speed of one of the patterns under these conditions. The component patterns consisted of band-pass filtered random noise presented in a spatial Gaussian contrast envelope, presented for 250 ms. On each trial a standard pattern appeared on one side of the fixation point, while a test pattern appeared on the other. The test pattern moved in the same direction as the standard, but with a speed which varied from trial to trial using a staircase procedure. The subjects' task was to report the side of the fixation point on which faster motion was seen. In some conditions the test stimulus was made to appear transparent by adding a mask pattern. When the mask was stationary, or moved slowly with respect to the test, no significant b...
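A staircase of the kind described adjusts the test speed trial by trial based on the observer's response; this generic one-up/one-down sketch (not necessarily the paper's exact rule) converges toward the point of subjective equality:

```python
def run_staircase(respond, start, step, n_trials):
    """One-up/one-down staircase on the test speed: step down after a
    'test faster' response, up after a 'test slower' response.
    respond(speed) -> True if the test is judged faster than the standard."""
    speed = start
    history = []
    for _ in range(n_trials):
        history.append(speed)
        speed += -step if respond(speed) else step
    return history

# deterministic 'observer' whose point of subjective equality is 5.0
history = run_staircase(lambda s: s > 5.0, start=8, step=1, n_trials=10)
```

With a real observer the responses are stochastic, and the staircase's reversal points are averaged to estimate the perceived speed match in each masking condition.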
Subjects viewing a drifting noise pattern make reflexive smooth eye movements in the direction of motion, which follow rapid changes in movement direction. These responses are unaffected by rotations of the pattern, suggesting that there is no coupling between visually sensed rotation and the direction of ocular following.
