Center for Neurobiology of Vision
858.453.4100 ext 1014 | office
I am a vision scientist interested in foundations of perceptual psychology and sensory neuroscience. Much of my research dwells on the interface between two aspects of vision: the entry process called early vision and the constructive process called perceptual organization.
I study the computational principles and biological mechanisms underlying these processes, in particular how visual systems organize information for the perception of motion and change.
I also study sensorimotor integration: how visual and haptic information is used to guide action. Currently I am trying to understand how vision helps us plan actions prospectively, many steps ahead, given the dynamic nature of the environment and its varying uncertainties and risks.
As a staff scientist and a principal investigator at the Salk Institute for Biological Studies, I use experimental and computational methods to characterize neuronal mechanisms of sensation, perception, and action. And as a founding member of the 5D Institute, I am increasingly involved in translational studies and the design of visual media and built environments.
Gepshtein S, Lesmes LA & Albright TD (2013). Sensory adaptation as optimal resource allocation. Proceedings of the National Academy of Sciences, USA 110 (11), 4368-4373.
Visual adaptation is expected to improve visual performance in the new environment. This expectation has been contradicted by evidence that adaptation sometimes decreases sensitivity to the adapting stimuli, and sometimes changes sensitivity to stimuli very different from the adapting ones. We hypothesize that this pattern of results can be explained by a process that optimizes sensitivity for many stimuli, rather than changing sensitivity only for those stimuli whose statistics have changed. To test this hypothesis, we measured visual sensitivity across a broad range of spatiotemporal modulations of luminance while varying the distribution of stimulus speeds. The manipulation of stimulus statistics caused a large-scale reorganization of visual sensitivity, forming an orderly pattern of sensitivity gains and losses. This pattern is predicted by a theory of the distribution of receptive field characteristics in the visual system.
Jurica P, Gepshtein S, Tyukin I & van Leeuwen C (2013). Sensory optimization by stochastic tuning. Psychological Review 120 (4), 798-816. doi: 10.1037/a0034192.
Individually, visual neurons are selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in the strengths of synaptic connections entail fluctuations in the parameters of neural receptive fields. The fluctuations correlate with the uncertainty of sensory measurement in individual neurons: the higher the uncertainty, the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer the sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics, and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance, and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation.
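The core mechanism can be illustrated with a toy random-walk simulation (entirely my own construction, not the authors' model): if a cell's tuning parameter fluctuates with an amplitude set by its local measurement uncertainty, cells accumulate in low-uncertainty regions without any supervision, feedback, or coupling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2000 cells, each with a scalar tuning parameter x in
# [0, 1]; measurement uncertainty is assumed to grow linearly with x.
n_cells, n_steps = 2000, 4000
x = rng.uniform(0.0, 1.0, n_cells)

def uncertainty(x):
    return 0.02 + 0.2 * x  # illustrative uncertainty profile

# Unsupervised stochastic updating: each cell's tuning fluctuates with an
# amplitude proportional to its current measurement uncertainty.
for _ in range(n_steps):
    x = x + uncertainty(x) * rng.normal(size=n_cells)
    x = np.clip(x, 0.0, 1.0)  # keep tuning within the feasible range

# Cells end up concentrated where uncertainty is low, even though no cell
# receives feedback about system performance.
fraction_low = float((x < 0.5).mean())
print(round(fraction_low, 2))
```

The design point is that the drift toward the optimal allocation emerges from state-dependent noise alone, which is why no explicit objective function appears anywhere in the update loop.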
Gepshtein S (2010). Two psychologies of perception and the prospect of their synthesis. Philosophical Psychology 23 (2), 217-281.
Two traditions have had a great impact on the theoretical and experimental research of perception. One tradition is statistical, stretching from Fechner's enunciation of psychophysics in 1860 to the modern view of perception as statistical decision making. The other tradition is phenomenological, from Brentano's "empirical standpoint" of 1874 to the Gestalt movement and the modern work on perceptual organization. Each tradition has at its core a distinctive assumption about the indivisible constituents of perception: the just-noticeable differences of sensation in the tradition of Fechner vs. the phenomenological Gestalts in the tradition of Brentano. But some key results from the two traditions can be explained and connected using an approach that is neither statistical nor phenomenological. This approach rests on a basic property of any information exchange: a principle of measurement formulated in 1946 by Gabor as a part of his quantal theory of information. Here the indivisible components are units (quanta) of information that remain invariant under changes of precision of measurement. This approach has helped to explain how sensory measurements are implemented by single neural cells. But recent analyses suggest that this approach has the power to explain larger-scale characteristics of sensory systems.
Nikolaev AR, Gepshtein S, Gong P & van Leeuwen C (2010). Duration of coherence intervals in electrical brain activity in perceptual organization. Cerebral Cortex 20 (2), 365-382.
We investigated the relationship between visual experience and temporal intervals of synchronized brain activity. Using high-density scalp electroencephalography, we examined how synchronized activity depends on visual stimulus information and on individual observer sensitivity. In a perceptual grouping task, we varied the ambiguity of visual stimuli and estimated observer sensitivity to this variation. We found that durations of synchronized activity in the beta frequency band were associated with both stimulus ambiguity and sensitivity: the lower the stimulus ambiguity and the higher the individual observer's sensitivity, the longer the episodes of synchronized activity. Durations of synchronized activity intervals followed an extreme value distribution, indicating that they were limited by the slowest mechanism among the multiple neural mechanisms engaged in the perceptual task. Because the degree of stimulus ambiguity is (inversely) related to the amount of stimulus information, the durations of synchronous episodes reflect the amount of stimulus information processed in the task. We therefore interpreted our results as evidence that the alternating episodes of desynchronized and synchronized electrical brain activity reflect, respectively, the processing of information within local regions and the transfer of information across regions.
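The slowest-mechanism argument can be checked with a toy simulation (all parameters hypothetical): the maximum of many parallel completion times is right-skewed, as extreme value distributions are, even when each individual mechanism's timing is symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: on each trial, 50 parallel mechanisms finish at
# normally distributed times; the synchronized interval lasts until the
# slowest mechanism is done.
n_trials, n_mechanisms = 10_000, 50
finish = rng.normal(loc=1.0, scale=0.2, size=(n_trials, n_mechanisms))
durations = finish.max(axis=1)  # duration set by the slowest mechanism

# The parent distribution is symmetric (skewness 0), but the maxima are
# right-skewed, the signature of an extreme value distribution.
skew = float(((durations - durations.mean()) ** 3).mean() / durations.std() ** 3)
print(round(skew, 2))
```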
Gepshtein S & Kubovy M (2007). The lawful perception of apparent motion. Journal of Vision 7 (8):9, 1-15.
Visual apparent motion is the experience of motion from the successive stimulation of separate spatial locations. How spatial and temporal distances interact to determine the strength of apparent motion has been controversial. Some studies report space-time coupling: If we increase spatial or temporal distance between successive stimuli, we must also increase the other distance between them to maintain a constant strength of apparent motion (Korte's third law of motion). Other studies report space-time tradeoff: If we increase one of these distances, we must decrease the other to maintain a constant strength of apparent motion. In this article, we resolve the controversy. Starting from a normative theory of motion measurement and data on human spatiotemporal sensitivity, we conjecture that both coupling and tradeoff should occur, but at different speeds. We confirm the prediction in two experiments, using suprathreshold multistable apparent-motion displays called motion lattices. Our results show a smooth transition between the tradeoff and coupling as a function of speed: Tradeoff occurs at low speeds and coupling occurs at high speeds. From our data, we reconstruct the suprathreshold equivalence contours that are analogous to isosensitivity contours obtained at the threshold of visibility.
Gepshtein S, Tyukin I & Kubovy M (2007). The economics of motion perception and invariants of visual sensitivity. Journal of Vision 7 (8):8, 1-18.
Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.
Trommershäuser J, Gepshtein S, Maloney LT, Landy MS & Banks MS (2005). Optimal compensation for changes in task relevant movement variability. Journal of Neuroscience 25 (31), 7169-7178.
Effective movement planning should take into account the consequences of possible errors in executing a planned movement. These errors can result from either sensory uncertainty or variability in movement planning and production. We examined the ability of humans to compensate for variability in sensory estimation and movement production under conditions in which variability is increased artificially by the experimenter. Subjects rapidly pointed at a target region that had an adjacent penalty region. Target and penalty hits yielded monetary rewards and losses. We manipulated the task-relevant variability by perturbing visual feedback of finger position during the movement. The feedback was shifted in a random direction with a random amplitude in each trial, causing an increase in the task-relevant variability. Subjects were unable to counteract this form of perturbation. Rewards and penalties were based on the perturbed, visually specified finger position. Subjects rapidly acquired an estimate of their new variability in <120 trials and adjusted their aim points accordingly. We compared subjects' performance to the performance of an optimal movement planner maximizing expected gain. Their performance was consistent with that expected from an optimal movement planner that perfectly compensated for externally imposed changes in task-relevant variability. When exposed to novel stimulus configurations, aim points shifted in the first trial without showing any detectable trend across trials. These results indicate that subjects are capable of changing their pointing strategy in the presence of externally imposed noise. Furthermore, they manage to update their estimate of task-relevant variability and to transfer this estimate to novel stimulus configurations.
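The idea of a movement planner that maximizes expected gain can be sketched in a 1-D toy version (made-up payoffs and geometry, not the experiment's actual configuration): the planner's aim point shifts away from the penalty region as its task-relevant variability grows.

```python
from math import erf, sqrt

import numpy as np

# Toy 1-D layout (illustrative numbers): a reward zone [0, 9] worth +100
# and an adjacent penalty zone [-9, 0) worth -500; movement endpoints
# scatter around the aim point with Gaussian variability sigma.

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_gain(aim, sigma):
    p_reward = phi((9.0 - aim) / sigma) - phi((0.0 - aim) / sigma)
    p_penalty = phi((0.0 - aim) / sigma) - phi((-9.0 - aim) / sigma)
    return 100.0 * p_reward - 500.0 * p_penalty

def optimal_aim(sigma):
    # Grid search over candidate aim points for the maximum expected gain.
    aims = np.linspace(0.0, 9.0, 901)
    return float(aims[np.argmax([expected_gain(a, sigma) for a in aims])])

# Greater task-relevant variability pushes the optimal aim point farther
# from the penalty zone.
print(optimal_aim(1.0), optimal_aim(3.0))
```

This is the sense in which the subjects' shifted aim points can be read as compensation for the externally imposed increase in variability.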
Banks MS, Gepshtein S & Landy MS (2004). Why is spatial stereoresolution so low? Journal of Neuroscience 24 (9), 2077-2089.
Spatial stereoresolution (the finest detectable modulation of binocular disparity) is much poorer than luminance resolution (finest detectable luminance variation). In a series of psychophysical experiments, we examined four factors that could cause low stereoresolution: (1) the sampling properties of the stimulus, (2) the disparity gradient limit, (3) low-pass spatial filtering by mechanisms early in the visual process, and (4) the method by which binocular matches are computed. Our experimental results reveal the contributions of the first three factors. A theoretical analysis of binocular matching by interocular correlation reveals the contribution of the fourth: the highest attainable stereoresolution may be limited by (1) the smallest useful correlation window in the visual system, and (2) a matching process that estimates the disparity of image patches and assumes that disparity is constant across the patch. Both properties are observed in disparity-selective neurons in area V1 of the primate (Nienborg et al., 2004).
Gepshtein S & Banks MS (2003). Viewing geometry determines how vision and touch combine in size perception. Current Biology 13 (6), 483-488.
Vision and haptics have different limitations and advantages because they obtain information by different methods. If the brain combined information from the two senses optimally, it would rely more on the one providing more precise information for the current task. In this study, human observers judged the distance between two parallel surfaces in two within-modality experiments (vision-alone and haptics-alone) and in an intermodality experiment (vision and haptics together). We find that the combined size estimates are finer than is possible with either vision or haptics alone. Indeed, the combined estimates approach statistical optimality.
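The optimality benchmark here is the standard maximum-likelihood cue-combination rule: each cue is weighted by its relative reliability (inverse variance), and the combined variance is smaller than either single-cue variance. A minimal sketch with made-up numbers:

```python
def combine(est_v, var_v, est_h, var_h):
    """Reliability-weighted (maximum-likelihood) combination of two cues."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)  # visual weight
    est = w_v * est_v + (1.0 - w_v) * est_h            # combined estimate
    var = 1.0 / (1.0 / var_v + 1.0 / var_h)            # combined variance
    return est, var

# Illustrative values only: vision is the more reliable cue here, so the
# combined estimate lies closer to the visual estimate, and the combined
# variance is below both single-cue variances.
est, var = combine(est_v=50.0, var_v=4.0, est_h=54.0, var_h=8.0)
print(est, var)
```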
Gepshtein S & Kubovy M (2000). The emergence of visual objects in space-time. Proceedings of the National Academy of Sciences, USA 97 (14), 8186-8191.
It is natural to think that in perceiving dynamic scenes, vision takes a series of snapshots. Motion perception can ensue when the snapshots are different. The snapshot metaphor suggests two questions: (i) How does the visual system put together elements within each snapshot to form objects? This is the spatial grouping problem. (ii) When the snapshots are different, how does the visual system know which element in one snapshot corresponds to which element in the next? This is the temporal grouping problem. The snapshot metaphor is a caricature of the dominant model in the field (the sequential model) according to which spatial and temporal grouping are independent. The model we propose here is an interactive model, according to which the two grouping mechanisms are not separable.
Prospective optimization
We studied how humans optimize action over multiple future steps in dynamic risky environments. We measured how rapidly healthy adult subjects could recompute the future course of action as new information gradually entered the scope of foreseeable action.
We found that the scope of the future over which our subjects computed future actions was flexible. The scope of computation increased as the task difficulty decreased. But this flexibility had a cost: the larger the scope of computation, the lower the ability to use immediate information. Even so, the subjects used all the available information: our analyses showed that they did not rely on heuristics such as only seeking the large-gain steps or only avoiding the small-gain steps. Instead, our findings revealed a sophisticated strategy of prospective optimization that allocates limited computational resources so as to take advantage of all the information at hand and to balance immediate and delayed rewards.
Lee D, Snider J, Poizner H and Gepshtein S | early report: SfN 2012
Visual perception is adaptive: it depends on the previous visual stimulation. The adaptive change is mediated by synaptic plasticity of individual neural cells whose behavior is stochastic. Here we show how the stochastic activity of individual cells leads to stochastic updating of their tuning, and how these changes are sufficient to explain some previously puzzling results from behavioral and physiological studies of visual perception.
Gepshtein S, Jurica P, Tyukin I, van Leeuwen C and Albright TD | early report: SfN 2012
The spatiotemporal contrast sensitivity function (the 'Kelly function') provides a broad summary of human visual sensitivity used in basic and clinical studies of vision. We ask which features of the Kelly function are invariant across tasks and measurement procedures. Knowing which aspects of sensitivity are invariant facilitates the comprehensive assessment of contrast sensitivity and the changes of sensitivity caused by adaptation or disease.
We isolate those aspects of the Kelly function that do not vary across observers, tasks, and experimental procedures. This allows us to advance specific prescriptions for efficient estimation of sensitivity. In particular, we propose that the method of estimation ought to incorporate measurement of the width (or the slope) of the underlying psychometric functions, or no assumption should be made about the width.
Laddis PA, Lesmes LA, Gepshtein S and Albright TD
The spatiotemporal contrast sensitivity function describes visual sensitivity to moving or flickering gratings across the entire range of visible spatial and temporal frequencies of luminance modulation. In spite of its value for the assessment of spatiotemporal vision, the long testing time required for assaying the entire function has often forced researchers to confine measurements to representative sections of spatiotemporal sensitivity: spatial, sampled at a fixed temporal frequency, or temporal, sampled at a fixed spatial frequency. Here we present a novel adaptive method that accelerates the measurement by using Bayesian adaptive inference from the information gained from multiple sections of the spatiotemporal function. The new procedure evaluates the expected gain of information about parameters of sensitivity within every section and selects the stimulus that maximizes the expected gain across several sections. We validated the new procedure in computational and psychophysical experiments. In a direction discrimination task, we used drifting grating stimuli that spanned a broad range of spatial (0.5-8 cycles/deg) and temporal frequencies (0.25-24 Hz) of luminance modulation. Within 300-500 trials (15-25 minutes of run time) the new procedure provided estimates of sensitivity with an accuracy of 10% and a precision of 0.2-0.3 decimal log units.
Lesmes LA, Gepshtein S, Lu Z-L and Albright TD
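The logic of such a procedure can be sketched in a 1-D toy version (my own simplification with a single threshold parameter and an assumed Weibull psychometric function, not the authors' implementation): maintain a posterior over the parameter and always present the stimulus with the largest expected information gain.

```python
import numpy as np

rng = np.random.default_rng(1)

thresholds = np.linspace(0.01, 1.0, 100)  # candidate threshold values
contrasts = np.linspace(0.01, 1.0, 50)    # available stimulus contrasts
posterior = np.full(thresholds.size, 1.0 / thresholds.size)

def p_correct(contrast, threshold, slope=3.0, guess=0.5):
    # Assumed Weibull psychometric function for a 2AFC task.
    p = 1.0 - (1.0 - guess) * np.exp(-((contrast / threshold) ** slope))
    return np.clip(p, 1e-6, 1.0 - 1e-6)

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def next_contrast(posterior):
    # Choose the contrast whose outcome is expected to shrink posterior
    # entropy the most (i.e., maximize expected information gain).
    best, best_gain = contrasts[0], -np.inf
    for c in contrasts:
        pc = p_correct(c, thresholds)
        p_yes = float((posterior * pc).sum())      # predictive p(correct)
        post_yes = posterior * pc / p_yes
        post_no = posterior * (1.0 - pc) / (1.0 - p_yes)
        expected_h = p_yes * entropy(post_yes) + (1.0 - p_yes) * entropy(post_no)
        gain = entropy(posterior) - expected_h
        if gain > best_gain:
            best, best_gain = c, gain
    return best

# Simulated observer with a true threshold of 0.3.
true_threshold = 0.3
for _ in range(200):
    c = next_contrast(posterior)
    correct = rng.random() < p_correct(c, true_threshold)
    likelihood = p_correct(c, thresholds) if correct else 1.0 - p_correct(c, thresholds)
    posterior = posterior * likelihood
    posterior = posterior / posterior.sum()

estimate = float(thresholds[np.argmax(posterior)])
print(round(estimate, 2))
```

The published method extends this idea across multiple sections of the two-dimensional spatiotemporal function; the sketch only shows the information-gain criterion at the heart of the stimulus-selection step.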
We study the role of spontaneous cortical activity in perceptual learning. We find that the pre-stimulus cortical activity in the alpha band reflects a process that helps to disambiguate perception.
We measured the electrical brain activity preceding ambiguous visual stimuli: dot lattices, in which the dots are seen to group along one or several orientations depending on dot proximity. Perceptual reports on every trial depended on two factors: participants' sensitivity to dot proximity and their intrinsic bias for the orientation of perceptual grouping. The effect of intrinsic bias changed during the experiment. As participants learned the task, the initially prominent role of intrinsic bias decreased and sensitivity to dot proximity increased, giving way to the well-known association between pre-stimulus alpha phase and visual sensitivity.
For as long as the role of intrinsic bias was prominent, we observed an intermittent regime of alpha activity, in which a mode of low amplitude and low temporal variability alternated with a mode of high amplitude and high temporal variability. The latter mode was associated with the unbiased responses whereas the former mode was associated with the intrinsic orientation bias. We propose that the intermittent alpha activity is a mechanism that helps to resolve perceptual ambiguity, mediating flexible application of internal representations, thus compensating for a lack of stimulus information.
Nikolaev AN, Gepshtein S and van Leeuwen C | early report: ECVP 2012
The symposium organized by Sergei Gepshtein (Salk Institute) and Alex McDowell (USC) celebrated the rapidly growing interaction between two communities: researchers engaged in the scientific study of human perception and action and the practitioners of interactive and immersive narrative media technologies. Leading scientists and artists discussed human behavior and conscious experience in the face of physical, social, and imagined realities represented in purely virtual worlds, as well as in the 'mixed' worlds that interlace the physical and virtual realities. The symposium consisted of a series of sessions, each featuring two speakers: a scientist and an artist or immersive-reality practitioner. The speakers first presented their approaches and then reviewed the existing and prospective links between their domains of expertise. Each session was followed by an extensive discussion. An on-line publication featuring footage from the event is forthcoming.
Gepshtein S and McDowell A | preview at the 5D Institute
Until very recently, research on perceptual organization has been primarily descriptive. The result was a taxonomy of phenomena with little attempt to identify underlying mechanisms or develop predictive models. The situation has changed in recent years. New experimental methods have been introduced to measure the organizational processes in vision and other sensory modalities, and new predictive computational theories have been developed. This Handbook is an organized survey of the many new approaches to the study of perceptual (mainly visual) organization with an emphasis on computational and mathematical approaches. With chapters written by leading authorities, the Handbook describes modern experimental and computational methods that not only contribute to deciphering the mechanisms of the classical phenomena of perceptual organization but also open new perspectives in what is sometimes called the neo-Gestalt approach to perception. The intended audience includes researchers in psychology, neural science, computer science, and philosophy as well as graduate and advanced undergraduate students in these fields.
Gepshtein S, Singh M and Maloney LT
Seen and unseen: Could there ever be a "cinema without cuts"?
Scientific American Blogs | April 29, 2014
How the movies of tomorrow will play with your mind
Pacific Standard | April 29, 2014
The visual system as economist: neural resource allocation in visual adaptation
Medical Xpress | April 1, 2013
Despite what you may think, your brain is a mathematical genius
ScienceNewsline | April 10, 2013
Brain waves challenge area-specific view of brain activity
KU Leuven | March 20, 2013