sergei at salk dot edu
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in the strengths of synaptic connections entail fluctuations in the parameters of neural receptive fields. The fluctuations correlate with the uncertainty of sensory measurement in individual neurons: the higher the uncertainty, the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer the sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics, and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance, and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation. [Jurica P, Gepshtein S, Tyukin I and van Leeuwen C]
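The core mechanism — fluctuation amplitude tied to measurement uncertainty — can be illustrated with a minimal sketch. This is not the authors' model: the uncertainty profile, cell counts, and reflecting boundaries are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical uncertainty profile over a cell's preferred value x in [0, 1]:
# measurement uncertainty grows toward x = 1.
def sigma(x):
    return 0.02 + 0.08 * x

n_cells, n_steps = 1000, 5000
x = rng.random(n_cells)                  # initial tuning, uniform over [0, 1]

for _ in range(n_steps):
    # Fluctuation amplitude is proportional to local uncertainty.
    x = x + sigma(x) * rng.standard_normal(n_cells)
    x = np.abs(x)                        # reflect at the lower boundary
    x = np.where(x > 1.0, 2.0 - x, x)    # reflect at the upper boundary

# The population concentrates where fluctuations (uncertainty) are small.
print(np.mean(x < 0.5))
```

For a diffusion whose step size is proportional to a local uncertainty sigma(x), the stationary density under reflecting boundaries is proportional to 1/sigma²(x), so most cells end up in the low-uncertainty half of the range without any supervision or feedback.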
We investigated how changes of sensory behavior caused by visual adaptation can be mediated by synaptic plasticity of individual neurons. We modeled a simple neural circuit underlying spatiotemporal receptive fields tuned to the speed and frequency content of visual stimulation. In this circuit, weighted contributions of multiple "input" cells with diverse receptive fields determine the spatiotemporal extent and tuning of "readout" cells. In numerical simulations of synaptic plasticity, the input-readout synaptic weights increased when a spike of an input cell fell within a short interval of a spike of a readout cell. In effect, receptive fields of readout cells depended more on those input cells that were stimulated more often, and so biases in visual stimulation yielded small but consistent biases in the tuning of readout cells. Because of the stochastic nature of spiking activity, parameters of individual readout cells were biased random processes on a millisecond temporal scale. In simulations of populations of readout cells, the biases in tuning of individual cells created biases in the distributions of receptive field characteristics. Under changes in the statistics of stimulation, the number of cells tuned to the prevailing stimuli increased, and the number of cells tuned to the less frequent stimuli decreased. The steady-state behavior of neuronal populations was remarkably stable, and it was well approximated by time-invariant solutions of Fokker-Planck equations in which individual receptive fields were modeled as continuous-time stochastic processes (Gardiner, 1996). These dynamics were sufficient to explain several puzzling behavioral and physiological results from studies of visual adaptation: that visual sensitivity to adapting stimuli sometimes increases and sometimes decreases, and sometimes changes for stimuli very different from the adapting ones. 
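A toy version of the coincidence-based weight update described above can be sketched as follows. The firing rates, coincidence window, learning rate, and weight normalization are all assumptions for illustration, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_steps = 8, 5000
dt = 1e-3        # 1 ms time step
window = 5e-3    # coincidence window (assumed: 5 ms)
lr = 1e-3        # learning rate (assumed)

# Input cells fire at different rates: a hypothetical stimulation bias
# favoring the first cells.
rates = np.linspace(40.0, 10.0, n_inputs)      # spikes/s
w = np.full(n_inputs, 1.0 / n_inputs)          # input -> readout weights

last_spike = np.full(n_inputs, -np.inf)
for step in range(n_steps):
    now = step * dt
    spikes = rng.random(n_inputs) < rates * dt  # Poisson-like input spiking
    last_spike[spikes] = now
    # The readout cell spikes stochastically, driven by weighted input activity.
    if rng.random() < min(1.0, float(w @ spikes) + 0.01):
        # Potentiate inputs that spiked within the coincidence window.
        w[(now - last_spike) <= window] += lr
        w /= w.sum()    # normalization keeps total synaptic weight fixed

print(w)   # weights drift toward the more frequently stimulated inputs
```

Because input spiking is stochastic, the weights are themselves biased random processes, which is what motivates treating receptive-field parameters as continuous-time stochastic processes in the Fokker-Planck analysis.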
Notably, the large-scale changes of visual sensitivity observed in the model were similar to changes of spatiotemporal contrast sensitivity observed in behavioral studies of motion adaptation and similar to predictions of a normative theory of resource allocation in the visual system (Gepshtein, Tyukin & Kubovy, 2007). The results indicate that optimal adaptive behavior can emerge in sensory systems without explicit representation of the prior probabilities of stimulation. [Gepshtein S, Jurica P, Tyukin I, van Leeuwen C and Albright TD]
The spatiotemporal contrast sensitivity function provides a broad summary of human visual sensitivity. We investigated alternative methods for rapid estimation of the function and tested the prediction that it has an invariant shape across tasks and measurement procedures. We measured contrast sensitivity in human observers using a speed discrimination task. The resulting sensitivity function had a shape similar to that obtained using other tasks and predicted by a normative theory of sensitivity. We measured the width of the psychometric function at multiple spatial frequencies and found that the width was correlated with the threshold. Because many adaptive procedures assume a constant width, we asked whether accuracy of measurement depends on correct assumptions about the shape of the psychometric function. A computational experiment revealed that incorrect assumptions about the width increased the error of sensitivity estimation. We conclude that one can rely on the assumption that the shape of the spatiotemporal sensitivity function is invariant across observers, tasks, and measurement procedures, but estimation of discrimination sensitivity ought either to incorporate measurement of the widths of the underlying psychometric functions or to make no assumption about the width. [Laddis PA, Lesmes LA, Gepshtein S and Albright TD]
The spatiotemporal contrast sensitivity function describes visual sensitivity to moving or flickering gratings across the entire range of visible spatial and temporal frequencies of luminance modulation. In spite of its value for the assessment of spatiotemporal vision, the long testing time required for assaying the entire function has often forced researchers to confine measurements to representative sections of spatiotemporal sensitivity: spatial, sampled at a fixed temporal frequency, or temporal, sampled at a fixed spatial frequency. Here we present a novel adaptive method that accelerates the measurement by applying Bayesian adaptive inference to information gained from multiple sections of the spatiotemporal function. The new procedure evaluates the expected gain of information about parameters of sensitivity within every section and selects the stimulus that maximizes the expected gain across several sections. We validated the new procedure in computational and psychophysical experiments. In a direction discrimination task, we used drifting grating stimuli that spanned a broad range of spatial (0.5-8 cycles/deg) and temporal frequencies (0.25-24 Hz) of luminance modulation. Within 300-500 trials (15-25 minutes of run time), the new procedure provided estimates of sensitivity with an accuracy of 10% and a precision of 0.2-0.3 decimal log units. [Lesmes LA, Gepshtein S and Albright TD]
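The selection rule at the heart of such procedures — pick the stimulus with the greatest expected information gain — can be sketched in a one-parameter toy version. The Weibull-style psychometric function, its slope, guess and lapse rates, and the grids below are illustrative assumptions, not the multi-section machinery of the actual method.

```python
import numpy as np

# Toy version: a single unknown log-contrast threshold on a grid.
thresholds = np.linspace(-2.0, 0.0, 101)
prior = np.full(thresholds.size, 1.0 / thresholds.size)
candidates = np.linspace(-2.5, 0.5, 61)       # candidate stimulus intensities

def p_correct(intensity, threshold, slope=3.0, guess=0.5, lapse=0.02):
    """Weibull-style psychometric function (all parameters assumed)."""
    p = 1.0 - np.exp(-10.0 ** (slope * (intensity - threshold)))
    return guess + (1.0 - guess - lapse) * p

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def next_stimulus(prior):
    """Candidate that minimizes expected posterior entropy,
    i.e., maximizes expected information gain."""
    best_x, best_h = None, np.inf
    for x in candidates:
        like = p_correct(x, thresholds)        # P(correct | threshold)
        p1 = np.sum(prior * like)              # predictive P(correct)
        post1 = prior * like / p1
        post0 = prior * (1.0 - like) / (1.0 - p1)
        h = p1 * entropy(post1) + (1.0 - p1) * entropy(post0)
        if h < best_h:
            best_x, best_h = x, h
    return best_x, best_h

x, h = next_stimulus(prior)
print(x, h)    # expected posterior entropy is below the prior entropy
```

In the actual procedure this selection runs jointly over several sections of the spatiotemporal sensitivity surface, with a parametric sensitivity model in place of the single threshold.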
To sustain successful behavior in dynamic environments, biological agents must be able to learn from the consequences of their actions and predict action outcomes. One of the most important discoveries in systems neuroscience over the last 15 years concerns the key role of the neurotransmitter dopamine in mediating such active behavior. Dopamine cell firing was found to encode differences between the expected and obtained outcomes of actions. Although the activity of dopamine cells does not specify movements themselves, recent studies have suggested that this activity enables normal movement by controlling implicit motor motivation. Here we studied the motor performance of subjects with different degrees of dopamine cell loss: young healthy adults, elderly healthy adults, and patients with Parkinson's disease. All subjects performed rapid sequential movements to visual targets associated with different risks and different energy costs, countered or assisted by gravity. In conditions of low energy cost, patients performed surprisingly well, similarly to healthy subjects and to the prescriptions of an ideal planner. As energy costs increased, however, the performance of patients dropped markedly below the optimal prescriptions and below the performance of healthy subjects. [Gepshtein S, Li X, Snider J, Plank M, Lee D and Poizner H]
We investigated the human ability to optimize action over multiple future steps in rapidly changing risky environments. Subjects sat in front of a touch screen on which a grid of disks of different sizes (the "stimulus") scrolled down at a variable speed. The task was to maximize the score by touching one disk at a time in a rapid sequence. The positive score from touching a disk was proportional to disk size, while missing a disk incurred a negative score. Auditory and visual feedback conveyed outcomes of every action. Subjects were allowed to touch one disk per row, moving up on the screen. Only one of two disks could be touched in row n+1, adjacent to the disk just touched in row n, such that every choice constrained the part of the stimulus that could be touched in the future. This way, the ability to take into account multiple upcoming rows would yield a better total score. To infer the number of upcoming rows ("depth of computation" D) used in action selection, we counted how often human choices overlapped with choices of ideal planners characterized by different values of D and by different rates of recomputation (R) of action sequences. We found that evidence for D was a decreasing function of distance from the present row to upcoming rows: subjects relied on the immediately upcoming rows more than the remote rows. But as the time pressure decreased (at a lower speed of the stimulus), subjects were able to increase D and thus obtain larger winnings. Notably, the increased evidence for large D was associated with decreased evidence for small D, as if subjects reallocated their computational resources across D. We measured the utility of different disk sizes by comparing evidence for D in actual stimuli with the evidence computed for stimuli where only the largest disk sizes were present, or only the two largest sizes were present, etc. This analysis showed that subjects did not exclusively rely on the larger disks. 
Indeed, subjects used almost all disk sizes, although large disks had a greater weight in decision making than small disks. The results demonstrate that the ability for rapid decision making is flexible in two respects. First, humans can rapidly recompute the course of action over multiple future steps as new information enters the scope of computation. Second, humans can expand their depth of computation as task difficulty decreases, but the expansion has a cost: a decreased ability to use immediate information, revealing limited computational resources that are selectively allocated to different aspects of future actions. [Lee D, Snider J, Poizner H and Gepshtein S]
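An ideal planner of the kind used in this analysis can be sketched as a small recursion; the score grid and the exact adjacency rule below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rows, n_cols = 8, 6
# Hypothetical grid of disk scores (proportional to disk size).
values = rng.integers(1, 10, size=(n_rows, n_cols)).astype(float)

def plan(row, col, depth):
    """Best score from (row, col) with `depth` further rows of lookahead.
    Adjacency rule assumed here: from column c, touch c or c+1 next."""
    if depth == 0 or row == n_rows - 1:
        return values[row, col]
    nxt = (col, min(col + 1, n_cols - 1))
    return values[row, col] + max(plan(row + 1, c, depth - 1) for c in nxt)

def play(depth):
    """Total score when replanning with the given depth before every touch."""
    col = max(range(n_cols), key=lambda c: plan(0, c, depth))
    total, row = values[0, col], 0
    while row < n_rows - 1:
        nxt = (col, min(col + 1, n_cols - 1))
        col = max(nxt, key=lambda c: plan(row + 1, c, depth - 1))
        total += values[row + 1, col]
        row += 1
    return total

print(play(1), play(n_rows))   # myopic (small D) vs full-depth planner
```

Here `depth` plays the role of D, and replanning before every touch corresponds to a maximal recomputation rate R; comparing a subject's choices against such planners at several depths yields the evidence profile described above.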
We studied the dynamics of visual perceptual organization by recording phenomenal reports and electrical brain activity (EEG) of human observers. Observers performed a grouping task with multistable dot lattices. The dots were seen to spontaneously group into strips according to their proximity: one of several orientations was perceived at a time. On the shortest temporal scale, within single trials, the power of pre-stimulus alpha activity in EEG predicted observers' orientation bias: their tendency to report vertical orientations more often than horizontal ones. The predictions were most reliable in the trials where reported groupings were inconsistent with dot proximity. On the medium scale, for pairs of successive trials, the probability of same responses ("response duplets") was higher than chance only for horizontal orientations, and pre-stimulus alpha power was higher for horizontal than for vertical duplets. On the longest scale, across the entire experimental session, the orientation bias steadily decreased while observers' grouping sensitivity and pre-stimulus alpha power increased. Consistent with this observation, the predictive effect of pre-stimulus alpha power observed on the short and medium scales held only in the beginning of the session. These results indicate that the dynamics of perceptual organization are determined by lasting states of the visual system and by perceptual learning. [Nikolaev AR, Gepshtein S and van Leeuwen C]
Until very recently, research on perceptual organization has been primarily descriptive. The result was a taxonomy of phenomena with little attempt to identify underlying mechanisms or develop predictive models. The situation has changed in recent years. New experimental methods have been introduced to measure the organizational processes in vision and other sensory modalities, and new predictive computational theories have been developed. This Handbook is an organized survey of the many new approaches to the study of perceptual (mainly visual) organization with an emphasis on computational and mathematical approaches. With chapters written by leading authorities, the Handbook describes modern experimental and computational methods that not only contribute to deciphering the mechanisms of the classical phenomena of perceptual organization but also open new perspectives in what is sometimes called the neo-Gestalt approach to perception. The intended audience includes researchers in psychology, neural science, computer science, and philosophy as well as graduate and advanced undergraduate students in these fields. [Gepshtein S and Maloney LT]
This one-day symposium will celebrate and promote the rapidly growing interaction between two communities: researchers engaged in the scientific study of human perception and action and the practitioners of interactive and immersive narrative media technologies. Leading researchers and artists will discuss human behavior and conscious experience vis-a-vis physical, social, and imagined realities represented in purely virtual worlds, as well as in the 'mixed' worlds that interlace physical and virtual realities.
The symposium will comprise a series of sessions, each featuring two speakers: a scientist and an artist or immersive-reality practitioner. The speakers will first present their approaches and then review both existing and prospective links between their domains of expertise. Following each session, generous time will be devoted to questions from the audience. [chairs: Gepshtein S and McDowell A]
Visual adaptation is expected to improve visual performance in the new environment. The expectation has been contradicted by evidence that adaptation sometimes decreases sensitivity for the adapting stimuli, and sometimes it changes sensitivity for stimuli very different from the adapting ones. We hypothesize that this pattern of results can be explained by a process that optimizes sensitivity for many stimuli, rather than changing sensitivity only for those stimuli whose statistics have changed. To test this hypothesis, we measured visual sensitivity across a broad range of spatiotemporal modulations of luminance while varying the distribution of stimulus speeds. The manipulation of stimulus statistics caused a large-scale reorganization of visual sensitivity, forming an orderly pattern of sensitivity gains and losses. This pattern is predicted by a theory of distribution of receptive field characteristics in the visual system.
Press release: The visual system as economist: Neural resource allocation in visual adaptation (April 1, 2013).
Press release: Despite what you may think, your brain is a mathematical genius (April 11, 2013).
Analyzing single-trial brain activity remains a challenging problem in the neurosciences. We gain purchase on this problem by focusing on globally synchronous fields in within-trial evoked brain activity, rather than on localized peaks in the trial-averaged evoked response (ER). We analyzed data from three measurement modalities, each with different spatial resolution: magnetoencephalogram (MEG), electroencephalogram (EEG) and electrocorticogram (ECoG). We first characterized the ER in terms of summation of phase and amplitude components over trials. Both contributed to the ER, as expected, but the ER topography was dominated by the phase component. This means the ER topography is akin to an interference pattern in phase across trials. Hence the observed topography of cross-trial phase will not accurately reflect the phase topography within trials. To assess the organization of within-trial phase, traveling wave (TW) components were quantified by computing the phase gradient. TWs were intermittent but ubiquitous in the within-trial evoked brain activity. At most task-relevant times and frequencies, the within-trial phase topography was described better by a TW than by the trial-average of phase. The trial-average of the TW components also reproduced the topography of the ER; we suggest that the ER topography arises, in large part, as an average over TW behaviors. These findings were consistent across the three measurement modalities. We conclude that, while phase is critical to understanding the topography of event-related activity, the preliminary step of collating cortical signals across trials can obscure the TW components in brain activity and lead to an underestimation of the coherent motion of cortical fields.
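The phase-gradient computation at the center of this analysis can be sketched on synthetic data; the sensor layout, frequencies, noise level, and the planted wave below are all invented for illustration.

```python
import numpy as np

fs, f0 = 250.0, 10.0                  # sampling rate and alpha-band frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
sensors = np.arange(16)               # toy 1-D "electrode" array
k = 0.3                               # planted phase shift per sensor (rad): a TW

# Each sensor carries the same oscillation, phase-lagged by its position,
# plus independent noise.
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * f0 * t[None, :] - k * sensors[:, None])
x += 0.2 * rng.standard_normal(x.shape)

# Within-trial phase at f0 per sensor, from the Fourier coefficient at f0.
coef = x @ np.exp(-2j * np.pi * f0 * t)
phase = np.angle(coef)

# Phase gradient across the array, using wrapped phase differences;
# a consistent nonzero gradient is the signature of a traveling wave.
grad = np.angle(np.exp(1j * np.diff(phase)))
print(grad.mean())                    # close to -k for the planted wave
```

Averaging raw phases across trials whose waves travel in different directions would wash out this gradient, which is why the within-trial analysis matters.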
Press release: Brain waves challenge area-specific view of brain activity (March 20, 2013). [video 1 2]
- Kubovy M, Epstein W & Gepshtein S (2013). Foundations of visual perception. In Healy AF & Proctor RW (Eds.) Experimental Psychology. Volume 4 in Weiner IB (Editor-in-Chief) Handbook of Psychology, 2d ed. John Wiley & Sons, New York, USA. p. 85-119. [+]This chapter contains three tutorial overviews of theoretical and methodological ideas that are important to students of visual perception. From the vast scope of the material we could have covered, we have chosen a small set of topics that form the foundations of vision research. To help fill the inevitable gaps, we have provided pointers to the literature, giving preference to works written at a level accessible to a beginning graduate student. First, we provide a sketch of the theoretical foundations of our field. We lay out four major research programs (in the past they might have been called "schools"), and then discuss how they address eight foundational questions that promise to occupy our discipline for many years to come. Second, we discuss psychophysics, which offers indispensable tools for the researcher. Here we lead the reader from the idea of threshold to the tools of signal detection theory. To illustrate our presentation of methodology, we have not focused on the classics that appear in much of the secondary literature. Rather, we have chosen recent research that showcases the current practice in the field, and the applicability of these methods to a wide range of problems. The contemporary view of perception maintains that perceptual theory requires an understanding of our environment as well as the perceiver. That is why, in the third section, we ask what are the regularities of the environment, how may they be discovered, and to what extent do perceivers use them. Here, too, we use recent research to exemplify this approach.
- Plomp G, van Leeuwen C & Gepshtein S (2012). Perception of time in articulated visual events. Frontiers in Perception Science 3 (564), doi: 10.3389/fpsyg.2012.00564. [+]Perceived duration of a sensory event often exceeds its actual duration. This phenomenon is called time dilation. The distortion may occur because sensory systems are optimized for perception within their respective modalities and not for perception of time. We investigated how the dilation of visual events depends on the duration and content of events. Observers compared the durations of two successive visual stimuli while the luminance of one of the stimuli was modulated at different temporal frequencies. Time dilation correlated with the frequency of modulation and the duration of the stimulus: the faster the modulation and the longer the stimulus duration, the larger the dilation. Notably, time dilation was also accompanied by a decreased sensitivity to stimulus duration. We show that these results are consistent with the notion that stimulus duration is estimated using measurement intervals whose lengths depend on the frequency content of the stimulus. Estimation of temporal frequency content is more precise using longer measurement intervals, whereas estimation of temporal location is more precise using shorter ones. As a result, visual perception benefits from using longer intervals when the stimulus is modulated, so that its frequency content is measured more precisely. A side effect of using longer temporal intervals is a larger uncertainty about the timing of stimulus offset (temporal location), which results in time dilation and reduced sensitivity to duration. Our findings support the view that time dilation follows from basic principles of measurement and from the notion that visual systems are optimized for visual perception rather than for perception of time.
- Vidal-Naquet M & Gepshtein S (2012). Spatially invariant computations in stereoscopic vision. Frontiers in Computational Neuroscience 6 (47), doi: 10.3389/fncom.2012.00047. [+]Perception of stereoscopic depth requires that visual systems solve a correspondence problem: find parts of the left-eye view of the visual scene that correspond to parts of the right-eye view. The standard model of binocular matching implies that similarity of left and right images is computed by inter-ocular correlation. But the left and right images of the same object are normally distorted relative to one another by the binocular projection, in particular when slanted surfaces are viewed from close distance. Correlation often fails to detect correct correspondences between such image parts. We investigate a measure of inter-ocular similarity that takes advantage of spatially invariant computations similar to the computations performed by complex cells in biological visual systems. This measure tolerates distortions of corresponding image parts and yields excellent performance over a much larger range of surface slants than the standard model. The results suggest that, rather than serving as disparity detectors, multiple binocular complex cells take part in the computation of inter-ocular similarity, and that visual systems are likely to postpone commitment to particular binocular disparities until later stages in the visual process.
- Wagemans J, Feldman J, Gepshtein S, Kimchi R, Pomerantz JR, van der Helm PA & van Leeuwen C (2012). A century of Gestalt psychology in visual perception. Conceptual and theoretical foundations. Psychological Bulletin 138 (6), p. 1218-1252.
- Gepshtein S, Tyukin I & Kubovy M (2011). A failure of the proximity principle in the perception of motion. Humana Mente 17, p. 21-34. [+]The proximity principle is a fundamental fact of spatial vision. It has been a cornerstone of the Gestalt approach to perception, it is supported by overwhelming empirical evidence, and its utility has been proven in studies of the ecological statistics of optical stimulation. We show, however, that the principle does not generalize to dynamic scenes, i.e., no spatiotemporal proximity principle governs the perception of motion. In other words, elements of a dynamic display separated by short spatiotemporal distances are not more likely to be perceived as parts of the same object than elements separated by longer spatiotemporal distances.
- Gepshtein S, Tyukin I & Albright TD (2010). The uncertainty principle of measurement in vision. arXiv:1007.0210, July 2, 2010
- Gepshtein S (2010). Two psychologies of perception and the prospect of their synthesis. Philosophical Psychology 23 (2), p. 217-281. [another link] [+]Two traditions have had a great impact on the theoretical and experimental research of perception. One tradition is statistical, stretching from Fechner's enunciation of psychophysics in 1860 to the modern view of perception as statistical decision making. The other tradition is phenomenological, from Brentano's "empirical standpoint" of 1874 to the Gestalt movement and the modern work on perceptual organization. Each tradition has at its core a distinctive assumption about the indivisible constituents of perception: the just-noticeable differences of sensation in the tradition of Fechner vs. the phenomenological Gestalts in the tradition of Brentano. But some key results from the two traditions can be explained and connected using an approach that is neither statistical nor phenomenological. This approach rests on a basic property of any information exchange: a principle of measurement formulated in 1946 by Gabor as a part of his quantal theory of information. Here the indivisible components are units (quanta) of information that remain invariant under changes of precision of measurement. This approach helped to understand how sensory measurements are implemented by single neural cells. But recent analyses suggest that this approach has the power to explain larger-scale characteristics of sensory systems.
- Nikolaev AR, Gepshtein S, Gong P & van Leeuwen C (2010). Duration of coherence intervals in electrical brain activity in perceptual organization. Cerebral Cortex 20 (2), p. 365-382.
- Gepshtein S, Elder JH & Maloney LT (2008). Perceptual organization and neural computation. Journal of Vision 8 (7), p. 1-7.
- Gepshtein S (2008). Closing the gap between ideal and real behavior: Scientific vs. engineering approaches to normativity. Philosophical Psychology 22 (1), p. 61-75. [+]Early normative studies of human behavior revealed a gap between the norms of practical rationality (what humans ought to do) and actual human behavior (what they do). It has been suggested that, to close the gap between the descriptive and the normative, one has to revise the norms of practical rationality according to the Quinean, engineering view of normativity. On this view, the norms must be designed such that they effectively account for behavior. I review recent studies of human perception that pursued normative modeling and found good agreement between the normative prescriptions and the actual behavior. I make the case that the goals and methods of this work have been incompatible with those of the engineering approach. I argue that norms of perception and action are observer-independent properties of biological agents: the norms are discovered using the methods of natural science rather than designed to fit the observed behavior.
- Nikolaev AR, Gepshtein S, Kubovy M & van Leeuwen C (2008). Dissociation of early evoked cortical activity in perceptual grouping. Experimental Brain Research 186 (1), p. 107-122.
- Jurica P, Gepshtein S, Tyukin I, Prokhorov D & van Leeuwen C (2007). Unsupervised adaptive optimization of motion-sensitive systems guided by measurement uncertainty. Proceedings of the Third International Conference on Intelligent Sensors, Sensor Networks and Information Processing 2007 (ISSNIP 2007), p. 179-184.
- Gepshtein S & Kubovy M (2007). The lawful perception of apparent motion. Journal of Vision 7 (8):9, p. 1-15.
- Gepshtein S, Tyukin I & Kubovy M (2007). The economics of motion perception and invariants of visual sensitivity. Journal of Vision 7 (8):8, p. 1-18.
- Gepshtein S, Seydell A & Trommershäuser J (2007). Optimality of human movement under natural variations of visual-motor uncertainty. Journal of Vision 7 (5):13, p. 1-18. [Supplementary Materials]
- Trommershäuser J, Gepshtein S, Maloney LT, Landy MS & Banks MS (2005). Optimal compensation for changes in task relevant movement variability. Journal of Neuroscience 25 (31), p. 7169-7178.
- Banks MS, Gepshtein S & Rose HF (2005). Local cross-correlation model of stereo correspondence. Proceedings of SPIE: Human Vision and Electronic Imaging 5666, p. 53–61.
- Gepshtein S & Kubovy M (2005). Stability and change in perception: Spatial organization in temporal context. Experimental Brain Research 160 (4), p. 487-495. [Reviewed in Bruno, N. 2005, TICS 9, 1-3.]
- Gepshtein S, Burge J, Ernst M & Banks MS (2005). The combination of vision and touch depends on spatial proximity. Journal of Vision 5 (11):7, p. 1013-1023.
- Banks MS, Gepshtein S & Landy MS (2004). Why is spatial stereoresolution so low? Journal of Neuroscience 24 (9), p. 2077-2089.
- Kubovy M & Gepshtein S (2003). Grouping in Space and in Space-Time: An Exercise in Phenomenological Psychophysics. In: Behrmann M, Kimchi R & Olson C (Eds.) Perceptual Organization in Vision: Behavioral and Neural Perspectives. Lawrence Erlbaum Associates, Mahwah, NJ, p. 45-85. [pdf] [+]We show that grouping by proximity can be captured by a simple model that has few of the characteristics one might expect of a Gestalt phenomenon. Our method is phenomenological psychophysics. Because the observers' responses are based on phenomenal experiences, which are still in bad repute among psychologists, we conclude with an explication of the roots of such skeptical views and show that they have limited validity.
- Gepshtein S & Banks MS (2003). Viewing geometry determines how vision and touch combine in size perception. Current Biology 13 (6), p. 483-488. [+]Vision and haptics have different limitations and advantages because they obtain information by different methods. If the brain combined information from the two senses optimally, it would rely more on the sense providing more precise information for the current task. In this study, human observers judged the distance between two parallel surfaces in two within-modality experiments (vision-alone and haptics-alone) and in an intermodality experiment (vision and haptics together). We find that the combined size estimates are finer than is possible with either vision or haptics alone. Indeed, the combined estimates approach statistical optimality.
- Gepshtein S & Kubovy M (2000). The emergence of visual objects in space-time. Proceedings of the National Academy of Sciences, USA 97 (14), p. 8186-8191. [+]It is natural to think that in perceiving dynamic scenes, vision takes a series of snapshots. Motion perception can ensue when the snapshots differ. The snapshot metaphor suggests two questions: (i) How does the visual system put together elements within each snapshot to form objects? This is the spatial grouping problem. (ii) When the snapshots differ, how does the visual system know which element in one snapshot corresponds to which element in the next? This is the temporal grouping problem. The snapshot metaphor is a caricature of the dominant model in the field (the sequential model), according to which spatial and temporal grouping are independent. The model we propose here is an interactive model, according to which the two grouping mechanisms are not separable.
March 12, 2013