Center for Perceptual Systems

NETI 2019 Speaker Abstracts

Lynn Kiorpes
Development of sensitivity to naturalistic image statistics: psychophysics and physiology

The program of visual development in primates has been carefully documented over many studies. However, it is clear that different visual functions develop over different time courses; higher-order functions in particular develop later and over longer periods than basic acuity and contrast sensitivity. Neurophysiological and brain-imaging studies suggest that in adults, higher-order functions such as figure-ground segregation and other aspects of global form perception depend on visual cortical areas downstream from primary visual cortex. It is plausible that the ability to combine elements of a scene into a global percept relies on mature, more basic early visual processing. If so, downstream extrastriate areas should mature later than area V1, perhaps in a hierarchical manner. To study this question, we took advantage of recent findings linking area V2 to processing of the statistics of naturalistic images. Using synthetic texture patterns, we studied the development of sensitivity to naturalistic image statistics and the impact of abnormal visual experience on that process. We studied visually typical macaque monkeys and animals with amblyopia – a developmental disorder of vision. Both psychophysical performance on a texture discrimination task and neurophysiological sensitivity in area V2 show immature processing of these naturalistic textures in young macaques. Furthermore, animals with amblyopia were severely impaired on the texture discrimination task and showed deficient neural processing. These studies support the notion that the development of higher-order visual processes relies on hierarchically developing extrastriate visual areas.
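
For readers unfamiliar with what naturalistic image statistics mean operationally, here is a minimal numpy sketch (the filter parameters and toy texture are hypothetical, and only loosely inspired by the Portilla-Simoncelli texture model behind such stimuli). It computes one higher-order statistic, the correlation between rectified outputs of two differently oriented filters, which is stronger for a structured texture than for phase-scrambled noise with the identical power spectrum:

```python
import numpy as np

def conv2_same(img, k):
    """2D 'same' convolution via FFT."""
    s0 = img.shape[0] + k.shape[0] - 1
    s1 = img.shape[1] + k.shape[1] - 1
    out = np.real(np.fft.ifft2(np.fft.fft2(img, (s0, s1)) *
                               np.fft.fft2(k, (s0, s1))))
    r0, r1 = k.shape[0] // 2, k.shape[1] // 2
    return out[r0:r0 + img.shape[0], r1:r1 + img.shape[1]]

def gabor(theta, size=15, sigma=3.0, freq=0.25):
    """Oriented Gabor filter with hypothetical parameters."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def phase_scramble(img, rng):
    """Noise with the same power spectrum as img (phases randomized)."""
    amp = np.abs(np.fft.fft2(img))
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, img.shape))
    return np.real(np.fft.ifft2(amp * phase))

def cross_orientation_corr(img):
    """Correlation between rectified responses of two oriented channels --
    a statistic beyond the power spectrum that scrambling destroys."""
    e0 = np.abs(conv2_same(img, gabor(0.0)))
    e1 = np.abs(conv2_same(img, gabor(np.pi / 4)))
    return np.corrcoef(e0.ravel(), e1.ravel())[0, 1]

rng = np.random.default_rng(0)
# Toy texture: sparse oriented elements couple the orientation channels.
tex = conv2_same((rng.random((128, 128)) < 0.01).astype(float),
                 gabor(0.0, size=21))
print("texture:  ", cross_orientation_corr(tex))
print("scrambled:", cross_orientation_corr(phase_scramble(tex, rng)))
```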

Wei Ji Ma
Natural intelligence? Planning in a two-player combinatorial game
I will describe a research program for investigating human planning in very large decision trees. Our case study employs a variant of tic-tac-toe in which players aim to create 4 in a row on a 4-by-9 board. Although this game is far more complex than virtually any task used in neuroscience, we can successfully fit a computational model to human play. The model combines intuitive value judgments with mental simulation of potential move sequences and several sources of variability. We validated the model by comparing it against alternatives, predicting decisions in experimental variants, and predicting eye fixations. We conducted a Turing test to assess the absolute goodness of fit of the model. We then used the model to study what changes during learning. Finally, in collaboration with a brain training company, we collected a data set consisting of 3.4 million games played in a “natural” setting.
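
To make the model class concrete, here is a minimal sketch of a feature-based value function for 4-in-a-row on a 4-by-9 board, with noisy myopic move choice. The feature weights and noise level are invented for illustration; the actual model combines such a value function with best-first tree search and several additional sources of variability.

```python
import numpy as np

ROWS, COLS, K = 4, 9, 4
WEIGHTS = {1: 1.0, 2: 3.0, 3: 9.0, 4: 1000.0}   # hypothetical feature weights

def windows():
    """All length-4 lines (horizontal, vertical, diagonal) on the board."""
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(K)]
                if all(0 <= rr < ROWS and 0 <= cc < COLS for rr, cc in cells):
                    yield cells

def value(board, player):
    """Weighted count of unblocked k-in-a-row features, own minus opponent's."""
    v = 0.0
    for cells in windows():
        vals = [board[r, c] for r, c in cells]
        for p, sign in ((player, 1.0), (-player, -1.0)):
            if -p not in vals:                    # window not blocked by -p
                n = sum(x == p for x in vals)
                if n:
                    v += sign * WEIGHTS[n]
    return v

def choose_move(board, player, noise=1.0, rng=np.random.default_rng(0)):
    """Myopic noisy choice: evaluate each legal move, add Gaussian noise."""
    best, best_v = None, -np.inf
    for r in range(ROWS):
        for c in range(COLS):
            if board[r, c] == 0:
                board[r, c] = player
                v = value(board, player) + rng.normal(0, noise)
                board[r, c] = 0
                if v > best_v:
                    best, best_v = (r, c), v
    return best

board = np.zeros((ROWS, COLS), dtype=int)
board[1, 3] = board[1, 4] = board[1, 5] = 1       # three in a row for player 1
print(choose_move(board, 1))                       # extends toward four in a row
```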

Pascal Mamassian
Ideal and super-ideal confidence observers

Visual confidence refers to our ability to predict the correctness of our perceptual decisions. Knowing the limits of this ability, in terms of both biases (e.g. overconfidence) and sensitivity (e.g. blindsight), is clearly important for a full picture of perceptual decision making. The measurement of visual confidence with the classical method of confidence ratings presents both advantages and disadvantages. In recent years, we have explored an alternative paradigm based on confidence forced-choice. In this paradigm, observers have to choose which of two perceptual decisions is more likely to be correct. I will review some behavioural results obtained with the confidence forced-choice paradigm. I will also present two ideal observers based on signal detection theory: one that uses the same information for perceptual and confidence decisions, and another that has access to additional information for confidence. These ideal observers help us quantify the limitations of human confidence estimation.
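
The logic of the two ideal observers can be captured in a few lines of simulation. In the sketch below (a minimal signal-detection setup with invented numbers, not the exact formulation used in the talk), the "ideal" observer bases its confidence choice on the same decision variable used for the perceptual decision, while the "super-ideal" observer additionally receives an independent second sample; its chosen decisions are correspondingly more often correct.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dprime = 200_000, 1.0
s = rng.choice([-1.0, 1.0], size=(n, 2))        # true stimulus per interval
x = s * dprime / 2 + rng.normal(size=(n, 2))    # perceptual evidence
correct = np.sign(x) == s                       # decision accuracy per interval

def chosen_accuracy(conf):
    """Accuracy of the interval selected as 'more likely correct'."""
    pick = np.argmax(conf, axis=1)
    return correct[np.arange(n), pick].mean()

# Ideal: confidence reuses the perceptual evidence (monotone in |x|).
acc_ideal = chosen_accuracy(np.abs(x))

# Super-ideal: an extra independent sample refines confidence; the posterior
# that the decision is correct is monotone in sign(x) * (x + extra).
extra = s * dprime / 2 + rng.normal(size=(n, 2))
acc_super = chosen_accuracy(np.sign(x) * (x + extra))

print(f"baseline accuracy:           {correct.mean():.3f}")
print(f"ideal chosen-accuracy:       {acc_ideal:.3f}")
print(f"super-ideal chosen-accuracy: {acc_super:.3f}")
```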

Bruno Olshausen
Efficient representation for active vision
A striking property of biological vision systems is the manner in which they actively acquire information about the world through eye, head and body movements. We have been exploring the consequences of this active-perception setting for learning and computing efficient representations of visual input. Here I shall discuss three recent findings: 1) we show that drift movements during fixation may actually improve acuity by allowing neural populations in cortex to compute a representation of object shape that averages over spatial inhomogeneities in the retinal sampling lattice; 2) we show that the optimal image sampling lattice for a visual search task tiles space in a manner similar to the retina, with a dense high-resolution region in the center and a continuous falloff in resolution away from the center; 3) we demonstrate a computational mechanism by which neural populations can build up a holistic scene representation from multiple fixations by representing a visual scene as a superposition of ‘what’ and ‘where’ bindings. Together, these models and findings provide insight into the neural computations and representations that may underlie biological active perception systems.
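
Finding 3 rests on a vector-symbolic construction that is easy to demonstrate: bind each 'what' vector to a 'where' vector, superpose the bindings, and recover an object by unbinding with a location cue. The sketch below uses circular convolution for binding, a standard choice in this literature; the lab's actual model differs in detail, and all names and dimensions here are invented.

```python
import numpy as np

D = 2048
rng = np.random.default_rng(2)

def randvec():
    return rng.normal(0, 1 / np.sqrt(D), D)

def bind(a, b):
    """Bind two vectors by circular convolution."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(a, b):
    """Circular correlation: approximate inverse of bind."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

objects = {name: randvec() for name in ("cup", "book", "lamp")}
places = {name: randvec() for name in ("left", "center", "right")}

# Scene: cup on the left, book in the center, lamp on the right.
scene = (bind(objects["cup"], places["left"]) +
         bind(objects["book"], places["center"]) +
         bind(objects["lamp"], places["right"]))

# Query: what is in the center? Unbind with the location cue and find the
# most similar stored object vector.
query = unbind(places["center"], scene)
print(max(objects, key=lambda k: objects[k] @ query))   # -> 'book'
```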


Stephanie Palmer 
Motion in natural scenes: from video analysis to neural response
The statistical analysis of natural stimuli has proven to be a successful approach to understanding the coding properties of sensory neural systems; for example, pairwise correlations of image contrast can predict the spatial and temporal receptive fields of early visual neurons with surprising accuracy. However, pairwise statistics do a poor job of capturing the rich spatiotemporal structure induced by moving objects, which is processed by specialized neural circuitry from the retina to the visual cortex. To measure statistics relevant to motion processing, we first calculated the optical flow for natural movies in the Chicago Motion Database using a standard machine-vision algorithm from Michael Black’s group. The optical flow is a vector field describing the spatial transformation of luminance values from one frame to the next, providing a pixel-level estimate of object velocity. We find that object velocity follows a heavy-tailed (Laplacian) distribution. We also developed a simple pixel-tracking algorithm that links optical-flow values across frames, and then clustered and analyzed the resulting trajectories as a stochastic process. We find that velocity correlations along these trajectories persist for several hundred milliseconds. These results are relevant to motion-processing circuitry in general, but we are particularly interested in the retinal circuitry, which seems to leverage these velocity correlations to compensate for its own processing delays. We present the results of several experiments probing the retina with natural and artificial motion stimuli and discuss them in the context of efficient coding and optimal prediction.
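
The first analysis step translates directly into code. The sketch below assumes OpenCV is installed and that 'movie.mp4' (a hypothetical path) holds a natural movie; it uses OpenCV's Farneback algorithm rather than the flow algorithm actually used for the database, and it tests for heavy tails via the excess kurtosis of the pooled velocities (0 for a Gaussian, 3 for a Laplacian).

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("movie.mp4")          # hypothetical natural movie
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

samples = []
for _ in range(200):                         # pool flow over 200 frame pairs
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    samples.append(flow[..., 0].ravel())     # horizontal velocity component
    prev = gray
cap.release()

vx = np.concatenate(samples)
vx = vx - vx.mean()
kurt = np.mean(vx**4) / np.mean(vx**2)**2 - 3.0   # excess kurtosis
print(f"excess kurtosis of vx: {kurt:.1f}  (0 = Gaussian; 3 = Laplacian)")
```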


Anitha Pasupathy
Joint encoding of shape and texture in mid-level ventral visual cortex
I am interested in understanding how mid-level processing stages of the primate ventral visual pathway encode visual stimuli and how these representations might underlie our ability to segment visual scenes and recognize objects. Our primary focus is area V4. In my talk, I will present results from two recent experiments that demonstrate that many V4 neurons jointly encode both the shape and surface texture of visual stimuli. I will describe our efforts to develop image-computable models to explain how these properties might arise and discuss why this joint coding strategy may be advantageous for segmentation in natural scenes.
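
One operational meaning of "joint encoding" is that a unit's response map over a shape feature and a texture feature cannot be factored into a product of one-dimensional tuning curves. The toy model below (features, tuning form and parameters all hypothetical, and far simpler than the lab's image-computable models) quantifies this with the rank-1 energy fraction of the response matrix.

```python
import numpy as np

def response(C, T, rho=0.7):
    """Gaussian tuning over (curvature, texture energy) with a tilted peak:
    with rho != 0 the map cannot be factored as f(shape) * g(texture)."""
    mu = np.array([0.5, 0.6])                      # preferred feature values
    cov = 0.04 * np.array([[1.0, rho], [rho, 1.0]])
    d = np.stack([C - mu[0], T - mu[1]], axis=-1)
    inv = np.linalg.inv(cov)
    return np.exp(-0.5 * np.einsum("...i,ij,...j", d, inv, d))

c = np.linspace(0, 1, 41)                          # shape feature axis
t = np.linspace(0, 1, 41)                          # texture feature axis
C, T = np.meshgrid(c, t, indexing="ij")
R = response(C, T)

# Separability index: energy captured by the best rank-1 (separable)
# approximation of the response matrix; 1.00 would mean fully separable.
s = np.linalg.svd(R, compute_uv=False)
print(f"separability index: {s[0]**2 / np.sum(s**2):.2f}")
```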


Jenny Read
Insect stereopsis: behavior and neurophysiology
Stereopsis – deriving information about distance by comparing views from two eyes – is widespread in vertebrates but so far known in only one class of invertebrates, the praying mantids. Understanding stereopsis that has evolved independently in such a different nervous system promises to shed light on the constraints governing any stereo system. Behavioral experiments indicate that insect stereopsis is functionally very different from that studied in vertebrates. Vertebrate stereopsis depends on matching up the pattern of contrast in the two eyes; it works in static scenes, and may have evolved to break camouflage rather than to detect distances. Insect stereopsis matches up regions of the image where the luminance is changing; it is insensitive to the detailed pattern of contrast and operates to detect the distance to a moving target. Work from my lab has revealed a network of neurons within the mantis brain that are tuned to binocular disparity, including some that project to early visual areas. This contrasts with previous theories, which postulated that disparity was computed only at a single, late stage, where visual information is passed down to motor neurons. Thus, despite their very different properties, the underlying neural mechanisms supporting vertebrate and insect stereopsis may be computationally more similar than has been assumed.
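
The difference between the two matching schemes can be demonstrated on a toy stimulus. In the sketch below (purely illustrative, with no claim about the mantis's actual circuitry), the right eye's image is contrast-inverted relative to the left: matching raw contrast then fails to recover the target's disparity, while matching rectified temporal change still succeeds.

```python
import numpy as np

rng = np.random.default_rng(3)
N, disparity = 400, 12
bg = rng.normal(0, 1, N)                       # shared 1D background

def eye_images(shift, invert=False):
    """Two frames seen by one eye: background plus a moving target patch."""
    target = np.zeros(N)
    target[180:220] = 2.0
    f1 = bg + np.roll(target, shift)
    f2 = bg + np.roll(target, shift + 5)       # target moves 5 px between frames
    return (-f1, -f2) if invert else (f1, f2)

L1, L2 = eye_images(0)
R1, R2 = eye_images(disparity, invert=True)    # right eye: shifted + inverted

def best_shift(a, b):
    """Shift of b that maximizes its correlation with a."""
    return max(range(-40, 41), key=lambda s: np.dot(a, np.roll(b, -s)))

print("contrast matching:", best_shift(L2, R2))                 # fails
print("change matching:  ",
      best_shift(np.abs(L2 - L1), np.abs(R2 - R1)))             # -> 12
```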

Aman Saleem
Vision to navigation: information processing between the visual cortex and hippocampus
We constantly move from one point to another, navigating the world: in a room, a building or around a city. While navigating, we look around to understand the environment and our position within it. We use vision naturally and effortlessly to navigate the world. How does the brain use the visual images observed by the eyes for natural functions such as navigation? Research in this area has mostly focused on the two ends of this spectrum: either understanding how visual images are processed, or how navigation-related parameters are represented in the brain. However, little is known about how visual and navigational areas work together or interact. The focus of my research is to bridge the gap between these two fields using a combination of rodent virtual reality, electrophysiology and optogenetic technologies. One of the first steps towards this question is to understand how the visual system functions during navigation. I will describe work on neural coding in the primary visual cortex during locomotion: we discovered that running speed is represented in the primary visual cortex, and how it is integrated with visual information. I will next describe work on how the visual cortex and hippocampus work in concert during goal-directed navigation, based on simultaneous recordings from the two areas. We find that the two areas make correlated errors and display neural correlates of behaviour. I will finally show some preliminary work on information processing in areas intermediate between the primary visual cortex and the hippocampus.
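
As a toy version of the integration question, the sketch below simulates a unit whose firing rate is a weighted sum of visual and running speed, then recovers the weights by least squares. It is illustrative only, not the lab's analysis pipeline, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
visual_speed = rng.gamma(2.0, 10.0, n)                    # cm/s, virtual corridor
run_speed = 0.8 * visual_speed + rng.gamma(2.0, 3.0, n)   # correlated, as in VR

true_w = np.array([0.6, 0.3, 5.0])            # visual weight, run weight, offset
X = np.column_stack([visual_speed, run_speed, np.ones(n)])
rate = X @ true_w + rng.normal(0, 2.0, n)     # noisy firing rate (spikes/s)

w_hat, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("recovered [visual, run, offset] weights:", np.round(w_hat, 2))
```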

Nachum Ulanovsky
Neural codes for natural navigation in the bat hippocampus
The work in our lab focuses on understanding the neural basis of spatial memory and spatial cognition – using bats as our animal model. In my talk I will present some of our recent studies, which explored the following questions: (i) How does the brain represent positions and directions in 3D? A set of studies revealed 3D place cells, 3D head-direction cells, and 3D grid cells in the bat hippocampal formation. (ii) How are navigational goals represented in the brain? We discovered a new kind of vectorial representation of spatial goals – whereby hippocampal neurons encode the direction and distance to a spatial goal. (iii) I will describe our recent discovery of “social place-cells” in the bat hippocampus – neurons that represent the position of other bats (conspecifics). (iv) Finally, I will describe ongoing work towards elucidating hippocampal neural codes in realistic, kilometer-scale environments – where we discovered an unexpected multi-scale coding of space. Our long-term vision is to develop a “Natural Neuroscience” approach for studying the neural basis of behavior – tapping into the animal's natural behaviors in complex, large-scale, naturalistic settings.
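
The vectorial goal representation in (ii) can be summarized as a tuning model: firing rate as a function of the egocentric direction and the distance to the goal. The sketch below is one minimal formalization, with von Mises direction tuning times Gaussian distance tuning; all parameters are hypothetical.

```python
import numpy as np

def goal_vector_rate(pos, heading, goal,
                     pref_dir=0.0, kappa=2.0, pref_dist=3.0, sigma=1.0,
                     max_rate=20.0):
    """Rate tuned to the egocentric direction and distance of the goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = np.hypot(dx, dy)
    direction = np.arctan2(dy, dx) - heading          # egocentric goal direction
    dir_tuning = np.exp(kappa * (np.cos(direction - pref_dir) - 1))
    dist_tuning = np.exp(-0.5 * ((dist - pref_dist) / sigma) ** 2)
    return max_rate * dir_tuning * dist_tuning

goal = (5.0, 0.0)
print(goal_vector_rate((2.0, 0.0), 0.0, goal))    # goal dead ahead at 3 m: ~max
print(goal_vector_rate((2.0, 0.0), np.pi, goal))  # goal behind the bat: ~0
```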

Xiaoqin Wang
The marmoset as a model system for studying the neural basis of vocal communication
Vocal communication is one of the most important natural behaviors of both humans and many animal species. In the past, studies of echolocating bats and songbirds have provided important insights into neural mechanisms of vocal communication. In comparison, much less has been learned from non-human primates. Several factors have contributed to the slow progress in this field. Under captive conditions, most non-human primate species are not as vocal as they are in their natural environment, especially species with large body sizes. The common marmoset (Callithrix jacchus), a New World non-human primate species, has emerged in recent years as a promising model system for studying the neural basis of vocal communication. The marmoset offers several critical advantages over other non-human primate species. In the past 20 years, my laboratory has pioneered a number of behavioral and electrophysiological techniques to study single-neuron activity under awake and behaving conditions in the marmoset, including extracellular and intracellular recordings and wireless neural recordings from freely roaming marmosets. Recently, we have developed a cochlear implant model in the marmoset. Using these techniques, we have identified non-linear transformations of time-varying signals in auditory cortex and discovered a pitch-processing center in the marmoset brain that mirrors a similar region in the human brain. We also showed that cortical representations of self-produced vocalizations are shaped by auditory feedback and vocal control signals during vocal communication. These findings have important implications for understanding how the brain processes speech and music. They also demonstrate the tremendous potential of the marmoset for studying the neural basis of vocal communication and social interactions.

Michael Webster
Color vision: discounting the observer
Color constancy is usually considered in the context of variations in the stimulus, such as discounting the illuminant. However, a complementary requirement for constancy is discounting variations within the observer, by compensating for changes in sensitivity across time (e.g. during aging) and space (e.g. retinal location). These compensations can be surprisingly sophisticated and adjust to many attributes of color, correcting for sensitivity limits both within and between observers. As a result, many aspects of color vision vary less than sensitivity variations would predict, potentially because different observers are calibrated for similar visual worlds. On the other hand, there remain large and reliable inter-observer differences in color appearance (e.g. in the stimuli perceived as pure or unique hues). Analyses of these differences point to a non-metrical representation of color “space” that may be fundamentally different from the representation of visual space.
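
A minimal formalization of "discounting the observer" is a von Kries-style renormalization, in which each channel is rescaled by its average response to the environment; any multiplicative sensitivity loss then divides out. The sketch below illustrates this with invented cone excitations and an invented S-cone loss (e.g. the aging lens absorbing short wavelengths).

```python
import numpy as np

rng = np.random.default_rng(5)
scenes = rng.gamma(2.0, 0.5, size=(10_000, 3))   # L, M, S excitations of a world

def adapted(cone_signals, sensitivity):
    """Scale by sensitivity, then renormalize each channel by its mean
    response over the environment (von Kries-style adaptation)."""
    raw = cone_signals * sensitivity
    return raw / raw.mean(axis=0)

young = adapted(scenes, np.array([1.0, 1.0, 1.0]))
aged = adapted(scenes, np.array([1.0, 1.0, 0.4]))  # S-cone input reduced 60%

test = 123
print("young vs aged adapted signals for one stimulus:")
print(np.round(young[test], 3), np.round(aged[test], 3))
# Identical: the multiplicative sensitivity loss cancels after adaptation.
```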

Daniel Wolpert
Probabilistic models of sensorimotor control and decision making
The effortless ease with which humans move our arms, our eyes, even our lips when we speak masks the true complexity of the control processes involved. This is evident when we try to build machines to perform human control tasks. I will review our work on how humans learn to make skilled movements covering probabilistic models of learning, including Bayesian models as well as the role of context in activating motor memories. I will also review our work showing the intimate interactions between decision making and sensorimotor control processes. Taken together these studies show that probabilistic models play a fundamental role in human sensorimotor control.