College of Liberal Arts
The University of Texas at Austin

Abstracts

Arash Afraz

Navigating perceptual space with neural perturbations
Local perturbation of neural activity in high-level visual cortical areas alters visual perception. Quantitative characterization of these perceptual alterations holds the key to understanding the mapping between patterns of neuronal activity and elements of perception, yet their complexity and subjectivity make them difficult to study. I introduce a new experimental approach, “Perceptography”, to develop “pictures” of the subjective experience induced by optogenetic stimulation of the inferior temporal cortex of macaque monkeys.
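As a rough sketch of the closed-loop logic behind such an approach (not the authors' actual pipeline), the loop below hill-climbs in image space toward pictures that are increasingly confused with the stimulation-induced percept. Here `report_prob` is a purely hypothetical stand-in for the animal's behavioral reports, which in the real experiments come from psychophysics, not a formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def report_prob(image):
    """Placeholder for the behavioral report: the probability of judging
    `image` as matching the stimulation-induced percept (illustrative only)."""
    target = np.full_like(image, 0.7)          # stand-in "percept"
    return np.exp(-np.mean((image - target) ** 2))

# Hill-climb in pixel space: keep perturbations the subject is more
# likely to confuse with the stimulated percept.
image = rng.random((16, 16))
score = report_prob(image)
for step in range(2000):
    candidate = np.clip(image + 0.05 * rng.standard_normal(image.shape), 0, 1)
    cand_score = report_prob(candidate)
    if cand_score > score:                     # accept improvements only
        image, score = candidate, cand_score

print(f"final report probability: {score:.3f}")
```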

Rowan Candy

The statistics of the natural visual experience selected by infants
Human infants learn to interact with the world over the first months after birth. From the earliest ages, they use their ocular motor responses to select structure in the dynamic three-dimensional environment, even before they begin to reach and move through their surroundings. Their immature visual function must support this active development by providing the sequential diet of information required for learning everything from basic motor skills to high-level cognition. Here I will review our recent work designed to reveal the low-level statistics of this early visual input. Using head-mounted cameras and binocular eye-tracking, we have characterized the structure selected by infants from 2 to 15 months of age during natural, unrestricted activities.
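As a minimal illustration of what "low-level statistics of the visual input" can mean in practice, the sketch below crops a gaze-centered patch from a (here simulated) head-camera frame and computes mean luminance, RMS contrast, and an orientation-averaged amplitude spectrum. The function and its parameters are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def gaze_patch_stats(frame, gaze_xy, size=64):
    """Crop a gaze-centered patch from a head-camera frame and return
    simple low-level statistics: mean luminance, RMS contrast, and the
    orientation-averaged (radial) amplitude spectrum."""
    x, y = gaze_xy
    half = size // 2
    patch = frame[y - half:y + half, x - half:x + half]
    mean_lum = patch.mean()
    rms_contrast = patch.std() / mean_lum
    amp = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
    # Radially average the 2-D amplitude spectrum into spatial-frequency bins.
    yy, xx = np.indices(amp.shape)
    r = np.hypot(xx - half, yy - half).astype(int)
    radial = np.bincount(r.ravel(), weights=amp.ravel()) / np.bincount(r.ravel())
    return mean_lum, rms_contrast, radial[:half]

frame = np.random.default_rng(1).random((480, 640))   # stand-in camera frame
print(gaze_patch_stats(frame, gaze_xy=(320, 240))[1])  # RMS contrast
```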

Mark Churchland

Neurobiology of flexible deductive reasoning
Primates can solve novel problems through logical and stepwise reasoning. No two real-world situations are the same, and how one ‘figures out’ a solution may be similarly variable. Studying reasoning has thus been challenging: how should one investigate the neural basis of internal events whose timing and nature are uncertain, and are unlikely to ever unfold the same way twice? To meet this challenge, we used large-scale Neuropixels-probe recordings and a novel task in which monkeys apply abstract knowledge to determine the correct ordering of stimuli on the screen. Neural activity in prefrontal cortex (but not in motor cortex) reflected the sequential ‘figuring out’ of a solution. The set of internal steps, and their timing, differed on every trial: the animal might sometimes figure out the last element first and work backwards, and on other trials take the opposite approach. In some ways neural activity was complex: multiple choices could be made in any order, involved physical locations that could be anywhere on the screen, and had to respect a rule that varied on every trial. Yet in another sense neural activity was simple, and more stepwise than one might expect. At any given moment, the animal was engaged in a single internal choice, governed by the current rule and exactly one stimulus. It then committed that choice to memory and moved on to the next decision. These events were entirely internal, occurring before a go cue was given and choices were rendered through action. These results show that monkeys use step-like reasoning to solve problems, and that it affords great flexibility: the same neural ‘strategy’ can unfold very differently on different trials, yet still successfully solve the problem at hand.
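The toy below is not the monkeys' algorithm, only a way to make the abstract's central idea concrete: the slots of the answer can be resolved in any order, but each step commits exactly one choice, governed by the trial's rule and a single stimulus, before the next is considered.

```python
import random

def solve_stepwise(items, rule, rng):
    """Toy illustration of step-wise 'figuring out': resolve one slot of
    the answer at a time, in an arbitrary order, committing each choice
    to memory before considering the next."""
    order = sorted(items, reverse=(rule == "descending"))   # ground truth
    slots = [None] * len(items)
    memory = []                                             # committed steps
    pending = list(range(len(items)))
    rng.shuffle(pending)          # e.g. the last element may be figured out first
    for slot in pending:          # one internal choice at a time
        slots[slot] = order[slot] # governed by the rule and one stimulus
        memory.append((slot, order[slot]))
    return slots, memory

rng = random.Random(0)
solution, steps = solve_stepwise([7, 2, 9, 4], rule="ascending", rng=rng)
print(solution)   # [2, 4, 7, 9]
print(steps)      # the commit order varies from run to run
```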

Sven Dickinson

Symmetry in Human and Computer Vision: A Case Study in Scene Perception
Symmetry is one of the most ubiquitous regularities in our natural world. For 100 years, human vision researchers have studied how the human visual system has evolved to exploit this powerful regularity for perceptual grouping. In the computer vision community, early (pre-deep-learning) researchers also exploited symmetry, developing elegant representations of symmetry in support of segmentation, grouping, 3-D reconstruction, and object recognition. In the first part of the talk, I will review our research that draws on a symmetry-based representation from computer vision to identify the important role symmetry plays in human scene perception. In the second part, I will review our more recent efforts to leverage these findings in human vision to improve the performance of a deep-learning computer vision system addressing the same task. This work is part of a new research program that seeks to draw on human vision to better inform the design of computer vision systems.
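The symmetry representations in this line of work (e.g., medial-axis-style descriptions) are far richer than raw pixels, but a minimal sketch conveys why symmetry is such an easy regularity to exploit: the score below is simply the correlation of an image patch with its mirror image.

```python
import numpy as np

def mirror_symmetry_score(img):
    """Score bilateral (left-right) symmetry of an image patch as the
    correlation between the patch and its horizontal mirror image."""
    flipped = img[:, ::-1]
    a = img - img.mean()
    b = flipped - flipped.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(2)
noise = rng.random((32, 32))
symmetric = (noise + noise[:, ::-1]) / 2        # symmetrize by averaging
print(mirror_symmetry_score(noise))             # near 0
print(mirror_symmetry_score(symmetric))         # exactly 1.0
```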

Ben Hayden

Neuronal basis of syntax and semantics in natural speech
The ability to record responses of single neurons in awake humans allows us to understand the neurocomputational foundations of language during natural speech. We recorded the responses of neural populations in the hippocampus and anterior cingulate cortex during both speech listening and conversation. We find that single neurons in both regions use a dense code with mixed selectivity to encode both speaker identity and word meanings. Meanwhile, we find that morphosyntactic processes correspond to specific and consistent vectorial transformations of neural population activity.
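The simulation below, on synthetic data, illustrates the two claims rather than the authors' analysis: when every unit carries a random blend of speaker identity and word meaning (dense mixed selectivity), both variables remain linearly decodable from the same population, and a morphosyntactic operation that adds a fixed population vector can be recovered as a consistent difference between condition means.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, n_trials = 120, 400

# Binary latent variables for each trial.
speaker = rng.integers(0, 2, n_trials)   # who is talking
meaning = rng.integers(0, 2, n_trials)   # what the word means
plural = rng.integers(0, 2, n_trials)    # a morphosyntactic feature

# Dense mixed selectivity: every unit carries a random blend of all three
# variables; the morphosyntactic operation adds a fixed population vector.
w_spk, w_mea, v_plural = (rng.standard_normal(n_units) for _ in range(3))
X = (np.outer(speaker, w_spk) + np.outer(meaning, w_mea)
     + np.outer(plural, v_plural) + 0.5 * rng.standard_normal((n_trials, n_units)))

def decode(X, y):
    """Least-squares linear readout; returns training accuracy."""
    Xc = X - X.mean(0)
    w, *_ = np.linalg.lstsq(Xc, y - y.mean(), rcond=None)
    return np.mean((Xc @ w > 0) == (y == 1))

print("speaker acc:", decode(X, speaker), " meaning acc:", decode(X, meaning))

# The morphosyntactic process is a consistent state-space vector: the mean
# difference between plural and singular trials recovers it.
v_hat = X[plural == 1].mean(0) - X[plural == 0].mean(0)
print("transformation recovery r =", round(np.corrcoef(v_hat, v_plural)[0, 1], 2))
```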

Kohitij Kar

From Independent Snapshots to Integrated Streams: Probing Neural Mechanisms of Dynamic Scene Perception in the Primate Brain
How does the brain represent, predict, and interpret the dynamic visual world? While much of our understanding of visual object processing has been shaped by studies with static images, recent work has moved beyond these constraints to investigate the same fundamental processes in dynamic contexts. In this talk, I will first review key advances we have made using static-image paradigms to develop sensory-computable, mechanistic, anatomically referenced, and testable (SMART) models of object recognition, thereby elucidating the underlying neural mechanisms. I will then highlight our ongoing shift toward dynamic scene perception, encompassing several facets of scene dynamics: object motion; facial motion during emotion transitions; action prediction, and the role of motion versus appearance information in those paradigms; prediction of future scene outcomes from limited dynamic content; and the intricate interplay of form and motion revealed by camouflage-breaking tasks. Alongside a discussion of how we evaluate dynamic neural signals and computational models, I will touch on our use of non-human primate behavioral testing (including chemogenetics) and outline future directions in this rapidly evolving research landscape. This integrative approach aims to deepen our understanding of how the visual system, and the computational models used to operationalize that understanding, effectively handles dynamic, real-world scenarios.

Julio Martinez-Trujillo

Why do primates have view cells instead of place cells? 
Hippocampal place cells, which encode an individual's spatial location during navigation, have been widely reported in rodent species such as rats and mice. However, studies in primates have instead identified hippocampal cells that encode views of the environment. We investigated spatial navigation in two primate species: macaques navigating in virtual reality and freely moving marmosets. We found that their navigation strategies differ from those of rodents. Moreover, we observed a predominance of neurons in the CA1 and CA3 subfields of the hippocampus that encode views of the environment, as well as other variables related to gaze direction and head kinematics. We propose that the evolution of a visual system adapted for daylight navigation has shaped spatial navigation strategies and their neural substrates in the primate hippocampus.
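A toy version of the place-versus-view distinction: simulate one neuron tuned to the animal's own location and one tuned to the looked-at location, then ask which variable yields a structured rate map. The `spatial_info` measure here is a crude stand-in for the mutual-information analyses and covariate controls used in the actual studies.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
position = rng.random((n, 2))    # animal's location in the arena
gaze = rng.random((n, 2))        # location the animal is looking at

def tuning(x, center, width=0.15):
    """Gaussian spatial tuning curve."""
    return np.exp(-np.sum((x - center) ** 2, axis=1) / (2 * width ** 2))

place_cell = rng.poisson(5 * tuning(position, [0.3, 0.7]))
view_cell = rng.poisson(5 * tuning(gaze, [0.3, 0.7]))

def spatial_info(rate, x, bins=8):
    """Crude spatial information: variance of the binned rate map,
    normalized by the overall mean rate."""
    idx = np.minimum((x * bins).astype(int), bins - 1)
    flat = idx[:, 0] * bins + idx[:, 1]
    means = np.bincount(flat, weights=rate, minlength=bins**2) / \
            np.maximum(np.bincount(flat, minlength=bins**2), 1)
    return means.var() / rate.mean()

for name, r in [("place cell", place_cell), ("view cell", view_cell)]:
    print(name, "position info:", round(spatial_info(r, position), 2),
          "gaze info:", round(spatial_info(r, gaze), 2))
```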

Jude Mitchell

Neural mechanisms of active foveal vision in marmoset monkeys
Primates use high-acuity central vision to scan visual scenes and monitor distant objects. Each saccadic eye movement brings objects of interest to the fovea, the central region of highest acuity within 1 degree of visual eccentricity. Despite its importance in primate vision, few studies have examined foveal representations in visual cortex, owing to technical challenges. Here I present advances from my laboratory in recording foveal neural activity as marmoset monkeys actively forage across visual scenes and track moving targets. First, we used high-resolution eye tracking during free viewing of video displays to correct for instantaneous eye position and map foveal visual receptive fields in V1 (Yates et al., Nature Communications, 2023). This free-viewing approach has also allowed us to examine how eye movements modulate visual signals. We find that each eye movement initiates a wave of suppression followed by rebound activity, with timing that differs across the population and could support a coarse-to-fine processing strategy (Parker et al., Nature Neuroscience, 2023). These saccade-related modulations can also carry top-down predictions important for remapping attention across saccades. In a second experiment, we recorded from peripheral and foveal representations in extra-striate areas MT/MTC while marmosets made saccades to moving targets. Peripheral neurons exhibited gain enhancement when the motion stimulus in their receptive field was the target of an upcoming saccade, consistent with pre-saccadic attention (Coop et al., J. Neuroscience, 2024). In addition, a subset of neurons with foveal receptive fields also showed pre-saccadic enhancement specific to the target. This was possible because their receptive fields extended out from the fovea into the periphery and could respond when the peripheral stimulus was the saccade target. After the saccade, the enhancement spread to the other foveal neurons, including neurons in the opposite visual hemifield, resulting in enhanced processing of the target anticipated at the fovea. These mechanisms are ideally suited to support continued selection of a target as it is tracked across eye movements.
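A minimal sketch of gaze-corrected receptive-field mapping in the spirit of the free-viewing approach (the published method differs in its details): shift each frame into gaze-centered coordinates using the tracked eye position, then compute a spike-triggered average.

```python
import numpy as np

rng = np.random.default_rng(5)
n_frames, size = 4000, 32
frames = rng.standard_normal((n_frames, size, size))   # stand-in video frames
eye = rng.integers(-4, 5, size=(n_frames, 2))          # tracked eye position (px)

# Ground-truth receptive field, fixed in gaze-centered (retinal) coordinates.
rf = np.zeros((size, size))
rf[14:18, 14:18] = 1.0

# The retinal image is the screen frame displaced opposite to the eye position.
retina = np.array([np.roll(f, (-e[0], -e[1]), axis=(0, 1))
                   for f, e in zip(frames, eye)])
spikes = rng.poisson(np.maximum((retina * rf).sum(axis=(1, 2)), 0))

# A spike-triggered average on gaze-corrected frames recovers the RF; the
# same average on raw screen frames would smear it across eye positions.
sta = np.tensordot(spikes, retina, axes=1) / max(spikes.sum(), 1)
print("recovered RF peak:", np.unravel_index(sta.argmax(), sta.shape))
```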

Emily Oby

Structure and flexibility of neural population activity 
Learning is a critical part of life. We learn to walk, to communicate, and to reason about our world. Some behaviors are learned quickly; others are much more difficult to learn, requiring weeks of effort and the guidance of a coach. I use brain-computer interfaces (BCIs) to examine how neural population activity changes with learning. First, I will show that in a BCI learning task, the structure of neural population activity constrains what is learned on short time scales; it takes many days and an incremental training procedure to change that structure and generate new patterns of neural activity. Then, I will show that neural population activity is also temporally constrained: animals were unable to violate the naturally occurring temporal structure of neural population activity in motor cortex when directly challenged to do so. These results provide empirical support for the view that the activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms they are believed to implement.
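In the spirit of this BCI paradigm (a sketch on simulated data, not the actual experiments), the snippet below estimates the low-dimensional manifold of population activity with PCA and shows why a cursor readout aligned with that manifold produces strong signals while one aimed outside it does not, which is the sense in which existing population structure constrains what can be learned quickly.

```python
import numpy as np

rng = np.random.default_rng(6)
n_units, n_dims, n_samples = 40, 8, 2000

# Simulated population activity confined to a low-dimensional manifold.
latents = rng.standard_normal((n_samples, n_dims))
loading = rng.standard_normal((n_units, n_dims))
activity = latents @ loading.T + 0.1 * rng.standard_normal((n_samples, n_units))

# Estimate the manifold with PCA (top principal axes of the activity).
centered = activity - activity.mean(0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
manifold = vt[:n_dims]            # spans the explored subspace
outside = vt[-2:]                 # axes the population barely explores

def readout_power(decoder_axes):
    """Variance of the BCI cursor signal produced by a 2-D linear readout."""
    return (centered @ decoder_axes.T).var(axis=0).sum()

within = manifold[:2]             # a readout aligned with the manifold
print("within-manifold cursor variance :", round(readout_power(within), 1))
print("outside-manifold cursor variance:", round(readout_power(outside), 1))
```

An outside-manifold readout stays nearly silent because the population does not naturally produce activity along those dimensions; driving it would require generating genuinely new activity patterns, consistent with the many days of incremental training described above.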
