Center for Perceptual Systems

Liberty Hamilton

Assistant Professor, UT Moody College of Communication - Department of Communication Sciences and Disorders & Department of Neurology




Human speech perception involves transforming a highly variable acoustic signal into a meaningful linguistic representation (phonemes, words, and meaning). People can parse speech sounds with high fidelity despite wide variation in this signal related to speaker identity, speaking rate, phonetic context, and background noise. How the brain accomplishes this is still poorly understood. Furthermore, it is unknown how speech processing in the brain changes during development, as children learn to speak and understand language. Dr. Liberty Hamilton, Assistant Professor in the Department of Communication Sciences and Disorders and the Department of Neurology at UT Austin, conducts research examining how brain networks process speech sounds and how neural representations of speech and other environmental sounds change during development. This research will not only help uncover how the healthy human brain processes speech sounds but will also provide insight into potential treatments for people with communication disorders or other language-related disabilities.

Our research is performed in a unique clinical setting, in collaboration with clinicians at Dell Children’s Comprehensive Epilepsy Center and at Dell Seton Medical Center. Our lab works with patients with epilepsy whose seizures cannot be effectively controlled with medication. These patients undergo a surgery in which grids of electrodes are placed on the surface of the brain to localize the source of epileptic activity by recording electrical activity directly from the brain surface, a method called electrocorticography (ECoG). After seizures are localized to a particular brain area, that area is surgically resected, a procedure that is highly effective in reducing or eliminating seizure activity. In temporal lobe epilepsy, the areas involved in seizure activity may lie near language-related areas, so it is important for clinicians to map these areas and assess their function so that language is minimally affected. Our research, while separate from this clinical mapping, also provides a window into language function and localization in these patients. During their hospital stay, patients may volunteer to participate in our research, which involves listening to words, phrases, sentences, or stories while we record their brain activity. We use these recordings to learn how speech is processed in the brain and how this processing changes with the patient’s age. In addition to invasive recordings, we also employ noninvasive electroencephalography (EEG) to study speech representations in the brains of healthy participants. We hope that our findings will provide insight into brain-based treatments for communication disorders including aphasia, delayed language learning, and dyslexia.


CSD 350 • Language And The Brain

07270 • Spring 2020
Meets MWF 2:00PM-3:00PM CMB 2.102
(also listed as LIN 350)

Language and the Brain delves into the neuroanatomy and functional operation of the major brain structures that underlie speech and language. Topics include hemispheric dominance for language, neurological and linguistic breakdowns in aphasia, and brain imaging methods and studies of language representation.


Selected publications: 
Breshears J, Hamilton LS, Chang EF (2018). Spontaneous neural activity in the posterior superior temporal gyrus recapitulates tuning for speech features. Frontiers in Human Neuroscience, 18 September 2018.

Hamilton LS and Huth AG (2018). The revolution will not be controlled: natural stimuli in speech neuroscience. Language, Cognition, and Neuroscience.

Hamilton LS*, Edwards E*, Chang EF (2018). A spatial map of onset and sustained responses to speech in the human superior temporal gyrus. Current Biology. *Co-first authors.

Hamilton LS, Chang DL, Lee MB, Chang EF (2017). Semi-automated anatomical labeling and inter-subject warping of high-density intracranial recording electrodes in electrocorticography. Frontiers in Neuroinformatics. 11(62). doi: 10.3389/fninf.2017.00062. See also accompanying code and sample dataset.

Tang C, Hamilton LS, Chang EF (2017). Intonational speech prosody encoding in human auditory cortex. Science. 357(6353): 797-801.

Baud MO, Kleen JK, Anumanchipalli GK, Hamilton LS, Tan Y-L, Knowlton R, Chang EF (2017). Unsupervised Learning of Spatiotemporal Interictal Discharges in Focal Epilepsy. Neurosurgery nyx480.

Muller LN, Hamilton LS, Edwards E, Bouchard K, Chang EF (2016). Spatial resolution dependence on spectral frequency in human speech cortex electrocorticography. Journal of Neural Engineering 13: 056013.

Cheung C*, Hamilton LS*, Johnson K, Chang EF (2016). Auditory Representation of Speech Sounds in Human Motor Cortex. eLife 2016; 5:e12577. *Co-first authors.

Hullett PW, Hamilton LS, Mesgarani N, Schreiner CE, Chang EF (2016). Human Superior Temporal Gyrus Organization of Spectrotemporal Modulation Tuning Derived from Speech Stimuli. Journal of Neuroscience 36(6):2014-2026.

Hamilton LS, Sohl-Dickstein J, Huth AG, Carels VM, Deisseroth K, Bao S (2013). Optogenetic activation of an inhibitory network enhances feedforward functional connectivity in auditory cortex. Neuron 80:4 1066-1076.

Elliott TM, Hamilton LS, Theunissen FE (2013). Acoustic structure of the five perceptual dimensions of timbre in orchestral instrument tones. Journal of the Acoustical Society of America 133:1 (389-404).
