Unsupervised methods for determining the functional organization of the human speech cortex

To process speech, the brain must transform a series of low-level acoustic inputs into higher-order linguistic categories such as phonemes, words, and narrative meaning. This requires encoding acoustic features that occur at very fast timescales as well as information that accumulates over long stretches of time. How this process is functionally organized into cortical circuits is not well understood.

Liberty Hamilton and colleagues showed that, by applying unsupervised methods to neural recordings of people listening to naturally spoken sentences, they could uncover an organization of the auditory cortex and surrounding areas into two spatially and functionally distinct modules: a posterior area that detects the rapid onsets important for segmenting the beginnings of sentences and phrases, and a slower anterior area that responds in a sustained manner throughout each sentence. The Hamilton lab is now applying similar unsupervised methods to examine how this functional organization changes during brain development in children with epilepsy. They also use computational models to identify which sound features are represented in the brain, and to study how cortical areas interact during natural speech perception and production.
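The core idea behind this kind of unsupervised analysis can be illustrated with a toy example. The sketch below is not the authors' actual pipeline (their work used methods such as matrix factorization on intracranial recordings); it is a minimal, hypothetical demonstration in which synthetic electrode time series with "onset" (transient) and "sustained" response profiles are separated by a plain k-means clustering, recovering the two functional groups without any labels.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)  # 2 s of simulated sentence time

# Synthetic response profiles (illustrative, not real data):
# "onset" electrodes respond transiently at sentence start,
# "sustained" electrodes stay elevated throughout the sentence.
onset = np.exp(-t / 0.15)
sustained = 1.0 / (1.0 + np.exp(-(t - 0.2) / 0.05))

# 20 noisy electrodes of each type, stacked into a (40, 200) matrix.
responses = np.vstack(
    [onset + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
    + [sustained + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
)

def kmeans2(X, iters=50):
    """Two-cluster Lloyd's k-means, seeded with the two most dissimilar rows."""
    d0 = ((X[:, None, :] - X[None]) ** 2).sum(-1)
    i, j = np.unravel_index(d0.argmax(), d0.shape)
    centroids = X[[i, j]].copy()
    for _ in range(iters):
        # Assign each electrode to its nearest centroid, then update centroids.
        d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(2):
            if (labels == k).any():
                centroids[k] = X[labels == k].mean(0)
    return labels, centroids

labels, centroids = kmeans2(responses)
# Electrodes generated from the same profile end up in the same cluster,
# so the onset/sustained distinction is recovered without supervision.
```

The same logic scales to real recordings: each row is an electrode's average response to sentences, and the discovered clusters correspond to spatially coherent cortical modules.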