Chemistries exhibiting complex dynamics – from inorganic oscillators to gene regulatory networks – have long been known but cannot be reprogrammed at will because of a lack of control over their evolved or serendipitously found molecular building blocks. Here we show that information-rich DNA strand displacement cascades can be systematically constructed to realize complex temporal trajectories specified by an abstract chemical reaction network model. We codify critical design principles in a compiler that automates the design process, and demonstrate our approach by building a novel DNA-only oscillator. Unlike biological networks that rely on the sophisticated chemistry underlying the central dogma, our test tube realization suggests that simple Watson-Crick base pairing interactions alone suffice for arbitrarily complex dynamics. Our result establishes a basis for autonomous and programmable molecular systems that interact with and control their chemical environment.
A fundamental everyday visual task is to detect target objects within a background scene. Using relatively simple stimuli, vision science has identified several major factors that affect detection thresholds, including the luminance of the background, the contrast of the background, the spatial similarity of the background to the target, and uncertainty due to random variations in the properties of the background and in the amplitude of the target. Here we use an experimental approach based on constrained sampling from multidimensional histograms of natural stimuli, together with a theoretical analysis based on signal detection theory, to discover how these factors affect detection in natural scenes. We sorted a large collection of natural image backgrounds into multidimensional histograms, where each bin corresponds to a particular luminance, contrast, and similarity. Detection thresholds were measured for a subset of bins spanning the space, where a natural background was randomly sampled from a bin on each trial. In low-uncertainty conditions, both the background bin and the amplitude of the target were fixed, and, in high-uncertainty conditions, they varied randomly on each trial. We found that thresholds increase approximately linearly along all three dimensions and that detection accuracy is unaffected by background bin and target amplitude uncertainty. The results are predicted from first principles by a normalized matched-template detector, where the dynamic normalizing gain factor follows directly from the statistical properties of the natural backgrounds. The results provide an explanation for classic laws of psychophysics and their underlying neural mechanisms.
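The normalized matched-template detector invoked above can be sketched in a few lines. The specific normalization used here (dividing the template response by a gain that grows with background contrast plus a constant) and all parameter values are illustrative assumptions, not the authors' fitted formulation:

```python
import math, random

random.seed(0)

def template_response(stimulus, template, c50=0.1):
    """Normalized matched-template decision variable: dot(template, stimulus)
    divided by a gain that grows with background contrast (illustrative form)."""
    contrast = math.sqrt(sum(s * s for s in stimulus) / len(stimulus))
    dot = sum(t * s for t, s in zip(template, stimulus))
    return dot / (contrast + c50)

# Toy experiment: Gaussian-noise backgrounds, additive target.
n, amp = 64, 0.5
template = [math.sin(2 * math.pi * i / n) for i in range(n)]
norm = math.sqrt(sum(t * t for t in template))
template = [t / norm for t in template]  # unit-energy template

absent, present = [], []
for _ in range(500):
    bg = [random.gauss(0, 0.2) for _ in range(n)]
    absent.append(template_response(bg, template))
    present.append(template_response(
        [b + amp * t for b, t in zip(bg, template)], template))

# Target-present responses sit well above target-absent ones.
mean_a = sum(absent) / len(absent)
mean_p = sum(present) / len(present)
print(mean_p > mean_a)  # → True
```

Because the gain rises with background contrast, the decision variable is automatically rescaled across bins of the background histogram, which is what lets a single fixed criterion produce the approximately linear threshold dependence described above.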
Responses of individual task-relevant sensory neurons can predict monkeys’ trial-by-trial choices in perceptual decision-making tasks. Choice-correlated activity has been interpreted as evidence that the responses of these neurons are causally linked to perceptual judgments. To further test this hypothesis, we studied responses of orientation-selective neurons in V1 and V2 while two macaque monkeys performed a fine orientation discrimination task. Although both animals exhibited a high level of neuronal and behavioral sensitivity, only one exhibited choice-correlated activity. Surprisingly, this correlation was negative: when a neuron fired more vigorously, the animal was less likely to choose the orientation preferred by that neuron. Moreover, choice-correlated activity emerged late in the trial, earlier in V2 than in V1, and was correlated with anticipatory signals. Together, these results suggest that choice-correlated activity in task-relevant sensory neurons can reflect postdecision modulatory signals.
A systems-based analysis of dendritic nonlinearities reveals temporal feature extraction in mouse L5 cortical neurons
What do dendritic nonlinearities tell a neuron about signals injected into the dendrite? Linear and nonlinear dendritic components affect how time-varying inputs are transformed into action potentials (APs), but the relative contribution of each component is unclear. We developed a novel systems-identification approach to isolate the nonlinear response of layer 5 pyramidal neuron dendrites in mouse prefrontal cortex in response to dendritic current injections. We then quantified the nonlinear component and its effect on the soma, using functional models composed of linear filters and static nonlinearities. Both noise and waveform current injections revealed linear and nonlinear components in the dendritic response. The nonlinear component consisted of fast Na+ spikes that varied in amplitude 10-fold in a single neuron. A functional model reproduced the timing and amplitude of the dendritic spikes and revealed that they were selective to a preferred input dynamic (~4.5 ms rise time). The selectivity of the dendritic spikes became wider in the presence of additive noise, which was also predicted by the functional model. A second functional model revealed that the dendritic spikes were weakly boosted before being linearly integrated at the soma. For both our noise and waveform dendritic input, somatic APs were dependent on the somatic integration of the stimulus, followed a subset of large dendritic spikes, and were selective to the same input dynamics preferred by the dendrites. Our results suggest that the amplitude of fast dendritic spikes conveys information about high-frequency features in the dendritic input, which is then combined with low-frequency somatic integration.

NEW & NOTEWORTHY: The nonlinear response of layer 5 mouse pyramidal dendrites was isolated with a novel systems-based approach. In response to dendritic current injections, the nonlinear component contained mostly fast, variable-amplitude Na+ spikes.
A functional model accounted for the timing and amplitude of the dendritic spikes and revealed that dendritic spikes are selective to a preferred input dynamic, which was verified experimentally. Thus, fast dendritic nonlinearities behave as high-frequency feature detectors that influence somatic action potentials.
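A functional model of the kind described, a linear filter followed by a static nonlinearity, can be sketched minimally. The exponential filter (with a time constant echoing the ~4.5 ms preferred rise time), the rectifying nonlinearity, and all parameter values are illustrative choices, not the authors' fitted components:

```python
import math, random

random.seed(1)
dt = 0.1  # ms per sample

# Linear stage: causal exponential filter (illustrative; tau chosen to
# echo the ~4.5 ms preferred rise time reported above).
tau = 4.5
kernel = [math.exp(-t * dt / tau) for t in range(200)]

def ln_model(stimulus, threshold=1.0, gain=2.0):
    """Linear-nonlinear cascade: convolve the input with the filter,
    then apply a static rectifier: output = gain * max(0, v - threshold)."""
    out = []
    for i in range(len(stimulus)):
        v = dt * sum(kernel[j] * stimulus[i - j]
                     for j in range(min(i + 1, len(kernel))))
        out.append(gain * max(0.0, v - threshold))
    return out

# A sustained current step drives the nonlinearity far harder than
# zero-mean noise of comparable amplitude.
step = [0.0] * 50 + [5.0] * 100
noise = [random.gauss(0, 1.0) for _ in range(150)]
print(max(ln_model(step)) > max(ln_model(noise)))  # → True
```

The threshold stage is what makes the cascade behave as a feature detector: only inputs whose filtered version crosses threshold produce any output, so the filter's shape defines the preferred input dynamic.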
The ability to store and later use information is essential for a variety of adaptive behaviors, including integration, learning, generalization, prediction and inference. In this Review, we survey theoretical principles that can allow the brain to construct persistent states for memory. We identify requirements that a memory system must satisfy and analyze existing models and hypothesized biological substrates in light of these requirements. We also highlight open questions, theoretical puzzles and problems shared with computer science and information theory.
Mitochondrial support of persistent presynaptic vesicle mobilization with age-dependent synaptic growth after LTP
The mechanisms underlying the emergence of orientation selectivity in the visual cortex have been, and continue to be, the subjects of intense scrutiny. Orientation selectivity reflects a dramatic change in the representation of the visual world: Whereas afferent thalamic neurons are generally orientation insensitive, neurons in the primary visual cortex (V1) are extremely sensitive to stimulus orientation. This profound change in the receptive field structure along the visual pathway has positioned V1 as a model system for studying the circuitry that underlies neural computations across the neocortex. The neocortex is characterized anatomically by the relative uniformity of its circuitry despite its role in processing distinct signals from region to region. A combination of physiological, anatomical, and theoretical studies has shed some light on the circuitry components necessary for generating orientation selectivity in V1. This targeted effort has led to critical insights, as well as controversies, concerning how neural circuits in the neocortex perform computations.
Understanding the neural basis of behaviour requires studying brain activity in behaving subjects using complementary techniques that measure neural responses at multiple spatial scales, and developing computational tools for understanding the mapping between these measurements. Here we report the first results of widefield imaging of genetically encoded calcium indicator (GCaMP6f) signals from V1 of behaving macaques. This technique provides a robust readout of visual population responses at the columnar scale over multiple mm² and over several months. To determine the quantitative relation between the widefield GCaMP signals and the locally pooled spiking activity, we developed a computational model that sums the responses of V1 neurons characterized by prior single unit measurements. The measured tuning properties of the GCaMP signals to stimulus contrast, orientation and spatial position closely match the predictions of the model, suggesting that widefield GCaMP signals are linearly related to the summed local spiking activity.
During decision making, neurons in multiple brain regions exhibit responses that are correlated with decisions. However, it remains uncertain whether or not various forms of decision-related activity are causally related to decision making. Here we address this question by recording and reversibly inactivating the lateral intraparietal (LIP) and middle temporal (MT) areas of rhesus macaques performing a motion direction discrimination task. Neurons in area LIP exhibited firing rate patterns that directly resembled the evidence accumulation process posited to govern decision making, with strong correlations between their response fluctuations and the animal’s choices. Neurons in area MT, in contrast, exhibited weak correlations between their response fluctuations and choices, and had firing rate patterns consistent with their sensory role in motion encoding. The behavioural impact of pharmacological inactivation of each area was inversely related to their degree of decision-related activity: while inactivation of neurons in MT profoundly impaired psychophysical performance, inactivation in LIP had no measurable impact on decision-making performance, despite having silenced the very clusters that exhibited strong decision-related activity. Although LIP inactivation did not impair psychophysical behaviour, it did influence spatial selection and oculomotor metrics in a free-choice control task. The absence of an effect on perceptual decision making was stable over trials and sessions and was robust to changes in stimulus type and task geometry, arguing against several forms of compensation. Thus, decision-related signals in LIP do not appear to be critical for computing perceptual decisions, and may instead reflect secondary processes. Our findings highlight a dissociation between decision correlation and causation, showing that strong neuron-decision correlations do not necessarily offer direct access to the neural computations underlying decisions.
The meaning of language is represented in regions of the cerebral cortex collectively known as the ‘semantic system’. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain.
Information in a computer is quantified by the number of bits that can be stored and recovered. An important question about the brain is how much information can be stored at a synapse through synaptic plasticity, which depends on the history of probabilistic synaptic activity. The strong correlation between the size and efficacy of a synapse allowed us to estimate the variability of synaptic plasticity. In an EM reconstruction of hippocampal neuropil, we found single axons making two or more synaptic contacts onto the same dendrites; such pairs share histories of presynaptic and postsynaptic activity. The spine heads and neck diameters of these pairs, but not their neck lengths, were nearly identical in size. We found that there is a minimum of 26 distinguishable synaptic strengths, corresponding to storing 4.7 bits of information at each synapse. Because of the stochastic variability of synaptic activation, the observed precision requires averaging activity over several minutes.
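The information figure follows directly from the count of distinguishable strengths: N distinguishable levels can encode log2(N) bits, so 26 levels give about 4.7 bits per synapse:

```python
import math

# 26 distinguishable synaptic strengths -> log2(26) bits per synapse.
bits = math.log2(26)
print(round(bits, 1))  # → 4.7
```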
Neurons in the macaque lateral intraparietal (LIP) area exhibit firing rates that appear to ramp upward or downward during decision-making. These ramps are commonly assumed to reflect the gradual accumulation of evidence toward a decision threshold. However, the ramping in trial-averaged responses could instead arise from instantaneous jumps at different times on different trials. We examined single-trial responses in LIP using statistical methods for fitting and comparing latent dynamical spike-train models. We compared models with latent spike rates governed by either continuous diffusion-to-bound dynamics or discrete “stepping” dynamics. Roughly three-quarters of the choice-selective neurons we recorded were better described by the stepping model. Moreover, the inferred steps carried more information about the animal’s choice than spike counts.
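The two latent-rate hypotheses being compared can be caricatured in a few lines: a diffusion-to-bound trajectory drifts gradually toward an absorbing bound, while a stepping trajectory holds a baseline rate and jumps instantaneously at a random time. This is a toy illustration with made-up parameters, not the statistical models fitted in the study:

```python
import random

random.seed(2)
dt, n = 0.01, 300  # time step and trajectory length (illustrative)

def diffusion_to_bound(drift=1.0, sigma=1.0, bound=1.0):
    """Latent rate drifts with noise until absorbed at the bound."""
    x, path = 0.0, []
    for _ in range(n):
        if abs(x) < bound:
            x += drift * dt + sigma * random.gauss(0, dt ** 0.5)
            x = max(-bound, min(bound, x))
        path.append(x)
    return path

def stepping(step_time=150, lo=0.2, hi=1.0):
    """Latent rate sits at a baseline, then jumps once."""
    return [lo if t < step_time else hi for t in range(n)]

diff_path = diffusion_to_bound()
step_path = stepping()
# The stepping trajectory takes exactly two distinct values;
# the diffusion trajectory wanders through many.
print(len(set(step_path)), len(set(diff_path)) > 2)  # → 2 True
```

Averaged over trials with variable step times, the stepping trajectories also produce a smooth ramp, which is why single-trial model comparison, rather than trial averaging, is needed to tell the two apart.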
In the mammalian cerebral cortex, neural responses are highly variable during spontaneous activity and sensory stimulation. To explain this variability, the cortex of alert animals has been proposed to be in an asynchronous high-conductance state in which irregular spiking arises from the convergence of large numbers of uncorrelated excitatory and inhibitory inputs onto individual neurons. Signatures of this state are that a neuron’s membrane potential (Vm) hovers just below spike threshold, and its aggregate synaptic input is nearly Gaussian, arising from many uncorrelated inputs. Alternatively, irregular spiking could arise from infrequent correlated input events that elicit large fluctuations in Vm. To distinguish between these hypotheses, we developed a technique to perform whole-cell Vm measurements from the cortex of behaving monkeys, focusing on primary visual cortex (V1) of monkeys performing a visual fixation task. Here we show that, contrary to the predictions of an asynchronous state, mean Vm during fixation was far from threshold (by 14 mV) and spiking was triggered by occasional large spontaneous fluctuations. Distributions of Vm values were skewed beyond that expected for a range of Gaussian input, but were consistent with synaptic input arising from infrequent correlated events. Furthermore, spontaneous fluctuations in Vm were correlated with the surrounding network activity, as reflected in simultaneously recorded nearby local field potential. Visual stimulation, however, led to responses more consistent with an asynchronous state: mean Vm approached threshold, fluctuations became more Gaussian, and correlations between single neurons and the surrounding network were disrupted. These observations show that sensory drive can shift a common cortical circuitry from a synchronous to an asynchronous state.
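The two input regimes contrasted above can be illustrated with a toy leaky membrane: many small uncorrelated inputs yield a near-Gaussian Vm distribution by the central limit theorem, while infrequent large events yield a strongly skewed one. All rates, amplitudes, and the leak constant are illustrative, not measured values:

```python
import random

random.seed(3)

def skewness(xs):
    """Standardized third moment of a sample."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 3 for x in xs) / len(xs) / var ** 1.5

def simulate_vm(rate, amp, steps=20000, leak=0.05):
    """Leaky membrane driven by Poisson-like excitatory events."""
    v, trace = 0.0, []
    for _ in range(steps):
        v += amp * (random.random() < rate)  # synaptic event
        v -= leak * v                        # passive decay
        trace.append(v)
    return trace

dense = simulate_vm(rate=0.5, amp=0.1)      # many small inputs
sparse = simulate_vm(rate=0.005, amp=10.0)  # rare large events
# Rare large events produce a far more skewed Vm distribution.
print(skewness(sparse) > skewness(dense))  # → True
```

For filtered Poisson input, skewness scales as 1/sqrt(rate × membrane time constant), so lowering the event rate while raising the event amplitude is exactly the manipulation that pushes the Vm distribution away from Gaussian.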
Previous work has hinted that prospective and retrospective coding modes exist in hippocampus. Prospective coding is believed to reflect memory retrieval processes, whereas retrospective coding is thought to be important for memory encoding. Here, we show in rats that separate prospective and retrospective modes exist in hippocampal subfield CA1 and that slow and fast gamma rhythms differentially coordinate place cells during the two modes. Slow gamma power and phase locking of spikes increased during prospective coding; fast gamma power and phase locking increased during retrospective coding. Additionally, slow gamma spikes occurred earlier in place fields than fast gamma spikes, and cell ensembles retrieved upcoming positions during slow gamma and encoded past positions during fast gamma. These results imply that alternating slow and fast gamma states allow the hippocampus to switch between prospective and retrospective modes, possibly to prevent interference between memory retrieval and encoding.
A Transition to Sharp Timing in Stochastic Leaky Integrate-and-Fire Neurons Driven by Frozen Noisy Input
The firing activity of intracellularly stimulated neurons in cortical slices has been demonstrated to be profoundly affected by the temporal structure of the injected current. This suggests that the timing features of the neural response may be controlled as much by its own biophysical characteristics as by how a neuron is wired within a circuit. Modeling studies have shown that the interplay between internal noise and the fluctuations of the driving input controls the reliability and the precision of neuronal spiking. In order to investigate this interplay, we focus on the stochastic leaky integrate-and-fire neuron and identify the Hölder exponent H of the integrated input as the key mathematical property dictating the regime of firing of a single neuron. We have recently provided numerical evidence for the existence of a phase transition when H becomes less than the statistical Hölder exponent associated with internal Gaussian white noise (H=1/2). Here we describe the theoretical and numerical framework devised for the study of a neuron that is periodically driven by frozen noisy inputs with exponent H>0. In doing so, we account for the existence of a transition between two regimes of firing when H=1/2, and we show that spiking times have a continuous density when the Hölder exponent satisfies H>1/2. The transition at H=1/2 formally separates rate codes, for which the neural firing probability varies smoothly, from temporal codes, for which the neuron fires at sharply defined times regardless of the intensity of internal noise.
A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding
Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss–Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
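In the smooth-boundary regime (H > 1/2) the first-passage density is a continuous function of time; in the simplest smooth case, a flat boundary crossed by a drifted Brownian process, it is the classical inverse-Gaussian density with mean a/μ. A Monte Carlo sketch (Euler discretization; all parameters illustrative) checks this benchmark:

```python
import random

random.seed(4)
dt, mu, a = 0.002, 1.0, 1.0  # time step, drift, boundary height

def first_passage_time(max_t=20.0):
    """First time a drifted Brownian path x(t) = mu*t + W(t) exceeds a."""
    x, t = 0.0, 0.0
    while t < max_t:
        x += mu * dt + random.gauss(0, dt ** 0.5)
        t += dt
        if x >= a:
            return t
    return max_t  # truncation; crossing is almost sure well before max_t

times = [first_passage_time() for _ in range(500)]
# For a flat boundary the first-passage density is inverse-Gaussian with
# mean a/mu = 1; the Monte Carlo mean should land nearby.
mean_t = sum(times) / len(times)
print(abs(mean_t - a / mu) < 0.2)  # → True
```

Simulating the rough regime H < 1/2 would require a fractional-Brownian boundary and much finer discretization; there the same estimator concentrates p(t) on a Cantor-like set rather than converging to a continuous density, which is the phase transition described above.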