2016 International Joint Conference on Neural Networks (IJCNN), 2016
Online sequence learning from streaming data is one of the most challenging topics in machine learning. Neural network models represent promising candidates for sequence learning due to their ability to learn and recognize complex temporal patterns. In this paper, we present a comparative study of Hierarchical Temporal Memory (HTM), a neurally inspired model, and other feedforward and recurrent artificial neural network models on both artificial and real-world sequence prediction problems. HTM and long short-term memory (LSTM) give the best prediction accuracy. HTM additionally demonstrates many other features that are desirable for real-world sequence learning, such as fast adaptation to changes in the data stream, robustness to sensor noise, and fault tolerance. These features make HTM an ideal candidate for online sequence learning problems.
Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a repr...
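The lateral "voting" idea in this abstract can be sketched in a few lines of Python. This is a toy illustration only, not the paper's network model: each column holds the set of objects consistent with the single feature it has sensed (a hypothetical data structure), and lateral connections let columns intersect their hypotheses to converge on an object faster than any column could alone.

```python
def column_vote(sensed_features, object_features):
    """Toy cross-column voting (illustrative, not the paper's model).

    sensed_features: one sensed feature per cortical column (assumed input)
    object_features: known objects mapped to their feature sets
    Returns the objects consistent with every column's evidence.
    """
    candidates = [
        {obj for obj, feats in object_features.items() if f in feats}
        for f in sensed_features
    ]
    # Lateral connections: each column narrows its hypothesis set to
    # what its neighbors also consider possible.
    return set.intersection(*candidates)

# With partial knowledge one column is ambiguous; a second column resolves it.
objects = {"cup": {"rim", "handle"}, "bowl": {"rim", "base"}}
print(column_vote(["rim"], objects))            # both 'cup' and 'bowl' remain
print(column_vote(["rim", "handle"], objects))  # only 'cup' remains
```

The intersection step is the essential point: each column contributes partial evidence, and combining evidence across columns prunes the hypothesis space much faster than sequential sensing by a single column.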
ABSTRACT: Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose tha...
Visual processing depends on specific computations implemented by complex neural circuits. Here, we present a circuit-inspired model of retinal ganglion cell computation, targeted to explain their temporal dynamics and adaptation to contrast. To localize the sources of such processing, we used recordings at the levels of synaptic input and spiking output in the in vitro mouse retina. We found that an ON-Alpha ganglion cell's excitatory synaptic inputs were described by a divisive interaction between excitation and delayed suppression, which explained nonlinear processing that was already present in ganglion cell inputs. Ganglion cell output was further shaped by spike generation mechanisms. The full model accurately predicted spike responses with unprecedented millisecond precision, and accurately described contrast adaptation of the spike train. These results demonstrate how circuit and cell-intrinsic mechanisms interact for ganglion cell function and, more generally, illustrat...
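The divisive interaction between excitation and delayed suppression can be sketched as a simple linear-nonlinear model. This is a minimal illustration: the filter shapes, the delay, and the constant `eps` below are assumptions, not the paper's fitted parameters.

```python
import numpy as np

def divisive_response(stim, exc_filter, sup_filter, delay, eps=1.0):
    """Rate = rectified excitation / (eps + delayed, rectified suppression)."""
    exc = np.convolve(stim, exc_filter)[:len(stim)]
    sup = np.convolve(stim, sup_filter)[:len(stim)]
    # Delay the suppressive signal relative to excitation.
    sup = np.concatenate([np.zeros(delay), sup[:len(stim) - delay]])
    return np.maximum(exc, 0.0) / (eps + np.maximum(sup, 0.0))

rng = np.random.default_rng(0)
stim = rng.normal(0, 1, 500)                   # white-noise stimulus (toy)
exc_f = np.exp(-np.arange(30) / 5.0)           # assumed excitatory filter
sup_f = 0.5 * np.exp(-np.arange(30) / 10.0)    # assumed suppressive filter
rate = divisive_response(stim, exc_f, sup_f, delay=8)
```

The divisive form captures the abstract's key claim: because suppression arrives with a delay and scales down the gain rather than subtracting from it, responses to sustained high-contrast input are progressively attenuated, which is one way contrast adaptation can arise upstream of spike generation.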
The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory has recently been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods (autoregressive integrated moving average), feedforward neural networks (time delay neural network and onli...
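The idea of maintaining multiple simultaneous predictions until evidence disambiguates can be illustrated with a toy variable-order predictor. This is a sketch of the general idea only; HTM itself uses sparse distributed representations and dendritic segments, not a lookup table.

```python
from collections import defaultdict

class ToySequencePredictor:
    """Toy variable-order predictor (not the HTM algorithm): stores every
    observed next symbol for each context of length 1..order."""

    def __init__(self, order=2):
        self.order = order
        self.table = defaultdict(set)   # context tuple -> possible next symbols
        self.history = []

    def observe(self, symbol):
        for k in range(1, self.order + 1):
            if len(self.history) >= k:
                self.table[tuple(self.history[-k:])].add(symbol)
        self.history.append(symbol)

    def predict(self):
        # Longest matching context wins; it may still contain several
        # candidates, so multiple predictions are kept simultaneously.
        for k in range(self.order, 0, -1):
            if len(self.history) >= k:
                ctx = tuple(self.history[-k:])
                if ctx in self.table:
                    return set(self.table[ctx])
        return set()

p = ToySequencePredictor(order=2)
for s in "ABCDXBCY":        # two sequences sharing the subsequence B, C
    p.observe(s)
p.observe("B")
p.observe("C")
print(p.predict())          # branching point: both 'D' and 'Y' remain predicted
```

At the branching point, the predictor reports the full candidate set rather than committing to one continuation; HTM achieves the analogous behavior through a union of sparse predictive states, which remains unambiguous because sparse codes rarely overlap.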
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the spatial pooler, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the spatial pooler outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the spatial po...
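The SP's core loop, overlap scoring, k-winners-take-all competition, and a Hebbian permanence update, can be sketched as follows. The sizes, thresholds, and learning rates are toy assumptions; the actual spatial pooler additionally uses potential pools, boosting (the homeostatic excitability control mentioned above), and topology.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns, k_active = 64, 128, 6          # toy sizes (assumed)
perm = rng.uniform(0.0, 1.0, (n_columns, n_inputs)) # synaptic permanences
CONNECTED, P_INC, P_DEC = 0.5, 0.05, 0.02           # assumed threshold / rates

def spatial_pool(x, learn=True):
    """Map a binary input vector to a sparse distributed representation."""
    connected = (perm >= CONNECTED).astype(int)
    overlap = connected @ x                          # overlap per column
    winners = np.argsort(overlap)[-k_active:]        # k-winners-take-all
    if learn:                                        # Hebbian permanence update:
        perm[np.ix_(winners, np.flatnonzero(x == 1))] += P_INC  # reinforce active bits
        perm[np.ix_(winners, np.flatnonzero(x == 0))] -= P_DEC  # punish inactive bits
        np.clip(perm, 0.0, 1.0, out=perm)
    sdr = np.zeros(n_columns, dtype=int)
    sdr[winners] = 1
    return sdr

x = (rng.random(n_inputs) < 0.2).astype(int)         # sparse binary input
sdr = spatial_pool(x)
```

Note that the output sparsity is fixed at `k_active / n_columns` regardless of the input, and repeated presentations of the same pattern strengthen the winning columns' connections, which is the source of the improved noise robustness through learning described in the abstract.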
Understanding why neural systems can process information extremely fast is a fundamental question in theoretical neuroscience. The present study investigates the effect of noise on accelerating neural computation. To evaluate the speed of network response, we consider a computational task in which the network tracks time-varying stimuli. Two noise structures are compared, namely stimulus-dependent and stimulus-independent noise. Based on a simple linear integrate-and-fire model, we theoretically analyze the network dynamics and find that stimulus-dependent noise, whose variance is proportional to the mean of the external inputs, is more effective at speeding up network computation. This is due to two desirable properties of the transient network dynamics: (1) the instantaneous firing rate of the network is proportional to the mean of the external inputs, and (2) the stationary state of the network is robust to stimulus changes. We investigate two network models with varying recurrent interactions and find that recurrent interactions tend to slow down the tracking speed of the network. When the biologically plausible Hodgkin-Huxley model is considered, we also observe that stimulus-dependent noise accelerates neural computation, although the improvement is smaller than in the case of the linear integrate-and-fire model.
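A minimal simulation of this setup is sketched below: a population of leaky integrate-and-fire neurons driven by a common mean input, with per-neuron noise whose variance scales with that mean (the stimulus-dependent structure). All parameter values here are illustrative assumptions, not those of the paper.

```python
import numpy as np

def lif_population_rate(i_mean, n=2000, steps=200, dt=0.1, tau=10.0,
                        v_th=1.0, v_reset=0.0, noise_scale=0.5):
    """Population firing rate of LIF neurons under stimulus-dependent noise:
    the noise standard deviation grows with the mean input i_mean.
    All parameters are toy choices for illustration."""
    rng = np.random.default_rng(1)
    v = np.full(n, v_reset)
    rates = []
    for _ in range(steps):
        # Stimulus-dependent noise: variance proportional to the mean input.
        noise = rng.normal(0.0, noise_scale * np.sqrt(i_mean * dt), n)
        v += (dt / tau) * (i_mean - v) + noise       # leaky integration
        spiked = v >= v_th
        v[spiked] = v_reset                          # reset after a spike
        rates.append(spiked.mean() / dt)             # population rate estimate
    return np.array(rates)

rates_strong = lif_population_rate(2.0)   # suprathreshold mean drive
rates_weak = lif_population_rate(0.5)     # subthreshold drive; noise-driven spikes
```

Because the noise amplitude tracks the input mean, a stimulus step immediately changes the spread of membrane potentials across the population, so the population rate can follow the stimulus faster than the single-neuron membrane time constant would suggest.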
Computations performed by the visual pathway are constructed by neural circuits distributed over multiple stages of processing, and thus it is challenging to determine how different stages contribute based on recordings from single areas. Here, we address this problem in the lateral geniculate nucleus (LGN), using experiments combined with nonlinear modeling capable of isolating various circuit contributions. We recorded cat LGN neurons presented with temporally modulated spots of various sizes, which drove temporally precise LGN responses. We utilized simultaneously recorded S-potentials, corresponding to the primary retinal ganglion cell (RGC) input to each LGN cell, in order to distinguish the computations underlying temporal precision in the retina from those in the LGN. Nonlinear models with excitatory and delayed suppressive terms were sufficient to explain temporal precision in the LGN, and we found that models of the S-potentials were nearly identical, although with a lower ...
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, Jan 6, 2016
The responses of sensory neurons can be quite different to repeated presentations of the same stimulus. Here, we demonstrate a direct link between the trial-to-trial variability of cortical neuron responses and network activity that is reflected in local field potentials (LFPs). Spikes and LFPs were recorded with a multielectrode array from the middle temporal (MT) area of the visual cortex of macaques during the presentation of continuous optic flow stimuli. A maximum likelihood-based modeling framework was used to predict single-neuron spiking responses using the stimulus, the LFPs, and the activity of other recorded neurons. MT neuron responses were strongly linked to gamma oscillations (maximum at 40 Hz) as well as to lower-frequency delta oscillations (1-4 Hz), with consistent phase preferences across neurons. The predicted modulation associated with the LFP was largely complementary to that driven by visual stimulation, as well as the activity of other neurons, and accounted f...
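The standard maximum-likelihood framework for predicting spikes from stimulus and LFP covariates is a Poisson generalized linear model (GLM). A minimal version is sketched below; the covariate choice, simulated data, and plain gradient-descent fitting schedule are illustrative assumptions, and oscillation phase coupling is represented as a cosine regressor.

```python
import numpy as np

def neg_log_lik(w, X, y):
    """Poisson GLM negative log-likelihood (exponential link), up to a constant."""
    eta = X @ w
    return np.exp(eta).sum() - y @ eta

def fit_glm(X, y, lr=0.05, steps=2000):
    """Plain gradient descent on the NLL (a real analysis would use a
    dedicated optimizer, e.g. iteratively reweighted least squares)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (np.exp(X @ w) - y) / len(y)
        w -= lr * grad
    return w

# Simulated covariates: intercept, a stimulus term, and the cosine of an
# assumed LFP phase; phase coupling enters the GLM as cos/sin regressors.
rng = np.random.default_rng(0)
n = 4000
X = np.column_stack([
    np.ones(n),                                # intercept
    rng.normal(0, 0.5, n),                     # stimulus drive
    np.cos(rng.uniform(0, 2 * np.pi, n)),      # LFP phase coupling
])
w_true = np.array([0.1, 0.8, 0.5])
y = rng.poisson(np.exp(X @ w_true))            # spike counts per time bin
w_hat = fit_glm(X, y)
```

Because the log-rate is additive in the covariates, the fitted weights separate stimulus-driven modulation from LFP-phase-driven modulation, which is how a model of this kind can show the two contributions to be largely complementary.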
... The current-based neuron models, due to their simplicity, are often used in the theoretical study of large-size networks. Hence, we expect that the current study will serve as a building block for analyzing the impact of shunting inhibition on the dynamics of large-size networks. ...
Papers by Yuwei Cui