Neural networks may be divided into generations that are distinguished by the computational units implemented at their neurons. The first generation of neural networks comprised neurons, such as the perceptron, with binary outputs: a neuron's output was 1 if the weighted sum of its inputs exceeded a threshold, and 0 otherwise. The second generation introduced a non-linear continuous activation function applied at each neuron, such that each neuron's output could take any value within a continuous range. Spiking neural networks (SNNs) are considered a third generation of neural networks that more closely emulate the neural circuits of the brain by making use of spiking neurons. A spiking neuron receives incoming spikes, each of which increases its activation level, which in turn decays over time. A spiking neuron only fires (i.e. outputs a spike) if and when its activation level exceeds a given threshold. This neuron model is referred to as the ‘leaky integrate-and-fire’ (LIF) model.
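The LIF dynamics described above can be summarised in a few lines. The following is a minimal sketch, not tied to any particular SNN library; the function name and the parameter values (decay, threshold, reset) are illustrative only:

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, decay=0.9, reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: 1-D array of weighted input at each time step.
    Returns a binary spike train of the same length.
    Parameter values are illustrative, not taken from any specific model.
    """
    v = 0.0                          # membrane potential (activation level)
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v = decay * v + i_t          # leak, then integrate the incoming drive
        if v >= threshold:           # fire only when the threshold is crossed
            spikes[t] = 1.0
            v = reset                # reset the potential after a spike
    return spikes

rng = np.random.default_rng(0)
train = lif_neuron(rng.uniform(0.0, 0.4, size=50))
print(int(train.sum()), "spikes over 50 steps")
```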
SNNs are of interest for a number of reasons. Firstly, they have the potential to be far more energy efficient than conventional artificial neural networks (ANNs). This efficiency results at least in part from the event-driven mechanism utilised in SNNs, whereby neurons fire only once their activation levels exceed a threshold, meaning that only a small fraction of neurons is typically active at any given time. Specialised computing hardware, known as neuromorphic chips, has been developed to implement SNNs with physical artificial neurons, and has been found to offer substantial energy savings compared to hardware running traditional neural networks. A further advantage of SNNs is that, since they incorporate the timing of spikes as a component of their computation, they are naturally suited to processing information that changes over time.
A number of challenges are, however, associated with SNNs. Most notably, SNNs are difficult to train because spikes are non-differentiable, which prevents the direct application of gradient-based methods such as backpropagation. Furthermore, specific challenges have been encountered when applying SNNs to tasks currently performed well by conventional ANNs. Two notable projects are currently underway to enhance the accuracy of SNNs on tasks involving the processing of sequential data.
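To illustrate the training difficulty: the spike function is a hard step whose derivative is zero almost everywhere, so gradients cannot flow through it. One common workaround in the wider literature, not specific to the projects described below, is to substitute a smooth ‘surrogate’ gradient in the backward pass. The following PyTorch sketch shows the idea; the fast-sigmoid surrogate and its slope constant are illustrative choices:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()       # non-differentiable step: spike if v crosses 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of a fast sigmoid, used in place of the step's zero gradient.
        # The slope constant 10.0 is an illustrative choice.
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_output * surrogate

v = torch.randn(5, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)                           # non-zero gradients despite the hard step
```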
A first project is concerned with the use of SNNs to make future predictions based on historical data, e.g. regarding electricity consumption or traffic patterns. A particular challenge associated with the use of SNNs for this task is the choice of a suitable mechanism for encoding the floating-point values of the time-series data into spike trains. To address this, the authors describe an approach in which the temporal dimension of the time-series data is aligned with that of the SNN, and finer-grained information based on the SNN time step is incorporated into the time-series data. Encoding layers are used to convert the floating-point values of the time-series data into spike trains. Experimental results show that this approach is effective for time-series forecasting, achieving performance comparable to that of ANN counterparts whilst significantly reducing energy consumption during inference.
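The project's actual encoding layers are learned as part of the network and are not reproduced here. As a simpler illustration of the underlying idea of converting floating-point values into spike trains, the following sketch uses generic rate coding, in which a value's magnitude sets the firing probability at each SNN time step; the function name and parameters are hypothetical:

```python
import numpy as np

def rate_encode(values, n_steps=16, rng=None):
    """Encode normalised floating-point values (in [0, 1]) into Bernoulli spike trains.

    Each value becomes a spike train of length n_steps whose firing rate
    is proportional to the value's magnitude. This is a generic rate-coding
    sketch, not the learned encoding used in the project described above.
    """
    rng = rng or np.random.default_rng(0)
    values = np.asarray(values)[:, None]             # shape (n_values, 1)
    return (rng.random((values.shape[0], n_steps)) < values).astype(np.float32)

series = np.array([0.1, 0.5, 0.9])                   # e.g. normalised consumption readings
trains = rate_encode(series)
print(trains.mean(axis=1))                           # empirical firing rates track the inputs
```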
A second project is concerned with the use of SNNs for processing the rhythmic and periodic patterns found in natural language processing and time-series analysis. In sequential tasks, positional encoding (PE) is used in certain models, such as transformers, to capture the sequential order of input tokens. The authors note that applying SNNs to sequential tasks, such as text classification and time-series forecasting, has been hindered by the challenge of creating an effective and hardware-friendly spike-form PE strategy. Central pattern generators (CPGs) are networks of neurons that produce rhythmic outputs when coupled. The authors observe that CPGs and PE both generate periodic outputs (with respect to time for CPGs, and with respect to position for PE), and that the two techniques could therefore be integrated, with CPG neurons utilised to encode positional information.
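The precise CPG-based encoding is not detailed here, but the analogy can be illustrated with the standard sinusoidal PE used in transformers, which is periodic with respect to position. The sketch below computes that encoding and then thresholds it into a hypothetical binary, spike-form variant; the threshold value is an illustrative assumption, not the authors' method:

```python
import numpy as np

def sinusoidal_pe(n_positions, d_model):
    """Standard transformer positional encoding: periodic in position."""
    pos = np.arange(n_positions)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

# Hypothetical spike-form variant: threshold the periodic signal into binary
# events, loosely mirroring how CPG neurons emit rhythmic spikes.
# The 0.5 threshold is an illustrative assumption.
pe = sinusoidal_pe(n_positions=8, d_model=4)
spike_pe = (pe > 0.5).astype(np.float32)
print(spike_pe)
```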
Ongoing research in the field of SNNs thus appears to be broadening the range of processing tasks to which they can be applied. As these frameworks mature, more widespread adoption of SNN hardware, such as neuromorphic chips, could follow, along with a new wave of AI applications.