In this paper, a Dynamically-Biased Long Short-Term Memory (DB-LSTM) neural network architecture is proposed for artificial intelligence Internet of Things (AIoT) applications. Unlike a conventional LSTM, which uses a static bias, the DB-LSTM adjusts the cell bias dynamically based on the previous state. Hence, a DB-LSTM cell contains information about both the previous output and the current cell state. With this additional information, the DB-LSTM achieves faster training convergence and better accuracy. Furthermore, weight quantization is performed to reduce the weights to either 1 bit or 2 bits, so that the algorithm can be implemented on portable edge devices. With the same 100-epoch training setup, a loss reduction of more than 70% is achieved for 32-bit floating-point, 1-bit, and 2-bit weights alike, and the loss degradation due to weight quantization is negligible. The performance of the proposed model is also validated on the classical air passenger forecasting problem.
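A minimal sketch of the two ideas above, assuming one plausible formulation: the gate pre-activations get a bias computed from the previous hidden state via a learned matrix `Wb` (rather than a static bias vector), and weights are quantized to 1 bit (binary, scaled by the mean magnitude) or 2 bits (ternary levels). The abstract does not give the exact equations, so `Wb`, the gate layout, and the quantization scheme here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def db_lstm_step(x, h_prev, c_prev, W, U, Wb):
    """One step of a dynamically-biased LSTM cell (illustrative sketch).

    Instead of a static bias vector b, the bias of every gate is computed
    from the previous hidden state: b_t = Wb @ h_prev (assumed form).
    W, U, Wb hold the stacked parameters for the i, f, o, g gates.
    """
    z = W @ x + U @ h_prev + Wb @ h_prev       # Wb @ h_prev is the dynamic bias
    i, f, o, g = np.split(z, 4)                # slice out the four gates
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def quantize(w, bits):
    """Quantize weights to 1 bit (binary, scaled) or 2 bits (ternary)."""
    if bits == 1:
        # binary weights: keep only the sign, scaled by the mean magnitude
        return np.sign(w) * np.mean(np.abs(w))
    qmax = 2 ** (bits - 1) - 1                 # bits=2 -> levels {-1, 0, +1}
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale
```

For example, `quantize(np.array([0.9, -0.9, 0.1]), bits=2)` snaps the small weight to zero while preserving the large ones, which is why the abstract's observation that quantization costs little accuracy is plausible for weights with a few dominant magnitudes.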
In this paper, a spiking convolutional neural network (SCNN) model for voice keyword recognition is presented. The model consists of an input pre-processing layer, a spiking neural network (SNN) layer with a built-in filter bank, and convolutional neural network (CNN) layers. A 16-channel infinite impulse response (IIR) filter bank with an energy detector extracts power from each voice signal band and converts it to spikes via the SNN layer. The spiking rate within a defined time window is used as the input to the following CNN layers for classification. The network is trained on a voice digit dataset, and the weights of the convolutional layers are adjusted by training on the spike-integration results obtained from the spiking layer. The model has been implemented for voice keyword recognition and achieves 96.0% accuracy. The combination of SNN and CNN reduces the overall number of layers and neurons in the system without compromising classification accuracy. It is suitable fo...
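The front-end pipeline described above (filter bank, energy detection, spike generation, rate coding) can be sketched as follows. The filter design, center frequencies, integrate-and-fire threshold, and window length are illustrative assumptions; the abstract gives only the channel count (16) and the general structure:

```python
import numpy as np

def biquad_bandpass(fc, fs, q=5.0):
    """2nd-order band-pass biquad coefficients (RBJ cookbook formulas)."""
    w0 = 2.0 * np.pi * fc / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([alpha, 0.0, -alpha])
    a = np.array([1.0 + alpha, -2.0 * np.cos(w0), 1.0 - alpha])
    return b / a[0], a / a[0]             # normalize so a[0] == 1

def iir_filter(b, a, x):
    """Direct-form I IIR filtering of a 1-D signal."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y

def spike_rates(x, fs, n_channels=16, win=256, thresh=0.5):
    """Filter bank -> energy -> integrate-and-fire -> spike count per window.

    Returns a (n_channels, n_windows) rate matrix, i.e. the kind of
    feature map the CNN layers would consume. Center frequencies are
    log-spaced (an assumption; the paper's exact bands are not stated).
    """
    fcs = np.geomspace(100.0, 0.8 * fs / 2.0, n_channels)
    n_win = len(x) // win
    rates = np.zeros((n_channels, n_win))
    for ch, fc in enumerate(fcs):
        b, a = biquad_bandpass(fc, fs)
        energy = iir_filter(b, a, x) ** 2          # energy detector
        acc = 0.0
        spikes = np.zeros(len(x), dtype=bool)
        for n, e in enumerate(energy):
            acc += e                               # integrate band energy
            if acc >= thresh:                      # fire a spike and reset
                spikes[n] = True
                acc = 0.0
        for w in range(n_win):                     # rate code per time window
            rates[ch, w] = spikes[w * win:(w + 1) * win].sum()
    return rates
```

Feeding spike *rates* rather than raw waveforms into the CNN is what lets the combined SNN+CNN system shed layers and neurons: the spiking front end already performs the band-energy feature extraction that early CNN layers would otherwise have to learn.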
Papers by Jinhai Hu