1.College of Nuclear Technology and Automation Engineering, Chengdu University of Technology, Dongsanlu, Erxianqiao, Chengdu 610059, China
Corresponding author, huanghongquan@cdut.cn
Xing-Ke Ma, Hong-Quan Huang, Qian-Cheng Wang et al., Estimation of Gaussian overlapping nuclear pulse parameters based on a deep learning LSTM model. Nuclear Science and Techniques 30(11):171 (2019). DOI: 10.1007/s41365-019-0691-2
A Long Short-Term Memory (LSTM) neural network has excellent learning ability for the time series of nuclear pulse signals. It can accurately estimate parameters associated with amplitude, time, and so on, in digitally shaped nuclear pulse signals—especially signals from overlapping pulses. By learning the mapping between Gaussian overlapping pulses after digital shaping and exponential pulses before shaping, the shaping parameters of overlapping exponential nuclear pulses can be estimated with the LSTM model. First, the Gaussian overlapping nuclear pulses (ONPs) whose parameters needed to be estimated were produced by superposing multiple exponential nuclear pulses and applying Gaussian digital shaping. Second, a dataset containing multiple samples was produced; each sample comprised a sequence of sampled values from a Gaussian ONP after digital shaping and the set of shaping parameters of the exponential pulses before shaping. Third, the training set of this dataset was used to train the LSTM model: the values sampled from the Gaussian ONPs served as the input data for the LSTM model, and the pulse parameters estimated by the current LSTM model were computed by forward propagation. Next, a loss function was used to calculate the loss value between the network-estimated pulse parameters and the actual pulse parameters. A gradient-based optimization algorithm was then applied to feed the loss value and the gradient of the loss function back to the neural network and update the weights of the LSTM model, thereby training the network. Finally, the sampled values of a Gaussian ONP whose shaping parameters needed to be estimated were used as the input data for the trained LSTM model, which then produced the required set of nuclear pulse parameters.
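The dataset-construction step described above can be sketched in NumPy. The abstract does not give the exact S-K shaper coefficients, so this illustration stands in a convolution with a normalized Gaussian kernel for the Gaussian digital shaping; the pulse parameters (`t0`, `amplitude`, `tau`) and all numeric values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def exp_pulse(t, t0, amplitude, tau):
    """Single exponential-decay nuclear pulse starting at time t0."""
    return np.where(t >= t0, amplitude * np.exp(-(t - t0) / tau), 0.0)

def gaussian_shape(signal, sigma, dt):
    """Illustrative Gaussian shaping: convolution with a normalized Gaussian
    kernel (a stand-in for the S-K digital shaper used in the paper)."""
    k = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-k**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

# One training sample: overlapping exponential pulses -> shaped Gaussian ONP
t = np.arange(0.0, 10.0, 0.01)                  # time axis (arbitrary units)
params = [(2.0, 1.0, 0.5), (2.6, 0.7, 0.5)]     # (t0, amplitude, tau) per pulse
raw = sum(exp_pulse(t, *p) for p in params)     # superposed exponential pulses
shaped = gaussian_shape(raw, sigma=0.3, dt=0.01)  # input sequence for the LSTM
target = np.array(params).ravel()               # parameter set to be estimated
```

Repeating this with randomized `params` yields the (input sequence, parameter set) pairs that make up the training set.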
In summary, the experimental results showed that the proposed method overcomes the local-convergence defect of traditional methods and can accurately extract parameters from multiple, severely overlapping Gaussian pulses, achieving a globally optimal estimate of the nuclear pulse parameters. These results indicate that the method is well suited to estimating nuclear pulse parameters.
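The forward-propagation step of the method rests on the standard LSTM cell equations (Hochreiter and Schmidhuber, 1997). The minimal sketch below runs a sequence through one LSTM cell in NumPy; the hidden size, weight initialization, and random input sequence are assumptions for illustration, not the paper's actual network configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) biases, stacked in order [input, forget, cell, output]."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    g = np.tanh(z[2 * H:3 * H])    # candidate cell state
    o = sigmoid(z[3 * H:4 * H])    # output gate
    c = f * c_prev + i * g         # new cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 1, 8                        # input dim (one sampled value), hidden size
W = rng.normal(0.0, 0.1, (4 * H, D))
U = rng.normal(0.0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x_t in rng.normal(0.0, 1.0, 50):   # stand-in for the shaped ONP samples
    h, c = lstm_step(np.array([x_t]), h, c, W, U, b)
# A final linear layer applied to h would output the estimated pulse parameters.
```

In training, the final hidden state is mapped to the parameter estimates, the loss against the true parameters is computed, and a gradient-based optimizer updates `W`, `U`, and `b` by backpropagation through time.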
Keywords: Nuclear pulses; S-K digital shaping; Deep learning; LSTM