Introduction
X-ray fluorescence refers to the X-rays emitted by a sample irradiated by an excitation source; these X-rays carry information on the elemental and chemical composition of the analyzed sample. In X-ray fluorescence (XRF) spectrometry, the counting rate and energy resolution are key indicators that directly determine the accuracy with which the content of each element in the tested sample is analyzed [1]. In particular, when detecting weak elements with low contents, peak drift and count loss have a substantial impact on the measurement results. The main causes of peak drift and count loss are pulse distortions introduced by the measurement system itself. The key elements of the measurement system include a probe (integrating the detector and preamplifier), an X-ray tube, the tested sample, a front-end signal-conditioning circuit, a digital processing unit, a controller unit, and an upper computer [2]. Distorted pulses mainly comprise stacked, interfering, slow, spark, double, and truncated pulses. In measurement systems using switch-reset preamplifiers, the distorted pulses consist mainly of truncated pulses, that is, pulses whose amplitude suddenly jumps to zero when the switch resets, leaving an insufficient effective width. The resulting amplitude loss of the triangular shaping output leads to several limitations in current X-ray fluorescence spectroscopy, including spectral peak drift, unreliable net counts, and inaccurate elemental content analysis.
In the field of X-ray fluorescence spectroscopy, research has mainly focused on digital pulse shaping [3] and filtering [4], and an increasing number of researchers are applying new digital signal processing techniques to problems in this field. The algorithm proposed by Zhong [5] solved the problem of poor energy-spectrum resolution caused by pulse pile-up and temperature fluctuations in an X-ray spectrometry system. A symmetric conversion method based on the Gaussian distribution [6] was proposed to obtain the γ-ray net count from interlaced overlapping peaks in an HPGe γ-ray spectrometer system. A modified sparse reconstruction method [7] was proposed to overcome pulse pile-up, especially at ultrahigh count rates; it uses two regularization terms to compensate for the error caused by an inadequate sampling rate. To achieve high count rates, a true Gaussian digital shaper for detector pulses [8] and a compensation technique for pulse pile-up [9] were proposed. In our previous research, we proposed a pulse elimination method [10] and a pulse repair method [11] for distorted pulses, both of which improved the accuracy of spectral analysis to a certain extent. As traditional pulse-processing methods, the above approaches clearly improve X-ray fluorescence spectrum analysis when pulse pile-up or pulse distortion is not particularly severe. However, traditional pulse-processing methods are significantly limited when pile-up or distortion is difficult to recognize, and this has become an active research topic in the field of spectrum processing.
In recent years, deep learning has developed rapidly, and many excellent models have emerged, such as UNet [12], VNet [13], and the Transformer [14]. These have been widely used in medicine [15], industry [16], control [17], and radiation measurement, for example in imaging quality improvement for radiation therapy [18], gamma-spectrum analysis [19], and pulse-signal analysis based on residual structures [20]. Deep learning provides various ideas for pulse processing in radiation measurement [21, 22]. Touch [23] applied an artificial neural network to energy-spectrum correction and achieved satisfactory results. Alberto et al. [24] proposed a specific type of U-Net that filters pulses and estimates their heights. Byoungil et al. [25] proposed a deep-learning-based method for separating pile-up pulses and predicting the true pulse height, for application in spectroscopy with a scintillation detector. Liu [26] investigated a pulse-coupled neural network (PCNN) for higher anti-noise performance in neutron and gamma-ray (n-γ) discrimination. Ma [27, 28] accurately predicted the trapezoidal-shaping parameters of piled-up pulses using a long short-term memory (LSTM) model. Building on previous research on pulse amplitude estimation, this paper proposes an LSTM model fused with a convolutional neural network (CNN). Compared with other pulse estimation algorithms, the proposed algorithm exhibits better performance. The introduction and extension of this technology to X-ray fluorescence spectroscopy are of significant interest.
Principle and method
Principle of peak drift
The X-ray fluorescence spectrum is usually obtained by multichannel pulse-amplitude analysis (MCA), with each pulse amplitude contributing one count to the counting histogram. When a pulse output by the measurement system suffers an amplitude loss during the digital processing stage, its count drifts to the left in the histogram. When the number of such pulses is sufficient, they appear as shadow peaks of the characteristic peaks in the generated energy spectrum. The traditional spectrum-acquisition chain is shown in Fig. 1a and includes a detector, a preamplifier, a CR differential shaper, a digital processing unit, and an MCA unit. For standard sources with a single-element composition, the number of characteristic peaks is limited; therefore, the measured output pulse amplitudes mostly fluctuate within a narrow range. Taking 2000 pulses with an amplitude of approximately 600 mV as an example and assuming a pulse distortion ratio of 5%, the X-ray spectrum obtained using the acquisition chain of Fig. 1a is shown in Fig. 1b. In this figure, the channel range of the characteristic-peak region of interest (ROI) is 594-606, with a peak area (net peak count) of 1900. A shadow peak formed by the distorted pulses appears near the 520th channel on the left side of the characteristic peak, with a peak area of 100, as shown by the green shaded area in Fig. 1b. In practical applications, if the counting rate of the characteristic peak is high, the number of distorted pulses accumulates and a shadow peak is generated. This shadow peak not only reduces the net count of the characteristic-peak ROI but also introduces new difficulties into spectral analysis. Therefore, this study proposes a deep-learning-based CNN-LSTM model that is inserted before the MCA unit to achieve an accurate estimation of the pulse parameters. The spectral acquisition chain with the added model is shown in Fig. 1c. In the ideal case, when the amplitudes of the 100 distorted pulses are accurately estimated, the histogram of the characteristic peak obtained by calling the model is as shown in Fig. 1d: the ROI of the characteristic peak remains unchanged, but its total count increases to 2000. Thus, calling the model to optimize the pulse-amplitude estimation not only prevents the loss of characteristic-peak counts but also eliminates the shadow peaks caused by pulse distortion.
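To make this mechanism concrete, the following illustrative Python sketch (not code from the measurement system) builds the 2000-pulse histogram described above and reports the counts in the characteristic-peak ROI and in the shadow-peak region. The 600 mV amplitude, 5% distortion ratio, and shadow location near channel 520 follow the Fig. 1 example; the 1 mV-per-channel mapping and the Gaussian spreads are assumptions.

```python
# Illustrative sketch: a 5% fraction of distorted pulses produces a shadow peak
# to the left of the characteristic peak in the MCA histogram.
import numpy as np

rng = np.random.default_rng(0)

n_pulses = 2000
distortion_ratio = 0.05                      # 5% of pulses are truncated

# Normal pulses: amplitudes fluctuate around 600 mV (assumed 2 mV spread).
normal = rng.normal(600.0, 2.0, size=int(n_pulses * (1 - distortion_ratio)))
# Distorted pulses: shaping amplitude loss shifts them near 520 mV (assumed spread).
distorted = rng.normal(520.0, 2.0, size=int(n_pulses * distortion_ratio))

amplitudes = np.concatenate([normal, distorted])
spectrum, _ = np.histogram(amplitudes, bins=np.arange(0, 1025))  # 1024-channel MCA

roi_counts = spectrum[594:607].sum()        # characteristic-peak ROI, channels 594-606
shadow_counts = spectrum[510:531].sum()     # shadow-peak region near channel 520
print(roi_counts, shadow_counts)            # ~1900 and ~100, as in Fig. 1b
```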
Deep learning
Data acquisition
In the data acquisition stage, the datasets were produced. For a single distorted negative exponential pulse, the pulse distortion time is assumed to be
The mathematical model of a pulse sequence composed of N distorted negative-exponential pulses is:
Here, u(t) represents the unit-step signal,
The distorted nuclear pulse sequence Vo(nTclk) used for the parameter estimation is regarded as
Here, the dataset of the CNN-LSTM model established in this study was taken from the sampled amplitudes of the distorted pulses after triangular shaping, whereas the parameter set P was taken from the negative-exponential pulses before triangular shaping. Taking the parameter set
The dataset in Eq. (5) contains
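The following sketch shows, under assumed values (decay constant, shaping rise time, record length, and amplitude range), how such a dataset of truncated negative-exponential pulses and their triangular shaping results could be generated. The shaper uses a Jordanov-style recursion with zero flat top, and single-pulse records are used for simplicity instead of full N-pulse sequences.

```python
# A minimal data-generation sketch (not the authors' code). TAU, K_RISE,
# N_SAMPLES and the amplitude range are assumed values.
import numpy as np

rng = np.random.default_rng(1)

N_SAMPLES = 256   # samples per record, matching the 256-sample network input used later
TAU = 30.0        # exponential decay constant in sampling periods (assumed)
K_RISE = 50       # triangular-shaping rise time in sampling periods (assumed)


def truncated_exponential(amplitude, t_cut, n=N_SAMPLES, tau=TAU):
    """Negative-exponential pulse whose tail is reset to zero at sample t_cut."""
    t = np.arange(n)
    pulse = amplitude * np.exp(-t / tau)
    pulse[t >= t_cut] = 0.0          # switch-reset truncation
    return pulse


def triangular_shaper(v, k=K_RISE, tau=TAU):
    """Recursive trapezoidal shaper with zero flat top (i.e. triangular shaping)."""
    m = 1.0 / (np.exp(1.0 / tau) - 1.0)                  # pole-zero compensation factor
    vk = np.concatenate([np.zeros(k), v[:-k]])           # input delayed by k samples
    v2k = np.concatenate([np.zeros(2 * k), v[:-2 * k]])  # input delayed by 2k samples
    d = v - 2.0 * vk + v2k
    p = np.cumsum(d)
    return np.cumsum(p + m * d)


# Normalise so that an untruncated unit-amplitude pulse peaks at 1.
NORM = triangular_shaper(truncated_exponential(1.0, N_SAMPLES)).max()


def make_dataset(n_pulses=10000):
    """Return shaped records X and the true amplitudes y used as labels."""
    x, y = [], []
    for _ in range(n_pulses):
        amp = rng.uniform(500.0, 2500.0)          # true amplitude in mV (assumed range)
        t_cut = int(rng.integers(20, N_SAMPLES))  # random switch-reset instant
        x.append(triangular_shaper(truncated_exponential(amp, t_cut)) / NORM)
        y.append(amp)
    return np.asarray(x, dtype=np.float32), np.asarray(y, dtype=np.float32)


X, y = make_dataset()
print(X.shape, y.shape)   # (10000, 256) shaped records and their true amplitudes
```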
As required, the dataset was divided into training, test, and validation sets in a ratio of 7:2:1. In general, the training set accounts for the largest share of the data and is used to train the model and its generalization ability, whereas the validation set is used to check whether the model is overfitting. If overfitting occurs, it can be mitigated by adding a dropout layer that randomly discards the connections of some neurons.
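One concrete way to realize this split (an assumed workflow using scikit-learn, applied to the X and y arrays from the data-generation sketch above):

```python
# A small sketch of the 7:2:1 split; two successive calls to train_test_split
# produce a 70% training set, 20% test set and 10% validation set.
from sklearn.model_selection import train_test_split

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=42)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=1 / 3, random_state=42)
print(len(X_train), len(X_test), len(X_val))   # roughly 7000, 2000, 1000 for 10000 samples
```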
Hyperparameter optimization
The model adopted in this study processes sequences of nuclear pulse-amplitude samples. As a special type of recurrent neural network (RNN), the LSTM differs from a plain RNN in that its forget gates control, through learned parameters, which information is retained or discarded. As a result, an LSTM can alleviate the vanishing- and exploding-gradient problems that arise when training on long time-series samples.
An LSTM usually deals with long sequences and large samples, but overly large input samples cause difficulties during model training, such as high computational complexity. Therefore, this study combined the LSTM with a CNN. A convolutional neural network (CNN) is not so much an algorithm as a feature-extraction method, and usually comprises convolution layers (with activation), pooling layers, and a fully connected layer. Extracting features with a CNN is essentially the process of solving for the optimal parameter matrices. The relationship between input and output can be expressed using Eq. (6), and the structure of the feature-extraction block is listed in the table below (an illustrative Keras-style sketch of this block follows the table).
| Block | Layer (filter size) | Input size | Output size |
|---|---|---|---|
| Conv1D_1 | Conv1D (3×3) | (None, 1, 256) | (None, 1, 64) |
| Max_pooling1D_1 | Max_pooling1D | (None, 1, 64) | (None, 1, 64) |
| Conv1D_2 | Conv1D (3×3) | (None, 1, 64) | (None, 1, 16) |
| Max_pooling1D_2 | Max_pooling1D | (None, 1, 16) | (None, 1, 16) |
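A hedged Keras sketch of this feature-extraction block is given below. The table's printed shapes place the 256 samples on the last axis; the sketch uses the more conventional (timesteps, channels) layout, so the exact shapes differ while the layer sequence (Conv1D, pooling, Conv1D, pooling) and the 64/16 filter counts are the same. Kernel sizes and activations are assumptions.

```python
# Illustrative CNN feature extractor corresponding to the table above.
from tensorflow import keras
from tensorflow.keras import layers

cnn_block = keras.Sequential(
    [
        keras.Input(shape=(256, 1)),                       # one shaped pulse record
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(16, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
    ],
    name="cnn_feature_extractor",
)
cnn_block.summary()
```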
In the forward propagation process, the input neurons are kept unchanged, and the weight parameter matrix is initialized using a random strategy. The error between the output parameter set
Subsequently, the back-propagation through time (BPTT) algorithm is applied to feed back the gradient of the loss function and
Figure 3 shows the hyperparameter optimization of the batch size and the number of layers during training of the CNN-LSTM model. If the batch size is too large, memory may overflow during training and the model tends to converge to a local optimum, so training cannot be completed satisfactorily. If the batch size is too small, convergence is slow and training takes too long. Figure 3 shows the iterative loss values obtained on the training and validation sets when the LSTM had five layers and the batch size was set to 10 and 100, respectively. With a batch size of 100, the model converged at the 40th epoch but with a loss as high as 3 × 10⁵. With a batch size of 10, the model converged normally, and the loss value after convergence approached zero.
When setting the LSTM parameters, more layers theoretically give better training results, but the vanishing-gradient problem and the growing computational burden must also be considered; therefore, the number of layers is usually set to 3-6 during hyperparameter optimization. Figure 3 also shows the iterative loss values obtained on the training and validation sets with three and five layers. With a batch size of 10, the loss decays at a similar rate for the three- and five-layer models, but with three layers the loss after convergence is still as high as 2.8 × 10⁵, whereas with five layers the loss values of the training and validation sets approach zero.
After hyperparameter optimization, five LSTM layers were set with an initial learning rate of 0.0001 and a batch size of 10, and Adam was selected as the optimizer. The generated network model structure is shown in Fig. 4, which includes the input layer, hidden layer, output layer, and backpropagation part. The hidden layer includes the CNN and LSTM models.
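A minimal sketch of how the complete network could be assembled under the hyperparameters stated above (five LSTM layers, Adam, learning rate 0.0001) is given below; the LSTM width, dropout rate, and MSE loss are assumptions, and cnn_block comes from the previous sketch.

```python
# Assembling and compiling the CNN-LSTM amplitude estimator (illustrative).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [
        cnn_block,                                     # CNN feature extractor defined above
        *[layers.LSTM(64, return_sequences=True) for _ in range(4)],
        layers.LSTM(64),                               # fifth LSTM layer outputs a vector
        layers.Dropout(0.2),                           # counters overfitting if it appears
        layers.Dense(1),                               # estimated pulse amplitude (mV)
    ],
    name="cnn_lstm_amplitude_estimator",
)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
model.summary()
```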
Simulation results and experimental verification
The CNN-LSTM model proposed in this study was applied to peak correction of the X-ray fluorescence spectrum. As mentioned above, when the negative-exponential pulse sequence output by the measurement system is severely distorted, the amplitude of the triangular shaping result is strongly degraded. According to the generation principle of the digital multichannel spectrum, the amplitude loss of the distorted pulses after shaping appears as a count drift in the X-ray fluorescence spectrum, which is unfavorable for obtaining an accurate spectrum. The CNN-LSTM model proposed in this study, based on deep learning, aims to accurately estimate the parameters of the triangular shaping results of the distorted pulses. In this way, the peak shift in the X-ray fluorescence spectrum can be corrected and a more accurate spectrum obtained.
CNN-LSTM simulation
Model training
To verify the effect of the CNN-LSTM model on the parameter estimation of the triangular shaping results under the condition of severe distortion of the negative exponential pulses, we took 10000 samples and divided them into training, verification, and test sets according to a ratio of 7:2:1, with a training period of 100 epochs. The change in the loss values obtained from the training and verification sets during the training process is shown in Fig. 5.
In general, as the number of training epochs increased, the loss values of the training and validation sets showed a downward trend. The validation-set loss fluctuated somewhat in the later stages but soon stabilized. When the loss values of both the training and validation sets were low and relatively stable, the model of that epoch was saved as the best model. In the training of the proposed model, the model at the 91st epoch was saved as the best model, with loss values of 2.7895 and 3.5507 for the training and validation sets, respectively.
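The best-epoch selection described above can be automated with a checkpoint callback that keeps only the weights with the lowest validation loss (which for the run reported here occurred at epoch 91). The sketch below reuses the model and split arrays from the earlier sketches; the file name is hypothetical.

```python
# Illustrative training run with best-model checkpointing (assumed file name).
from tensorflow import keras

checkpoint = keras.callbacks.ModelCheckpoint(
    "best_cnn_lstm.keras",          # hypothetical output path
    monitor="val_loss",
    save_best_only=True,
    verbose=1,
)

history = model.fit(
    X_train[..., None], y_train,                     # add the channel axis expected by Conv1D
    validation_data=(X_val[..., None], y_val),
    epochs=100,
    batch_size=10,
    callbacks=[checkpoint],
)
best_model = keras.models.load_model("best_cnn_lstm.keras")
```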
Performance evaluation of parameter estimation
When producing the test set, considering that the degree of distortion may affect the test performance of the model, the test samples were divided into two categories according to the pulse distortion time mentioned above: slightly distorted pulses and severely distorted pulses, whose triangular shaping results are shown in Fig. 6 (one possible classification criterion is sketched below). The rising time of the triangular shape,
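One plausible way to formalize this split (the exact threshold used in the paper is not stated; the rise time is the assumed value from the data-generation sketch, in sampling periods) is to compare the switch-reset instant with the full width of the shaped triangle:

```python
# Illustrative classification of truncated pulses; the 2*k threshold is an
# assumption, not the authors' criterion.
K_RISE = 50   # triangular-shaping rise time in sampling periods (assumed)


def distortion_class(t_cut, k=K_RISE):
    """Label a truncated pulse by where the reset falls relative to the shaped triangle."""
    return "severe" if t_cut < 2 * k else "slight"


print(distortion_class(47), distortion_class(100))   # -> severe slight
```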
To control the impact of other variables, the amplitude parameter
In Fig. 7a, the amplitude value of Pulse1 suddenly changes to 0 at the 47th
In Fig. 8a, the amplitude value of Pulse5 suddenly changes to zero at the 100th
During the performance evaluation of the parameter estimation, the CNN-LSTM and LSTM models were used to estimate the parameters of the above two pulse sequences, and the output results are shown in Fig. 9. The real value of all eight pulse amplitudes was fixed at 2000 mV, but the amplitude loss of the triangular shaping results of Pulses 1-4 was very large.
Based on the parameter estimation results of the slightly and severely distorted pulses, the absolute and relative errors of the different methods used to estimate the pulse amplitudes are summarized in Table 2.
Pulses 1-4 are severely distorted; Pulses 5-8 are slightly distorted.

| Parameter | Pulse1 | Pulse2 | Pulse3 | Pulse4 | Pulse5 | Pulse6 | Pulse7 | Pulse8 |
|---|---|---|---|---|---|---|---|---|
| Areal (mV) | 2000 | 2000 | 2000 | 2000 | 2000 | 2000 | 2000 | 2000 |
| ATri (mV) | 1459.7 | 1629 | 1820 | 1940 | 1998.4 | 2016.4 | 2008.6 | 2004.4 |
| ΔTri (mV) | 540.3 | 371 | 180 | 60 | 1.6 | 16.4 | 8.6 | 4.4 |
| δTri | 27.02% | 18.55% | 9.00% | 3.00% | 0.08% | 0.82% | 0.43% | 0.22% |
| ACNN-LSTM (mV) | 1995 | 1998 | 1999 | 2003 | 2000 | 2001 | 2002 | 2002 |
| ΔCNN-LSTM (mV) | 5 | 2 | 1 | 3 | 0 | 1 | 2 | 2 |
| δCNN-LSTM | 0.25% | 0.10% | 0.05% | 0.15% | 0.00% | 0.05% | 0.10% | 0.10% |
| ALSTM (mV) | 1990 | 1993 | 2003 | 2001 | 2003 | 2005 | 2002 | 2003 |
| ΔLSTM (mV) | 10 | 7 | 3 | 1 | 3 | 5 | 2 | 3 |
| δLSTM | 0.50% | 0.35% | 0.15% | 0.05% | 0.15% | 0.25% | 0.10% | 0.15% |
Areal represents the actual pulse amplitude. The measurement methods compared in this study are triangular shaping and the CNN-LSTM and LSTM models, whose results are represented by ATri, ACNN-LSTM, and ALSTM, respectively; Δ and δ denote the corresponding absolute and relative errors.
For severely distorted pulses, the average relative error of triangular shaping was as high as 14.39%, whereas that of the CNN-LSTM model was only 0.14% and that of the LSTM model was 0.26%. For slightly distorted pulses, the average relative error of triangular shaping was 0.39%, whereas that of the CNN-LSTM model was only 0.06% and that of the LSTM model was 0.16%. The estimation results show that both models estimate the pulse amplitude very accurately, whether the pulse is severely or slightly distorted. Although the performance of the CNN-LSTM model is slightly better than that of the LSTM, both deep-learning methods achieve very high accuracy in pulse-parameter estimation. A CNN is introduced in this study because the input pulse sequence is relatively complex and feeding the raw data directly to the LSTM would be computationally demanding; using the CNN to downsample the data reduces its size and saves computational resources.
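The averages quoted above can be reproduced directly from the tabulated estimates; the short check below (values copied from Table 2) computes the mean relative error of each method for both pulse groups.

```python
# Recomputing the average relative errors of Table 2 against the 2000 mV reference.
import numpy as np

a_real = 2000.0
estimates = {
    "triangular, severe": [1459.7, 1629.0, 1820.0, 1940.0],
    "CNN-LSTM,   severe": [1995.0, 1998.0, 1999.0, 2003.0],
    "LSTM,       severe": [1990.0, 1993.0, 2003.0, 2001.0],
    "triangular, slight": [1998.4, 2016.4, 2008.6, 2004.4],
    "CNN-LSTM,   slight": [2000.0, 2001.0, 2002.0, 2002.0],
    "LSTM,       slight": [2003.0, 2005.0, 2002.0, 2003.0],
}
for name, vals in estimates.items():
    rel = np.abs(np.asarray(vals) - a_real) / a_real * 100.0
    print(f"{name}: mean relative error = {rel.mean():.2f}%")
```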
Experimental verification
In the model simulation, we used the two types of distorted pulses to train, validate, and test the CNN-LSTM model and achieved good results in the parameter estimation of distorted negative-exponential pulses. To further verify the effect of the parameter estimation on the X-ray fluorescence spectrum, an iron ore sample was selected as the measurement object for experimental verification. In previous studies [11], we identified Fe, Sr, and Sn as the elements with high contents in this sample. The measurement system included a high-performance silicon drift detector (FAST-SDD) and a KYW2000A X-ray tube. The effective detection area of the detector is 25 mm², the detector thickness is 500 µm, and the beryllium-window thickness is 12.5 μm. The rated tube voltage was 50 kV, and the tube current was adjustable from 0 to 1 mA. The ADC sampling frequency was 20 MHz, corresponding to a sampling period of 50 ns.
The original measured spectrum is shown in Fig. 10. The element with the highest content in the measured full spectrum is clearly Sr. According to the generation principle of distorted pulses, elements with higher counting rates have a higher probability of generating shadow peaks. Therefore, we selected Sr and took the net counts of its two characteristic peaks and of their corresponding shadow-peak ROIs as the analysis objects. There is an unidentified peak on the left side of the Sr characteristic peaks; if such shadow peaks are not processed, they may be mistaken for the characteristic peaks of other elements. Unreliable net counts in the ROI directly affect the elemental content analysis. Therefore, it is necessary to correct the shadow peaks in X-ray fluorescence spectra.
In this study, the X-ray fluorescence spectrum was optimized by using the deep-learning-based CNN-LSTM model to correct the count drift caused by distorted pulses. The negative-exponential pulse sequence recorded during the measurement and its triangular shaping results were saved to create the test set. Before the trained CNN-LSTM model can be applied, the negative-exponential pulse sequence must be preprocessed: the preprocessing unit mainly discriminates and separates the distorted pulses. For experimental verification, a qualitative analysis of the X-ray fluorescence spectrum optimization and a quantitative analysis of the spectral peak correction were performed.
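The discrimination logic of the preprocessing unit is not detailed here, but the following standalone sketch illustrates one simple criterion consistent with the description of truncated pulses: flag the instant at which the sampled value collapses to the baseline in a single step while the preceding sample is still far above it. The baseline and threshold are assumed values, not parameters of the actual system.

```python
# A hedged sketch of switch-reset (truncation) discrimination.
import numpy as np


def find_truncations(samples, baseline=0.0, jump_threshold=50.0):
    """Return indices where the signal collapses to the baseline in one sample."""
    samples = np.asarray(samples, dtype=float)
    drops = samples[:-1] - samples[1:]                       # sample-to-sample fall
    at_baseline = np.isclose(samples[1:], baseline, atol=1.0)
    return np.where((drops > jump_threshold) & at_baseline)[0] + 1


# Synthetic example: a 2000 mV exponential pulse reset to zero at sample 47.
record = 2000.0 * np.exp(-np.arange(256) / 30.0)
record[47:] = 0.0
print(find_truncations(record))   # -> [47]
```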
Qualitative analysis
The amplitudes of the distorted pulses estimated by the CNN-LSTM model were used to replace the original pulse amplitudes, and a comparison of the X-ray fluorescence spectra is shown in Fig. 11. The red spectral lines represent the results after peak correction. By magnifying the shadow-peak region in the logarithmic coordinate system, a weak peak can be observed to the left of each of the two characteristic peaks of strontium in Fig. 11. Because the chemical symbol of strontium is Sr, its two characteristic peaks are denoted Sr-1 and Sr-2. According to the principle of multichannel spectroscopy, the amplitude loss of the triangular shaping results causes a left shift of the counts (also known as a left shift of the peak position), and the shifted counts form a new shadow peak on the left side of the characteristic peak. In Fig. 11, the left shift of the two characteristic peaks of strontium forms shadow peaks 1 and 2. After the parameters of the distorted pulses were estimated with the CNN-LSTM model, the left shift of the peak positions was effectively corrected and the shadow peaks were eliminated.
Qualitative analysis of the X-ray fluorescence spectrum before and after optimization showed that the CNN-LSTM model trained in this study could effectively correct the shadow peak caused by pulse distortion and optimize the X-ray fluorescence spectrum analysis results.
Quantitative analysis
As mentioned previously, the amplitude loss of the shaping results causes a left shift of the peak position, so a shadow peak forms on the left side of the characteristic peak. Here, the peak area
The ROI of shadow peak 1 is channels 896-960, so Sshadow1 is numerically equal to the difference in the peak area of this ROI before and after spectral peak correction, shown as the shaded area in Fig. 12a.
Speakloss1 represents the corrected peak-area loss in the ROI of the first characteristic peak of Sr after calling the model. The ROI of this characteristic peak is located in the channel interval 960-1024, as shown by the shaded area in Fig. 12b.
Sshadow2 represents the area of the shadow peak formed by the left shift of the second characteristic peak of Sr. The ROI of shadow peak 2 is channels 1024-1088, so Sshadow2 is numerically equal to the difference in the peak area of this ROI before and after spectral peak correction, as shown in Eq. (13) and by the shaded area in Fig. 12c.
Speakloss2 represents the corrected peak-area loss of the second characteristic peak of Sr after calling the model. The ROI of this characteristic peak is located in the channel interval 1088-1152, as shown by the shaded area in Fig. 12d. Speakloss2 is numerically equal to the difference in the peak area of this ROI before and after spectral peak correction, as shown in Eq. (14).
To quantify the correction effect of the CNN-LSTM model on the X-ray spectra of the measured iron ore sample, two indicators, the correction ratio Rc and the indicator Re, were used; their values for ten repeated measurements are listed in the table below (a computational sketch of these indicators follows the table).
The four area columns under each spectrum are, in order, Sshadow1, Speak1, Sshadow2, and Speak2, i.e., the peak areas of the shadow-peak and characteristic-peak ROIs of the two Sr peaks.

| No. | Original spectrum | | | | Corrected spectrum | | | | Rc (%) | Re (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| | Sshadow1 | Speak1 | Sshadow2 | Speak2 | Sshadow1 | Speak1 | Sshadow2 | Speak2 | | |
| 1 | 126.91 | 4231.21 | 68.29 | 822.07 | 48.58 | 4304.88 | 52.51 | 837.99 | 1.74 | 95.20 |
| 2 | 127.26 | 4289.77 | 76.46 | 855.27 | 51.32 | 4341.12 | 49.44 | 877.24 | 1.41 | 71.21 |
| 3 | 121.42 | 4271.21 | 68.42 | 831.42 | 53.39 | 4317.29 | 55.14 | 844.31 | 1.14 | 72.52 |
| 4 | 121.67 | 4199.54 | 72.51 | 819.33 | 44.25 | 4286.53 | 41.25 | 824.59 | 1.80 | 84.88 |
| 5 | 127.36 | 4229.79 | 76.79 | 827.93 | 49.87 | 4314.18 | 45.87 | 841.02 | 1.89 | 89.92 |
| 6 | 131.48 | 4262.17 | 73.87 | 831.85 | 45.66 | 4351.29 | 43.73 | 847.34 | 2.01 | 90.21 |
| 7 | 127.75 | 4281.21 | 70.41 | 846.83 | 49.32 | 4358.91 | 46.83 | 846.88 | 1.49 | 76.22 |
| 8 | 124.61 | 4209.39 | 76.82 | 820.44 | 55.38 | 4291.27 | 52.31 | 826.72 | 1.72 | 94.05 |
| 9 | 134.59 | 4191.29 | 75.11 | 790.12 | 42.26 | 4276.72 | 38.22 | 832.29 | 2.50 | 98.75 |
| 10 | 117.75 | 4213.71 | 71.33 | 811.58 | 47.11 | 4277.11 | 51.61 | 824.28 | 1.49 | 84.22 |
| Avg | 126.08 | 4237.93 | 73.00 | 825.68 | 48.71 | 4311.93 | 47.69 | 840.27 | 1.72 | 86.27 |
| STD | 4.70 | 33.79 | 3.15 | 17.16 | 3.85 | 28.66 | 5.19 | 14.94 | - | - |
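The defining equations of the two indicators are referenced above rather than reproduced here; the following sketch shows one set of definitions, based on the ROI peak areas, that reproduces the tabulated values (for the first measurement it gives Rc ≈ 1.74% and Re ≈ 95.2%). The formulas are inferred to match the table, not copied from the paper.

```python
# A sketch of how Rc and Re can be obtained from ROI peak areas; the ROI channel
# intervals follow the text, and the formulas are inferred from Table 3.
ROI_SHADOW = [(896, 960), (1024, 1088)]   # shadow peaks 1 and 2
ROI_PEAK = [(960, 1024), (1088, 1152)]    # Sr characteristic peaks 1 and 2


def roi_area(spectrum, roi):
    lo, hi = roi
    return float(sum(spectrum[lo:hi]))


def correction_indicators(original, corrected):
    """Indicators Rc and Re (both in percent), as inferred from Table 3."""
    shadow_loss = sum(roi_area(original, r) - roi_area(corrected, r) for r in ROI_SHADOW)
    peak_gain = sum(roi_area(corrected, r) - roi_area(original, r) for r in ROI_PEAK)
    peak_total = sum(roi_area(corrected, r) for r in ROI_PEAK)
    r_c = 100.0 * peak_gain / peak_total   # recovered share of the corrected peak area
    r_e = 100.0 * peak_gain / shadow_loss  # fraction of removed shadow counts recovered
    return r_c, r_e


# Check against the first row of the table, using the tabulated areas directly:
gain = (4304.88 - 4231.21) + (837.99 - 822.07)
loss = (126.91 - 48.58) + (68.29 - 52.51)
print(round(100 * gain / (4304.88 + 837.99), 2), round(100 * gain / loss, 2))  # 1.74 95.2
```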
From the comparison of the measurement results, the corrected X-ray spectrum obtained by using the CNN-LSTM model to predict the pulse heights has two typical features. First, the peak areas of the characteristic-peak ROIs increased compared with the original spectrum, and the standard deviation over the repeated measurements also decreased noticeably. Second, the peak areas of the shadow-peak ROIs decreased significantly. From these two features and the conservation of the total number of counts, it can be inferred that the area removed from the shadow-peak ROIs should, in theory, be transferred to the characteristic-peak ROIs, and the correction effect can be evaluated by the two indicators Rc and Re.
Conclusion
In this study, we trained a CNN-LSTM model for peak correction in X-ray fluorescence spectroscopy. The model processes randomly generated distorted pulses. To improve training efficiency, it is divided into two parts: a CNN performs feature extraction on the data, and the LSTM network performs the pulse-amplitude estimation. In the simulation, the relative errors in the amplitude estimation of pulse sequences with different degrees of distortion were obtained using triangular shaping and the CNN-LSTM and LSTM models. For severely distorted pulses, the relative error of the CNN-LSTM model in estimating the pulse parameters was 14.25 percentage points lower than that of the triangular shaping algorithm; for slightly distorted pulses, it was 0.33 percentage points lower.
In the experiment, the FAST-SDD was used to perform X-ray measurements on the iron ore sample. The measured pulse sequences were saved offline as the model input, and the pulse-amplitude output of the model was subjected to multichannel pulse-height analysis, yielding an X-ray energy spectrum with the shadow peaks corrected. The original energy spectrum obtained without calling the model was used as a reference for comparison with the corrected spectrum. The results indicate that the proposed model successfully predicts the heights of the measured pulse sequences. In ten repeated measurements of the iron ore sample, approximately 86.27% of the counts removed from the shadow-peak ROIs were recovered into the characteristic-peak ROIs, and the recovered area accounted for approximately 1.72% of the characteristic-peak ROI area. This is of great significance for X-ray spectroscopy and elemental analysis.
References
Assessment of alternative methods for analyzing X-ray fluorescence spectra. Appl. Radiat. Isot. 146, 133-138 (2019). doi: 10.1016/j.apradiso.2019.01.033
Application on straight-line shaping method for energy spectrum measurement in TXRF spectrometer based on SDD detector. Spectroscopy and Spectral Analysis 41(7), 2148-2152 (2021). (in Chinese)
Unfolding-synthesis technique for digital pulse processing. Part 1: Unfolding. Nucl. Instrum. Meth. A 805, 63-71 (2015). doi: 10.1016/j.nima.2015.07.040
A new digital filter based on sinusoidal function for gamma spectroscopy. Nucl. Instrum. Meth. A 944, 162582 (2019). doi: 10.1016/j.nima.2019.162582
A spectrometer with baseline correction and fast pulse pile-up rejection for prompt gamma neutron activation analysis technology. Rev. Sci. Instrum. 89(12), 123504 (2018). doi: 10.1063/1.5049517
Methods for obtaining characteristic γ-ray net peak count from interlaced overlap peak in HPGe γ-ray spectrometer system. Nucl. Sci. Tech. 30, 11 (2019). doi: 10.1007/s41365-018-0525-7
Pile-up correction in spectroscopic signals using regularized sparse reconstruction. IEEE Trans. Nucl. Sci. 67, 858-862 (2020). doi: 10.1109/TNS.2020.2985104
Detection of true Gaussian shaped pulses at high count rates. J. Instrum. 15, P06015 (2020). doi: 10.1088/1748-0221/15/06/P06015
Radiation detector deadtime and pile up: A review of the status of science. Nucl. Eng. Technol. 50(10), 1006-1016 (2018). doi: 10.1016/j.net.2018.06.014
A new method for removing false peaks to obtain a precise X-ray spectrum. Appl. Radiat. Isot. 135, 171-176 (2018). doi: 10.1016/j.apradiso.2018.01.033
Counting-loss correction for X-ray spectra using the pulse-repairing method. J. Synchrotron Radiat. 25, 1760-17678 (2018). doi: 10.1107/S160057751801411X
ECG signals segmentation using deep spatiotemporal feature fusion U-Net for QRS complexes and R-peak detection. IEEE Trans. Instrum. Meas. 72, 1-12 (2023). doi: 10.1109/TIM.2023.3241997
CUTBUF: Buffer management and router design for traffic mixing in VNET-based NoCs. IEEE Trans. Parall. Distr. 27(6), 1603-1616 (2016). doi: 10.1109/TPDS.2015.2468716
Application of multi-head attention mechanism with embedded positional encoding in amplitude estimation of stacked pulses. Nuclear Techniques 46(9), 090505 (2023). doi: 10.11889/j.0253-3219.2023.hjs.46.090505 (in Chinese)
An antibody-free liver cancer screening approach based on nanoplasmonics biosensing chips via spectrum-based deep learning. NanoImpact 21, 100296 (2021). doi: 10.1016/j.impact.2021.100296
Harmonic current control strategy of DC distribution network based on deep learning algorithm. Energy Reports 8, 13066-13075 (2022). doi: 10.1016/j.egyr.2022.09.071
Deep learning for robust adaptive inverse control of nonlinear dynamic systems: Improved settling time with an autoencoder. Sensors 22(16), 5935 (2022). doi: 10.3390/s22165935
Deep learning-based image quality improvement for low-dose computed tomography simulation in radiation therapy. Journal of Medical Imaging 6(4), 043504 (2019). doi: 10.1117/1.JMI.6.4.043504
Anti-noise performance of the pulse coupled neural network applied in discrimination of neutron and gamma-ray. Nucl. Sci. Tech. 33, 75 (2022). doi: 10.1007/s41365-022-01054-6
Measurement of H0 particles generated by residual gas stripping in the Japan Proton Accelerator Research Complex linac. Nucl. Instrum. Meth. A 1049, 168033 (2023). doi: 10.1016/j.nima.2023.168033
Isotope identification using deep learning: An explanation. Nucl. Instrum. Meth. A 988, 164925 (2021). doi: 10.1016/j.nima.2020.164925
Noise signal identification in time projection chamber data using deep learning model. Nucl. Instrum. Meth. A 1048, 168025 (2023). doi: 10.1016/j.nima.2023.168025
A neural network-based method for spectral distortion correction in photon counting X-ray CT. Phys. Med. Biol. 61(16), 6132-6153 (2016). doi: 10.1088/0031-9155/61/16/6132
Unfolding using deep learning and its application on pulse height analysis and pile-up management. Nucl. Instrum. Meth. A 1005, 165403 (2021). doi: 10.1016/j.nima.2021.165403
Deep learning-based pulse height estimation for separation of pile-up pulses from NaI(Tl) detector. IEEE Trans. Nucl. Sci. 69(6), 1344-1351 (2022). doi: 10.1109/TNS.2021.3140050
Study on analytical noise propagation in convolutional neural network methods used in computed tomography imaging. Nucl. Sci. Tech. 33, 77 (2022). doi: 10.1007/s41365-022-01057-3
Estimation of trapezoidal-shaped overlapping nuclear pulse parameters based on a deep learning CNN-LSTM model. J. Synchrotron Radiat. 28, 910-918 (2021). doi: 10.1107/S1600577521003441
X-ray spectra correction based on deep learning CNN-LSTM model. Measurement 199, 111510 (2022). doi: 10.1016/j.measurement.2022.111510
Speech recognition with deep recurrent neural networks. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
The authors declare that they have no competing interests.