Decomposition of fissile isotope antineutrino spectra using convolutional neural network

Yu-Da Zeng
Jun Wang
Rong Zhao
Feng-Peng An
Xiang Xiao
Yuenkeung Hor
Wei Wang
Nuclear Science and Techniques, Vol. 34, No. 5, Article number 79. Published in print May 2023; available online 31 May 2023.

Recent reactor antineutrino experiments have observed that the antineutrino spectrum changes with the reactor core evolution and that the individual fissile isotope antineutrino spectra can be decomposed from the evolving data, providing valuable information on the inconsistencies between reactor models and data. We propose a machine learning method based on a convolutional neural network and a virtual experiment with a typical short-baseline reactor antineutrino experiment configuration: by utilizing the reactor evolution information, the major fissile isotope spectra are correctly extracted, and the uncertainties are evaluated using the Monte Carlo method. Validation tests show that the method is unbiased and introduces only tiny extra uncertainties.

Keywords: Reactor antineutrino; Isotope antineutrino spectrum decomposition; Convolutional neural network
1 Introduction

Significant deviations between the Huber-Mueller model and experimental isotope antineutrino spectra have been demonstrated, manifesting as a 6% deficit in the reactor antineutrino flux (the so-called Reactor Antineutrino Anomaly, RAA) and an excess of reconstructed positron signal events in the 4-6 MeV region (the so-called 5-MeV bump) [1-6]. Determining the origin of the reactor antineutrino rate and shape anomalies is critical, especially for understanding nuclear physics and improving nuclear databases for fundamental and applied research. Relevant experimental and theoretical efforts have been made to solve this problem, including attempts to determine the individual isotope contributions to the reactor ν¯e flux, which have provoked further investigations. In 2017, the Daya Bay experiment revealed a 7.8% discrepancy between the observed and predicted 235U yields by using the span of effective 239Pu fission fractions, suggesting that 235U may be the primary contributor to the RAA [7]. In 2019, the PROSPECT experiment measured the 235U spectrum from the highly enriched uranium of the High Flux Isotope Reactor, and the measured 235U spectrum shape was consistent with the deviation from prediction observed by the Daya Bay experiment in the 5-7 MeV energy region [8]. In the same year, the theoretical result of the summation method was compared with that of the Daya Bay experiment without any renormalization, which reduced the flux discrepancy to 1.9% by including the correction for the pandemonium effect [9]. Also in 2019, the Daya Bay experiment first extracted the 235U and 239Pu antineutrino spectra from commercial reactors by using the reactor evolution information [10].

Determining individual isotope antineutrino spectra can also play an important role in nuclear safeguards. The International Atomic Energy Agency (IAEA) cooperates with neutrino physicists to develop new reactor monitoring methods based on observing the ν¯e emitted from the reactor, where the isotope antineutrino spectra are key inputs for the monitoring applications [11], because the reactor antineutrino flux and spectrum are sensitive to changes in the fuel content of the reactor core and can be observed with a suitable antineutrino detector. The applied neutrino physics community has also explored reactor antineutrinos as a tool for reactor monitoring and concluded that improved knowledge of the reactor antineutrino flux and spectrum is required for reactor safeguards applications [12]. The DOE National Nuclear Security Administration (NNSA) Office of Defense Nuclear Nonproliferation Research and Development (DNN R&D) organized a group of neutrino physicists and nuclear engineers to find practical roles for neutrino technology in nuclear energy and security; the final report, called Nu Tools, asserted that, given high reactor antineutrino rates, the neutrino spectrum can be exploited to determine the fissile material content of the reactor [13]. Isotope antineutrino spectra decomposed directly from reactor antineutrino experiments suffer from neither the RAA nor the spectrum distortion problem, while having comparable or better uncertainties than the Huber-Mueller model, thus providing more reliable data inputs for nuclear safeguards.

Only the Daya Bay experiment has published reactor isotope antineutrino spectra, using two methods, the minimum χ2 and Markov Chain Monte Carlo (MCMC) methods, and obtaining consistent results. The minimum χ2 method is a statistical inference method that minimizes a χ2 statistic. The χ2 function χ2(θ) is an estimator for the parameter θ, composed of a likelihood term comparing the binned observation data n=(n_1, ..., n_N) with the expectation μ(θ)=(μ_1(θ), ..., μ_N(θ)), plus a penalty term constraining the parameters:
$$\chi^2(\theta) = 2\sum_{j=1}^{N}\left[\mu_j(\theta) - n_j + n_j \ln\frac{n_j}{\mu_j(\theta)}\right] + f(\epsilon, \Sigma), \quad (1)$$
where n_j follows a Poisson distribution and f(ϵ, Σ) is the penalty term that constrains the nuisance parameters ϵ with their correlations Σ. The minimum χ2 method naturally introduces the statistical and systematic uncertainties into the estimator, and the best fit parameters and corresponding uncertainties are obtained by minimizing the χ2 function. It is a robust, traditional frequentist fitting method commonly used in high-energy physics. The second method used in the Daya Bay decomposition research is the MCMC method based on Bayesian inference. In Bayesian theory, all knowledge of the parameter θ is summarized in the posterior probability density function (p.d.f.) p(θ|D):
$$p(\theta|D) \propto P(D|\theta)\,\pi(\theta), \quad (2)$$
where D is the data, P(D|θ) is the likelihood function, and π(θ) is the prior p.d.f. of θ. By performing statistical calculations on the posterior p.d.f., the mean values and uncertainties can be extracted. Calculating the posterior p.d.f. directly is usually difficult, especially for high-dimensional problems; instead, the MCMC method samples the posterior p.d.f., and the mean values and uncertainties are computed from the samples. In the Daya Bay experiment, the measured data were divided into 20 groups of inverse beta-decay (IBD) spectra, corresponding to different burning stages of a reactor cycle. The prediction spectra for the 20 groups were obtained from the detector and reactor models combined with the reactor information. Data and prediction were used to construct the likelihood function in both the minimum χ2 method and the Bayesian inference method. The uncertainties from the detectors and the reactors were incorporated into the penalty terms in the minimum χ2 method and into the prior p.d.f. in the Bayesian inference method, respectively. The decomposed isotope spectra obtained with the two methods are consistent.
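For illustration, a minimal numerical sketch of the Poisson-based χ2 of Eq. (1) might look as follows in Python; the penalty term f(ϵ, Σ) is reduced here to a single uncorrelated Gaussian pull, and all names are ours, not taken from the original analysis code:

```python
import numpy as np

def chi2(mu, n, eps, eps_sigma):
    """Poisson chi-square of Eq. (1) with a single Gaussian pull term.

    mu        : expected counts per bin, mu_j(theta)
    n         : observed counts per bin, n_j
    eps       : value of one nuisance parameter
    eps_sigma : prior uncertainty of that nuisance parameter
    """
    mu = np.asarray(mu, dtype=float)
    n = np.asarray(n, dtype=float)
    # n * log(n / mu) is defined as 0 for empty bins (n == 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        log_term = np.where(n > 0, n * np.log(n / mu), 0.0)
    stat = 2.0 * np.sum(mu - n + log_term)
    penalty = (eps / eps_sigma) ** 2  # simplified stand-in for f(eps, Sigma)
    return stat + penalty
```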

The extraction of isotope antineutrino spectra has been studied in reactor neutrino physics, and there is still no convincing answer to the RAA; nevertheless, we consider it beneficial to study the application of new methods. Here, we propose a new method that uses a convolutional neural network (CNN) to decompose the primary fissile isotope antineutrino spectra by fitting the weekly detected antineutrino spectrum as a function of the individual isotope fission fractions. A CNN is a machine learning network model that provides an optimal architecture for detecting key features in images and time-series data. It has broad applications in, for example, computer vision and natural language processing [14-17], and it has been used in several physics research fields to extract information from experimental data and fit model parameters [18]. Notably, the established decomposition methods, such as the minimum χ2 and MCMC methods, are offline algorithms; the analysis results must be updated from scratch as new data arrive, which is time-consuming, especially for long-term experiments. Moreover, the minimum χ2 and MCMC methods have to load the entire dataset into computer memory, requiring a large amount of memory when dealing with big data, for example, with many reactor burning cycles and detailed reactor information, which can make these methods unusable. To cope with this, analyses usually resample the original data to reduce the dataset size; however, this processing may introduce information loss and bias. By contrast, the CNN approach is an online algorithm [19]. The advantage of online updating is that the analysis can proceed without access to the historical data; thus, the storage and computation limitations can be overcome in some cases. In addition, the proposed method makes full use of the data without excessive information loss. This provides an additional machine learning technique for the decomposition of reactor fissile isotope spectra that can be used for neutrino spectrum analysis in future reactor antineutrino experiments.

2 Setup of the virtual experiment

In this section, we describe a virtual reactor antineutrino experiment that produces the simulation dataset used to train and test the proposed CNN method.

Suppose there is a virtual experiment with a one-reactor one-detector layout, where the reactor is a pressurized water reactor (PWR) and the sole source of the ν¯e flux. Antineutrinos are produced by thousands of beta-decay branches of the fission products of four major fissile isotopes, 235U, 238U, 239Pu, and 241Pu, in the reactor core. A virtual 20-ton liquid scintillator antineutrino detector with a 50 m baseline from the reactor is set up using the parameters in Table 1. The antineutrinos are detected via the IBD reaction in the detector: $\bar{\nu}_e + p \to e^+ + n$. The predicted ν¯e spectrum at a given time t is calculated as
$$S_d(E_\nu, t) = \frac{N_p\,\epsilon\,\sigma(E_\nu)\,P_{\mathrm{sur}}(E_\nu, L)}{4\pi L^2}\,\frac{W(t)}{\sum_i f_i(t)\,e_i}\,\sum_i f_i(t)\,S_i(E_\nu), \quad (3)$$
where E_ν is the ν¯e energy, N_p is the target proton number, ϵ is the detection efficiency, σ(E_ν) is the inverse beta-decay cross section, L is the distance from reactor to detector, P_sur(E_ν, L) is the ν¯e survival probability, W(t) is the thermal power of the reactor, e_i is the energy released per fission of isotope i, f_i is the fission fraction, and S_i(E_ν) is the ν¯e energy spectrum of fissile isotope i per fission.

Table 1
Parameter list of the virtual experiment.
Parameter | Value | Uncertainty
Thermal power, W | 2.9 GW | 0.5%
Fission fraction, f_i | Ref. [20] | 5%
Energy/fission, e_i | Ref. [21] | 0.2%
Detection efficiency, ϵ | 80.25% | 1.5%
Target protons, N_p | 1.43 × 10^30 | 0.92%
Baseline, L | 50 m | Negligible

For the virtual experiment, the isotope antineutrino spectra S_i(E_ν) are assumed to be the same as those in the Huber-Mueller model, denoted by S_i^HM(E_ν). Using the configuration of the Daya Bay experiment as a reference, the experimental parameter values entering Eq. (3) are presented in Table 1.
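As a hedged sketch of how Eq. (3) can be evaluated with the Table 1 values, the following Python function mirrors the formula term by term; the cross section, survival probability, and isotope spectra are left as user-supplied arrays, and all names are our own placeholders:

```python
import numpy as np

def predicted_spectrum(sigma_ibd, p_sur, f, S, e_fission, W,
                       N_p=1.43e30, eff=0.8025, L=50.0):
    """Sketch of Eq. (3) for one point in time, with Table 1 values as defaults.

    sigma_ibd : IBD cross section sigma(E) per energy bin, shape (n_bins,)
    p_sur     : survival probability P_sur(E, L) per energy bin, shape (n_bins,)
    f         : dict isotope -> fission fraction f_i(t)
    S         : dict isotope -> spectrum S_i(E) per fission, shape (n_bins,)
    e_fission : dict isotope -> energy released per fission e_i
    W         : thermal power at time t (same energy units as e_fission, per second)
    """
    fissions_per_s = W / sum(f[i] * e_fission[i] for i in f)   # W / sum_i f_i e_i
    mixed_spectrum = sum(f[i] * S[i] for i in f)               # sum_i f_i S_i(E)
    geometry = N_p * eff / (4.0 * np.pi * L**2)
    return geometry * sigma_ibd * p_sur * fissions_per_s * mixed_spectrum
```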

In addition to Table 1, the fission fraction evolution of a fuel cycle is presented in the top panel of Fig. 1, where the fission fractions of the four major fissile isotopes are shown as a function of burn-up. For a PWR, the reactor core usually consists of three batches of fuel assemblies of different ages, and usually one-third of the fuel (the oldest batch) is replaced by fresh fuel at the end of each refueling cycle. During reactor burning, the fissile isotopes are mainly depleted by fission, decay, and neutron capture processes. Some of them, such as the plutonium isotopes, are also generated by neutron captures and decays of mother nuclei in the reaction chains. The depletion and generation of the fissile isotopes drive the evolution of the reactor fuel.

Fig. 1
(Color online) (Top panel) Isotope fission contribution status in one fuel cycle. The fission fractions of the four major isotopes sum to 1. Data are extracted from Ref. [20]. (Bottom panel) Weekly ν¯e event rates during the entire experiment operation. The color represents the observed ν¯e event rates. The operation comprises several fuel cycles, each similar to that in the top panel. The ν¯e event rates vary with the operating time because the fission fractions of the fuel components differ. Thus, the observed antineutrino spectrum is a function of time and fission fractions.

Burn-up in the top panel of Fig. 1 is defined as
$$\text{burn-up} = \frac{W\,D}{M_U}, \quad (4)$$
where W is the average power of the fuel element, D is the number of days since the fuel element began to burn in the core, and M_U is the initial uranium mass of the fuel element. In this study, M_U is assumed to be 72 tons.
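For orientation, a quick numerical example with the Table 1 thermal power and the assumed uranium mass (a back-of-the-envelope estimate treating the whole core at full power for one year, ignoring the element-to-element power distribution):
$$\text{burn-up} = \frac{W\,D}{M_U} \approx \frac{2.9\ \text{GW} \times 365\ \text{d}}{72\ \text{t}} \approx 14.7\ \text{GW}\cdot\text{d/tU}.$$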

The uncertainties of the fission fractions of the four major fissile isotopes are assumed to be 5%, as in the Daya Bay experiment, and the correlation matrix of the uncertainties is from Ref. [20], which was extracted from the simulations of a typical PWR. The energies released per fission are from Ref. [21]. All the uncertainties are assumed to be time-correlated in this study.

Due to the fuel evolution of the four major fissile isotopes, the ν¯e flux emitted from the reactor core changes as a function of time. The bottom panel of Fig. 1 shows the reactor antineutrino spectrum evolution over nine reactor fuel cycles spanning 657 weeks. These spectra are treated as measurement data from the virtual antineutrino detector and contain information on the reactor evolution. The individual fissile isotope antineutrino spectra are decomposed from these observed spectra by utilizing the reactor information listed in Table 1, which uses typical values similar to those of the Daya Bay experiment; in a real reactor antineutrino experiment, this information would be provided by the nuclear power plant.

Notably, the IBD cross section σ(E_ν) and the isotope antineutrino spectrum S_i(E_ν) are coupled through the antineutrino energy in Eq. (3). The IBD yield per fission of an individual isotope can be defined as
$$\sigma_i(E_\nu) = \sigma(E_\nu)\,S_i(E_\nu), \quad i = 235,\ 238,\ 239,\ 241, \quad (5)$$
which is the isotope spectrum to be decomposed in this study, as in the Daya Bay experiment [10]. In the Huber-Mueller model case, σ_i(E_ν) is denoted by σ_i^HM(E_ν).

Thus, the predicted ν¯e spectrum can be written as the combination of σ_i(E_ν) and the coefficient k_i(E_ν, t):
$$S_d(E_\nu, t) = \sum_i k_i(E_\nu, t)\,\sigma_i(E_\nu), \quad (6)$$
where the coefficient k_i(E_ν, t) is the product of a set of experimental parameters from Eq. (3):
$$k_i(E_\nu, t) = \frac{N_p\,\epsilon\,P_{\mathrm{sur}}(E_\nu, L)}{4\pi L^2}\,\frac{W(t)}{\sum_l f_l(t)\,e_l}\,f_i(t). \quad (7)$$
Assuming the virtual experiment runs for nine fuel cycles (4600 days), information on the reactor thermal power and the antineutrino spectrum is collected weekly during the operation. As a result, a list of coefficients and ν¯e observations varying with time is obtained (see the bottom panel of Fig. 1).
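The weekly coefficient table that feeds the CNN can then be assembled by evaluating Eq. (7) for every week and isotope; a minimal sketch, with all function and variable names being our own placeholders:

```python
import numpy as np

ISOTOPES = ("U235", "U238", "Pu239", "Pu241")

def weekly_coefficients(p_sur, W, f, e_fission,
                        N_p=1.43e30, eff=0.8025, L=50.0):
    """Sketch of Eq. (7): the coefficient table k_i(E, t), week by week.

    p_sur     : survival probability per energy bin, shape (n_bins,)
    W         : sequence of weekly thermal powers W(t)
    f         : mapping (week index, isotope) -> fission fraction f_i(t)
    e_fission : dict isotope -> energy released per fission e_i
    Returns an array of shape (n_weeks, 4, n_bins).
    """
    const = N_p * eff / (4.0 * np.pi * L**2) * p_sur   # energy-dependent factor
    n_weeks = len(W)
    k = np.empty((n_weeks, len(ISOTOPES), p_sur.size))
    for t in range(n_weeks):
        denom = sum(f[(t, l)] * e_fission[l] for l in ISOTOPES)  # sum_l f_l e_l
        for a, i in enumerate(ISOTOPES):
            k[t, a] = const * W[t] / denom * f[(t, i)]
    return k
```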

3 Configurations of the convolutional neural network

Among the many machine learning methods, the CNN is commonly applied to extract shift-invariant features of data with its specialized convolutional layer. In the reactor antineutrino spectrum decomposition study, the isotope antineutrino spectra are time-invariant in the reactor evolution data; thus, the CNN might be a suitable approach for extracting them. To extract the isotope spectra from the simulation dataset, we constructed a one-dimensional CNN. To describe the CNN model explicitly, we first introduce the data structures required by the model, the operation performed on the data, and some key concepts of the CNN, which are summarized in Table 2. The dataset of the virtual experiment is organized sample by sample, each tagged with a time, t_1, t_2, ..., t_n, one per week; each weekly period of experimental measurement data forms one training sample. The 'Coefficient' columns of Table 2 are the key input of the CNN, in which the coefficient k_{t,i} is calculated week by week using Eq. (7) from the virtual experiment for week t and isotope i. The central part of the CNN is the convolutional kernel, a small matrix for feature extraction, defined as (σ_235, σ_238, σ_239, σ_241), as shown in the second row of Table 2, representing the respective isotope spectra of Eq. (5). A linear operation, called convolution in a CNN, is performed on the convolutional kernel and the input data to generate the output, shown in the 'Expectation' column of the second row. The output is the expected antineutrino spectrum of Eq. (6). The convolutional operation is performed sample by sample across the entire dataset; in other words, the convolutional kernel (σ_235, σ_238, σ_239, σ_241) slides along the timeline and combines with each row of coefficients to predict the ν¯e spectrum. This process returns a list of calculated outputs ('Expectation' column), which are compared with the label data, the ν¯e spectrum observed by the detector ('Observation' column). Notably, all entries in Table 2 refer to the same energy bin. In this study, the neutrino energy bins range from 2 to 8 MeV, each covering 0.25 MeV; thus, there are 24 energy bins.

Table 2
Virtual experiment dataset and convolutional operation.
Sample | Coefficient | Expectation | Observation
t_1 | k_{1,235}  k_{1,238}  k_{1,239}  k_{1,241} | Σ_i k_{1,i} σ_i | S_obs(t_1)
t_2 | k_{2,235}×σ_235  k_{2,238}×σ_238  k_{2,239}×σ_239  k_{2,241}×σ_241 | Σ_i k_{2,i} σ_i | S_obs(t_2)
... | ... | ... | ...
t_n | k_{n,235}  k_{n,238}  k_{n,239}  k_{n,241} | Σ_i k_{n,i} σ_i | S_obs(t_n)
 | Input | Output | Label

The CNN aims to learn from reactor antineutrino experimental data to fit the isotope spectra by updating its convolutional kernel. Because this study divides the energy range into 24 bins from 2 to 8 MeV, a corresponding number of convolutional kernels are employed.

The architecture of the constructed CNN model is shown in Fig. 2. The model comprises three layers: a convolutional layer, a flatten layer, and a fully connected layer. The convolutional layer is where most of the computation occurs; it takes the input data (rectangles on the left side of Fig. 2) and the convolutional kernels (the shaded patch at the bottom left). The input data are the simulation coefficients of Table 2. For each energy bin, the coefficient table and the respective convolutional kernel perform the convolutional operation, and the outcomes, representing the expected ν¯e spectrum, are conveyed to the second layer (bars in the middle, marked as feature maps). Next, the flattening operation transforms the multidimensional data into one dimension; such an operation is commonly used in the transition from a convolutional layer to a fully connected layer. The last layer (bars on the right side), the fully connected layer, outputs the flattened results as the ν¯e expectation. The CNN then compares the output values with the corresponding ν¯e label data and trains via the so-called back-propagation method, whose purpose is to bring the output values as close as possible to the label values. During training, the CNN repeats back-propagation many times, and in this manner, the parameters of the convolutional kernel (σ_235, σ_238, σ_239, σ_241) are iteratively adjusted to their best fit values. Unlike conventional neural networks, often described as black-box models, this CNN model is interpretable: the convolutional kernel components carry the isotope spectra, the inputs corresponding to the kernel components represent the fission rates of the four isotopes, and the outputs give the predicted ν¯e spectra.
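As a rough sketch of this interpretable architecture in Keras (the framework used in this study, see below), one per-energy-bin model might look as follows; the non-negativity constraint and the pass-through fully connected layer are our assumptions for illustration, not details taken from the original implementation:

```python
from tensorflow import keras

def build_bin_model():
    """Minimal per-energy-bin model: one size-4 convolutional kernel whose
    four weights play the role of (sigma_235, sigma_238, sigma_239, sigma_241).
    One such kernel is needed per energy bin (24 in total)."""
    inp = keras.Input(shape=(4, 1))            # the four coefficients k_i(E, t)
    # A kernel of size 4 over a length-4 input: the convolution reduces to the
    # dot product sum_i k_i * sigma_i of Eq. (6). The non-negativity constraint
    # reflects the physical spectra and is our own assumption.
    x = keras.layers.Conv1D(filters=1, kernel_size=4, use_bias=False,
                            kernel_constraint=keras.constraints.NonNeg())(inp)
    x = keras.layers.Flatten()(x)              # feature map -> scalar
    # Pass-through fully connected layer producing the spectrum expectation.
    out = keras.layers.Dense(1, use_bias=False, trainable=False,
                             kernel_initializer="ones")(x)
    return keras.Model(inp, out)
```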

Fig. 2
(Color online) Architecture of the one-dimensional convolutional neural network. It takes the coefficients of the isotope spectra as inputs and performs the convolutional operation by sliding the convolutional kernels over the inputs to form the convolutional layer. The convolutional results (feature maps) are passed to the next layer (flatten layer) and converted from tensor values to scalar values. The last layer (fully connected layer) of the CNN outputs the expectation of the antineutrino spectrum in the detector.

Once the architecture of the CNN has been built, the next step is tuning the hyperparameters of the model. Hyperparameters are configurations used to control the training process, for example, the objective function, the optimizer, and the learning rate. Hyperparameters are usually set before training; therefore, we must find appropriate configurations before the real decomposition work. This hyperparameter tuning process is called pre-training, to distinguish it from the subsequent training procedure of the real decomposition work, in which the hyperparameters are already properly configured. A challenging limitation is that hyperparameters cannot be estimated directly from the data and must be specified manually; generally, there is no golden rule, and the search for the best hyperparameters proceeds by trial and error.

During the pre-training process, the simulation dataset fed into the CNN is noiseless, and systematic uncertainties of parameters in Table 1 are assumed to be zero. In other words, measurements of the virtual experimental parameters are regarded as being sufficiently precise to suppress the noise effects. Such efforts enable the CNN model to determine the most suitable hyperparameters.

Our computation is conducted on a server cluster consisting of a group of computers with 16-core CPUs. The cluster supports up to 500 multi-core jobs for our study; thus, we are able to decompose 500 Monte Carlo datasets simultaneously [22]. The pre-training of the CNN is implemented in Keras 2.3, a user-friendly framework that provides a Python frontend and uses the TensorFlow platform as its backend. These two tools provide sufficient standard modules for building and training neural network models; thus, our code is mainly based on their standard modules. However, we need to develop a new objective function for this study, which we explain in detail later. With this cluster and the two packages, each decomposition task requires 300 MB of memory and 5 hours of computation.

To decompose the individual isotope spectra from the data, the CNN requires an objective function to optimize the neural network parameters σ_i by reducing the difference between the output result and the label data. For general CNN regression problems, the mean squared error (MSE) is the conventional choice, in which no uncertainties are considered. In this study, however, an objective function in the form of a χ2 function, commonly used in high-energy physics analysis, is constructed by considering the statistical uncertainty and the uncertainties introduced by 238U and 241Pu. The χ2 function is defined as
$$J(E_\nu, \sigma) = \sum_{j=1}^{n}\frac{\left(S_j^{\mathrm{obs}}(E_\nu) - S_j^{\mathrm{exp}}(E_\nu)\right)^2}{S_j^{\mathrm{exp}}(E_\nu)} + \frac{\left(\sigma_{238}(E_\nu) - \sigma_{238}^{\mathrm{HM}}(E_\nu)\right)^2}{\left(\sigma_{238}^{\mathrm{HM}}(E_\nu)\times 15\%\right)^2} + \frac{\left(\sigma_{241}(E_\nu) - \sigma_{241}^{\mathrm{HM}}(E_\nu)\right)^2}{\left(\sigma_{241}^{\mathrm{HM}}(E_\nu)\times 10\%\right)^2}, \quad (8)$$
where j is the sample index and S_j^obs(E_ν) is the observed ν¯e spectrum of the j-th sample, assumed to be a Gaussian-distributed variable. S_j^exp(E_ν) is the expected ν¯e spectrum of the j-th sample, calculated by the CNN via the convolutional operation:
$$S_j^{\mathrm{exp}}(E_\nu) = \sum_i k_i(E_\nu, t_j)\,\sigma_i(E_\nu). \quad (9)$$
The first term of Eq. (8) is a likelihood term that measures the distance between the predicted ν¯e value and its labeled observation value; as mentioned above, the CNN reduces this difference by iteratively updating its network parameters. The remaining parts of Eq. (8) are penalty terms that impose a priori constraints on σ_238 and σ_241 with their uncertainties. Because the fission fractions of 238U and 241Pu are small and the fuel evolution is not sensitive to these two isotopes, they are treated via penalty terms: using the Huber-Mueller model as their priors, the shape uncertainties of 238U and 241Pu are assigned values of 15% and 10%, respectively.
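A minimal sketch of how such an objective might be expressed in the Keras/TensorFlow stack named above (all tensor names are illustrative; the penalty terms act on kernel weights rather than on the observation/expectation pair, so they are shown via the add_loss idiom in comments):

```python
import tensorflow as tf

def chi2_data_term(y_obs, y_exp):
    """First term of Eq. (8): squared residuals weighted by the expected
    spectrum, which serves as the variance estimate."""
    return tf.reduce_sum(tf.square(y_obs - y_exp) / y_exp)

# The two penalty terms of Eq. (8) constrain kernel weights, so one natural
# Keras idiom is model.add_loss. Here sigma238/sigma241 stand for the
# corresponding kernel weight tensors and s238_hm/s241_hm for the
# Huber-Mueller priors (illustrative names):
#   model.add_loss(tf.reduce_sum(
#       tf.square(sigma238 - s238_hm) / tf.square(0.15 * s238_hm)))
#   model.add_loss(tf.reduce_sum(
#       tf.square(sigma241 - s241_hm) / tf.square(0.10 * s241_hm)))
```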

During training, the neural network uses an iterative algorithm (called the optimizer) to minimize the objective function and adjust its internal parameters. In this study, the CNN uses the adaptive moment estimation (Adam) method as its optimizer, which adapts the learning rates by using the first and second moments of the gradient [23, 24].

Initially, the CNN parameters σ_i are set to
$$\sigma_i(E_\nu) = \frac{1}{4}\sum_{l}\sigma_l^{\mathrm{HM}}(E_\nu), \quad l = 235,\ 238,\ 239,\ 241. \quad (10)$$
The starting point of the parameters can be crucial for the training result because the optimizer of a neural network is susceptible to finding a local optimum and becoming stuck there. Hence, to examine the influence of different initial parameter values, a 50% uncertainty is assigned to σ_i in Eq. (10) as an initialization test, and the results are almost identical. This shows that the CNN model is not sensitive to the parameter initialization scheme in this study.

Based on the objective function and optimizer, the neural network follows the specified algorithm to iteratively update its parameters. Controlling the speed of parameter change (the learning rate) is important: a learning rate that is too large might cause the model to converge prematurely to a local optimum, whereas a rate that is too small can cause the training process to stall. In this study, the learning rates of the CNN parameters follow the schedule shown in the top panel of Fig. 3, where the learning rates are functions of the epoch. Parameters for high-energy bins are configured with a smaller learning rate than those for low-energy bins, mainly because the isotope spectra have small values at high energy, and the CNN therefore requires finer control in these regions.
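In Keras, such an epoch-dependent schedule can be attached through a callback; a hypothetical sketch (the decay constants are ours, not the values used in the study):

```python
from tensorflow import keras

def lr_schedule(epoch, lr):
    """Hypothetical stepwise decay in the spirit of the Fig. 3 schedule; in
    the actual study, high-energy bins (>6 MeV) get a smaller rate than
    low-energy bins, which a per-bin variant of this function could encode."""
    base = 1e-3                          # illustrative base rate
    return base * 0.5 ** (epoch // 500)  # halve the rate every 500 epochs

lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
```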

Fig. 3
(Color online) (Top panel) Learning rate schedule of the CNN. The learning rates are divided into two groups: those of the low energy region (≤6 MeV) decay more slowly with the epoch than those of the high energy region (>6 MeV). (Bottom panel) Superposition of the decomposed results from thousands of training runs. With increasing epoch, the verification factor gradually approaches 100%. After the epoch exceeds 1500, the decomposed results are steady.

An epoch refers to training the neural network with all training data for one cycle. It consists of one or more batches, each using part of the dataset as input. The number of samples in a batch is called the batch size. In this study, the batch size is set to four samples; hence, four weeks of data are passed into the CNN for each parameter update.

When the CNN is ready to train on the data, the number of training cycles (epochs) must be set before training starts. However, determining the exact optimal number of epochs is difficult, as it depends on the network model and the dataset; we must determine when the parameters have converged and when the CNN should stop training. On the one hand, too many training cycles can cause over-fitting, where the model fits the training data perfectly but generalizes poorly to new data. On the other hand, too few epochs can cause under-fitting, where the model has not learned the data sufficiently. The common practice for judging convergence is to examine the variation in the training results with the number of epochs: if the number is set too low, training terminates before the model converges; if it is set too high, the model is prone to over-fitting. Thus, the number of epochs must be chosen carefully.

To evaluate and visualize the effectiveness of the CNN decomposition method, a verification factor is defined as
$$R_{\mathrm{ratio}}^{i}(E_\nu) = \frac{\sigma_i(E_\nu)}{\sigma_i^{\mathrm{HM}}(E_\nu)} \times 100\%, \quad (11)$$
the ratio between the predicted isotope spectrum and the truth spectrum.

In this study, we evaluate the influence of the epoch configuration by conducting thousands of training processes and superposing their results in one plot, as shown in the bottom panel of Fig. 3, where the X-axis represents the training cycle number, the Y-axis represents the verification factor, and the color represents the frequency of the training results. When the epoch count reaches 1500, the verification factor stably converges to nearly 100%. Conservatively, the number of epochs is set to 2000.
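Continuing the hypothetical sketches above, the resulting training call would combine these choices; 'coefficients' and 'observed_spectra' are placeholder arrays of shapes (n_weeks, 4, 1) and (n_weeks, 1) for a single energy bin:

```python
# Four weeks of data per parameter update (batch_size=4) and a conservative
# 2000 epochs, with the learning rate callback defined earlier.
model = build_bin_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=chi2_data_term)
model.fit(coefficients, observed_spectra,
          batch_size=4, epochs=2000,
          callbacks=[lr_callback], verbose=0)
```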

After the hyperparameters have been determined, we complete the pre-training process and establish the entire CNN model. Maintaining the same configurations, we prepare to test the decomposition performance of the CNN with the experimental data. In this study, the simulation data are used instead.

4 Results of decomposition

Using the aforementioned hyperparameter configurations, the CNN decomposes the individual isotope spectra from both noiseless and noisy simulation datasets. In this study, we mainly examine the unbiasedness and uncertainties of the decomposition results by using the CNN method.

Using noiseless datasets, in which both the systematic uncertainties and the statistical error are ignored, the decomposition is implemented 1000 times, and the extracted spectra samples are compared with the truth values to evaluate the bias and uncertainties. As shown in Fig. 4, the ratios of the mean values of the extracted spectra to the truth spectra are presented as data points; the deviations are less than 0.1% and can be ignored, so the decomposed isotope spectra can be regarded as unbiased. The tiny error bars represent the uncertainties introduced by the CNN model itself, obtained as the standard deviations of the ratios of the extracted spectra to the truth spectra.

Fig. 4
(Color online) Verification of the unbiasedness of the CNN method. Each data point shows the ratio of the decomposed isotope spectrum to the truth spectrum, and its error bar represents the decomposition uncertainty. For visual contrast, three of the curves are shifted downward; originally, all curves are centered at 100%.

When considering the noise effects, the statistical error and the systematic uncertainties are assigned to the experimental measurements by applying Poisson fluctuations and the systematic uncertainties in Table 1, respectively. One thousand different noisy datasets are generated with these uncertainties, from which the individual isotope spectra are extracted; the decomposition results vary under the noise disturbance. The mean value and the standard deviation over all decomposition results are shown in Fig. 5.
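A minimal sketch of how one such Monte Carlo replica could be generated (our own simplification: one shared Gaussian shift per systematic source, applied multiplicatively, and the correlations among fission fractions ignored):

```python
import numpy as np

rng = np.random.default_rng()

def make_noisy_replica(true_counts, k_nominal, fractional_sys):
    """One Monte Carlo replica: Poisson-fluctuated observations plus
    time-correlated multiplicative systematic shifts, in the spirit of Table 1.

    true_counts    : expected weekly IBD counts, shape (n_weeks, n_bins)
    k_nominal      : nominal coefficients, shape (n_weeks, n_isotopes, n_bins)
    fractional_sys : dict source -> fractional uncertainty, e.g.
                     {"power": 0.005, "efficiency": 0.015, "fission": 0.05}
    """
    observed = rng.poisson(true_counts).astype(float)  # statistical fluctuation
    # One shared Gaussian shift per systematic source, applied to all weeks,
    # which makes the shift time-correlated as assumed in the study.
    scale = 1.0
    for frac in fractional_sys.values():
        scale *= 1.0 + rng.normal(0.0, frac)
    return observed, k_nominal * scale
```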

Fig. 5
(Color online) (Top panel) The decomposed 235U and 239Pu spectra. The error bar on each data point is the square root of the corresponding diagonal term of the decomposed spectrum covariance matrix. (Bottom panel) Ratio of the decomposed spectrum to the truth spectrum. The 235U data points are slightly displaced for visual clarity of the error bars.

Because the 238U and 241Pu spectra are treated as prior knowledge, this study presents the decomposition results for 235U and 239Pu, whose fits are principally driven by the simulated experimental data. As shown in the bottom panel of Fig. 5, the decomposition results of both isotopes deviate from the truth spectra by less than 0.3%, which is practically unbiased. The decomposed 235U spectrum has a smaller uncertainty than the 239Pu spectrum, mainly because 235U is the primary contributor to the reactor ν¯e flux and provides the largest number of antineutrino events.

5 Conclusion and discussion

In summary, we propose a machine learning approach to decompose the 235U and 239Pu isotope antineutrino spectra from the evolution data of a simulated reactor antineutrino experiment. The CNN decomposition method is applied to noiseless and noisy datasets that account for the main uncertainties of a reactor antineutrino experiment, and the validation tests show that the deviations of the decomposed spectra are less than 0.1% and 0.3%, respectively; thus, the results can be viewed as unbiased. The uncertainty introduced by the CNN method itself is less than 0.1%, and the statistical and systematic uncertainties can be evaluated using the Monte Carlo method.

The CNN decomposition method is also applicable to realistic commercial reactor antineutrino experiments because the physical principles of ν¯e emission and detection in these experiments are almost the same as those in the virtual experiment designed in this study. Unlike the virtual experiment, realistic experiments commonly employ multiple reactors and detectors; thus, the coefficient k_i(E_ν, t) defined in Eq. (7) should be replaced by effective coefficients summed over reactors. The effective coefficient is calculated as
$$k_i^d(E_\nu, t) = \frac{N_d\,\epsilon_d}{4\pi}\sum_r \frac{W_r(t)\,f_i^r(t)\,P_{\mathrm{sur}}(E_\nu, L_{rd})}{L_{rd}^2\,\sum_l f_l^r(t)\,e_l}, \quad (12)$$
where the subscript d is the detector index, r is the reactor index, E_ν is the ν¯e energy, N_d is the target proton number, ϵ_d is the detection efficiency, L_rd is the distance from reactor r to detector d, P_sur(E_ν, L_rd) is the ν¯e survival probability, W_r(t) is the thermal power of reactor r, e_l is the energy released per fission of isotope l, and f_l^r is the fission fraction of isotope l in reactor r. This is simply the summation of the coefficient contributions from the individual reactors.
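A sketch of Eq. (12) for one detector at one point in time, with the reactor description packed into plain dictionaries (all names are our own placeholders):

```python
import numpy as np

def effective_coefficient(E, reactors, isotope, N_d=1.43e30, eff_d=0.8025):
    """Effective coefficient k_i^d of Eq. (12) for detector d, summing the
    contributions of all reactors at one point in time.

    reactors : list of dicts, one per reactor r, with keys
        'L'     -- baseline L_rd to this detector (m),
        'W'     -- thermal power W_r at the week in question,
        'f'     -- dict isotope -> fission fraction f_l^r,
        'e'     -- dict isotope -> energy per fission e_l,
        'p_sur' -- callable (E, L) -> survival probability.
    """
    total = np.zeros_like(E, dtype=float)
    for r in reactors:
        denom = sum(r["f"][l] * r["e"][l] for l in r["f"])   # sum_l f_l^r e_l
        total += (r["W"] * r["f"][isotope] * r["p_sur"](E, r["L"])
                  / (r["L"] ** 2 * denom))
    return N_d * eff_d / (4.0 * np.pi) * total
```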

Because experimental operation times vary and baselines range from 10 m to 1000 m, the number of observed ν¯e per period can differ greatly between experiments. Thus, we could merge the periodic measurement data and rearrange them into new groups, to keep the antineutrino event rate per sample on the same scale as in this study and to guarantee the validity of the χ2 objective function. Implementing such efforts would make the CNN method applicable to realistic experimental cases.

In addition, the decomposition in this study is applied directly to the antineutrino spectrum. In realistic reactor antineutrino experiments, however, the ν¯e energy spectrum is detected via the visible prompt energy. The prompt energy is related to the antineutrino energy as
$$E_p \approx E_{\bar{\nu}_e} - 0.78\ \mathrm{MeV}. \quad (13)$$
Therefore, before the CNN decomposition step, the measured prompt spectrum must be converted to the ν¯e spectrum (commonly called unfolding), which, in principle, can be integrated into the layers of the CNN. We plan to append extra neural network layers to the established CNN model in future studies to accomplish the unfolding analysis.
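For completeness, the bin-center energy shift implied by Eq. (13) is trivial to express; a sketch (ours, ignoring the resolution and detector response effects that a full unfolding must treat):

```python
import numpy as np

def prompt_to_neutrino_energy(e_prompt):
    """Map prompt (positron) energies to antineutrino energies via Eq. (13)."""
    return np.asarray(e_prompt, dtype=float) + 0.78  # MeV
```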

In the near future, very short-baseline reactor antineutrino experiments are expected to measure the reactor antineutrino spectrum with higher precision and energy resolution. The promising decomposition approach introduced and demonstrated in this paper could be applied in these experiments to provide up-to-date individual isotope antineutrino spectra.

References
[1] Th. A. Mueller, D. Lhuillier, M. Fallot et al., Improved predictions of reactor antineutrino spectra. Phys. Rev. C 83, 054615 (2011). doi: 10.1103/PhysRevC.83.054615
[2] P. Huber, Determination of antineutrino spectra from nuclear reactors. Phys. Rev. C 84, 024617 (2011). doi: 10.1103/PhysRevC.84.024617
[3] G. Mention, M. Fechner, Th. Lasserre et al., Reactor antineutrino anomaly. Phys. Rev. D 83, 073006 (2011). doi: 10.1103/PhysRevD.83.073006
[4] Y. Abe, J.C. dos Anjos, J.C. Barriere et al., Improved measurements of the neutrino mixing angle θ13 with the Double Chooz detector. J. High Energy Phys. 2014, 86 (2014). doi: 10.1007/JHEP10(2014)086
[5] S.H. Seo et al. (RENO Collaboration), New results from RENO and the 5 MeV excess. AIP Conf. Proc. 1666, 080002 (2015). doi: 10.1063/1.4915563
[6] F.P. An, A.B. Balantekin, H.R. Band et al. (Daya Bay Collaboration), Measurement of the reactor antineutrino flux and spectrum at Daya Bay. Phys. Rev. Lett. 116, 061801 (2016). doi: 10.1103/PhysRevLett.116.061801
[7] F.P. An, A.B. Balantekin, H.R. Band et al. (Daya Bay Collaboration), Evolution of the reactor antineutrino flux and spectrum at Daya Bay. Phys. Rev. Lett. 118, 251801 (2017). doi: 10.1103/PhysRevLett.118.251801
[8] J. Ashenfelter, A.B. Balantekin, H.R. Band et al. (PROSPECT Collaboration), Measurement of the antineutrino spectrum from 235U fission at HFIR with PROSPECT. Phys. Rev. Lett. 122, 251801 (2019). doi: 10.1103/PhysRevLett.122.251801
[9] M. Estienne, M. Fallot, A. Algora et al., Updated summation model: An improved agreement with the Daya Bay antineutrino fluxes. Phys. Rev. Lett. 123, 022502 (2019). doi: 10.1103/PhysRevLett.123.022502
[10] D. Adey, F.P. An, A.B. Balantekin et al. (Daya Bay Collaboration), Extraction of the 235U and 239Pu antineutrino spectra at Daya Bay. Phys. Rev. Lett. 123, 111801 (2019). doi: 10.1103/PhysRevLett.123.111801
[11] Technical Meeting on Nuclear Data for Antineutrino Spectra and their Applications, 23-26 Apr. 2019, Vienna, Austria. https://www.iaea.org/events/evt1703666
[12] N.S. Bowden, J.M. Link, W. Wang, Report of the Topical Group on Neutrino Applications for Snowmass 2021. doi: 10.48550/arXiv.2209.07483
[13] O. Akindele, N. Bowden, R. Carr et al., Nu Tools: Exploring practical roles for neutrinos in nuclear energy and security. doi: 10.48550/arXiv.2112.12593
[14] D. Bhatt, C. Patel, H. Talsania et al., CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics 10, 2470 (2021). doi: 10.3390/electronics10202470
[15] Y.T. Luo, H. Du, Y.M. Yan, MeshCNN-based BREP to CSG conversion algorithm for 3D CAD models and its application. Nucl. Sci. Tech. 33, 74 (2022). doi: 10.1007/s41365-022-01063-5
[16] X.Y. Guo, L. Zhang, Y.X. Xing, Study on analytical noise propagation in convolutional neural network methods used in computed tomography imaging. Nucl. Sci. Tech. 33, 77 (2022). doi: 10.1007/s41365-022-01057-3
[17] L.Y. Zhou, H. Zha, J.R. Shi et al., A non-invasive diagnostic method of cavity detuning based on a convolutional neural network. Nucl. Sci. Tech. 33, 94 (2022). doi: 10.1007/s41365-022-01069-z
[18] D. Ribli, B.Á. Pataki, J.M. Zorrilla Matilla et al., Weak lensing cosmology with convolutional neural networks on noisy data. Mon. Not. R. Astron. Soc. 490, 1843 (2019). doi: 10.1093/mnras/stz2610
[19] C.M. Bishop, Pattern Recognition and Machine Learning (Springer, New York, 2006).
[20] F.P. An, A.B. Balantekin, H.R. Band et al. (Daya Bay Collaboration), Improved measurement of the reactor antineutrino flux and spectrum at Daya Bay. Chin. Phys. C 41, 013002 (2017). doi: 10.1088/1674-1137/41/1/013002
[21] X.B. Ma, W.L. Zhong, L.Z. Wang et al., Improved calculation of the energy release in neutron-induced fission. Phys. Rev. C 88, 014605 (2013). doi: 10.1103/PhysRevC.88.014605
[22] J.Y. Shi, Q.L. Huang, L. Wang et al., Distributed data processing platform of national high energy physics data center. Frontiers of Data and Computing 4, 97 (2022). doi: 10.11871/jfdc.issn.2096-742X.2022.01.008 (in Chinese)
[23] D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, in Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). https://www.iclr.cc/archive/www/2015.html
[24] S. Ruder, An overview of gradient descent optimization algorithms. doi: 10.48550/arXiv.1609.04747
Footnote

The authors declare that they have no competing interests.