Introduction
In a typical Monte Carlo method for nuclear data evaluation in the fast energy region, as outlined in various references [1-8], the parameters of pre-selected models are adjusted to fit selected experimental data. In Refs. [7, 9], for example, a method based on the minimum χ2 was used to determine the best nuclear data file from a large set of random files produced within a Total Monte Carlo framework [10]. In Ref. [8], a weighted χ2 was presented, which assigned large weights to reaction channels with a large number of experimental data, to experimental data with smaller uncertainties, and to channels with large cross sections. Other works, such as Refs. [1, 11], followed a ‘model selection’ process [12, 13], wherein the ‘best’ model combination, yielding the smallest reduced χ2 value, was chosen from a pool of candidate models. These selected models were then treated as if they represented the ‘true’ models for subsequent parameter variation steps. In Refs. [1, 11], the reduced χ2 was obtained by comparing model calculations with three types of experimental data (the reaction cross sections, the residual production cross sections and the elastic angular distributions), and the approach was applied in particular to the evaluation of proton-induced reactions. A more statistically rigorous Monte Carlo approach is to base the entire evaluation on Bayes’ theorem, as presented in Refs. [2-5, 14, 15]. This approach has the advantage that both posterior means and covariances can be obtained. In recent years, the Bayesian Monte Carlo (BMC) method has found application in the TALYS Evaluated Nuclear Data Library (TENDL) evaluations [16]. In Ref. [17], the iterative Bayesian Monte Carlo (iBMC) method presented in Ref. [1] was applied to the evaluation of p+111Cd between 1 and 100 MeV. There, a ‘best’ model set was selected by comparing calculated cross sections, produced by varying numerous models and their parameters within TALYS [18], with experiments from the EXFOR database [19] within a Bayesian framework, and this set was used as the starting point for new calculations in an iterative manner.
The Monte Carlo methods presented above, however, rely exclusively on the variation of the parameters of pre-selected models and are therefore limited by the constraints of these models. The underlying assumption in this approach has been that the chosen model set or combination accurately represents the ‘true’ distribution of the observables of interest (here, the cross sections and the elastic angular distributions). Additionally, the models were selected globally, implying that the chosen models were assumed applicable across the entire considered energy range. The conventional belief is that the uncertainty in nuclear data comes entirely from our imperfect knowledge of the parameters associated with these models [20]. However, this approach tends to ignore the uncertainties coming from the models themselves. Consequently, it often leads to difficulty in achieving satisfactory fits to experimental data within specific energy regions for certain channels, as the evaluation is constrained by the shortcomings or deficiencies of the chosen models.
A similar observation was made in Ref. [21], where it was stated that ‘as long as a “near perfect model” is not available, a pure Monte Carlo solution based on model parameters alone cannot adequately combine theoretical results and microscopic experimental data’. To attest to the validity of this observation, Fig. 1 presents more than 1000 random 59Co(p,3n) cross-section curves produced by exclusively varying model parameters within a single model combination in the TALYS code [18, 22]. The cross-section curves presented in the figure were produced by executing the TALYS code with the following models [18, 22]:
[Figure 1]
1. mass model 0: Duflo-Zuker formula;
2. level density model 2: Back-shifted Fermi gas model;
3. strength model 1: Kopecky-Uhl generalized Lorentzian; and
4. other default TALYS models.
From Fig. 1, it can be seen that the random cross-section curves overlap some but not all of the presented experimental data. As expected, there was difficulty in reproducing cross sections at the threshold energies. Furthermore, there is a noticeable narrowing of the cross-section spread between approximately 50 and 100 MeV, which makes it difficult to cover all the experimental data presented in the figure for this energy range. We note, however, that no outlier data were discarded in Fig. 1, as the goal was to visually observe the cross-section spread due to the variation of model parameters alone around a selected model set. The inability to cover some of the experimental data even with parameter variation can be attributed to underlying deficiencies in the models used. It is instructive to note that the experimental data from the EXFOR database [19], which were also used in this work, have been verified in Ref. [23] by assigning quality flags to experimental data sets through a systematic comparison with the corresponding values in the major nuclear data libraries. As mentioned in Ref. [21] and also observed in this work (see Fig. 1), by varying only model parameters it is sometimes impossible to reproduce the experimental data because of the deficiencies and rigidity of the selected models. By model deficiency, we refer to the inability of our models to accurately predict the underlying data, while model rigidity relates to the limited flexibility of our models to capture relationships present in our data.
Another example is given in Fig. 2, where model parameters are varied around two distinct model sets in the TALYS code. For model set (A), level density model 6 (microscopic level densities from Hilaire’s combinatorial tables) was used alongside other default TALYS models. Conversely, for model set (B), the Generalised superfluid level density model (3) was combined with the exciton pre-equilibrium model (numerical transition rates with an energy-dependent matrix element), along with other default TALYS models. It can be seen that model set (A) generally follows the shape of the experimental data over the entire considered energy region, while model set (B) reproduces the experimental data only from the threshold to about 8 MeV. It is important to state here that, in the case of model set (B), even with the variation of model parameters it was still difficult to reproduce the experimental data from about 8 to 20 MeV. This underscores the idea that different nuclear reaction models exhibit varying strengths in different energy regions and that parameter variation alone is sometimes insufficient to cover the experimental data.
[Figure 2]
As shown in Fig. 2, the ‘best’ file is a globally optimized file obtained by comparing the reaction and residual production cross sections with differential experimental data for model set (B) only. In the case of the ‘Frankenstein’ file, instead of comparing the model calculations with experimental data using a global χ2 that took other cross sections into consideration, comparisons were made with experimental data from the 59Co(p,n) cross section only (a single objective). The advantage of using a single objective function (the χ2 value for the 59Co(p,n) cross section in this case) is that it reduces the challenges associated with Pareto optimality, since there is no need to address trade-offs between conflicting objectives. The gray curves shown in Fig. 2 are random curves produced by perturbing model parameters around the chosen model sets.
As demonstrated earlier in Refs. [24, 16, 11], incorporating both model and parameter uncertainties to obtain a widened prior space results in greater variability in the randomly generated cross-section curves. As a result, a larger number of the available experimental data fall within the spread of the combined model and parameter uncertainties. To illustrate this, in Fig. 3 (top left) we revisit the 59Co(p,3n) cross-section curves, this time varying each of the six different level density (ld) models within the TALYS code individually while holding all other models fixed at their default TALYS values (an illustrative script for this one-at-a-time variation is given after the list). The ld models in TALYS are as follows [18, 22]:
[Figure 3]
• ld model 1 - Constant temperature + Fermi gas model;
• ld model 2 - Back-shifted Fermi gas model;
• ld model 3 - Generalised superfluid model;
• ld model 4 - Microscopic level densities (Skyrme force) from Goriely’s tables;
• ld model 5 - Microscopic level densities (Skyrme force) from Hilaire’s combinatorial tables, and
• ld model 6 - Microscopic level densities (temperature dependent Hartree-Fock-Bogolyubov (HFB), Gogny force) from Hilaire’s combinatorial tables.
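In practice, this one-at-a-time variation can be scripted by writing one TALYS input file per level density model and running the code for each. The sketch below is illustrative only: the TALYS keywords (projectile, element, mass, energy, ldmodel) are standard, but the executable name, directory layout and energy-grid file are assumptions of this example rather than part of the evaluation pipeline described here.

```python
import os
import subprocess

TALYS_EXE = "talys"            # assumed to be on the PATH
ENERGY_FILE = "energies.dat"   # assumed energy grid (e.g. 1-100 MeV), present in each run directory

# One-at-a-time variation of the six TALYS level density models for p + 59Co,
# keeping every other model at its TALYS default value.
for ld in range(1, 7):
    workdir = f"run_ldmodel_{ld}"
    os.makedirs(workdir, exist_ok=True)
    input_path = os.path.join(workdir, "input")
    with open(input_path, "w") as f:
        f.write("projectile p\n")
        f.write("element co\n")
        f.write("mass 59\n")
        f.write(f"energy {ENERGY_FILE}\n")
        f.write(f"ldmodel {ld}\n")   # only the level density model is changed
    with open(input_path) as fin, open(os.path.join(workdir, "output"), "w") as fout:
        subprocess.run([TALYS_EXE], stdin=fin, stdout=fout, cwd=workdir, check=True)
```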
It can be noticed from Fig. 3 that each ld model exhibits specific strengths with respect to reproducing the presented experimental data. For example, the cross-section curves computed with ld models 3 and 4 compared favourably with the experimental data from about 60 to 100 MeV, while ld models 2 and 5 showed more favourable agreement with the experimental data in the 20 to 40 MeV range. In general, it can be seen from the figure that most of the experimental data lie within the model spread or uncertainties, as expected. Similarly, cross-section curves produced with the six level density models are presented for the 59Co(p,3n) (top left of the figure), 58Ni(p,α) (top right), 58Ni(p,γ) (bottom left) and 58Ni(p,3n) (bottom right) reactions. By using different models, most of the experimental data were observed to fall within the model spread. In the case of the 58Ni(p,3n) cross section, no experimental data were available in the EXFOR database, and hence the cross-section curves produced with the ld models are compared with the TENDL-2023 and JENDL-5.0 libraries. It is important to highlight that the threshold energy of the TENDL and JENDL evaluations differs from that of the ld-model calculations, beginning at a higher energy of 40 MeV compared with the 30 MeV obtained with the different ld models, as presented in the figure. In Fig. 4, variations of the 58Ni(p,γ) cross section computed using the eight gamma-ray strength function models in TALYS are compared with experimental data as well as with the TENDL-2023 and JENDL-5.0 evaluations. The gamma-ray strength function models are used in the description of the gamma emission channel. From the figure, strength 6 is observed to overpredict the (p,γ) cross section from about 2 to 8 MeV, strength 8 reproduces some of the experimental data from Cheng (1980), while strengths 3, 6 and 7 better reproduce the experimental data from Hall (1975). The curves from the JENDL-5.0 evaluation and from strength 1 are similar. It is worth mentioning that there are cases where cross-section curves have low sensitivity to model variations. An example is presented in Fig. 1, where the four different mass models in TALYS were varied one at a time while keeping all other models at their TALYS defaults to produce the (p,4n) cross section; the resulting spread in the (p,4n) cross-section curves was observed to be small.
[Figure 4]
A potential remedy for model deficiencies has been the use of Gaussian processes to treat model defects, as proposed in various references (see Refs. [25-27]) or related constructions, e.g., Refs. [28, 29]. This approach, however, treats the model defect using pre-selected default TALYS models. In other studies, presented in Refs. [2] and [1, 17], the models were selected from a pool of models globally for the entire considered energy range. However, due to the limitations and inflexibility inherent in these selected models, achieving improvements in the evaluations of certain channels and energy regions can still pose challenges. In this work, in line with the search for a full Monte Carlo solution for combining theoretical and experimental data in nuclear data evaluations, we propose a departure from using a single fixed model set for the entire energy range, as done in the Bayesian Monte Carlo (BMC) approach [2] and the iterative Bayesian Monte Carlo (iBMC) outlined in Refs. [1, 17]. Rather, we propose to select models locally at each incident energy or angle. This approach gives more flexibility to the adjustment process by assigning weights to each model based on its proximity to the experimental data. As a result, a weighted average is computed over all the considered models at each incident energy within a Bayesian Model Averaging (BMA) framework [12, 13], using the likelihood function values as weights. It is important to note that Bayesian Model Averaging has been applied in various fields (see, for example, Refs. [30-32]) as well as in nuclear physics [33], among others. In Ref. [34], covariance matrices generated from model variations, in contrast to the conventional parameter variations, were subsequently applied to quantities in astrophysics. In other studies, such as Ref. [35], machine learning techniques were applied to various aspects of nuclear physics, such as nuclear structure, nuclear reactions, and the properties of nuclear matter at low and intermediate energies. In Ref. [36], a prediction of nuclear charge density distributions with feedback neural networks was presented. In Ref. [37], a standardized procedure for adjusting parameters in reaction modeling codes, based on residual product excitation functions with the TALYS code, was proposed. It is instructive to note that even though machine learning techniques could be used together with training data to obtain similar results, in the absence of experimental data the accuracy and reliability of machine learning predictions cannot be guaranteed.
It is crucial to note that, since the updates of the cross sections and angular distributions proposed in this work are carried out locally on a per-energy-point basis, this approach typically produces discontinuities or "kinks" in the resulting cross-section or angular-distribution curves. By carrying out the adjustments of the cross sections and elastic angular distributions at specific incident energies, we are better able to represent the behavior of the reactions at the considered energies. To address the kinks produced, a smoothing function based on spline interpolation was applied. The approach aims to leverage the advances made in nuclear reaction modeling by averaging across multiple models. In this way, we are able to quantify the uncertainties inherent in the models themselves, as well as the uncertainties related to their parameters, thereby providing a more comprehensive picture of the uncertainties involved. As proof of concept, the proposed method has been applied to the nuclear data evaluation of p+58Ni in the fast energy region between 1 and 100 MeV (Fig. 5).
[Figure 5]
Methods
The flowchart in Fig. 6 outlines the Bayesian Model Averaging (BMA) methodology proposed for nuclear data evaluation in this study. As shown in the figure, we begin with a careful selection of experimental data from the EXFOR database [19]. Treating outlier experiments is particularly important in BMA for several reasons. First, outliers can significantly skew the model averages, resulting in inaccurate predictions. Second, by properly addressing outliers, we ensure that the model average is not unduly biased by anomalous data points, which could otherwise distort the shapes of the updated cross-section curves.
[Figure 6]
An alternative approach to selecting experiments would be to assign weights to each experimental data set based on a quality criterion, as carried out in Ref. [23]. In this work, however, we adopted a binary accept/reject approach to experimental data, as outlined in Refs. [1, 11]. For example, experiments lacking reported uncertainties were penalized with a binary value of zero, except in cases where such an experiment was the sole experiment available for the considered channel. In these cases, a 10% relative uncertainty was assigned to each data point of the experimental data set. Additionally, if an experimental data set was found to be systematically inconsistent with the other experimental data sets for a particular energy range, the inconsistent data set was excluded.
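The binary accept/reject screening described above can be expressed compactly. The sketch below assumes each EXFOR data set has already been parsed into arrays of energies, cross sections and (possibly missing) uncertainties, and that inconsistency with the other sets has been flagged beforehand; the data structure itself is hypothetical.

```python
import numpy as np

def screen_datasets(datasets):
    """Binary accept/reject screening of EXFOR data sets for one channel (illustrative).

    Each data set is a dict with 'E' (MeV), 'sigma' (mb), 'dsigma' (mb or None) and
    an 'inconsistent' flag set beforehand by comparison with the other data sets.
    """
    accepted = []
    for ds in datasets:
        if ds.get("inconsistent", False):
            continue                                  # drop systematically inconsistent sets
        if ds["dsigma"] is None:
            if len(datasets) == 1:                    # sole data set for the channel: keep it
                ds["dsigma"] = 0.10 * np.asarray(ds["sigma"])  # assign 10% relative uncertainty
            else:
                continue                              # otherwise reject sets without uncertainties
        accepted.append(ds)
    return accepted
```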
Next, as can be seen from Fig. 6, we define the prior model and parameter spaces. As proof of concept, similar to Ref. [1], a total of 52 different nuclear reaction models in the TALYS code were considered in this work. A list of the selected nuclear reaction models is itemized in Table 1. It is important to clarify that, in the context of this work, the term "models" encompasses sub-models and, at times, components of models or sub-models. From the table it can be seen that there are 4 Jeukenne-Lejeune-Mahaux (JLM) optical models, 6 level density models, 4 pre-equilibrium (PE) models, 4 mass models, and 8 gamma-ray strength functions, among others, available in the TALYS code. It is instructive to note that each nuclear reaction calculation involves a combination of several of these models interconnected within a nuclear reaction code such as TALYS. Model calculations were performed using TALYS version 1.9 [22].
TALYS keywords | Number of models | Model name |
---|---|---|
preeqmode | 4 | Pre-equilibrium (PE) |
ldmodel | 6 | Level density models |
ctmglobal | 1 | Constant Temperature |
massmodel | 4 | Mass model |
widthmode | 4 | Width fluctuation |
spincutmodel | 2 | Spin cut-off parameter |
gshell | 1 | Shell effects |
statepot | 1 | Excited state in Optical Model |
spherical | 1 | Spherical Optical Model |
radialmodel | 2 | Radial matter densities |
shellmodel | 2 | Liquid drop expression |
kvibmodel | 2 | Vibrational enhancement |
preeqspin | 3 | Spin distribution (PE) |
preeqsurface | 1 | Surface corrections (PE) |
preeqcomplex | 1 | Kalbach model (pickup) |
twocomponent | 1 | Component exciton model |
pairmodel | 2 | Pairing correction (PE) |
expmass | 1 | Experimental masses |
strength | 8 | Gamma-strength function |
strengthM1 | 2 | M1 gamma-ray strength function |
jlmmode | 4 | JLM optical model |
In the case of the parameters, Table 2 provides the parameter widths (or uncertainties) defining the parameter space, along with a comprehensive list of the parameters of the nuclear reaction models considered (an illustrative sampling sketch is given after the table). The parameter widths given in the table were obtained from the TENDL library [16]. It is important to emphasize that these parameter widths or uncertainties were obtained by comparing random cross-section curves produced through parameter variation with scattered experimental data. In Table 2, the model parameters are grouped under phenomenological and semi-microscopic optical models, level density and pre-equilibrium models, and gamma-ray strength functions. The parameter widths or uncertainties presented in the table are relative uncertainties (in %), except in the case of the level density parameter a and the gπ and gν parameters, whose uncertainties are given in terms of the mass number A; here, gπ and gν are the single-particle state densities used in the pre-equilibrium model. Similar tables have been provided in Refs. [22] and [1]. A more complete list of all the model parameters can be found in Refs. [18, 22].
Parameter | Uncertainty [%] | Parameter | Uncertainty [%] |
---|---|---|---|
OMP - phenomenological | |||
2.0 | 2.0 | ||
2.0 | 3.0 | ||
3.0 | 5.0 | ||
10.0 | 10.0 | ||
10.0 | 10.0 | ||
10.0 | 10.0 | ||
10.0 | 3.0 | ||
2.0 | 10.0 | ||
10.0 | 5.0 | ||
10.0 | 20.0 | ||
20.0 | 10.0 | ||
OMP - Semi-microscopic optical model (JLM) | |||
λV | 5 | 5 | |
λW | 5 | 5 | |
level density parameters | |||
a | 11.25 - 0.03125·A | 30.0 |
E0 | 20.0 | T | 10.0 |
krot | 80.0 | 30.0 | |
Pre-equilibrium | |||
Rγ | 50.0 | M2 | 30.0 |
gπ | 11.25 - 0.03125·A | gν | 11.25 - 0.03125·A |
Cbreak | 80.0 | Cknock | 80.0 |
Cstrip | 80.0 | Esurf | 20.0 |
Rνν | 30.0 | Rπν | 30.0 |
Rππ | 30.0 | Rνπ | 30.0 |
Gamma ray strength function | |||
5.0 | 20 | ||
20 | 10 |
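The widths of Table 2 can be turned into sampled TALYS input lines through the code's 'adjust' multipliers. The sketch below is a minimal illustration: the keyword forms (rvadjust, avadjust, v1adjust, w1adjust) follow TALYS conventions, but the association of each width with a specific Table 2 entry, and the restriction to a handful of optical-model parameters, are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng()

# Illustrative mapping of a few optical-model parameters to TALYS 'adjust' multipliers.
# The widths (%) mimic typical Table 2 values; the exact correspondence is assumed here.
PARAM_WIDTHS_PCT = {
    "rvadjust p": 2.0,    # real central radius
    "avadjust p": 2.0,    # real central diffuseness
    "v1adjust p": 2.0,    # real central depth
    "w1adjust p": 10.0,   # imaginary central depth
}

def sample_adjust_lines(widths=PARAM_WIDTHS_PCT):
    """Draw one uniform sample per parameter within +/- width (%) and return TALYS input lines."""
    lines = []
    for keyword, width in widths.items():
        factor = 1.0 + rng.uniform(-width, width) / 100.0
        lines.append(f"{keyword} {factor:.4f}")
    return lines

# Example: append these lines to the TALYS input of a given model combination.
print("\n".join(sample_adjust_lines()))
```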
Next, we assign prior probabilities to our models as well as their parameters. We assumed here that all models as well as the parameters have equal prior probabilities and hence the uniform distribution was assigned to both the models and the parameters. Using uniform distributions for our priors ensures that the posterior distribution is predominantly shaped by the influence of the experimental data used.
Additionally, in order to estimate the uncertainty due to parameter variation alone, a set of random ENDF nuclear data files was generated by varying only model parameters around the default TALYS models. For this, a total of 3030 random nuclear data files were produced for p+58Ni for the considered energy region. In Fig. 7, random curves for the following cross sections are presented: 58Ni(p,non-el), 58Ni(p,2p), 58Ni(p,np) and 58Ni(p,α). These curves depict the variability due to the combined variation of both models and their parameters (shown in purple) as well as the variability arising solely from the variation of model parameters (shown in orange).
[Figure 7]
Figure 7 shows, in the case of the 58Ni(p,2p) cross section, that the exclusive variation of model parameters failed to capture several experimental data points from Levkovski (1991) and Ewart (1964). However, it can be seen that all the experimental data presented lie within the large combined prior spread of both the models and their parameters, as expected.
It is worth bearing in mind that a situation may arise where a considered data point or data set falls outside the spread of both the model and parameter uncertainties. This could be because the model (and/or parameter) space was not fully explored. A possible solution would be to increase the parameter widths or uncertainties (as presented in Table 2) in order to expand the parameter space. Alternatively, the model space can be enlarged further by introducing additional models (if available). However, if the considered experimental data point is deemed an outlier, it is generally recommended to exclude it. This precaution is particularly crucial as outlier experimental data can distort the overall shape of the final cross-section curves produced. Likewise, scenarios may arise where no experimental data are available for a given channel. This problem is discussed in more detail in Sect. 2.2. In Fig. 8, prior curves for the residual production cross section 58Ni(p,x)57Ni are presented. Note that a particular combination of models yielded ‘unphysical’ cross-section curves which, in turn, distorted the model average between 45 and 70 MeV. How to treat ‘bad’ models is presented in more detail in Sect. 3.4.
[Figure 8]
In Fig. 9, we present the prior random 59Co(p,3n) cross-section curves together with the model average obtained by simply taking the mean over all the considered models. It can be observed that the model average produced without the inclusion of experimental information compares well with the TENDL evaluation as well as with the experimental data. This can be attributed to the fact that averaging over multiple models can reduce individual model biases and uncertainties, leading to favorable results. However, it should be pointed out that this depends heavily on the diversity and accuracy of the models involved, since without experimental data to calibrate the models, model predictions could lead to inaccurate results. The Zhuraviev (1984) data set at about 22 MeV appears to be an outlier and was therefore excluded.
[Figure 9]
Following the generation of the random cross sections, we compute the likelihood function by combining model predictions with experimental data at each data point, denoted by i, for the channel c. The likelihood function values are then combined with the prior distributions to derive weighted averages and variances at each incident energy or angle. Finally, a smoothing function based on spline interpolation is used to smooth out any "kinks" or discontinuities in the resulting cross-section curves. The final product of the evaluation includes a central value accompanied by the corresponding prior and posterior variances and covariances.
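For the smoothing step, a cubic smoothing spline such as the one below can be used. This is a sketch rather than the exact procedure used in this work; the choice of smoothing factor and of the evaluation grid is an assumption left to the user.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_cross_section(energies, xs_posterior, smoothing=None):
    """Smooth a pointwise-updated excitation function with a cubic smoothing spline.

    energies:     incident energies (MeV), strictly increasing
    xs_posterior: posterior (weighted-average) cross sections at those energies (mb)
    smoothing:    spline smoothing factor; None lets scipy choose a default
    """
    spline = UnivariateSpline(energies, xs_posterior, k=3, s=smoothing)
    return spline(energies)

# usage sketch: replace the kinked posterior curve by its smoothed counterpart
# e_grid, sigma_post = np.loadtxt("posterior_curve.dat", unpack=True)
# sigma_smooth = smooth_cross_section(e_grid, sigma_post, smoothing=len(e_grid))
```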
BMA in the presence of experimental data
Let’s assume that we have only one model set or combination (
Let us suppose we have J competing models,
To standardize the weights, we normalize the likelihood function presented in Eq. 10 to have a maximum of 1 as follows:
If we set the relative likelihood function equal to the file weight (wcik), also called the Bayesian Monte Carlo (BMC) weight, the weight for each considered channel (c), incident energy (i) and file k can be given as:
It is important to note that, instead of using the reduced χ2 for the computation of the weights as presented in Eq. 13, other weight specifications used within the BMA approach, such as the Bayesian information criterion (BIC) [40] or the Akaike information criterion (AIC) [41], could have been employed. These weights were, however, not utilized in this work.
The corresponding weighted variance at incident energy, i, and channel,
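A minimal sketch of this pointwise update is given below. It assumes the common BMC prescription in which the likelihood of a random file is taken as exp(-χ²red/2), normalized so that the best file has a relative likelihood of one, and in which the posterior mean and variance are the weighted first and second moments; the exact normalization used in Eqs. 10 and 13 may differ in detail.

```python
import numpy as np

def bmc_update(sigma_calc, sigma_exp, dsigma_exp):
    """Pointwise update at one incident energy for one channel (sketch).

    sigma_calc: K calculated cross sections, one per random file (mb)
    sigma_exp:  experimental cross sections at this energy (mb)
    dsigma_exp: corresponding experimental uncertainties (mb)
    """
    sigma_calc = np.atleast_1d(np.asarray(sigma_calc, dtype=float))
    sigma_exp = np.atleast_1d(np.asarray(sigma_exp, dtype=float))
    dsigma_exp = np.atleast_1d(np.asarray(dsigma_exp, dtype=float))

    # reduced chi-square of each random file against the experimental points
    chi2 = np.array([np.mean(((s - sigma_exp) / dsigma_exp) ** 2) for s in sigma_calc])

    # likelihood ~ exp(-chi2/2), scaled so that the best file has relative likelihood 1
    weights = np.exp(-0.5 * (chi2 - chi2.min()))
    weights /= weights.sum()                               # normalized file weights

    mean = np.sum(weights * sigma_calc)                    # weighted (posterior) mean
    var = np.sum(weights * (sigma_calc - mean) ** 2)       # weighted variance
    return mean, np.sqrt(var), weights
```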
BMA in the absence of experimental data
In the absence of experimental data, the Bayesian Model Averaging (BMA) solution for a joint model and parameter distribution involves combining only the prior information from multiple models and their parameters, without relying on observed data. As an example, the 58Ni(p,2n) cross section, showing evaluations from three different libraries (ENDF/B-VIII.0, JENDL-5.0 and TENDL-2023), is presented in Fig. 10. The ENDF/B-VIII.0 p+58Ni evaluation was produced using the GNASH code system [42], which utilizes the Hauser-Feshbach statistical model, pre-equilibrium and direct-reaction theories. For that evaluation, the particle transmission coefficients used for the Hauser-Feshbach calculations, as well as for the elastic proton angular distributions, were obtained from spherical optical model calculations, and the gamma-ray transmission coefficients were calculated using the Kopecky-Uhl model. Similar to the TENDL evaluation, the ECIS95 code was used for the optical model calculations. For the JENDL-5.0 evaluation, the CCONE code system [43], which integrates the various nuclear reaction models needed to describe nucleon-, light charged-particle (up to alpha) and photon-induced reactions, was used. For the JENDL-5.0 p+58Ni evaluation, the two-component exciton model with the global parametrization of Koning-Duijvestijn was used. For the level density, the constant temperature and Fermi-gas model with shell energy corrections was used. In the case of the gamma-ray strength functions, the enhanced generalized Lorentzian form was adopted for E1 transitions, while the standard Lorentzian form was adopted for M1 and E2 transitions. For the calculation of the angular distributions of emitted particles, Kalbach systematics were used. For the TENDL-2023 evaluation, the TALYS code system was used with the TALYS default models and parameters.
[Figure 10]
From the figure, a wide spread between the presented evaluations can be observed, indicating a lack of a comprehensive understanding of this cross section. The Bayesian Model Averaging solution, treating each of the evaluations from the different libraries as a different model and assuming that all models have equal weights, would be to take an average over the three evaluations and proceed with it as our best estimate (see Fig. 10).
The posterior distribution of our quantity of interest,
In the absence of experimental data, the posterior distribution is significantly shaped by the prior distribution and any assumptions made about the considered models. Hence, the likelihood function may be constructed based on our prior beliefs about the models. We could, for example, assume that all the considered models have equal weights, or weigh each model based on prior information about the models. In nuclear reaction modelling, it is generally accepted that microscopic models have better predictive power than their phenomenological counterparts. Consequently, in the absence of experimental data, relatively larger weights can be assigned to the microscopic models, reflecting their stronger predictive capabilities, and lower weights to the phenomenological models because of their more limited predictive power. In this work, however, all models were assumed to have equal weights for the channels where experimental data were unavailable.
The average cross section over the models and parameters at incident energy, i, and channel, c, in the absence of experimental data
[Figure 11]
The unbiased estimate of the variance which is a measure of how spread out the distribution of the cross section at each energy, i, for channel, c
[Figure 12]
It can be observed from the figure that the BMA average (in the absence of experimental data) slightly overpredicted most of the available experimental data but compared quite favourably with the TENDL evaluation over the entire energy region. This indicates that the Bayesian Model Average Prediction (BMAP) can give relatively good evaluations even in the absence of experimental data. As mentioned earlier, averaging over multiple models can reduce individual model biases and uncertainties, leading to favorable results. It was further observed that the experimental data fell within the model-parameter uncertainty or spread, as expected. In Fig. 13, the 58Ni(p,α) cross section, showing the Bayesian model average over the prior spread of cross-section curves in the absence of experimental data, is presented. It can be observed from the figure that even though the model average overestimated the experimental data between about 12 and 16 MeV, it compared relatively well with the experimental data over the rest of the considered energy range.
[Figure 13]
Extracting model and parameter uncertainties
A step-by-step algorithm for the extraction of model uncertainties at each incident energy is outlined in Table 3. As mentioned earlier, we start by selecting the distribution from which the models and their parameters are sampled. Next, we vary the models and their parameters simultaneously using the TALYS code system to generate a large set of random cross-section curves as a function of incident energy, and elastic angular distributions as a function of both energy and angle. From the combined spread due to the variation of the models and their parameters, a distribution of the cross section of interest can be obtained at each incident energy i, in the case of the reaction and residual production cross sections, or at each angle, in the case of the elastic angular distributions. The total spread of this distribution can be attributed to the simultaneous variation of both the models and their parameters.
Algorithm |
---|
1: Choose distribution from which models and their parameters would be sampled |
2: Vary multiple models and their parameters simultaneously to produce a large number of random cross sections |
3: Determine the total variance at incident energy |
6: Vary only model parameters around a single model set combination using uniform distributions |
7: Compute the variance at energy i due to only model parameters for each channel c |
8: Extract the uncertainty due to only models |
Next, we determine the variance due to parameter variation alone at a considered energy i. To achieve this, more than 3000 random cross-section curves were produced by varying only model parameters around a single (fixed) set of models. The following model vector, alongside other models not explicitly listed here, was used [22]: ld model 2: Back-shifted Fermi gas model; strength function model 6: Goriely Time (T)-dependent Hartree-Fock-Bogolyubov (HFB); pre-equilibrium model 3: Exciton model using numerical transition rates with optical model for collision probability; preeqspin 2: spin distribution from total level densities; pairmodel 2: compound nucleus pairing correction; widthmode 2: Hofmann-Richert-Tepel-Weidenmüller (HRTW) model.
From the distribution of cross sections or angular distributions at each incident energy of interest, the variance can be calculated. If we assume that there are no strong correlations between the models and the parameters, the total or combined variance of the calculated cross section at energy, i, for channel,
[Figure 14]
Since we can compute the uncertainty due to only parameter variations, the model uncertainties can easily be extracted from Eq. 21 as follows:
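In other words, under the no-correlation assumption the model variance follows by subtracting the parameter-only variance from the total variance at each energy point. A minimal sketch, with a consistency check against Table 5, is given below.

```python
import numpy as np

def extract_model_uncertainty(sigma_total, sigma_param):
    """Model-only 1-sigma uncertainty per energy point, assuming variances add.

    sigma_total: combined (model + parameter) standard deviations
    sigma_param: parameter-only standard deviations (fixed model set)
    """
    var_model = np.asarray(sigma_total, dtype=float) ** 2 - np.asarray(sigma_param, dtype=float) ** 2
    # sampling noise can make the difference slightly negative; clip before the square root
    return np.sqrt(np.clip(var_model, 0.0, None))

# consistency check with Table 5 (58Ni(p,np) at 21.4 MeV): total 108.9, parameter 12.8
# extract_model_uncertainty(108.9, 12.8) ~ 108.1, in line with the tabulated 108.12
```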
Application of BMA methodology
Prior distribution of models and parameters
In this work, we adopted uniform distributions for the prior model and parameter distributions. By applying a uniform distribution to each model type, we assign a constant probability to each model within the lower and upper bounds of the model type under consideration. For example, in the case of the level density model type, the six available ld models were each assigned unique identifiers (ld1, ..., ld6), and models were drawn randomly many times within the assigned lower and upper bounds of each model type. In Figs. 15 and 16, the distributions of the gamma-ray strength functions and the level density models are presented for about 9000 random samples, respectively. To assess whether our model distributions, as shown in Figs. 15 and 16, conformed to uniform distributions, we computed a p-value for each distribution at the 95% confidence level. The obtained p-values were below 0.05, leading us to reject the null hypothesis (H0: the model prior distributions are not uniform) at the 95% confidence level. Applying the same methodology to the different model types resulted in different model vectors and similar conclusions. A total of 100 different model combinations, each used as input to the TALYS code, were produced. An example of the list of models contained in random file number 2025 is provided in Table 4.
Model combination in random file: 2025 |
---|
1: ldmodel 1: Constant Temperature + Fermi gas model (CTM) |
2: ctmglobal y: Flag to enforce global formulae for the Constant Temperature Model (CTM) |
3: strength 8: Gogny D1M HFB+QRPA |
4: widthmode 0: no width fluctuation, i.e. pure Hauser-Feshbach |
5: preeqmode 3: Exciton model: Numerical transition rates with optical model for collision probability |
6: preeqspin 3: the pre-equilibrium spin distribution is based on particle-hole state densities |
7: kvibmodel 1: Model for the vibrational enhancement of the level density |
8: spincutmodel 2: Model for spin cut-off parameter for the ground state level densities |
9: strengthm1 2: Normalize the M1 gamma-ray strength function with that of E1 as fE1/(0.0588 A^0.878) |
10: preeqcomplex y: Flag to use the Kalbach model for pickup, stripping and knockout reactions, in addition to the exciton model, in the pre-equilibrium region. |
[Figure 15]
[Figure 16]
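A sketch of the uniform sampling of model identifiers and of the goodness-of-fit check described above is given below. Only two model types are shown; the other keywords of Table 1 are treated analogously, each over its own identifier range. The scipy-based χ2 test is one possible implementation, and the standard null hypothesis of that particular test is that the sampled counts are uniform.

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng()
N_SAMPLES = 9000

# Uniform sampling of model identifiers for two model types of Table 1
# (other keywords are sampled analogously over their own identifier ranges).
ld_samples = rng.integers(1, 7, size=N_SAMPLES)         # ldmodel 1..6
strength_samples = rng.integers(1, 9, size=N_SAMPLES)   # strength 1..8

# Goodness-of-fit check of the sampled ldmodel frequencies against uniformity
ld_counts = np.bincount(ld_samples, minlength=7)[1:]
stat, p_value = chisquare(ld_counts)
print("ldmodel counts:", ld_counts, " p-value:", round(float(p_value), 3))

# One model combination drawn for a random input file (cf. Table 4)
model_vector = {"ldmodel": int(rng.integers(1, 7)), "strength": int(rng.integers(1, 9))}
print(model_vector)
```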
Thereafter, the TALYS parameters for each model combination, as listed for example in Table 4, were varied within their widths or uncertainties (see Table 2) using the TALYS code system. This process generated approximately 100 random nuclear data files per model combination, resulting in more than 9000 random ENDF-formatted nuclear data files for p+58Ni. It is worth noting that the incident energies considered for p+58Ni ranged from 1 to 100 MeV. In Fig. 18, the prior distribution of the geometrical parameters (
[Figure 18]
[Figure 19]
[Figure 17]
Experimental data
The experimental data used in this work were selected from the threshold up to 100 MeV. The Bayesian Model Averaging (BMA) method proposed was applied to the following cross sections and the elastic angular distributions:
• Reaction cross sections: (p,non-el), (p,n), (p,np), (p,p), (p,α), (p,2p).
• Residual production cross sections: 58Ni(p,x)55Co, 58Ni(p,x)56Co, 58Ni(p,x)56Ni, 58Ni(p,x)57Ni.
• Elastic angular distributions at the following incident energies: 9.51, 16.00, 20.00, 21.30, 35.20, 39.60, 40.00, and 61.40 MeV between 1 and 180 degrees.
The experimental data for each of the categories listed were obtained from the EXFOR database.
Case of one and two experimental points
To illustrate the practical implementation of the BMA method, we consider a single experimental data point for the 58Ni(p,np) cross section at the incident energy i = 24 MeV. The prior distribution at this energy is made up of more than 9000 random cross-section values (in mb) generated through the variation of numerous TALYS models and their parameters. Subsequently, we compared the calculated cross sections with the experimental data at this incident energy by computing a reduced χ2. The resulting file weights for each random file at the given energy were then combined with the prior distribution to obtain the posterior distribution, from which the updated mean and 1σ standard deviation were extracted. In Fig. 20, we present an illustrative example showing the prior (upper left), the prior and posterior (bottom) distributions, and the distribution of file weights (lower left) for each random 58Ni(p,np) cross-section value computed at 24 MeV. In the bottom right panel of the figure, a plot illustrating the convergence of the mean and 1σ standard deviation of the prior distribution is presented. We note that, in the case presented in Fig. 20, both models and their parameters were varied. The prior distribution reflects the cross-section values extracted from the cross-section curves before the inclusion of experimental information, while the posterior distribution represents the cross-section distribution at 24 MeV after taking experimental data into consideration.
[Figure 20]
From Fig. 20, a relatively large prior spread can be observed, as expected. This large prior then narrows around the mean of the experimental data for the posterior distribution. The large prior spread can be attributed to the relatively large non-informative prior used. It can also be observed that the prior distribution is slightly skewed to the right, towards larger cross-section values. This skewed prior distribution was then shaped into an approximately normal distribution after combining it with the experimental data through the likelihood function (see the upper right panel of Fig. 20). Since the prior involves variations of many models and parameters, achieving convergence of the random cross sections at each incident energy can be difficult. From the convergence plot (bottom right of Fig. 20), it can be observed that both the mean and the 1σ standard deviation converge after about 7000 random samples. The final mean value of 252.58 mb is close to the experimental cross-section value of 247 mb for the 58Ni(p,np) cross section at 24 MeV, as expected, and falls within the experimental uncertainty of ±24.7 mb. There is also a significant reduction from the prior 1σ uncertainty of 109 mb to a posterior uncertainty of 25 mb. From the weight distribution presented in the figure, it can be observed that a considerable number of files were assigned low, insignificant weights between 0 and 0.2. This is expected, as the use of a large non-informative prior results in many cross-section curves positioned far from the experimental data and hence in large χ2 values. In Fig. 21, the BMA methodology is applied to two experimental data points of the 58Ni(p,p)58Ni cross section at 80 and 100 MeV. In the figure, the prior values reflect the simple average over the models, while the posterior values denote the BMA values after the inclusion of experimental data. The prior uncertainty band is a combination of both model and parameter uncertainties.
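The convergence behaviour shown in the bottom right panel of Fig. 20 can be reproduced by tracking the running mean and standard deviation as random files are accumulated. The sketch below is illustrative; the input array of prior cross-section samples is assumed to have been extracted from the random files beforehand.

```python
import numpy as np

def running_mean_std(samples):
    """Running mean and 1-sigma standard deviation as random files are accumulated."""
    samples = np.asarray(samples, dtype=float)
    n = np.arange(1, samples.size + 1)
    csum = np.cumsum(samples)
    csum2 = np.cumsum(samples ** 2)
    mean = csum / n
    var = np.maximum(csum2 / n - mean ** 2, 0.0)   # variance of the first n samples
    return mean, np.sqrt(var)

# e.g. samples = prior 58Ni(p,np) cross sections at 24 MeV, one value per random file;
# plotting the two returned arrays against the sample index gives a convergence plot as in Fig. 20.
```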
[Figure 21]
Treating ‘bad’ models in the absence of experimental data
As previously discussed, the presence of ‘bad’ models in the computation of the prior mean, in cases where no experimental data are available, can lead to significant distortions in the shape of the cross-section or angular-distribution curves as well as in the corresponding updated uncertainty bands. It is instructive to note, however, that the BMA approach (in the presence of experimental data) inherently handles the issue of ‘bad’ models by assigning them lower weights based on their disagreement with the experimental data, thereby minimizing their impact on the posterior distribution.
In Fig. 22, for example, we present the 58Ni(p,2p) cross section, highlighting non-smooth curves in the energy range between 17 and 25 MeV. These curves look ‘unphysical’ and are therefore assumed to have been generated with a ‘bad’ model combination. The spread in the cross-section curves is due to the variation of parameters around this ‘bad’ model combination. The curves were compared against experimental data from EXFOR and the TENDL evaluation. The prior mean in the figure was calculated by averaging over all models at each energy point (see the 58Ni(p,2p) cross section in Fig. 7). It can be observed that the non-smooth cross sections deviate from the observed trend of the experimental data as well as from the TENDL evaluation. Despite the non-smoothness of the prior mean curve, it was observed to agree favorably with some experimental data, particularly at threshold energies and at higher energies, especially with the data from Reimer (1998) and Kaufman (1960). The non-smoothness of the prior mean curve is attributed to the inclusion of both ‘good’ and ‘bad’ models in the computation of the averaged cross-section values. A potential solution to ‘bad’ models is to apply Occam’s razor as suggested in Ref. [44], eliminating models that globally perform poorly in their prediction of the experimental data for the considered channels. The non-smooth cross-section curves depicted in Fig. 22 were produced with the following model combination, among other models in the TALYS code [22]:
1. level density model 6: Microscopic level densities (temperature dependent Hartree-Fock-Bogolyubov (HFB), Gogny force) from Hilaire’s combinatorial tables;
2. gamma strength function model 5: Goriely’s hybrid model;
3. width fluctuation correction model 2: Hofmann-Richert-Tepel-Weidenmüller (HRTW);
4. pre-equilibrium model 4: Multi-step direct/compound model;
5. Jeukenne-Lejeune-Mahaux (JLM) model 0: standard Jeukenne-Lejeune-Mahaux (JLM) imaginary potential;
6. mass model 3: HFB-Gogny D1M table;
7. Other default models.
This specific model combination was therefore treated as a ‘bad’ model and consequently excluded from subsequent analyses. It is noteworthy that, in the case of the optical model, the Jeukenne-Lejeune-Mahaux (JLM) semi-microscopic model was utilized instead of the local and global parameterisations of Koning and Delaroche, which are typically used as the default optical model parameterisations in TALYS for non-actinides (as is applicable also in this case). Additionally, it is essential to recognize that certain model combinations may not have a significant impact on, or sensitivity to, the cross sections or angular distributions of interest. For instance, it was observed in Ref. [1] that the use of the HRTW model instead of the Moldauer model has no noticeable impact on proton-induced reaction cross sections for p+59Co between 1 and 100 MeV.
For consistency of the evaluated files, it is important that each complete evaluation obeys the cross-section sum rules. Typically, the total cross section is the sum of the partial cross sections of all possible interactions between a projectile and a nuclide. For neutron-induced reactions, the total cross section is a well-defined concept since neutrons interact mainly through the nuclear force. In the case of charged-particle interactions such as protons (as in this work), however, the computation of the total cross section is not straightforward since, in addition to the nuclear force, the electromagnetic force must be dealt with. Hence, the current version of TALYS does not give total cross sections for proton-induced reactions. In this work, as suggested in Ref. [21], we propose to reassign any excess cross section to "less important channels" with no experimental data. By "less important channels", we refer to channels with relatively small cross sections, whose products hold minimal significance for the scientific community and for which no experimental data are available.
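A minimal bookkeeping sketch of this reassignment is given below. The definition of the excess with respect to a reference sum and the choice of a single sink channel are assumptions of this illustration; in practice the excess could be spread over several minor channels.

```python
def rebalance_partials(partials, sigma_reference, sink_channel):
    """Reassign excess partial cross section to a designated minor channel (sketch).

    partials:        dict channel -> cross section (mb) at one incident energy
    sigma_reference: cross section (mb) the partials are required to sum to
    sink_channel:    'less important' channel without experimental data
    """
    excess = sum(partials.values()) - sigma_reference
    adjusted = dict(partials)
    # absorb the excess (positive or negative) in the sink channel, never below zero
    adjusted[sink_channel] = max(adjusted[sink_channel] - excess, 0.0)
    return adjusted
```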
Results
Table 5 presents the 1σ model and parameter uncertainties for selected incident energies ranging from 15 to 30 MeV in the case of the 58Ni(p,np) cross section. From the table, relatively small parameter uncertainties can be observed from the threshold up to about 18 MeV. The same observation can be made from Fig. 7, where a narrow spread was observed in the low-energy region in the case of parameter-only variation. In contrast, the model uncertainties are relatively large across the entire considered energy region, ranging from 46.44 to 108.12 mb, as can be seen in Table 5. It is generally known that TALYS has difficulty in reproducing experimental data at threshold energies [18]. However, the sensitivity of cross sections to parameter variation at threshold energies in TALYS has been observed to be small, which normally results in the inability of the code to cover some experimental cross sections at these energies.
Incident energy (MeV) | Total uncertainty (1σ, mb) | Model uncertainty (1σ, mb) | Parameter uncertainty (1σ, mb) |
---|---|---|---|
15.7 | 46.5 | 46.44 | 2.5 |
16.0 | 52.9 | 52.84 | 2.9 |
16.2 | 54.4 | 54.27 | 3.0 |
16.8 | 62.5 | 62.35 | 3.7 |
17.1 | 66.1 | 66.00 | 4.1 |
17.3 | 66.9 | 66.81 | 4.3 |
17.7 | 72.0 | 71.86 | 4.8 |
17.9 | 76.0 | 75.87 | 5.1 |
18.2 | 80.9 | 80.72 | 5.5 |
18.4 | 83.9 | 83.73 | 5.9 |
19.0 | 90.3 | 90.05 | 7.0 |
19.1 | 87.9 | 87.57 | 7.2 |
19.3 | 85.1 | 84.76 | 7.7 |
19.5 | 85.4 | 85.01 | 8.3 |
20.0 | 98.7 | 98.18 | 9.9 |
20.3 | 99.6 | 99.05 | 10.6 |
20.5 | 101.2 | 100.64 | 11.0 |
20.9 | 104.1 | 103.44 | 11.9 |
21.0 | 105.4 | 104.75 | 12.1 |
21.2 | 108.8 | 108.06 | 12.5 |
21.4 | 108.9 | 108.12 | 12.8 |
21.5 | 107.2 | 106.40 | 12.9 |
22.1 | 101.5 | 100.61 | 13.7 |
22.6 | 97.9 | 96.93 | 13.9 |
23.4 | 99.4 | 98.42 | 14.1 |
24.0 | 107.6 | 106.67 | 14.3 |
24.5 | 118.2 | 117.28 | 14.5 |
25.3 | 120.1 | 119.27 | 14.4 |
25.8 | 109.4 | 108.43 | 14.2 |
26.3 | 103.2 | 102.27 | 14.0 |
27.0 | 97.8 | 96.85 | 13.8 |
27.5 | 96.6 | 95.59 | 13.7 |
27.9 | 97.2 | 96.23 | 13.6 |
28.7 | 102.6 | 101.74 | 13.5 |
29.1 | 107.2 | 106.38 | 13.5 |
A similar table showing the model and parameter uncertainties extracted for energies from 9 to 60 MeV in the case of the 58Ni(p,non-el) cross section is presented in Table 6. From the table, it can be observed that the parameter uncertainties are generally lower than the model uncertainties, as expected, ranging from 23.6 to 59.8 mb; these values represent 3.4% to 7.4% of the cross section at the considered energies. As expected, the model uncertainties are generally large, ranging from 19.56% to 26.40% of the cross section. An observed trend in the table is the increase in both model and parameter uncertainties with increasing energy: both the model and parameter spreads are narrow in the lower energy region and widen as the energy increases.
Incident energy (MeV) | Total uncertainty (1σ, mb) | Model uncertainty (1σ, mb) | Parameter uncertainty (1σ, mb) |
---|---|---|---|
9.14 | 137.5 | 135.42 | 23.6 |
22.70 | 182.9 | 178.95 | 38.0 |
25.10 | 188.0 | 183.28 | 41.9 |
30.00 | 197.4 | 192.08 | 45.3 |
30.10 | 197.5 | 192.24 | 45.4 |
34.80 | 205.0 | 199.02 | 49.3 |
39.70 | 211.2 | 204.47 | 53.0 |
40.00 | 211.5 | 204.74 | 53.2 |
45.20 | 216.0 | 208.66 | 55.9 |
47.90 | 217.5 | 209.93 | 56.9 |
49.50 | 218.5 | 210.77 | 57.5 |
60.80 | 221.3 | 213.02 | 59.8 |
In Fig. 23, we present the Bayesian Model Averaging (BMA) results for selected cross sections, namely 58Ni(p,np)57Ni, 58Ni(p,γ), 58Ni(p,α) and 58Ni(p,2p), illustrating the prior and posterior means along with their corresponding ±1σ uncertainties. For the (p,α) and (p,γ) cross sections, the prior means appear inaccurate in specific energy ranges where the model uncertainties are large. This is different for the (p,np) cross section, where the uncertainties appear relatively constant over the considered energy region, with the prior mean comparing favorably with the experimental data. The posterior means of all the presented channels consistently reproduce the experimental data; however, the localized adjustments at each energy point carried out in this work introduce discontinuities or kinks in the cross-section curves, particularly noticeable for the 58Ni(p,non-el) cross section. These kinks can be attributed to imperfections in our experimental data and to the absence of experimental correlations in the computation of the chi-square. To smooth out the cross-section curves, cubic spline interpolation was employed. It can be observed that generally smaller posterior uncertainties were obtained after the experimental data were taken into account. This can be attributed to a number of reasons. First, it gives an indication that perhaps the model vectors used have similar performance with respect to the experimental data, and hence similar weights were assigned to each model vector. Additionally, if one of the model vectors (and its parameters) is supported by strong evidence in the data, this model set is assigned large weights compared with the other model vectors; consequently, the uncertainty associated with the choice of the model (and/or parameters) is reduced, leading to a small posterior spread. Furthermore, since a large uninformative prior was used in this work, the posterior distribution is dominated by the experimental data through the likelihood function, and hence the impact of the prior is less significant where experimental data are available. It should also be noted that, since similar nuclear reaction models were used, correlations in the model predictions could result in similar weights; in such cases, the uncertainty in model selection diminishes, resulting in smaller posterior spreads. Although the posterior uncertainties are generally small and, in some instances, smaller than the experimental uncertainties, it is important to note that only 1σ uncertainties are reported here. For example, in Fig. 24, the experimental data point at 10 MeV appears to lie outside the prior uncertainty band of the models. However, if the prior uncertainty were extended to 3σ, the experimental data would fall within the expected uncertainty band (see the 58Ni(p,non-el) cross section in Fig. 7, for example). The visible kinks in the posterior curve, as seen in Fig. 24, necessitate the use of a smoothing function. It is important to acknowledge that the smoothing process can skip some experimental data points, as can be observed in Fig. 24. The posterior uncertainties, which took the differential data into account, are small, and the weighted average cross sections are in close agreement with the experimental data.
[Figure 23]
[Figure 24]
In Fig. 25, we compare the prior and posterior means, along with their corresponding uncertainties, with experimental data and the ENDF/B-VIII.0 evaluation for the residual production cross sections 58Ni(p,x)55Co, 58Ni(p,x)56Co, 58Ni(p,x)56Ni and 58Ni(p,x)57Ni. It is observed that the prior mean, representing the average over all models without incorporating experimental data, outperforms the ENDF/B-VIII.0 evaluation for all the considered cross sections. This is particularly surprising as the prior does not account for experimental data. Additionally, both the smoothed and non-smoothed versions of the posterior were observed to agree reasonably with the experimental data. In Figs. 26 and 27, we present the prior and posterior means, along with their corresponding uncertainties, for the elastic angular distributions at selected incident energies for p+58Ni. As expected, the posterior mean compares favorably with the experimental data. Generally, small uncertainties are observed at small angles, but they increase at high angles. This is consistent with what was observed in Ref. [1], where it was noted that TALYS has difficulties in reproducing experiments at high angles. In Fig. 28, we present an example of 58Ni(p,np) correlations based on the variation of model parameters only, and on the variation of both models and their parameters. Similar correlation plots are presented in Fig. 29 for the 58Ni(p,2p) cross section. Additionally, in Fig. 30, correlation matrices are presented for the residual production cross sections 58Ni(p,x)57Ni (left) and 58Ni(p,x)56Ni (right), based on the variation of both models and their parameters.
[Figure 25]
[Figure 26]
[Figure 27]
[Figure 28]
[Figure 29]
[Figure 30]
As expected, high correlations are observed in both cases, especially close to the diagonal. The correlations observed when varying only model parameters can be attributed to the use of the same models with varying parameters. In the case of the variation of both models and their parameters, although different models were used, the models in each combination make use of the same or similar model parameters, inputs and approaches in their solutions. These factors introduce energy-energy correlations in the prior distribution. These prior correlations are taken into account through the simultaneous variation of both the models and the parameters. Additionally, as a consequence of the method, posterior correlations and covariances can be obtained. These prior and posterior correlations can be utilized for sampling and the subsequent generation of random cross sections for the purpose of nuclear data uncertainty propagation to applications [45].
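The energy-energy correlation matrices shown in Figs. 28-30 can be obtained directly from the ensemble of random excitation functions. The sketch below assumes the random files have already been collected into a two-dimensional array, one row per random file.

```python
import numpy as np

def energy_energy_correlation(xs_samples):
    """Covariance and correlation matrices between incident energies (sketch).

    xs_samples: array of shape (n_random_files, n_energies), one excitation function per row
    """
    xs_samples = np.asarray(xs_samples, dtype=float)
    cov = np.cov(xs_samples, rowvar=False)       # energy-energy covariance matrix
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)              # normalize to correlation coefficients
    return cov, corr
```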
Since we select models at each incident energy point rather than globally, the proposed BMA method naturally accounts for over-fitting as well as under-fitting, as over-fitted (and under-fitted) models are normally assigned lower posterior probabilities and their contribution to the final evaluation is therefore limited by the model averaging process. It is, however, important to note that the choice of models and their distributions matters, as overly complex models risk over-fitting.
Conclusion
In the traditional BMC approach, a single "best" model is often chosen to make predictions. However, this approach has been observed to be sensitive to the specific choice of the model, and the uncertainties related to model selection are not explicitly considered. In this work, we proposed a nuclear data evaluation method based on Bayesian Model Averaging (BMA) tailored to the fast energy region. Our approach involves the use of a very large non-informative prior derived from sampling numerous models along with their parameters. In addition, instead of selecting a single "winning" model set for the entire energy range of interest, we select the models locally at each incident energy based on comparison with experimental data. The final evaluation is a weighted average over all considered models, with weights determined by the likelihood function values. Since the cross sections and angular distributions are updated on a per-energy-point basis, the BMA approach typically results in discontinuities, or "kinks", in the cross-section or angular-distribution curves. To address these kinks, a smoothing function was applied. In the future, we intend to explore other methods for smoothing the cross-section curves, such as Nadaraya-Watson kernel regression using energy-dependent weights. Furthermore, both prior and posterior covariances were obtained for the evaluations carried out in this work. The proposed method has been applied to the evaluation of p+58Ni in the 1 to 100 MeV energy range. The results demonstrate favorable comparisons with experimental data as well as with the TENDL-2023 evaluation.
References
Iterative Bayesian Monte Carlo for nuclear data evaluation. Nucl. Sci. Tech. 33, 50 (2022). https://doi.org/10.1007/s41365-022-01034-w
Bayesian Monte Carlo method for nuclear data evaluation. Eur. Phys. J. A 51, 184 (2015). https://doi.org/10.1140/epja/i2015-15184-x
Evaluation of the covariance matrix of neutronic cross sections with the Backward-Forward Monte Carlo method, in Proceedings of the International Conference on Nuclear Data for Science and Technology.
A new formulation of the Unified Monte Carlo Approach (UMC-B) and Cross-Section Evaluation for the dosimetry reaction 55Mn (n, γ) 56Mn. J. ASTM International 9(3), 1-12 (2012). https://doi.org/10.1520/JAI104115
Unified Monte Carlo and mixed probability functions. J. Korean Phys. Society 59(2), 1284-1287 (2011). https://doi.org/10.3938/jkps.59.1284
Combining total Monte Carlo and unified Monte Carlo: Bayesian nuclear data uncertainty quantification from auto-generated experimental covariances. Prog. Nucl. Energy 96, 76-96 (2017). https://doi.org/10.1016/j.pnucene.2016.11.006
How to randomly evaluate nuclear data: A new data adjustment method applied to 239Pu. Nucl. Sci. Eng. 169, 68 (2011). https://doi.org/10.13182/NSE10-66
Uncertainty study of nuclear model parameters for the n+56Fe reactions in the fast neutron region below 20 MeV. Nucl. Data Sheets 118, 346-348 (2014). https://doi.org/10.1016/j.nds.2014.04.076
Evaluation and adjustment of the neutron-induced reactions of 63,65Cu. Nucl. Sci. Eng. 170, 265 (2012). https://doi.org/10.13182/NSE11-37
Towards sustainable nuclear energy: Putting nuclear physics to work. Ann. Nucl. Energy 35, 2024-2030 (2008). https://doi.org/10.1016/j.anucene.2008.06.004
In search of the best nuclear data file for proton induced reactions: varying both models and their parameters. EPJ Web Conf. 247, 15011 (2021). https://doi.org/10.1051/epjconf/202023913005
Bayesian model selection and model averaging. J. Math. Psychol. 44, 92-107 (2000). https://doi.org/10.1006/jmps.1999.1278
Model selection and accounting for model uncertainty in linear regression models. J. Am. Stat. Assoc. 89(428), 1535-1546 (1994). https://doi.org/10.1080/01621459.1994.10476894
On the use of Bayesian Monte-Carlo in evaluation of nuclear data. EPJ Web Conf. 146, 02007 (2017). https://doi.org/10.1051/epjconf/201714602007
Covariance matrices for nuclear cross-sections derived from nuclear model calculations. Report ANL/NDM-159.
TENDL: Complete nuclear data library for innovative nuclear science and technology. Nucl. Data Sheets 155, 1-55 (2019). https://doi.org/10.1016/j.nds.2019.01.002
TENDL-based evaluation and adjustment of p+111Cd between 1 and 100 MeV. Appl. Radiat. Isotopes 198,
TALYS: modeling of nuclear reactions. Eur. Phys. J. A 59, 131 (2023). https://doi.org/10.1140/epja/s10050-023-01034-3
The art of collecting experimental data internationally: EXFOR, CINDA and the NRDC network, in
Potential sources of uncertainties in nuclear reaction modeling. EPJ Nucl. Sci. Technol. 4, 16 (2018). https://doi.org/10.1051/epjn/2018014
From flatness to steepness: Updating TALYS covariances with experimental information. Ann. Nucl. Energy 73, 7-16 (2014). https://doi.org/10.1016/j.anucene.2014.06.016
Conception and software implementation of a nuclear data evaluation pipeline. Nucl. Data Sheets 173, 239-284 (2021). https://doi.org/10.1016/j.nds.2021.04.007
Treating model defects by fitting smoothly varying model parameters: Energy dependence in nuclear data evaluation. Ann. Nucl. Energy 120, 35-47 (2018). https://doi.org/10.1016/j.anucene.2018.05.026
A first sketch: Construction of model defect priors inspired by dynamic time warping. EPJ Web Conf. 211, 07005 (2019). https://doi.org/10.1051/epjconf/201921107005
Differential cross sections and the impact of model defects. EPJ Web Conf. 111, 09001 (2016). https://doi.org/10.1051/epjconf/201611109001
Impact of model defect and experimental uncertainties on evaluated output. Nucl. Instrum. Meth. Phys. A 723, 163-172 (2013). https://doi.org/10.1016/j.nima.2013.05.005
Consistent procedure for nuclear data evaluation based on modeling. Nucl. Data Sheets 109, 2762-2767 (2008). https://doi.org/10.1016/j.nds.2008.11.006
Bayesian model averaging for linear regression models. J. Am. Stat. Assoc. 92(437), 179-191 (1997). https://doi.org/10.1080/01621459.1997.10473615
Bayesian model selection in social research. Sociol. Methodol. 25, 111-163 (1995).
Bayesian averaging of computer models with domain discrepancies: a nuclear physics perspective (2019). arXiv:1904.04793
Covariances from model variation: Application to quantities for astrophysics. EPJ Web Conf. 281, 00005 (2023). https://doi.org/10.1051/epjconf/202328100005
Machine learning in nuclear physics at low and intermediate energies. Sci. China Phys. Mech. Astron. 66,
Prediction of nuclear charge density distribution with feedback neural network. Nucl. Sci. Tech. 33, 153 (2022). https://doi.org/10.1007/s41365-022-01140-9
Investigating high-energy proton-induced reactions on spherical nuclei: Implications for the preequilibrium exciton model. Phys. Rev. C 103,
Many-body theory of nuclear matter. Phys. Rep. 25(2), 83-174 (1976). https://doi.org/10.1016/0370-1573(76)90017-X
On the use of integral experiments for uncertainty reduction of reactor macroscopic parameters within the TMC methodology. Prog. Nucl. Energy 88, 43-52 (2016). https://doi.org/10.1016/j.pnucene.2015.11.015
Estimating the dimension of a model. Ann. Statistics 6(2), 461-464 (1978). https://doi.org/10.1214/aos/1176344136
Information theory and an extension of the maximum likelihood principle. In
Comprehensive Nuclear Model Calculations: Introduction to the Theory and Use of the GNASH Code. Report LA-12343-MS (1992).
The CCONE code system and its application to nuclear data evaluation for fission and other reactions. Nucl. Data Sheets 131, 259-288 (2016). https://doi.org/10.1016/j.nds.2015.12.004
Model selection and accounting for uncertainty in graphical models using Occam’s window. J. Am. Stat. Assoc. 89, 1535-1546 (1994). https://doi.org/10.1080/01621459.1994.10476894
Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor. Ann. Nucl. Energy 75, 26-37 (2015). https://doi.org/10.1016/j.anucene.2014.07.043