Introduction
Through decades of intellectual development, mathematical modeling and simulation (M&S) has emerged as a powerful tool that promises to revolutionize engineering and science. M&S is expected to describe a system, on the basis of which further research can be conducted. For example, the Consortium for Advanced Simulation of Light Water Reactors (CASL) was established by the US Department of Energy (DOE) in 2010, with the goal of providing M&S capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and ensure nuclear safety [1].
Numerical simulations can differ significantly from experimental observations, and minimizing this difference has long been an important topic in engineering practice [2-4]. The gap between model and reality is particularly pronounced in reactor physics, primarily for the following reasons:
(1) The complex nature of nuclear phenomena. A nuclear system may involve neutron transport, thermal hydraulics, and fuel performance, among others, which makes it difficult to accurately model the neutronic behavior. Theoretically, this can be well described by the Boltzmann transport equation, incorporating coefficients derived from various experimentally evaluated neutron interaction cross sections. These cross sections have strong, discontinuous behavior in space due to material heterogeneities and extreme variations with energy due to resonance phenomena associated with compound nucleus formation [5-7].
(2) Inaccuracy of input parameters. A model requires input design data or explanatory data, such as geometry, composition, and control parameters, and input model parameters, including physical constants such as interaction cross sections and material thermal and mechanical properties. In particular, the two-level approach to reactor physics calculations (i.e., a micro-level model for determining macroscopic cross sections and a macro-level model for determining the final responses of interest) may lead to inexact input parameters [5-7]. As a consequence, inexact parameters will clearly cause inaccurate outputs.
With the advancement of science and computational power, we can probably make fewer simplifications and approximations in our models, resulting in smaller differences, although the resources spent must still be considered for economic reasons. As we reduce this difference, it becomes increasingly dominated by the input parameters.
In particular, nuclear data, such as neutron cross sections and resonance parameters, are evaluated by combining approximate nuclear physics with experimental observations, both of which carry epistemic and aleatoric uncertainties. These uncertainties then propagate through a neutronics model, creating differences and uncertainties in its predictions [8]. Re-evaluating the nuclear data would clearly be the ideal remedy. Unfortunately, the nuclear evaluation process is long and costly, making this unrealistic; for example, [9] estimated the cost of a single new observation of a nuclear datum at 400,000 US dollars.
The uncertainty of the input parameters causes uncertain behavior in the model, motivating the development of three fields in statistics: uncertainty quantification (UQ), sensitivity analysis (SA), and data assimilation (DA) [10]. These are beneficial in engineering design, with the potential to significantly reduce unnecessary costs. For example, knowledge of the sensitivity of attributes has been used to reduce costly design margins.
Data assimilation assimilates previous experience or experimental observations to reduce the differences and uncertainties in model predictions. It was first proposed in the field of meteorology and is widely used to predict meteorological phenomena, such as hurricanes [11]. Data assimilation methods can be roughly divided into two categories: variational data assimilation [12-14] and sequential data assimilation [15-17], although hybrid data assimilation techniques have also emerged [18-20]. Bayesian approaches to data assimilation were established after [21, 22], and [23] provided an overview of the Bayesian perspective, discussing several approaches from this viewpoint. In neutronics, data assimilation is primarily used to predict the bias of critical systems [24], thereby reducing the uncertainty of advanced reactor designs [25]. In recent years, research interest in data assimilation in neutronics has mainly focused on the adjustment of nuclear data, which can establish cross-correlations between nuclear data or nuclides that are traditionally unavailable [26, 27]. In addition, new methods based on stochastic sampling of input parameters have been proposed, such as MOCABA [28] and BMC [29], and D. Siefman showed that these two methods yield results similar to each other and to the traditional method known as generalized linear least squares (GLLS) [30]. Readers can refer to [31-33] for Bayesian applications in thermal hydraulics.
The method proposed in this study involves sampling a probability distribution, for which we use MCMC. The MCMC family began with the Metropolis method proposed by Metropolis et al. in 1953 and was extended by W.K. Hastings in 1970. In 1984, S. Geman et al. demonstrated how the method, known as the Gibbs sampler, can be adapted to high-dimensional problems that arise in Bayesian statistics [34]. In 1987, S. Duane et al. combined MCMC and molecular dynamics methods into what is known as the hybrid Monte Carlo method or Hamiltonian Monte Carlo (HMC) [35]. Later, scholars developed a series of improved samplers based on HMC, such as the No-U-Turn Sampler (NUTS) [36] and stochastic gradient Langevin dynamics (SGLD) [37].
Although the Bayesian approach has been widely used in data assimilation, little research has addressed the selection of the prior distribution, which is generally taken as the probability representation available before experimental observations and is often unsatisfactory. The prior distribution evidently affects the results of the method. This study aims to address this issue, namely, to provide better prior distributions as initial input for classical data assimilation methods. For example, recent work [38] applied a Bayesian neural network to predict the beta-decay lifetimes of atomic nuclei and their uncertainties; because that method relies on prior distributions of parameters, applying the method proposed here beforehand may improve its performance through more accurate priors. Moreover, most data assimilation methods only provide point estimates of parameters; however, what if nuclear data are actually probabilistic? In fact, certain parameters in our understanding resemble distributions or random variables, such as the number of neutrons released per fission and the change in the concentration of nuclear fuel over a short period due to fuel depletion. We attempt to fill this research gap and take a first step toward such distribution inference. Therefore, we propose an ensemble Bayesian method. The following points are also worth mentioning:
(1) The proposed method is simulation-based. As strategies for the design and evaluation of complex nuclear systems have shifted from heavy reliance on expensive experimental validation to highly accurate numerical simulations, more attention has been paid to simulation-based approaches. This allows our method to play a role in guiding the overall system design as well as optimizing the setup of the validating benchmark experiments.
(2) The proposed method has good compatibility. Because numerous legacy codes are used extensively in the design process and are expected to persist as integral parts of nuclear design calculations, our method can be incorporated into legacy codes, making it tractable for nuclear systems simulation.
The remainder of this paper is organized as follows. In Sect. 2, we describe the parameter distribution inference problem in a general manner and describe the corresponding experiment. In Sect. 3, we first introduce the Bayesian inference and Metropolis-Hastings algorithms, followed by the core algorithm. In Sect. 4, we analyze the convergence of the method and illustrate a series of numerical tests on the proposed method, including a finite cylindrical reactor and the 2D IAEA benchmark problem. Finally, we provide a brief conclusion in the last section.
Problem Description
When modeling a physical phenomenon, the first step is to introduce reliable theories, such as conservation laws and transport theory. In nuclear reactor physics, there are many classic models such as the finite cylindrical reactor and the 2D IAEA benchmark problem, which will also be presented in Sect. 4. However, it is worth emphasizing that the method proposed in this study is valid, regardless of the theories introduced. For clarity, we skip this step and consider the following general system:
f represents the intrinsic functional relationship between these variables, which models the corresponding physical phenomenon.
Perspective of Random Variables
In current research, most data assimilation methods only provide point estimations for the model parameters μ. However, certain parameters in our understanding resemble distributions or random variables, such as the number of neutrons released per fission and the change in the concentration of nuclear fuel over a short period due to fuel depletion. Here, we deal with the latter case. Thus,
In the following discussion, we are concerned with the overall statistical regularity of the parameter μ within the interval [a,b] and will always treat μ as a random vector with an unknown real joint distribution. Our goal is to estimate this distribution using observational data.
Experimental Setup
To achieve our goal, a special type of experimental setup was designed to obtain the observational data required by the method proposed in the next section. Specifically, we obtained data in the following manner:
(i) Choose m spatial positions:
(ii) Repeatedly sample k values of θ:
(iii) At each θj, take m observations
Because we used m sensors, we obtained the following m observation equations:
After the previous steps, we obtain m× k observations
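To make this setup concrete, the following minimal sketch generates synthetic observations in exactly this manner for a generic forward model; the model `f`, the noise level `sigma_noise`, and the sampling distribution of θ are hypothetical placeholders, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, theta):
    """Hypothetical forward model relating positions x and parameters theta."""
    return theta[0] * np.cos(np.pi * x / theta[1])

m, k = 5, 3                                   # (i) m positions, (ii) k parameter samples
x_positions = np.linspace(0.5, 7.5, m)        # (i) choose m spatial positions
theta_samples = rng.normal([20.0, 18.0], [1.0, 0.5], size=(k, 2))  # (ii) sample k values of theta
sigma_noise = 0.01                            # assumed observation-noise level

# (iii) at each theta_j, take one noisy observation per position -> m x k observations in total
observations = np.array([f(x_positions, theta_j) + rng.normal(0.0, sigma_noise, size=m)
                         for theta_j in theta_samples]).T   # rows: sensors, columns: theta samples
```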
Ensemble Bayesian Method
The Bayesian approach has been widely used in data assimilation and underlies the method proposed in this study. We first provide an overview of Bayesian inference.
Bayesian Inference
Let X and Y be the quantities of interest that cannot and can be directly observed, respectively; the observations of Y make up our data. From the perspective of Bayesian inference, there exists a joint probability distribution of all unobserved and observable quantities, denoted by p(x,y). Bayesian inference determines the conditional distribution of the unobserved quantities of interest given the observed data. This is formally accomplished by applying the Bayesian formula:
We obtain the formula for the posterior distribution, which we will sample later. Commonly used sampling algorithms include adaptive rejection sampling. However, in many cases, there is no analytical form for the posterior, which makes it difficult to use many sampling algorithms. To overcome this problem, we introduce the MCMC method, a technique for sampling probability distributions. Here, we consider the Metropolis-Hastings algorithm, which has unique advantages for posterior distribution sampling.
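As a concrete, low-dimensional illustration of the formula, the sketch below evaluates an unnormalized posterior on a grid and normalizes it numerically; the Gaussian prior, Gaussian likelihood, and data values are hypothetical. When this explicit normalization is impractical, one resorts to sampling methods such as MCMC.

```python
import numpy as np
from scipy import stats

y = np.array([1.9, 2.1, 2.3])             # hypothetical observations of Y
grid = np.linspace(-5.0, 5.0, 2001)       # grid over the unobserved quantity x

prior = stats.norm.pdf(grid, loc=0.0, scale=2.0)                          # p(x)
likelihood = np.prod(stats.norm.pdf(y[:, None], loc=grid[None, :], scale=0.5), axis=0)  # p(y|x)

unnormalized = likelihood * prior                                         # numerator of Bayes' formula
posterior = unnormalized / (unnormalized.sum() * (grid[1] - grid[0]))     # normalized p(x|y)
```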
Metropolis-Hastings Algorithm
The classical Metropolis-Hastings algorithm [40] was first proposed in the early 1950s and extended by W.K. Hastings in 1970. Because the posterior distribution that we derive later is continuous, the algorithm for a continuous state space is given below.
Let π be the probability density function to be sampled and π(s) be its value at s. Let T be the transition function of any irreducible Markov chain with the same state space as π, and let T(s,t) be the value of the corresponding conditional density at t given s. Furthermore, we assume that we know how to sample from T. The chain T is used as a proposal chain, generating candidate elements that the algorithm decides whether to accept. The steps of the Metropolis-Hastings algorithm are presented in Algorithm 1.
Algorithm 1: Metropolis-Hastings algorithm
Input: initial value μ0.
Output: μ1, μ2, ….
for u = 1, 2, … do
  Choose a new candidate state μ′ according to T; that is, choose μ′ with probability density T(μu-1, μ′);
  Calculate the acceptance probability
    α(μu-1, μ′) = min{1, [π(μ′) T(μ′, μu-1)] / [π(μu-1) T(μu-1, μ′)]};   (1)
  Choose a number w from the uniform distribution U(0,1);
  if w ≤ α then
    μu ← μ′;
  else
    μu ← μu-1;
  end
end
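A minimal Python sketch of Algorithm 1 for a one-dimensional target density is given below; the Gaussian random-walk proposal and the example target are illustrative choices rather than the settings used later in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def metropolis_hastings(log_pi, mu0, n_steps, proposal_scale=0.5):
    """Random-walk Metropolis-Hastings for a 1-D target density pi, supplied as log_pi.

    With a symmetric Gaussian proposal, T(s, t) = T(t, s), so the acceptance
    probability reduces to min(1, pi(mu') / pi(mu_{u-1})).
    """
    chain = np.empty(n_steps + 1)
    chain[0] = mu0
    for u in range(1, n_steps + 1):
        mu_prev = chain[u - 1]
        mu_cand = mu_prev + proposal_scale * rng.standard_normal()   # candidate state mu'
        log_alpha = min(0.0, log_pi(mu_cand) - log_pi(mu_prev))      # log acceptance probability
        chain[u] = mu_cand if np.log(rng.uniform()) <= log_alpha else mu_prev
    return chain

# Example target: an unnormalized two-component Gaussian mixture.
log_pi = lambda x: np.logaddexp(stats.norm.logpdf(x, -2.0, 1.0), stats.norm.logpdf(x, 3.0, 0.5))
samples = metropolis_hastings(log_pi, mu0=0.0, n_steps=5000)
```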
For the Metropolis-Hastings proposal distribution, if we choose a symmetric distribution centered at the current state, such as a uniform distribution on an interval of length two centered at the current state or a Gaussian distribution whose mean is the current state, then T(μu-1, μ′) = T(μ′, μu-1), and Eq. (1) simplifies to α(μu-1, μ′) = min{1, π(μ′)/π(μu-1)}.
When π is the joint probability density function of a multivariate random variable, a proposal chain T from which we know how to sample is difficult to obtain. In this case, the single-component Metropolis-Hastings algorithm is used. We denote the components of μ by μ(1), μ(2), …, μ(n).
Algorithm 2: Single-component Metropolis-Hastings algorithm
Input: initial values μ0 = (μ0(1), μ0(2), …, μ0(n)).
Output: μ1, μ2, ….
for u = 1, 2, … do
  for v = 1, 2, …, n do
    Choose a candidate state β(v) of μ(v) according to the one-dimensional proposal distribution Tv;
    Calculate the acceptance probability αv, analogous to Eq. (1) with π replaced by the full conditional density of the v-th component;
    Choose a number w from a uniform distribution U(0,1);
    if w ≤ αv then
      accept β(v) as the new value of μ(v);
    else
      keep the current value of μ(v);
    end
  end
end
In reality, we can directly choose a one-dimensional distribution from which we know how to sample, such as a uniform or Gaussian distribution, as the one-dimensional proposal distribution Tv for each component.
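The sketch below illustrates Algorithm 2 with a one-dimensional Gaussian proposal for each component; the two-dimensional correlated Gaussian target is only an example, not one of the reactor models.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(mu):
    """Example target: an unnormalized 2-D correlated Gaussian density."""
    cov_inv = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 1.0]]))
    d = mu - np.array([2.0, -1.0])
    return -0.5 * d @ cov_inv @ d

def single_component_mh(log_pi, mu0, n_steps, proposal_scales=(0.5, 0.5)):
    """Update one component of mu at a time, keeping the others fixed."""
    mu = np.array(mu0, dtype=float)
    chain = [mu.copy()]
    for _ in range(n_steps):
        for v in range(mu.size):
            beta = mu.copy()
            beta[v] += proposal_scales[v] * rng.standard_normal()  # 1-D Gaussian proposal for component v
            # Symmetric proposal: the acceptance ratio uses the joint density,
            # which is proportional to the full conditional of component v.
            if np.log(rng.uniform()) <= min(0.0, log_pi(beta) - log_pi(mu)):
                mu = beta
        chain.append(mu.copy())
    return np.array(chain)

chain = single_component_mh(log_pi, mu0=[0.0, 0.0], n_steps=2000)
```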
Ensemble Bayesian Algorithm
In this section, we derive the core method combined with the experimental setup (observation data) in Sect. 2.2, which includes three steps: Bayesian inference, Metropolis-Hastings sampling, and performance evaluation.
Bayesian Inference
Because
For each θj, there are several observations
As
From the previous process, we obtain posterior distributions of all μj. These are joint probability density functions of multivariate random variables and are continuous; therefore, they are suitable for the single-component Metropolis-Hastings algorithm presented in Algorithm 2.
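Since the exact likelihood depends on the forward model and the noise model, the following sketch only shows one plausible way to assemble the log posterior of μj from its m observations, assuming independent Gaussian observation noise and a Gaussian prior; both assumptions are illustrative.

```python
import numpy as np
from scipy import stats

def make_log_posterior(x_positions, y_j, forward_model, sigma_noise, prior_mean, prior_cov):
    """Build the log posterior of mu_j from the m observations y_j taken under theta_j.

    Assumes y_ij = forward_model(x_i, mu_j) + Gaussian noise and a Gaussian prior on mu_j.
    """
    prior = stats.multivariate_normal(mean=prior_mean, cov=prior_cov)

    def log_posterior(mu):
        residuals = y_j - forward_model(x_positions, mu)
        log_likelihood = np.sum(stats.norm.logpdf(residuals, scale=sigma_noise))
        return log_likelihood + prior.logpdf(mu)     # log likelihood + log prior (up to a constant)

    return log_posterior
```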
Metropolis-Hastings Sampling
As mentioned earlier, we intend to use a single-component Metropolis-Hastings algorithm to construct a Markov chain
(i) Choose a new state β(v+1) according to the Gaussian proposal distribution
(ii) Decide whether to accept β(v+1) or not. Let
Let Ncon be the number of convergence steps, Ntot be the total number of iteration steps. Denote
MCMC is known to suffer from dependence between samples because the samples are generated through a Markov chain. However, our technical path is to examine these samples as a whole and then obtain an overall estimation of the parameters; thus, the impact of these correlations on the method is minimal.
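A sketch of this ensemble step is given below: one chain is run per sampled θj, the first Ncon states of each chain are discarded as burn-in, and the remaining states of all chains are pooled into a single frequency distribution. The sampler argument is assumed to be an MCMC routine such as the single-component sampler sketched earlier.

```python
import numpy as np

def ensemble_frequency_distribution(log_posteriors, mu_init, sampler, n_con=100, n_tot=1000):
    """Pool the post-burn-in MCMC samples of every chain into one ensemble.

    log_posteriors : list of log-posterior functions, one for each theta_j
    sampler        : an MCMC routine, e.g. the single-component MH sketched above
    """
    pooled = []
    for log_post in log_posteriors:
        chain = sampler(log_post, mu_init, n_steps=n_tot)   # one chain per theta_j
        pooled.append(chain[n_con:])                        # discard the first Ncon states
    pooled = np.concatenate(pooled, axis=0)                 # ensemble of all retained samples

    # The ensemble mean and covariance parameterize the Gaussian estimate used later.
    return pooled, pooled.mean(axis=0), np.cov(pooled, rowvar=False)
```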
Performance Evaluation
To quantify the quality of the estimation, we introduced the Kullback-Leibler divergence [41], which has been widely used in information theory. Let p(x) and q(x) be two probability density functions over Rn. The Kullback-Leibler divergence (KL divergence) between the distribution p(x) and q(x) is defined as
Because the Gaussian distribution is the most common type of distribution and is uniquely determined by its expectation and variance, we propose estimating the parameter using a Gaussian distribution whose expectation and variance are the mean and variance of the frequency distribution, respectively. We then evaluate the performance of the estimation by comparing the KL divergence between the prior distribution and the real distribution with that between this Gaussian estimate and the real distribution.
Suppose p(x) and q(x) are the density functions of the Gaussian distributions N(m1, σ1²) and N(m2, σ2²), respectively. The KL divergence then has the closed form D(p‖q) = ln(σ2/σ1) + (σ1² + (m1 − m2)²)/(2σ2²) − 1/2.
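A minimal sketch evaluating this closed form is given below; the numerical values for the real, prior, and fitted distributions are placeholders, not the values used in the experiments.

```python
import numpy as np

def kl_gaussian_1d(m1, s1, m2, s2):
    """KL divergence D(p || q) between N(m1, s1^2) and N(m2, s2^2)."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2.0 * s2**2) - 0.5

# Placeholder values: real distribution, prior, and the Gaussian fitted to the
# ensemble frequency distribution (its sample mean and standard deviation).
m_real, s_real = 8.0, 0.30
m_prior, s_prior = 8.5, 0.60
m_fit, s_fit = 8.02, 0.33

print(kl_gaussian_1d(m_prior, s_prior, m_real, s_real))   # KL(prior || real)
print(kl_gaussian_1d(m_fit, s_fit, m_real, s_real))       # KL(estimate || real)
```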
Numerical Examples
In this section, we examine the effectiveness of the proposed method. Two models in nuclear reactor physics, namely the finite cylindrical reactor and the 2D IAEA benchmark problem, are tested below. The first test case, adapted from [42], has an analytical solution; the 2D IAEA benchmark problem is a classical benchmark problem in nuclear physics [42] and has only a numerical solution. To verify the effectiveness of the distribution estimation method, some parameters in these models were assumed to be uncertain in the following tests:
The Finite Cylindrical Reactor
A finite cylindrical reactor is a multiplying system of a uniform reactor in the shape of a cylinder of physical radius R and height H. Its behavior is modeled using the following two-dimensional monoenergetic diffusion equation:
By replacing the Laplacian with its cylindrical form, as shown in Fig. 1, the analytical solution of (16) is as follows when keff=1:
Our goal is to estimate the distribution of certain uncertain parameters from the observations. Later, we compare two alternative schemes and examine the convergence of the method by applying them to single-parameter estimates, followed by multiparameter estimates. We set A=20,He=18 and use the ensemble Bayesian method proposed in the previous section to estimate the distribution of the parameter Re. In other words, we consider the following system:
To prepare the observation data for our experiments, we first select m positions at equal intervals in [0.5,7.5] and repeatedly obtain the observations for the corresponding positions under k pairs of μ sampled from the real distribution. Suppose
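A sketch of this data-generation step is given below, assuming the standard finite-cylinder flux shape φ(r,z) = A J0(2.405 r/Re) cos(π z/He) evaluated at an assumed axial sensor position; the "real" distribution of Re and the noise level are hypothetical placeholders.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(2)

A, He = 20.0, 18.0           # fixed parameters from the text
z_obs = 2.0                  # assumed axial position of the sensors (hypothetical)
sigma_noise = 0.05           # assumed observation-noise level (hypothetical)

def flux(r, Re):
    """Assumed analytical flux shape of the finite cylindrical reactor."""
    return A * j0(2.405 * r / Re) * np.cos(np.pi * z_obs / He)

m, k = 200, 200
r_positions = np.linspace(0.5, 7.5, m)          # m positions at equal intervals in [0.5, 7.5]
Re_samples = rng.normal(8.0, 0.3, size=k)       # hypothetical "real" distribution of Re

observations = np.array([flux(r_positions, Re_j) + rng.normal(0.0, sigma_noise, size=m)
                         for Re_j in Re_samples])   # shape (k, m)
```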
Comparison of Two Alternative Schemes
In this subsection, we compare these two schemes. Because
Figure 2(a) and (b) present the results of the first and second schemes, respectively, with m=200, k=200, Ncon=100. We can see that although the frequency distributions obtained by these two schemes have almost the same expectation, their variances are quite different; that is, the second scheme will obtain a variance much closer to the real case. We believe this was caused by the loss of observation information during the averaging step.
More importantly, the estimation of the second scheme is intuitively much better than that of the first. This is because more samples are obtained in the second scheme, resulting in a more robust estimation.
Convergence Analysis
In this subsection, we discuss the effect of the parameters of the method on the result (i.e., the number of sensors m, number of samples k, and number of convergence steps of the Metropolis-Hastings algorithm Ncon) and analyze the convergence of the method by setting
Figure 3 shows the effect of Ncon on the performance of the proposed method under different settings of (m,k). It shows that the Metropolis-Hastings sampling used in the ensemble Bayesian method converges rapidly: once Ncon is reasonably large, such as Ncon=100, further increases hardly affect the performance of the method.
Figure 4 shows the effect of k on the performance of the method under different settings of m when Ncon = 100 is fixed. This shows that when k is small, there is a fluctuation in performance, and as k increases, the performance tends to stabilize; that is, the expectation and variance tend to be closer to the real values, which are much closer than the prior values.
Figure 5 shows the effect of m on the performance of the method when Ncon=100 and k=200 are fixed. It shows that as m increases, the performance of the method improves; that is, the expectation and variance converge to neighborhoods of the real values, and this trend is particularly pronounced for the expectation.
After applying the ensemble Bayesian method by setting
Estimation of Several Parameters
In this section, we test the effectiveness of the method for estimating several parameters simultaneously, because in many cases several parameters are uncertain at the same time. We now use the ensemble Bayesian method proposed in this study to simultaneously estimate the distributions of the three parameters A, Re, and He. Consider the following system:
We experimented with simulated datasets. To prepare the observational data for our experiments, we first selected 200 positions at equal intervals in [0.5,7.5] and repeatedly obtained observations for the corresponding positions under 200 samples of (μ(1),μ(2),μ(3)) drawn from the real joint distribution.
Suppose the real joint distribution of (μ(1),μ(2),μ(3)) is a three-dimensional Gaussian distribution with expectation
Prior distribution
After applying the ensemble Bayesian method by setting m=200, k=200, Ncon=100, we obtained three frequency distributions, as shown in Fig. 6. We find that the expectations and variances of all these frequency distributions are significantly close to the expectations and variances of the real marginal distributions. Intuitively, the frequency distributions provide an effective estimation; therefore, the ensemble Bayesian method is also suitable for the simultaneous estimation of several parameters. From the perspective of the KL divergence, the estimation also shows a significant improvement compared to the prior distribution. Equation (21) provides the calculations for the KL divergences.
The 2D IAEA Benchmark Problem
Reference [42] gives the definition of the 2D IAEA benchmark problem, and [44] gives its implementation with a neutronics code. Figure 7 illustrates the geometry of the reactor. Note that only one-quarter is shown because the rest can be inferred by symmetry along the x and y axes. We denote this quarter by Ω, which is composed of four subregions with different physical properties. Neumann boundary conditions are imposed on the left and bottom boundaries, and a mixed boundary condition on the external border.
The 2D IAEA benchmark problem is modeled by two-dimensional two-group diffusion equations. Specifically, the flux
Table 1: Two-group diffusion parameters of the 2D IAEA benchmark problem

Region | D1 (cm) | D2 (cm) | Σ1→2 (cm-1) | Σa,1 (cm-1) | Σa,2 (cm-1) | νΣf,1 (cm-1) | νΣf,2 (cm-1) | χ1 | χ2 | Material
---|---|---|---|---|---|---|---|---|---|---
Ω1 | 1.50 | 0.40 | 0.02 | 0.01 | 0.080 | 0.00 | 0.135 | 1 | 0 | Fuel 1
Ω2 | 1.50 | 0.40 | 0.02 | 0.01 | 0.085 | 0.00 | 0.135 | 1 | 0 | Fuel 2
Ω3 | 1.50 | 0.40 | 0.02 | 0.01 | 0.130 | 0.00 | 0.135 | 1 | 0 | Fuel 2 + Rod
Ω4 | 2.00 | 0.30 | 0.04 | 0.00 | 0.010 | 0.00 | 0.000 | 0 | 0 | Reflector
In the following, we assume that D2 in Ω4, probably one of the most important parameters, is uncertain and use the proposed ensemble Bayesian method to estimate its distribution. All other parameters are assumed to be certain and are set according to Table 1. We denote D2 in Ω4 by μ, which is the quantity we are interested in.
We experimented with simulated datasets. To prepare the observation data for our experiments, we first selected 45 positions (i.e., (10,10), (30,10), (30,30), …, (170,10), …, (170,170)) in the area Ω and repeatedly obtained observations for the corresponding positions under 200 samples of μ drawn from the real distribution.
Moreover, the two-dimensional two-group diffusion Eq. (22) must be solved in the algorithm process. This was achieved by employing the generic high-quality finite element solver FreeFem++ [45].
Suppose that the real distribution of μ is a Gaussian distribution with expectation
The prior distribution π (μ) is also a Gaussian distribution with expectation
After applying the ensemble Bayesian method with Ncon=100, we obtained the frequency distribution shown in Fig. 8. We find that the expectation of the frequency distribution is significantly close to the real expectation, and the variance is also closer to the real variance than that of the prior distribution. Furthermore, the frequency distribution intuitively provides an effective estimation. From the perspective of the KL divergence, the estimation also shows a significant improvement compared to the prior. Equation (25) gives the KL divergence calculations. In this case, because the model only has a numerical solution, we used the engineering software FreeFem++ to implement our algorithm; therefore, the proposed ensemble Bayesian method has potential for engineering applications.
Conclusion
In this study, we considered model parameters from the perspective of random variables in the context of nuclear reactor engineering and proposed a general form of parameter distribution inference problem. In the context of this parameter distribution estimation problem, we conducted a preliminary exploration and proposed an ensemble Bayesian method to estimate the parameters by obtaining frequency distributions, combined with a special type of experimental setup. Simultaneously, we introduced the KL divergence theory to quantify the estimation performance of the method. Various numerical experiments were conducted, including a finite cylindrical reactor model, which has an analytical solution, and the 2D IAEA benchmark problem, which only has a numerical solution. From the results of these tests, the following conclusions were drawn:
• For those two models, the frequency distributions intuitively provide effective estimation for the parameters. The expectations are significantly close to the real ones, and the variances are also more accurate than the prior distributions. From the perspective of KL divergence, the method also has good performance;
• The ensemble Bayesian method can estimate several parameters simultaneously, even if there are correlations between them;
• When the model has only a numerical solution, we used the engineering software FreeFem++ to implement our algorithm. The method also works well in this case, which means it has potential for engineering applications;
• The convergence speed of the method is fast. Thus, in general, the ensemble Bayesian method we propose has promising application prospects.
The proposed method shows high potential for engineering applications to correct prior distributions when using the data assimilation technique for parameter estimation.
Further studies could investigate the performance of the proposed method for moderately large numbers of parameters, for non-Gaussian parameters, and for characteristic parameters that are likely to be difficult to reproduce. An important review of machine learning (ML) in nuclear physics [46] not only describes the different methodologies used in ML algorithms and techniques and some of their applications, particularly for low- and intermediate-energy nuclear physics, but also provides a valuable summary and outlook on possible application directions and improvements of ML algorithms in this field. Inspired by this review, we will work on improving the efficiency of the method; our initial idea is to improve the efficiency of the Metropolis-Hastings sampling used in this study. Furthermore, we will investigate how to incorporate more physics-informed or physics-guided elements to develop new machine learning algorithms.
References

Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL). J. Comput. Phys. 313, 367-376 (2016). https://doi.org/10.1016/j.jcp.2016.02.043

Data assimilation from operational and industrial applications to complex systems. Math. Today, 150-152 (2009).

Robustness of nuclear core activity reconstruction by data assimilation. Nucl. Instrum. Meth. A 629, 282-287 (2011). https://doi.org/10.1016/j.nima.2010.09.180

Uncertainty quantification and data assimilation (UQ/DA) study on a VERA core simulator component for CRUD analysis, CASL-I-2013-0184-000. Milestone Report for L2.

Uncertainty quantification, sensitivity analysis, and data assimilation for nuclear systems simulation. Nucl. Data Sheets 109, 2785-2790 (2008). https://doi.org/10.1016/j.nds.2008.11.010

Methods and issues for the combined use of integral experiments and covariance data: results of a NEA international collaborative study. Nucl. Data Sheets 118, 38-71 (2014). https://doi.org/10.1016/J.NDS.2014.04.005

Advancing inverse sensitivity/uncertainty methods for nuclear fuel cycle applications. Nucl. Data Sheets 123, 51-56 (2015). https://doi.org/10.1016/j.nds.2014.12.009

Overview of global data assimilation developments in numerical weather-prediction centres. Q. J. Roy. Meteor. Soc. 131, 3215-3233 (2005). https://doi.org/10.1256/qj.05.129

The ECMWF operational implementation of four-dimensional variational assimilation. I: Experimental results with simplified physics. Q. J. Roy. Meteor. Soc. 126, 1143-1170 (2000). https://doi.org/10.1002/qj.49712656415

A strategy for operational implementation of 4D-Var, using an incremental approach. Q. J. Roy. Meteor. Soc. 120, 1367-1387 (1994). https://doi.org/10.1002/qj.49712051912

Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res.-Oceans 99, 10143-10162 (1994). https://doi.org/10.1029/94JC00572

Data assimilation using an ensemble Kalman filter technique. Mon. Weather Rev. 126, 796-811 (1998). https://doi.org/10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2

Atmospheric data assimilation with an ensemble Kalman filter: results with real observations. Mon. Weather Rev. 133, 604-620 (2005). https://doi.org/10.1175/MWR-2864.1

Machine learning with data assimilation and uncertainty quantification for dynamical systems: a review. IEEE/CAA J. Autom. Sinica 10, 1361-1387 (2023). https://doi.org/10.1109/JAS.2023.123537

Generalised latent assimilation in heterogeneous reduced spaces with machine learning surrogate models. J. Sci. Comput. 94, 11 (2022). https://doi.org/10.1007/s10915-022-02059-4

Analysis methods for numerical weather prediction. Q. J. Roy. Meteor. Soc. 112, 1177-1194 (1986). https://doi.org/10.1002/QJ.49711247414

A Bayesian tutorial for data assimilation. Physica D 230, 1-16 (2007). https://doi.org/10.1016/J.PHYSD.2006.09.017

Sensitivity and uncertainty analyses applied to criticality safety validation. Tech. Rep.

Applications of integral benchmark data. Nucl. Sci. Eng. 178. https://doi.org/10.13182/NSE14-33

A-priori and a-posteriori covariance data in nuclear cross section adjustments: issues and challenges. Nucl. Data Sheets 123, 41-50 (2015). https://doi.org/10.1016/j.nds.2014.12.008

Nuclear data correlation between different isotopes via integral information. EPJ Nuclear Sci. Technol. 4, 7 (2018). https://doi.org/10.1051/EPJN/2018006

MOCABA: a general Monte Carlo-Bayes procedure for improved predictions of integral functions of nuclear data. Ann. Nucl. Energy 77, 514-521 (2014). https://doi.org/10.1016/j.anucene.2014.11.038

On the use of integral experiments for uncertainty reduction of reactor macroscopic parameters within the TMC methodology. Prog. Nucl. Energ. 88, 43-52 (2016). https://doi.org/10.1016/J.PNUCENE.2015.11.015

Development and application of data assimilation methods in reactor physics. Ph.D. thesis.

Uncertainty quantification of two-phase flow and boiling heat transfer simulations through a data-driven modular Bayesian approach. Int. J. Heat Mass Tran. 138, 1096-1116 (2019). https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2019.04.075

Validation and uncertainty quantification of multiphase-CFD solvers: a data-driven Bayesian framework supported by high-resolution experiments. Nucl. Eng. Des. 354, 110200 (2019). https://doi.org/10.1016/J.NUCENGDES.2019.110200

Integrated framework for model assessment and advanced uncertainty quantification of nuclear computer codes under Bayesian statistics. Reliab. Eng. Syst. Safe. 189, 357-377 (2019). https://doi.org/10.1016/J.RESS.2019.04.020

Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE T. Pattern Anal. PAMI-6, 721-741 (1984). https://doi.org/10.1109/TPAMI.1984.4767596

Hybrid Monte Carlo. Phys. Lett. B 195, 216-222 (1987). https://doi.org/10.1016/0370-2693(87)91197-X

The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res. 15, 1593-1623 (2014). https://doi.org/10.5555/2627435.2638586

Studies of nuclear β-decay half-lives with Bayesian neural network approach. Nuclear Techniques 46, 080013 (2023).

Understanding the Metropolis-Hastings algorithm. Am. Stat. 49, 327-335 (1995). https://doi.org/10.1080/00031305.1995.10476177

On information and sufficiency. Ann. Math. Stat. 22, 79-86 (1951). https://doi.org/10.1214/AOMS/1177729694

Argonne Code Center: benchmark problem book. Tech. Rep.

Diffusion equation - finite cylindrical reactor. https://www.nuclear-power.com/nuclear-power/reactor-physics/neutron-diffusion-theory/finite-cylindrical-reactor/

New development in FreeFem++. J. Numer. Math. 20, 251-265 (2012). https://doi.org/10.1515/jnum-2012-0013

Machine learning in nuclear physics at low and intermediate energies. Sci. China Phys. Mech. 66, 282001 (2023).

The authors declare that they have no competing interests.