NUCLEAR ENERGY SCIENCE AND ENGINEERING

Sample size adaptive strategy for time-dependent Monte Carlo particle transport simulation

Dan-Hua ShangGuan
Wei-Hua Yan
Jun-Xia Wei
Zhi-Ming Gao
Yi-Bing Chen
Zhi-Cheng Ji
Nuclear Science and Techniques, Vol. 34, No. 4, Article number 58. Published in print: Apr 2023. Available online: 25 Apr 2023.

When multiphysics coupling calculations contain time-dependent Monte Carlo particle transport simulations, these simulations often account for the largest share of the calculation time, which is prohibitive in certain important cases. This study proposes an adaptive strategy for automatically adjusting the sample size so that the simulation is no larger than it needs to be. The strategy is realized through an extension of the Shannon entropy concept and differs essentially from the popular methods used in time-independent Monte Carlo particle transport simulations, such as controlling the sample size according to the relative error of a target tally or by experience. The results for two models show that this strategy yields nearly identical results while significantly reducing the calculation time. From an efficiency standpoint, the sample size should not be increased blindly once the efficiency can no longer be enhanced; the strategy proposed herein satisfies this requirement.

Keywords: Time-dependent Monte Carlo particle transport simulation; Shannon entropy; Adaptive strategy
1 Introduction

Multiphysics processes have been extensively studied in several important fields. Owing to their complexity, analytical solutions are rarely possible, and numerical calculation serves as an efficient tool in place of expensive experiments. When time-dependent processes governed by different physical rules are coupled, one natural approach is to divide the entire calculation into many steps, within each of which the different governing equations are solved individually. Some data must be transmitted from one process to another to mimic the coupling effect. In certain important cases, time-dependent particle transport is an indispensable part of the multiphysics process.

The coupling between fluid mechanics and neutron transport is used as an example to demonstrate the sample-size adaptive strategy. In many cases, other physical processes must also be considered; however, considering only these two processes is already non-trivial and is meaningful in certain important situations. The governing equations for fluid mechanics are as follows:
$$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho \boldsymbol{u}) = 0, \qquad (1)$$
$$\frac{\partial (\rho \boldsymbol{u})}{\partial t} + \nabla\cdot(\rho \boldsymbol{u}\otimes\boldsymbol{u}) = -\nabla p, \qquad (2)$$
$$\frac{\partial (\rho E)}{\partial t} + \nabla\cdot(\rho E \boldsymbol{u}) = -\nabla\cdot(p\boldsymbol{u}) + Q. \qquad (3)$$
In the above equations, ρ is the density, p is the pressure, u is the fluid velocity, E = (1/2)u·u + I is the specific total energy, I is the specific internal energy of the material, and Q is the feedback quantity from neutron transport. The fluid pressure p, density ρ, and internal energy I are related by the equation of state, whose general form is
$$p = f(\rho, I). \qquad (4)$$
When neutrons fly through a medium, nuclear energy is released by reactions between the neutrons and the nuclei. These reactions change the temperature, density, and velocity of the medium. Conversely, because the reaction rates depend on the relative velocities, the thermal and hydrodynamic motions of the nuclei influence the neutron transport process. A fully rigorous formulation would therefore include the distribution function of the nuclei in addition to the neutron flux, each governed by its own transport equation, which creates a complex problem involving a group of simultaneous transport equations. Fortunately, because the local thermal equilibrium hypothesis is suitable, using the Maxwell distribution to describe the thermal motion of the nuclei is sufficiently precise. After averaging, the coefficients of the neutron transport equation depend only on the relative velocity of the neutron with respect to the hydrodynamic motion of the medium. Therefore, the neutron transport equation that accounts for the effect of fluid motion is given by [1]
$$\frac{\partial N}{\partial t} + \boldsymbol{v}\cdot\nabla_{\boldsymbol{r}} N + \frac{d\boldsymbol{V}}{dt}\cdot\nabla_{\boldsymbol{V}} N + V\Sigma(\boldsymbol{r},V)N(\boldsymbol{r},\boldsymbol{V},t) = \int \Sigma(\boldsymbol{r},V')f(\boldsymbol{r},\boldsymbol{V}'\to\boldsymbol{V})\,V'N(\boldsymbol{r},\boldsymbol{V}',t)\,d\boldsymbol{V}' + q(\boldsymbol{r},\boldsymbol{V},t), \qquad (5)$$
where v is the neutron velocity, V = v - u is the relative velocity, N(r,V,t)drdV is the mean neutron number in the volume element drdV around (r,V) at time t, and Σ(r,V) is the total macroscopic cross section. The kernel in Eq. (5) decomposes as
$$\Sigma(\boldsymbol{r},V)f(\boldsymbol{r},\boldsymbol{V}\to\boldsymbol{V}') = \sum_{x}\sum_{i}\Sigma_{x}^{i}(\boldsymbol{r},V)f_{x}^{i}(\boldsymbol{r},\boldsymbol{V}\to\boldsymbol{V}'), \qquad (6)$$
where Σ_x^i(r,V) is the probability per unit flight distance that a neutron undergoes reaction x with nucleus i, f_x^i(r,V→V') is the probability of generating a neutron with velocity V' when a neutron undergoes reaction x with nucleus i at point r, and q(r,V,t) is an external source. At the beginning of each step, the fluid calculation provides a new transport environment, and the time-dependent particle transport problem is then solved to obtain the feedback quantities for the fluid calculation. This cycle is repeated as many times as required. Based on the above governing equations, the Jointed numerical simulation software for mUlti-PhysIcs and multi-maTerials problems under ExtReme conditions (JUPITER) platform was developed by a team at the Institute of Applied Physics and Computational Mathematics. It utilizes mature framework technology to support software development and has been confirmed and verified on many models.
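To make the coupling cycle concrete, the following minimal Python sketch shows one way the step-wise data exchange described above could be organized. All names here (Medium, fluid_step, transport_step, coupled_run) are hypothetical placeholders for illustration, not the actual JUPITER interfaces.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical containers and stubs; the real JUPITER interfaces differ.

@dataclass
class Medium:
    density: List[float]      # rho per cell
    velocity: List[float]     # u per cell (1-D for brevity)
    temperature: List[float]  # per cell

def fluid_step(medium: Medium, heating: List[float], dt: float) -> Medium:
    """Stub: advance Eqs. (1)-(3) by dt, using the transport feedback Q (heating)."""
    return medium  # placeholder for the actual hydrodynamics solver

def transport_step(medium: Medium, source_bank: List, dt: float) -> Tuple[List[float], List]:
    """Stub: solve the time-dependent transport Eq. (5) on the updated medium;
    return the per-cell energy feedback Q and the surviving-particle bank."""
    Q = [0.0] * len(medium.density)
    return Q, source_bank

def coupled_run(medium: Medium, source_bank: List, n_steps: int, dt: float) -> Medium:
    """Operator-split coupling loop: fluid solve, then MC transport, each step."""
    Q = [0.0] * len(medium.density)
    for _ in range(n_steps):
        medium = fluid_step(medium, Q, dt)                        # new transport environment
        Q, source_bank = transport_step(medium, source_bank, dt)  # feedback + survivors
    return medium
```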

Owing to its geometric fidelity [2], accurate physics modeling, and other merits, the Monte Carlo method is often used to simulate particle transport [3]; however, it is extremely time-consuming. For a problem with multiple materials, large deformations, and a large grid count, many steps are required for a reliable calculation, and each step contains a Monte Carlo simulation of a large model. In such cases, the calculation time is prohibitive even with large-scale parallelization on a powerful supercomputer. As in traditional Monte Carlo simulation of time-independent particle transport problems, the core objective is to achieve the highest possible efficiency, that is, to obtain high calculation accuracy without excessive cost. In other words, once the efficiency flattens with increasing sample size, there is no need to perform more calculations if efficiency, rather than ever-higher accuracy, is the goal. It is well known that, by the law of large numbers, the accuracy of the Monte Carlo method can reach any level as the sample size tends to infinity; an infinite sample size is, of course, impossible in practice.

Apparently, it is difficult to set a different reasonable sample size for each step based on experience because each step poses a different particle transport problem. If a fixed, sufficiently large sample size is set for all steps, which was the original sample-size setting method of the JUPITER platform, many calculations may be wasted because the efficiency flattens once the sample size grows beyond a certain level. Therefore, experience-based methods are not efficient. Moreover, because the Monte Carlo simulation provides several feedback quantities with very different fluctuation levels, a sample size controlled by the relative error of one tally may be insufficient, or excessively large, for another tally. Therefore, error-based methods [4] are also unsuitable. Because the calculation of time-dependent particle transport is expensive, Monte Carlo simulations still need an adaptive strategy that sets the sample size scientifically, even with the rapid progress of supercomputers.

The remainder of this paper is organized as follows: The second section describes all aspects of the sample size adaptive strategy, which is based on an extension of the Shannon entropy concept. The third section presents representative numerical results for two models. The final section provides concluding remarks.

2 Sample size adaptive strategy

2.1 Research basics

Entropy is a notion that exists in both physics and mathematics. Clausius is considered the father of the concept, and important contributions to its development were made by Boltzmann, Gibbs, and Planck. Moreover, mathematicians including Shannon, Kolmogorov, Sinai, Rényi, Holevo, and Tsallis also studied the concept. For the history of the different paths to the entropy concept, see, e.g., [5-7]. Consequently, there are various definitions of entropy. In this study, the entropy concept used is the one proposed by Shannon in [8], which has found novel applications in various fields [9, 10]. For example, Ref. [9] showed that Shannon entropy is an efficient dynamic indicator that provides a direct measure of the diffusion rate and, thus, a timescale for the instabilities arising when dealing with chaos.

It is worth pointing out that Shannon entropy was first introduced into Monte Carlo particle transport simulation as a posterior indicator for the convergence of the fission source distribution in eigenvalue calculations of nuclear systems [11-14]. Suppose there are N cells in total for some nuclear system and M fission neutrons at some moment. Let m_i be the number of fission neutrons in cell i, so that ∑_i m_i = M. Letting p_i = m_i/M and
$$H_{\mathrm{ori}} = -\sum_{i=1}^{N} p_i \ln(p_i), \qquad (7)$$
we obtain the Shannon entropy H_ori. For the power iteration method of eigenvalue calculation, if there are K iteration steps and H_ori^k, k=1,…,K, are the entropies of each step, then the convergence of this entropy sequence is a necessary condition for the convergence of the fission source distribution. It is primarily used as follows: after the entire calculation has been completed and the Shannon entropy of each step has been obtained, a step number n is selected by judging that the entropy sequence has converged after this step. If the first active step comes after step n, the calculation is considered reasonable. Otherwise, it is judged that the inactive iteration steps were insufficient and the results are systematically biased.
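As a minimal illustration, the following Python sketch computes the Shannon entropy of Eq. (7) from per-cell fission neutron counts; the example counts are invented.

```python
import numpy as np

def shannon_entropy(counts) -> float:
    """Shannon entropy H = -sum_i p_i ln(p_i), Eq. (7), with p_i = m_i / M."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0.0]  # use the convention 0 * ln(0) = 0
    return float(-np.sum(p * np.log(p)))

# Example: fission source spread over four cells (invented numbers)
print(shannon_entropy([120, 80, 50, 50]))  # entropy in nats
```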

In recent years, there have been attempts to explore on-the-fly judgment methods [15-18] for the convergence of the fission source distribution, which belongs to the time-independent particle transport problem. It has already been noted [19] that the Shannon entropy concept can be problematic when used to indicate the convergence of the fission source distribution in a slowly converging system. Fortunately, our concerns do not include this type of problem.

2.2 Analysis for a reasonable adaptive strategy for sample size

Two important facts should be considered. (1) The surviving particles of the current step must be transferred to the next step as an external source. The attributes of these particles are samples of an unknown distribution that changes from step to step; for convenience, this distribution is called the survival particle distribution. (2) The Monte Carlo simulation of particle transport must provide global tallies (one tally per cell) to the mechanical calculation as a source term. These facts hold generally for many multiphysics coupling processes. It is therefore worthwhile to develop a sample size adaptive strategy that balances small errors against calculation costs. The basic idea of this study is as follows: Suppose the sufficiently large sample size of one step is divided into batches, each containing a fixed number of samples; by default, the Monte Carlo simulation finishes all batches one at a time. However, if, after some of these batches have been finished, the number of surviving particles is sufficient and the global tallies have converged according to some judgment rules, the remaining batches can be omitted because the batches already finished suffice to obtain reasonable results. This is reasonable provided that, when a step stops early, the global efficiency of the global tallies, according to some index, is already flat over the batches that have been finished.

To realize these ideas, three questions must be answered. First, are the surviving particles sufficient? Second, by which indicator can the global tallies be judged to have converged sufficiently? Third, which on-the-fly diagnostic can be used to stop the Monte Carlo simulation in advance? The solutions are presented in the following three subsections; all are based on the Shannon entropy concept.

2.3 Indicator for judging whether surviving particles are sufficient

Based on the definition of Shannon entropy, an indicator can be designed to show whether the number of surviving particles is sufficient. Suppose there are N cells in total for the system and G energy groups, so that the total number of phase-space grids is NG. Let V_i (i=1,…,N) be the volume of cell i and m_ij be the number of surviving particles in cell i and energy group j. Defining
$$p_{ij} = \frac{m_{ij}/V_i}{\sum_{ij} m_{ij}/V_i} \qquad (8)$$
and
$$H_t = -\sum_{ij} p_{ij} \ln(p_{ij}), \qquad (9)$$
there are L values of H_t in total for L finished batches. Here, H_t is the entropy corresponding to the survival particle distribution. Note that p_ij ≥ 0 and ∑_ij p_ij = 1; thus, p_ij can be considered a discrete probability distribution. As batches are simulated one by one, the H_t values form a sequence. If the sequence converges, these L batches can be said to have made the surviving particles sufficient. Apparently, finer phase-space grids represent stricter standards. In Eq. (8), dividing by V_i is not trivial because it introduces the concept of number density. Note that m_ij in batch k should also contain the m_ij of all batches k₁ satisfying k₁ < k; that is, the counts are accumulated over finished batches.
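A short Python sketch of Eqs. (8)-(9), assuming the surviving-particle counts are kept as an (N, G) array accumulated over finished batches; the array sizes and counts below are illustrative only.

```python
import numpy as np

def survival_entropy(m_cum: np.ndarray, volumes: np.ndarray) -> float:
    """H_t of Eqs. (8)-(9): entropy of the volume-weighted surviving-particle
    number density. m_cum[i, j] holds the survivors in cell i, group j,
    accumulated over all batches finished so far."""
    density = m_cum / volumes[:, None]      # m_ij / V_i (number density)
    p = density / density.sum()             # Eq. (8)
    p = p[p > 0.0]                          # convention: 0 * ln(0) = 0
    return float(-np.sum(p * np.log(p)))    # Eq. (9)

# Building the H_t sequence batch by batch (illustrative sizes and counts)
N, G = 600, 8
rng = np.random.default_rng(0)
volumes = rng.uniform(0.5, 1.5, size=N)
m_cum = np.zeros((N, G))
Ht_seq = []
for batch in range(10):
    m_cum += rng.poisson(5.0, size=(N, G))  # stand-in for one batch's survivors
    Ht_seq.append(survival_entropy(m_cum, volumes))
```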

2.4 Indicator for judging whether global tallies have converged sufficiently

When particle transport is coupled with other processes, the Monte Carlo simulation must provide feedback quantities to those processes as input. The precision of these inputs, known as tallies in the Monte Carlo method, significantly influences the confidence of the coupling calculation. In most cases, these quantities can be expressed as integrals of the flux with corresponding response functions. Therefore, if the global flux tallies converge sufficiently, these feedback quantities are credible to a certain degree. Focusing on global flux tallies also avoids the problem of dealing with different types of tallies in different multiphysics processes. If F_i is the flux in cell i, let
$$p_i = \frac{F_i}{\sum_{i=1}^{N} F_i} \qquad (10)$$
and
$$H_f = -\sum_{i=1}^{N} p_i \ln(p_i). \qquad (11)$$
Here, H_f is the entropy corresponding to the global flux. The convergence of the H_f sequence formed by the finished batches indicates the global convergence of the flux tallies. As before, F_i in batch k should contain the contributions of the particles in all batches k₁ satisfying k₁ < k. To the best of our knowledge, this is the first time an indicator like Eq. (11) has been introduced into Monte Carlo particle transport simulation to indicate the convergence of global tallies. Its advantage is that there is no need to calculate relative errors, which saves significant memory and calculation cost, particularly for large-scale models.
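The flux-tally counterpart, Eqs. (10)-(11), differs from the previous sketch only in the weighting quantity; a minimal sketch, assuming the flux tallies are accumulated across finished batches:

```python
import numpy as np

def flux_entropy(flux_cum: np.ndarray) -> float:
    """H_f of Eqs. (10)-(11): entropy of the normalized global flux tallies.
    flux_cum[i] is F_i accumulated over all batches finished so far."""
    p = flux_cum / flux_cum.sum()           # Eq. (10)
    p = p[p > 0.0]
    return float(-np.sum(p * np.log(p)))    # Eq. (11)
```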

2.5 On-the-fly diagnostic for judging whether the entropy sequence has converged

Based on the stochastic oscillator indicator, which is commonly used in the technical analysis of financial markets, a posterior diagnostic method has been proposed to assess fission source convergence in Monte Carlo criticality calculations [18]. The stochastic oscillator indicator of the nth iteration step of a criticality calculation is defined as
$$K_n = \frac{H_n - H_{\min}^{n,p}}{H_{\max}^{n,p} - H_{\min}^{n,p}}, \qquad (12)$$
where H_min^{n,p} and H_max^{n,p} are the minimum and maximum Shannon entropies, respectively, over the last p steps, and H_n is the entropy of the current nth step. It has been proven that when the entropy sequence converges, this stochastic oscillator indicator fluctuates stochastically around 0.5 [18]. The posterior rule for assessing convergence is given by the following inequalities:
$$|K_n - 0.5| < \epsilon \quad \text{and} \quad \left|\frac{1}{m}\sum_{i=0}^{m-1} K_{n+i} - 0.5\right| < \epsilon. \qquad (13)$$
In other words, this criterion requires that both K_n and its average over the next m steps be within ϵ of 0.5. If the active cycles start after an iteration step satisfying the above inequalities, the entire calculation should be accepted. This rule can be modified into an on-the-fly diagnostic [20] for the convergence of the entropy sequences corresponding to the survival particle distribution and the global flux tally, as explained in the last two subsections. Specifically, if we decompose all samples into S batches and simulate the batches one by one, we obtain a sequence of entropy values H_i (i=1,…,S) and stochastic oscillator indicators K_i (i=p,…,S). When the nth (n ≥ m+p-1) batch is completed and the following inequalities are satisfied:
$$|K_{n-m+1} - 0.5| < \epsilon \quad \text{and} \quad \left|\frac{1}{m}\sum_{i=0}^{m-1} K_{n-i} - 0.5\right| < \epsilon, \qquad (14)$$
the surviving particle distribution and global flux tallies can be perceived to have converged. The parameters p and m were set to 20 and 50, respectively, whereas the value of ϵ is set by the user. The smaller the ϵ, the stricter the rule. Therefore, a conservative or radical strategy can be obtained by using different ϵ values.
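In code, the diagnostic of Eqs. (12)-(14) reduces to a sliding-window test over the entropy sequence. The sketch below follows the formulas directly; the guard for a flat window is our own assumption for numerical safety.

```python
import numpy as np

def oscillator(H, n: int, p: int) -> float:
    """K_n of Eq. (12): position of H_n within its min-max range over the last p steps."""
    window = H[n - p + 1 : n + 1]
    h_min, h_max = min(window), max(window)
    if h_max == h_min:
        return 0.5  # flat window; treat as perfectly centered (our convention)
    return (H[n] - h_min) / (h_max - h_min)

def entropy_converged(H, n: int, p: int = 20, m: int = 50, eps: float = 0.1) -> bool:
    """On-the-fly test of Eq. (14) after finishing batch n (0-based index)."""
    if n < m + p - 1:
        return False
    K = [oscillator(H, j, p) for j in range(n - m + 1, n + 1)]  # K_{n-m+1} .. K_n
    return abs(K[0] - 0.5) < eps and abs(float(np.mean(K)) - 0.5) < eps
```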

2.6 Description of the adaptive strategy

The entire mechanism for automatically adjusting the sample size is summarized as follows: In each step of the time-dependent Monte Carlo particle transport simulation, a sufficiently large sample size is initially set and divided into M batches. After each batch finishes, new H_t and H_f values are obtained. Once the number of finished batches exceeds m+p-1, the on-the-fly diagnostic is invoked on both entropy sequences to decide whether the Monte Carlo simulation of this step should be stopped in advance. Only when both judgments return true is the simulation of the step stopped early. If the simulation stops early in step L, there will not be enough surviving particles to use in step L+1, although the current surviving particles can be regarded as sufficiently sampled from the survival particle distribution. In this case, we randomly select from the current surviving-particle bank to supply additional source particles for step L+1; this operation is reasonable on probabilistic grounds. Note that in parallel computing, the frequent calculation of these entropies should use the method proposed in [21]; otherwise, the time cost of obtaining the entropies will offset the benefit of the sample size adjustment. In that method, each process uses its local data to calculate a local entropy, and the average of the entropies over all processes serves as the final entropy value; massive data broadcasts are thus avoided. It can be verified that as the sample size approaches infinity, this entropy coincides with the original entropy, and even with a finite sample size, it can indicate the convergence of the corresponding survival particle distribution or global tallies. Although there is a statistical estimation method for the Shannon entropy [22], we have not adopted it because of its complexity; it will be the focus of our future studies.
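Putting the pieces together, the following sketch shows how one time step could run under the adaptive strategy, reusing survival_entropy, flux_entropy, and entropy_converged from the sketches above. Here run_batch is a hypothetical callable standing in for the transport of one batch, and resample_bank illustrates the random top-up of survivors; neither is the actual JUPITER API.

```python
import numpy as np

def adaptive_step(run_batch, volumes, M: int, p: int = 20, m: int = 50, eps: float = 0.1):
    """Simulate up to M batches, stopping early once BOTH entropy sequences
    pass the Eq. (14) test. Returns the cumulative flux tallies, the survivor
    bank, and the number of batches actually simulated."""
    m_cum = flux_cum = None
    bank, Ht_seq, Hf_seq = [], [], []
    for k in range(M):
        m_batch, flux_batch, survivors = run_batch(k)   # one batch of transport
        m_cum = m_batch if m_cum is None else m_cum + m_batch
        flux_cum = flux_batch if flux_cum is None else flux_cum + flux_batch
        bank.extend(survivors)
        # In MPI runs, each rank would compute a local entropy from its own
        # data and the ranks' entropies would be averaged, as in Ref. [21]
        Ht_seq.append(survival_entropy(m_cum, volumes))
        Hf_seq.append(flux_entropy(flux_cum))
        if entropy_converged(Ht_seq, k, p, m, eps) and \
           entropy_converged(Hf_seq, k, p, m, eps):
            break                                       # stop this step early
    return flux_cum, bank, k + 1

def resample_bank(bank, target_size: int, rng=np.random.default_rng()):
    """Top up an early-stopped survivor bank by sampling with replacement,
    so that step L+1 has enough source particles."""
    idx = rng.integers(0, len(bank), size=target_size - len(bank))
    return bank + [bank[i] for i in idx]
```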

Four annotations are worthwhile. (1) When an under-sampling problem [23] exists for a specific model, an insufficient history number causes the Shannon entropy corresponding to the survival particle distribution to fluctuate more drastically. Inequality (14) then cannot be satisfied easily, and the calculation will not be stopped in advance. (2) Theoretically, the method in this study may fail for some symmetric or oscillating models. However, for time-dependent particle transport simulation, strictly symmetric or oscillating models are rare; hence, this is not a serious issue. (3) The Fourier fundamental mode coefficient works similarly to the traditional Shannon entropy; therefore, the recently proposed Fourier diagnostic [24] could substitute for the Shannon entropy diagnostic. Because we adopted an efficient method for calculating Shannon entropy in MPI parallel environments and did not perform a numerical comparison, we do not know which method is better. (4) The recently proposed batch-size growth method [25, 26] is consistent in spirit with our method. However, our work aims to improve the efficiency of time-dependent particle transport simulations within multiphysics calculations, whereas those two studies aim to improve the efficiency of time-independent criticality calculations, and the difficulties differ between the two types of problems. Theoretically, the concepts proposed in those studies are suitable for transplantation; their performance can only be compared in future numerical experiments.

3 Numerical results

It is worth emphasizing that the goal of this sample size adaptive strategy is to avoid unnecessary calculations once the results are sufficiently good as evaluated by some standard. In each step, when the original sample size is divided into several batches and each batch is simulated one by one according to the original plan, we expect the strategy to stop early once the efficiency, evaluated by some standard, becomes flat. Because of the inevitable stochastic nature of the strategy, the stopping point cannot be captured precisely. However, capturing it approximately can still save considerable calculation time while keeping the results approximately unchanged compared with the original case. The two models discussed in the following sections demonstrate this effect. The Arbitrary Lagrangian-Eulerian (ALE) algorithm was used for the mechanical calculations.

3.1 Model one

Model one is a sector piece with an opening angle of 2° and a radius of 6.45 cm. It has two concentric layers, each composed of a mixture of 235U and 238U but with different densities. In total, 600 cells were used. The initial sample size was 12,800,000 samples per step, which is sufficiently large for this model; these samples were divided into 640 batches of 20,000 samples each. The calculations used 32 cores in total. We performed two independent calculations with ϵ = 0.1 and ϵ = 0.05, with all other conditions identical to those described above. Figure 1 shows how the neutron number (Np) in the model changes with time for three cases (without sample size adaptation, and with adaptation at the two ϵ values). Table 1 compares the neutron numbers (Np) at the end of the calculations and the time costs for these three cases. The proposed sample size adaptive strategy significantly decreases the calculation time while leaving the result almost unchanged. When the judging rule is stricter, as in the ϵ = 0.05 case, the result is closer to the original case, but the reduction in calculation time is smaller. For a more detailed comparison, we randomly chose one step to examine whether the adaptive strategy stops the calculation in advance once the efficiency flattens. We used
$$\mathrm{Global\_FOM} = \frac{N}{T\sum_{i=1}^{N} R_i^2} \qquad (15)$$
to evaluate the efficiency of the global flux tallies [27]. Here, N = 600 is the total cell number, T is the calculation time, and R_i is the relative error of the flux in cell i. In this step, the ϵ = 0.1 case stops at the 416th batch, whereas the ϵ = 0.05 case cannot stop in advance; that is, all 640 batches are simulated. Figure 2 shows that the sample size adaptive strategy stops in advance once the global efficiency is almost flat with respect to the batch number; from an efficiency standpoint, once the global efficiency becomes flat, further simulation is unnecessary. Figures 3 and 4 show the two Shannon entropies (H_t and H_f) for the same step. As expected, the sample size adaptive strategy stopped the calculation when these entropies flattened. Figure 5 shows the sample size actually simulated in the different cases. The reduction in calculation time is evidently due to the decrease in sample size, yet the results are almost the same and the efficiency remains almost unchanged because the simulated sample size was already sufficient. We cannot say which ϵ is best because of the inevitable randomness; however, a small ϵ yields a conservative strategy, whereas a large ϵ yields a radical one, a characteristic of most such algorithms. The results indicate that both ϵ values maintain the physical quantities of concern. Therefore, if used appropriately, the method proposed in this study improves the calculation efficiency and provides a flexible tool for different users.
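For reference, the global figure of merit of Eq. (15) is straightforward to evaluate; a minimal Python sketch (the inputs below are illustrative, not the paper's data):

```python
import numpy as np

def global_fom(rel_errors, time_hours: float) -> float:
    """Global_FOM = N / (T * sum_i R_i^2), Eq. (15)."""
    R = np.asarray(rel_errors, dtype=float)
    return len(R) / (time_hours * float(np.sum(R**2)))

# Example with invented relative errors for N = 600 cells
print(global_fom(np.full(600, 0.02), time_hours=0.5))
```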

Table 1
Comparison of neutron number (Np) for different cases at 0.252 μs
Case | Np | Time cost (hours)
Without sample size adaptation | 2.058×10²⁴ | 1.75
Sample size adaptation with ϵ=0.1 | 2.049×10²⁴ | 0.93
Sample size adaptation with ϵ=0.05 | 2.059×10²⁴ | 1.73
Fig. 1 Comparison of neutron number (Np) for the three cases
Fig. 2 Global efficiency of all flux tallies in a chosen step
Fig. 3 Shannon entropy H_t in the same step
Fig. 4 Shannon entropy H_f in the same step
Fig. 5 The sample size actually simulated in the different cases
3.2 Model two

Model two is a cuboid containing 200,000 hexahedral cells. It is divided into three layers that contain four uranium isotopes (234U, 235U, 236U, 238U) with different densities. The mechanical calculation is done in one step, and the Monte Carlo particle transport simulation is iterated for 100 steps to calculate
$$\lambda = \frac{1}{N(t)}\frac{dN(t)}{dt}. \qquad (16)$$
We performed 20 calculations without sample size adaptation but with different random number seeds to obtain the reference answer. Each calculation used 1,600 batches of 30,000 samples each, and 32 cores were utilized. Table 2 shows that the sample size adaptive strategy obtains a reasonable λ value while significantly decreasing the calculation time, to approximately 22% of that of the original case without adaptive sample size adjustment. Note that the number in parentheses is the standard deviation of the 20 λ values, and the λ value of the sample size adaptive strategy lies within the 95% confidence interval obtained from these 20 values.
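One simple way to extract λ of Eq. (16) from a simulated neutron-number history: if N(t) behaves exponentially, λ is the slope of ln N(t) versus t. The sketch below uses a least-squares fit on invented data; the paper does not specify how λ was evaluated, so this is only an illustrative assumption.

```python
import numpy as np

def estimate_lambda(times, neutron_numbers) -> float:
    """lambda = (1/N) dN/dt, Eq. (16); for N(t) ~ exp(lambda * t) this is
    the slope of ln N(t) against t (least-squares fit)."""
    t = np.asarray(times, dtype=float)
    logN = np.log(np.asarray(neutron_numbers, dtype=float))
    slope, _ = np.polyfit(t, logN, 1)
    return float(slope)

# Invented history with lambda = 3.77 per unit time
t = np.linspace(0.0, 1.0, 100)
print(estimate_lambda(t, 1.0e6 * np.exp(3.77 * t)))  # ~3.77
```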

Table 2
Comparison of λ for different cases
Case | λ | Time cost (hours)
Without sample size adaptation | 3.7727 (0.0238) | 3.04
Sample size adaptation with ϵ=0.1 | 3.7709 | 0.67
4 Conclusion

Based on an extension of the Shannon entropy concept, two indicators were designed to show whether the samples of the survival particle distribution are sufficient and whether the global flux tallies have converged sufficiently. An on-the-fly diagnostic method based on the stochastic oscillator indicator is proposed to detect the convergence of these two indicators without manual intervention, thus forming a complete adaptive strategy. To our knowledge, this is the first method that adjusts the sample size according to definite and reasonable rules. The results of the two models in the previous section show that this self-adjustment mechanism maintains reasonable precision while reducing the calculation time. The indicators and mechanism can be easily generalized to other multiphysics calculations that include time-dependent Monte Carlo particle transport simulations.

References
1. S.H. Du, S.F. Zhang, T.G. Feng et al., Computer Simulation of Transport Problems (Hunan Science and Technology Press, China, 1989)
2. X. Wang, J.L. Li, Z. Wu et al., CMGC - a CAD to Monte Carlo geometry conversion code. Nucl. Sci. Tech. 31, 82 (2020). doi: 10.1007/s41365-020-00793-8
3. L. Deng, G. Li, B.Y. Zhang et al., A high fidelity general purpose 3-D Monte Carlo particle transport program JMCT3.0. Nucl. Sci. Tech. 33, 108 (2022). doi: 10.1007/s41365-022-01092-0
4. X-5 Monte Carlo Team, MCNP - A General Monte Carlo N-Particle Transport Code, Version 5. LA-UR-03-1987 (2003)
5. F. Balibrea, On Clausius, Boltzmann and Shannon notions of entropy. J. Mod. Phys. 7, 219-227 (2016). doi: 10.4236/jmp.2016.72022
6. L. Benguigui, The different paths to entropy. Eur. J. Phys. 34, 303-321 (2013). doi: 10.1088/0143-0807/34/2/303
7. D. Hammer, A. Romashchenko, A. Shen et al., Inequalities for Shannon entropy and Kolmogorov complexity. J. Comput. Syst. Sci. 60, 442-464 (2000). doi: 10.1006/jcss.1999.1677
8. C.E. Shannon, A mathematical theory of communication. Mob. Comput. Commun. Rev. 5, 3-55 (2001). doi: 10.1145/584091.584093
9. P.M. Cincotta, C.M. Giordano, R.A. Silva et al., The Shannon entropy: an efficient indicator of dynamical stability. Phys. D 417, 132816 (2021). doi: 10.1016/j.physd.2020.132816
10. R.M. Yulmetyev, N.A. Emelyanova, F.M. Gafarov, Dynamical Shannon entropy and information Tsallis entropy in complex systems. Phys. A 341, 649-676 (2004). doi: 10.1016/j.physa.2004.03.094
11. T. Ueki, Stationarity diagnostics with relative entropy and Wilcoxon signed rank in iterated-source Monte Carlo methods. Nucl. Sci. Eng. 160, 242 (2008). doi: 10.13182/NSE160-242
12. T. Ueki, F.B. Brown, Stationarity modeling and informatics-based diagnostics in Monte Carlo criticality calculations. Nucl. Sci. Eng. 148, 38 (2005). doi: 10.13182/NSE04-15
13. F.B. Brown, On the use of Shannon entropy of the fission distribution for assessing convergence of Monte Carlo criticality calculations. LA-UR-06-3737 (2006)
14. Z.G. Li, K. Wang, Y.C. Guo et al., Forced propagation method for Monte Carlo fission source convergence acceleration in the RMC. Nucl. Sci. Tech. 32, 27 (2021). doi: 10.1007/s41365-021-00868-0
15. T. Ueki, On-the-fly diagnostics of particle population in iterated-source Monte Carlo methods. Nucl. Sci. Eng. 158, 15-27 (2008). doi: 10.13182/NSE08-A2735
16. Y. Naito, J. Yang, The sandwich method for determining source convergence in Monte Carlo calculation. J. Nucl. Sci. Technol. 41, 559 (2004). doi: 10.1080/18811248.2004.9715519
17. I. Kim, H. Kim, Y. Kim, Deterministic truncation of the Monte Carlo transport solution for reactor eigenvalue and pinwise power distribution. Nucl. Sci. Eng. 194, 14-31 (2020). doi: 10.1080/00295639.2019.1654815
18. G. Aldrich, S. Dutta, J. Woodring, OpenMC in situ source convergence detection. LA-UR-16-23217 (2016). doi: 10.2172/1253484
19. T. Ueki, F.B. Brown, D.K. Parsons et al., Autocorrelation and dominance ratio in Monte Carlo criticality calculations. Nucl. Sci. Eng. 145, 279-290 (2003). doi: 10.13182/NSE03-04
20. D.H. ShangGuan, Z.C. Ji, L. Deng et al., New strategy for global tallying in Monte Carlo criticality calculation. Acta Phys. Sin. 68, 122801 (2019). doi: 10.7498/APS.68.20182276
21. D.H. ShangGuan, L. Deng, B.Y. Zhang et al., Efficient method of calculating Shannon entropy of non-static transport problem in message passing parallel programming environment. Acta Phys. Sin. 65, 142801 (2016). doi: 10.7498/aps.65.142801
22. A. Bulinski, D. Dimitrov, Statistical estimation of the Shannon entropy. Acta Math. Sin. (English Series) 35, 17-46 (2019). doi: 10.1007/s10114-018-7440-z
23. R.J. Brissenden, A.R. Garlick, Biases in the estimation of Keff and its error by Monte Carlo methods. Ann. Nucl. Energy 13, 63-83 (1986). doi: 10.1016/0306-4549(86)90095-2
24. M.R. Omar, J.A. Karim, Fission source stationarity diagnostics using the Fourier fundamental mode coefficient. Prog. Nucl. Energy 146, 104164 (2022). doi: 10.1016/j.pnucene.2022.104164
25. Q.Q. Pan, N. An, T.F. Zhang et al., Single-step Monte Carlo criticality algorithm. Comput. Phys. Commun. 279, 108439 (2022). doi: 10.1016/j.cpc.2022.108439
26. Q.Q. Pan, T.F. Zhang, X.J. Liu et al., Optimal batch size growth for Wielandt method and superhistory method. Nucl. Sci. Eng. 196, 183-192 (2022). doi: 10.1080/00295639.2021.1968223
27. Q.Q. Pan, Redevelopment of shielding module and research on advanced variance reduction methods based on RMC code. Ph.D. thesis, Tsinghua University (2020)