Introduction
Multiphysics processes have been extensively studied in several important fields. Owing to their complexity, analytical solutions are rarely possible, and numerical calculation serves as an efficient alternative to expensive experiments. When time-dependent processes governed by different physical laws are coupled, one natural approach is to divide the entire calculation into many time steps, within which the different governing equations are solved individually. Some data must be transmitted from one process to another to reproduce the coupling effect. In certain important cases, time-dependent particle transport is an indispensable part of the multiphysics process.
In this study, the coupling between fluid mechanics and neutron transport was used as the example to demonstrate the sample-size adaptive strategy. In many cases, other physical processes must also be considered; however, restricting attention to these two processes is already nontrivial and is meaningful in certain important situations. The governing equations for fluid mechanics take the standard conservation-law form.
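A representative form of these equations, assuming inviscid compressible flow with the particle-transport energy deposition entering as a source term (the precise system solved on the platform may include additional physics), is

$$\frac{\partial \rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=0,$$
$$\frac{\partial(\rho\mathbf{u})}{\partial t}+\nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u})+\nabla p=0,$$
$$\frac{\partial(\rho E)}{\partial t}+\nabla\cdot\big[(\rho E+p)\mathbf{u}\big]=Q,$$

where ρ is the density, **u** the velocity, p the pressure, E the specific total energy, and Q the energy deposited per unit volume and time by the transported particles; Q is precisely the feedback quantity that the Monte Carlo simulation must tally.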
Owing to its accurate geometry [2] and physics modeling, among other merits, the Monte Carlo method is often used to simulate particle transport [3]; however, it is extremely time-consuming. For a multi-material problem with large deformations and a large number of grid cells, many steps are required for a reliable calculation, and each step contains a Monte Carlo simulation of a large model. In such cases, the calculation time is prohibitive even with large-scale parallelization on a powerful supercomputer. As in traditional Monte Carlo simulation of time-independent particle transport problems, the core objective is to achieve the highest possible efficiency, that is, to find a method with high accuracy but an acceptable time cost. By the law of large numbers, the accuracy of the Monte Carlo method can in principle reach any level as the sample size tends to infinity, but an infinite sample size is impossible in practice. In other words, once the efficiency flattens as the sample size increases, there is no need to perform more calculations if efficiency, rather than raw accuracy, is the goal.
Clearly, it is difficult to set a reasonable sample size for each step based on experience, because each step poses a different particle transport problem. If a fixed, sufficiently large sample size is set for all steps, which is the original sample-size setting method of the JUPITER platform, much computation may be wasted, because the efficiency flattens once the sample size grows beyond a certain level. Experience-based methods are therefore inefficient. Moreover, because the Monte Carlo simulation provides several feedback quantities with very different fluctuation levels, a sample size controlled by the relative error of one tally may be insufficient, or excessively large, for another tally; hence, error-based methods [4] are also unsuitable. Because time-dependent particle transport calculations remain expensive even with the rapid progress of supercomputers, Monte Carlo simulation still needs an adaptive strategy that sets the sample size scientifically.
The remainder of this paper is organized as follows. The second section describes all aspects of the sample-size adaptive strategy, which is based on an extension of the Shannon entropy concept. The third section presents representative numerical results for two models. The final section provides concluding remarks.
Sample size adaptive strategy
Research basics
Entropy is a notion that exists in both physics and mathematics. Clausius is considered the father of the concept, and important contributions to its development were made by Boltzmann, Gibbs, and Planck. Mathematicians, including Shannon, Kolmogorov, Sinai, Rényi, Holevo, and Tsallis, have also studied the concept extensively; for the history of the different paths to the entropy concept, see, e.g., [5-7]. Consequently, there are various definitions of entropy. In this study, the entropy used is the one proposed by Shannon [8], which has found novel applications in various fields [9, 10]. For example, Ref. [9] showed that the Shannon entropy is an efficient dynamical indicator that provides a direct measure of the diffusion rate and, thus, a timescale for the instabilities arising when dealing with chaos.
It is worth pointing out that the Shannon entropy was first introduced into Monte Carlo particle transport simulation as a posterior indicator of the convergence of the fission source distribution in eigenvalue calculations of nuclear systems [11-14]. Suppose a nuclear system is divided into N cells and contains M fission neutrons at some moment. Let $m_i$ be the number of fission neutrons in cell i, so that $\sum_i m_i = M$. With $p_i = m_i/M$, the Shannon entropy of the fission source distribution is

$$H = -\sum_{i=1}^{N} p_i \log_2 p_i,$$

which stabilizes around a stationary value as the fission source distribution converges.
In recent years, there have been attempts to develop on-the-fly judgment methods [15-18] for the convergence of the fission source distribution, which belongs to the time-independent particle transport problem. It has been noted [19] that the Shannon entropy can be a problematic indicator of fission source convergence in slowly converging systems. Fortunately, such problems are outside the scope of this study.
Analysis for a reasonable sample-size adaptive strategy
Two important facts should be considered. (1) The surviving particles of the current step must be transferred to the next step as an external source; the attributes of these particles are samples of an unknown distribution that changes from step to step, which we call the survival particle distribution for convenience. (2) The Monte Carlo simulation of particle transport must provide global tallies (one tally per cell) to the mechanical calculation as a source term. These facts are common to many multiphysics coupling processes, so it is worthwhile to develop a sample-size adaptive strategy that balances small errors against calculation cost. The basic idea of this study is as follows. Suppose the sufficiently large sample size of one step is divided into batches, each containing a fixed number of samples; by the original plan, the Monte Carlo simulation would finish all batches one at a time. However, if, after some of these batches have finished, the number of surviving particles is sufficient and the global tallies have converged according to some judgment rules, the remaining batches can be omitted, because the batches already finished suffice to obtain reasonable results. This is justified by the observation that, when a step stops in advance, the global efficiency of the global tallies, measured by some index, is already flat over the batches that have been finished.
To realize these ideas, three questions must be answered: first, whether the surviving particles are sufficient; second, by which indicator the global tallies can be judged to have converged sufficiently; and third, which on-the-fly diagnostic can be used to stop the Monte Carlo simulation in advance. The solutions, all based on the Shannon entropy concept, are presented in the following three subsections.
Indicator for judging whether survival particles are sufficient
Based on the definition of the Shannon entropy, an indicator can be designed to show whether the number of surviving particles is sufficient. Suppose the system contains N cells and G energy groups, so that the total number of phase-space grids is NG. Let $m_i$ be the number of surviving particles in phase-space grid i and $p_i = m_i/\sum_{j=1}^{NG} m_j$ the corresponding fraction; the entropy of the survival particle distribution is then

$$H_t = -\sum_{i=1}^{NG} p_i \log_2 p_i.$$

When the surviving particles sample their underlying distribution sufficiently, $H_t$ stabilizes around a fixed value as more batches are accumulated.
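A minimal sketch of this computation (the binning layout and all names here are illustrative, not the platform's API):

```python
import numpy as np

def shannon_entropy(weights):
    """Shannon entropy (base 2) of a nonnegative weight vector.

    Empty bins contribute zero, following the convention 0*log(0) = 0.
    """
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    if total <= 0.0:
        return 0.0
    p = w[w > 0.0] / total
    return float(-(p * np.log2(p)).sum())

# Illustrative use: bin surviving particles over N cells x G groups,
# then evaluate the survival-distribution entropy H_t.
N, G = 600, 30                                # hypothetical grid sizes
rng = np.random.default_rng(1)
cells = rng.integers(0, N, size=100000)       # cell index of each survivor
groups = rng.integers(0, G, size=100000)      # energy-group index
w = np.zeros(N * G)
np.add.at(w, cells * G + groups, 1.0)         # unit-weight particles
H_t = shannon_entropy(w)
```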
Indicator for judging whether global tallies have converged sufficiently
When particle transport is coupled with other processes, the Monte Carlo simulation must provide feedback quantities to those processes as inputs. The precision of these inputs, known as tallies in the Monte Carlo method, significantly influences the confidence of the coupling calculation. In most cases, these quantities can be expressed as integrals of the flux with corresponding response functions. Therefore, if the global flux tallies have converged sufficiently, the feedback quantities are credible to a corresponding degree. Focusing on global flux tallies also avoids dealing with the different types of tallies that arise in different multiphysics processes. If $F_i$ is the flux tallied in cell i and $p_i = F_i/\sum_{j=1}^{N} F_j$, the entropy of the global flux distribution is

$$H_f = -\sum_{i=1}^{N} p_i \log_2 p_i,$$

which likewise stabilizes as the global tallies converge.
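Reusing the helper above, $H_f$ follows directly from the per-cell flux array (the flux values here are random placeholders; in practice they are the accumulated Monte Carlo flux estimates of the finished batches):

```python
flux = rng.random(N)          # hypothetical per-cell flux tallies, length N
H_f = shannon_entropy(flux)   # normalization to p_i = F_i / sum_j F_j happens inside
```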
On-the-fly diagnostic for judging whether an entropy sequence has converged
Based on the stochastic oscillator indicator, which is commonly used in the technical analysis of financial markets, a posterior diagnostic method has been proposed to assess fission source convergence in Monte Carlo criticality calculations [18]. For an entropy sequence $\{H_n\}$, the stochastic oscillator indicator at the nth iteration step is defined over a look-back window of p steps as

$$s_n = \frac{H_n - \min_{n-p+1 \le k \le n} H_k}{\max_{n-p+1 \le k \le n} H_k - \min_{n-p+1 \le k \le n} H_k},$$

and is smoothed by the moving average

$$\mu_n = \frac{1}{m}\sum_{k=n-m+1}^{n} s_k .$$

While the sequence is still drifting, $s_n$ clings to 0 or 1; once the sequence merely fluctuates around a stationary value, $s_n$ wanders over $[0,1]$ and $\mu_n$ approaches 1/2. Convergence is therefore declared at the first n for which

$$\left|\mu_n - \tfrac{1}{2}\right| < \epsilon ,$$

where ϵ is a user-specified tolerance.
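A minimal sketch of this diagnostic (parameter and function names are illustrative):

```python
def stochastic_oscillator_converged(H, p, m, eps):
    """On-the-fly convergence test for an entropy sequence H.

    Computes the stochastic oscillator s_n over a look-back window of p
    entries, averages the last m values of s_n, and reports convergence
    when that average lies within eps of 1/2. Needs at least p + m - 1
    entries before it can return True.
    """
    n = len(H)
    if n < p + m - 1:
        return False
    s = []
    for k in range(n - m + 1, n + 1):    # the last m oscillator values
        window = H[k - p:k]              # p most recent entries ending at k
        lo, hi = min(window), max(window)
        # A flat window means the sequence is (locally) constant.
        s.append(0.5 if hi == lo else (window[-1] - lo) / (hi - lo))
    mu = sum(s) / m
    return abs(mu - 0.5) < eps
```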
Description of the adaptive strategy
The entire mechanism for automatically adjusting the sample size is summarized as follows. In each step of the time-dependent Monte Carlo particle transport simulation, a sufficiently large sample size is initially set and divided into M batches. After each batch finishes, new values of $H_t$ and $H_f$ are obtained. Once the number of finished batches exceeds m+p-1, the on-the-fly diagnostic is invoked on both entropy sequences to decide whether the Monte Carlo simulation of this step can be stopped in advance; only when both judgments return true is the step stopped. If step L is stopped in advance, there will not be enough surviving particles for step L+1, although the current surviving particles can be regarded as a sufficient sample of the survival particle distribution. In this case, we randomly resample from the current surviving particle bank to supply the additional source particles for step L+1; this operation is reasonable on probability-theoretic grounds. Note that in parallel computing, the frequent calculation of these entropies should use the method proposed in [21]; otherwise, the time cost of obtaining the entropies would offset the benefit of the sample-size adjustment. In that method, each process uses its local data to calculate a local entropy, and the average of the entropies over all processes is taken as the final entropy value, so that massive data broadcasts are avoided. It can be verified that as the sample size approaches infinity, this entropy coincides with the original entropy; even with a finite sample size, it can still indicate the convergence of the survival particle distribution or the global tallies. Although a statistical estimation method for the Shannon entropy exists [22], we have not adopted it because of its complexity; it will be the focus of future studies.
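A sketch of one time step under this strategy, reusing the two helpers above (the batch object with its run(), survivors(), survivor_bin_weights(), and flux_tallies() methods is a hypothetical stand-in for the platform's internals, not its real API):

```python
import random

def run_transport_step(batches, p, m, eps, min_survivors):
    """Run one step's Monte Carlo transport with early stopping.

    `batches` is an iterable of batch objects; each run() advances the
    simulation, and the accessors return the state accumulated so far.
    """
    Ht_seq, Hf_seq = [], []
    survivors = []
    for n, batch in enumerate(batches, start=1):
        batch.run()
        survivors = batch.survivors()                  # accumulated bank
        Ht_seq.append(shannon_entropy(batch.survivor_bin_weights()))
        Hf_seq.append(shannon_entropy(batch.flux_tallies()))
        if (n > p + m - 1
                and stochastic_oscillator_converged(Ht_seq, p, m, eps)
                and stochastic_oscillator_converged(Hf_seq, p, m, eps)):
            break                                      # stop this step early
    # If stopped early, the bank is a fair sample but may be too small;
    # resample with replacement to supply source particles for step L+1.
    while survivors and len(survivors) < min_survivors:
        survivors.append(random.choice(survivors))
    return survivors
```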
Four annotations are worthwhile. (1) When an under-sampling problem [23] exists for a specific model, an insufficient history number causes the Shannon entropy of the survival particle distribution to fluctuate more drastically; the convergence inequality $|\mu_n - 1/2| < \epsilon$ is then hard to satisfy, and the calculation will not be stopped in advance. (2) Theoretically, the method may fail for some symmetric or oscillating models. However, strictly symmetric or oscillating models are rare in time-dependent particle transport simulation, so this is not a serious issue. (3) The Fourier fundamental mode coefficient behaves similarly to the traditional Shannon entropy; therefore, the recently proposed Fourier diagnostic [24] could substitute for the Shannon entropy diagnostic. Because we adopted an efficient method for calculating the Shannon entropy in MPI parallel environments and did not perform a numerical comparison, we do not know which is better. (4) The recently proposed batch-size growth methods [25, 26] are consistent in spirit with our method. However, our work aims to improve the efficiency of time-dependent particle transport simulations within multiphysics calculations, whereas those two studies aim to improve the efficiency of time-independent criticality calculations, and the difficulties differ between the two types of problems. The concepts proposed in those studies should, in principle, be transplantable; their performance can only be compared in future numerical experiments.
Numerical results
It is worth emphasizing again that the goal of this sample-size adaptive strategy is to avoid unnecessary calculation once the results are sufficiently good as evaluated by some standard. In each step, the original sample size is divided into several batches, each simulated individually according to the original plan; we expect the strategy to stop in advance once the efficiency, evaluated by that standard, becomes flat. Because of the inevitable stochastic nature of the strategy, the stopping point cannot be captured precisely. However, capturing it approximately still saves considerable calculation time while keeping the results nearly unchanged compared with the original case. The two models discussed in the following subsections demonstrate this effect. The Arbitrary Lagrangian-Eulerian (ALE) algorithm was used for the mechanical calculations.
Model one
Model one was a sector piece with an opening angle of 2 and a radius of 6.45 cm, consisting of two concentric layers. Each layer is composed of a mixture of 235U and 238U, but with different densities. In total, 600 cells were used. The initial sample size was 12,800,000 samples per step, which is sufficiently large for this model; these samples were divided into 640 batches of 20,000 samples each. The calculations used 32 cores in total. We performed two independent calculations with ϵ = 0.1 and ϵ = 0.05, with all other conditions identical. Figure 1 shows how the neutron number (Np) in the model changes with time for the three cases (without sample-size adaptation, and with adaptation under the two ϵ values). Table 1 compares the neutron numbers (Np) at the end of the calculations and the time costs for these three cases. The proposed sample-size adaptive strategy significantly decreases the calculation time while leaving the result almost unchanged. With the stricter judgment rule, ϵ = 0.05, the result is closer to the original case, but the reduction in calculation time is smaller. For a more detailed comparison, we randomly chose a step to examine whether the adaptive strategy stops the calculation in advance once the efficiency flattens. The efficiency was evaluated with the following index:
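One standard index of this type, stated here in its conventional form as a stand-in for the exact formula, is the global figure of merit

$$\mathrm{FOM} = \frac{1}{\bar{R}^2\,T}, \qquad \bar{R}^2 = \frac{1}{N}\sum_{i=1}^{N} R_i^2,$$

where $R_i$ is the relative error of the flux tally in cell i and T is the elapsed computing time. The FOM approaches a constant as batches accumulate, and the strategy should stop near the point where it flattens.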
Table 1. Comparison of Np at the end of the calculation and time costs for model one.

| Case | Np | Time cost (hours) |
|---|---|---|
| Without sample-size adaptation | 2.058×10²⁴ | 1.75 |
| Sample-size adaptation, ϵ = 0.1 | 2.049×10²⁴ | 0.93 |
| Sample-size adaptation, ϵ = 0.05 | 2.059×10²⁴ | 1.73 |
(Fig. 1. Neutron number Np versus time for the three cases: without sample-size adaptation and with adaptation under ϵ = 0.1 and ϵ = 0.05.)
Model two
Model two is a cuboid containing 200,000 hexahedral cells. It is divided into three layers composed of four uranium isotopes (234U, 235U, 236U, 238U), with different densities in each layer. The mechanical calculation is performed in one step, and the Monte Carlo particle transport simulation is iterated for 100 steps to calculate the eigenvalue λ reported in Table 2.
Table 2. Comparison of λ and time costs for model two.

| Case | λ | Time cost (hours) |
|---|---|---|
| Without sample-size adaptation | 3.7727 (0.0238) | 3.04 |
| Sample-size adaptation, ϵ = 0.1 | 3.7709 | 0.67 |
Conclusion
Based on an extension of the Shannon entropy concept, two indicators were designed to show whether the samples of the survival particle distribution are sufficient and whether the global flux tallies have converged sufficiently. An on-the-fly diagnostic method, based on the stochastic oscillator indicator, was proposed to judge the convergence of these two indicators without manual intervention, thus forming a complete adaptive strategy. To our knowledge, this is the first time the sample size can be adjusted according to definite and reasonable rules. The results for the two models in the previous section show that this self-adjustment mechanism maintains reasonable precision while reducing the calculation time. The indicators and the mechanism can be easily generalized to other multiphysics calculations involving time-dependent Monte Carlo particle transport simulations.
References

CMGC - a CAD to Monte Carlo geometry conversion code. Nucl. Sci. Tech. 31, 82 (2020). doi: 10.1007/s41365-020-00793-8
A high fidelity general purpose 3-D Monte Carlo particle transport program JMCT3.0. Nucl. Sci. Tech. 33, 108 (2022). doi: 10.1007/s41365-022-01092-0
On Clausius, Boltzmann and Shannon notions of entropy. J. Mod. Phys. 7, 219-227 (2016). doi: 10.4236/jmp.2016.72022
The different paths to entropy. Eur. J. Phys. 34, 303-321 (2013). doi: 10.1088/0143-0807/34/2/303
Inequalities for Shannon entropy and Kolmogorov complexity. J. Comput. Syst. Sci. 60, 442-464 (2000). doi: 10.1006/jcss.1999.1677
A mathematical theory of communication. Mob. Comput. Commun. Rev. 5, 3-55 (2001). doi: 10.1145/584091.584093
The Shannon entropy: an efficient indicator of dynamical stability. Physica D 417, 132816 (2021). doi: 10.1016/j.physd.2020.132816
Dynamical Shannon entropy and information Tsallis entropy in complex systems. Physica A 341, 649-676 (2004). doi: 10.1016/j.physa.2004.03.094
Stationarity diagnostics with relative entropy and Wilcoxon signed rank in iterated-source Monte Carlo methods. Nucl. Sci. Eng. 160, 242 (2008). doi: 10.13182/NSE160-242
Stationarity modeling and informatics-based diagnostics in Monte Carlo criticality calculations. Nucl. Sci. Eng. 148, 38 (2005). doi: 10.13182/NSE04-15
Forced propagation method for Monte Carlo fission source convergence acceleration in the RMC. Nucl. Sci. Tech. 32, 27 (2021). doi: 10.1007/s41365-021-00868-0
On-the-fly diagnostics of particle population in iterated-source Monte Carlo methods. Nucl. Sci. Eng. 158, 15-27 (2008). doi: 10.13182/NSE08-A2735
The sandwich method for determining source convergence in Monte Carlo calculation. J. Nucl. Sci. Technol. 41, 559 (2004). doi: 10.1080/18811248.2004.9715519
Deterministic truncation of the Monte Carlo transport solution for reactor eigenvalue and pinwise power distribution. Nucl. Sci. Eng. 194, 14-31 (2020). doi: 10.1080/00295639.2019.1654815
Autocorrelation and dominance ratio in Monte Carlo criticality calculations. Nucl. Sci. Eng. 145, 279-290 (2003). doi: 10.13182/NSE03-04
New strategy for global tallying in Monte Carlo criticality calculation. Acta Phys. Sin. 68, 122801 (2019). doi: 10.7498/APS.68.20182276
Efficient method of calculating Shannon entropy of non-static transport problem in message passing parallel programming environment. Acta Phys. Sin. 65, 142801 (2016). doi: 10.7498/aps.65.142801
Statistical estimation of the Shannon entropy. Acta Math. Sin. (English Series) 35, 17-46 (2019). doi: 10.1007/s10114-018-7440-z
Biases in the estimation of Keff and its error by Monte Carlo methods. Ann. Nucl. Energy 13, 63-83 (1986). doi: 10.1016/0306-4549(86)90095-2
Fission source stationarity diagnostics using the Fourier fundamental mode coefficient. Prog. Nucl. Energy 146, 104164 (2022). doi: 10.1016/j.pnucene.2022.104164
Single-step Monte Carlo criticality algorithm. Comput. Phys. Commun. 279, 108439 (2022). doi: 10.1016/j.cpc.2022.108439
Optimal batch size growth for Wielandt method and superhistory method. Nucl. Sci. Eng. 196, 183-192 (2022). doi: 10.1080/00295639.2021.1968223
Redevelopment of shielding module and research on advanced variance reduction methods based on RMC code. Ph.D. thesis.