1. Introduction
With the increase in the computing power of high-performance computing platforms, Monte Carlo neutronics-thermal hydraulics coupling has become an attractive approach for obtaining accurate results in reactor design and analysis. Traditional linear interpolation with point-wise nuclear data requires a large amount of memory to provide temperature-dependent microscopic cross-sections for such simulations [1]. On-the-fly Doppler broadening methods have been introduced to reduce this memory cost and enable thermal-hydraulics coupled reactor analysis [2], and several have been proposed to meet both the efficiency and memory requirements. Three methods, namely Target Motion Sampling (TMS) [3,4], windowed multipole [5], and a method based on regression models and fitting [6], have been proposed for evaluating cross-sections across the resolved resonance energy range. Walsh studied an on-the-fly Doppler broadening method for the unresolved resonance energy range [7], and Pavlou and Ji proposed a method for the thermal energy range [8].
For heavy nuclides such as U-235, the data in the resolved resonance energy range account for most of the nuclear data. Among the methods applicable in this range, the TMS method requires cross-section data at a given temperature, such as 0 K [9,10], while the windowed multipole method divides the resolved resonance energy range into energy windows and uses approximations to evaluate cross-sections [1]. Yesilyurt et al. used thirteen parameters fitted to 0 K cross-sections to perform Doppler broadening at any temperature in the range of 77 K to 3200 K [6]; this method was further developed by Liu et al., who made the number of parameters flexible [11]. Both the TMS method and the method of Yesilyurt et al. require ACE data in their processes, and the latter requires additional memory for storing the broadening parameters. The range of nuclides to which the windowed multipole method is applicable is limited.
In this paper, a new on-the-fly Doppler broadening method based on BP neural networks, called the hybrid windowed networks (HWN) method, is proposed. Over the resolved resonance energy range, the total memory requirement is reduced by approximately 65% compared with the ACE data, at the expense of some efficiency. BP neural networks are used to evaluate the cross-sections. The neural network method, a type of machine learning, simulates the structure of biological neurons by establishing artificial neural networks [12]. By relying mainly on multiple layers of neuron nodes with various weights and biases, neural networks can solve complicated problems such as image [13] or audio [14] processing. Neural networks with simple structures also exhibit satisfactory performance in data fitting [15]. The structures and training parameters of the networks reported herein were carefully chosen to meet the needs of cross-section training.
In this method, the resolved resonance energy range is divided into windows. The networks trained for each window can be used independently so that the method can be easily combined with other on-the-fly Doppler broadening methods. The application range of this method can be set by the users to avoid unacceptable losses of efficiency.
The results confirm the feasibility of evaluating complex cross-section parameters through the use of neural networks. The potential of neural networks for memory saving is demonstrated in this work. Neural networks can be used to evaluate some of the parameters in the calculation of other developed on-the-fly Doppler broadening methods. Larger memory savings and higher accuracy might be achieved by incorporating the physics of Doppler broadening into the method.
The principle of BP neural networks and the HWN method are introduced in Sect. 2. In Sect. 3, the results of numerical tests conducted to verify the effectiveness, accuracy, and efficiency of the method are reported and discussed.
2. HWN Method
In the HWN method, the resolved resonance energy range is divided into energy windows based on the number of extreme points in the cross-section. In each window, two BP networks are used to calculate the cross-section at the 200 K base temperature and to broaden it to any temperature in the range of 250 K to 1600 K. ACE data are used to train the BP networks. The networks for each window can be used independently; thus, the scope of the method can be set easily.
Section 2.1 reviews the principle of BP neural networks. Section 2.2 describes the training and calculation process of the two networks within a window. Section 2.3 introduces the division of the resolved energy range and the parameter determination process.
2.1. BP Neural Network
An efficient and memory-saving method for evaluating cross-sections is needed for on-the-fly Doppler broadening. Neural networks, which are widely used in machine learning, need to be trained before they are used. Once the parameters of the networks are determined, the networks can be used to calculate the output from the given input. After training, the amount of calculations required is greatly reduced; therefore, this method can be used for on-the-fly Doppler broadening. A combination of the input and expected results is used in the training process, during which the weights and biases of all the neurons are adjusted. The deviation between the outputs of the network and the expected results is gradually reduced during training until the network meets the requirements.
Many studies on neural networks such as convolutional neural networks [16] and deep neural networks [17] have been conducted. Such networks have many hidden layers and neurons, as well as complex structures, resulting in low computational efficiencies. Because computing speed is important in Monte Carlo codes, a neural network with a simple structure is more suitable.
In this study, the back propagation (BP) network was used. The error back propagation algorithm proposed by Rumelhart [18], introduced in the following paragraphs, is used for training this network. As shown in Fig. 1, a BP network consists of an input layer, hidden layers, and an output layer. Each layer contains a certain number of neurons. The simplicity of the structure and calculation steps of this type of network ensures its high computational efficiency.
The hidden and output layer nodes of the BP neural network are respectively described by

$$y_j^{h} = f\!\left(\sum_{i} w_{ij} x_i + b_j^{h}\right), \tag{1}$$

$$y_k^{o} = g\!\left(\sum_{j} v_{jk} y_j^{h} + b_k^{o}\right). \tag{2}$$

In the equations, $x_i$ is the $i$-th input of a node, $w_{ij}$ and $v_{jk}$ are the weights of the hidden and output layer nodes, $b_j^{h}$ and $b_k^{o}$ are the corresponding biases, and $f$ and $g$ are the transfer functions of the hidden and output layers, respectively.
The weights and biases of the BP neural network are generally expressed in matrix form in calculations. Equations (1) and (2) are therefore written as Eqs. (3) and (4), respectively:

$$\mathbf{y}_h = f\!\left(\mathbf{W}_h \mathbf{y}_{\mathrm{pre}} + \mathbf{b}_h\right), \tag{3}$$

$$\mathbf{y}_o = g\!\left(\mathbf{W}_o \mathbf{y}_h + \mathbf{b}_o\right). \tag{4}$$

The subscript "pre" denotes the previous layer, which can be a hidden layer or the input layer, and the subscripts "h" and "o" denote the hidden and output layers, respectively. In Eq. (3), $\mathbf{W}_h$ is the weight matrix and $\mathbf{b}_h$ is the bias vector of the hidden layer; $\mathbf{W}_o$ and $\mathbf{b}_o$ in Eq. (4) are defined analogously for the output layer.
The neural networks in this study were trained using MATLAB. The output of the neural networks approached the target value over successive iterations. All the weights and biases in the network were adjusted during the training process, while the structure of the network, including the number of hidden layers and nodes in each layer, and the transfer function, remained unchanged.
The error back propagation algorithm was used for training the network: the error between the target in the training data and the corresponding output calculated by the network for a given input is propagated back to the parameters of each layer. The process is briefly described as follows.
The set of input vectors and the corresponding target vectors are denoted as

$$P = \{p_1, p_2, \ldots, p_n\}, \tag{5}$$

$$T = \{t_1, t_2, \ldots, t_n\}. \tag{6}$$

The squares of the errors between the network outputs for $P$ and the targets $T$ are summed to define $R$, the error of the network:

$$R = \frac{1}{2} \sum_{k} \left(t_k - y_k^{o}\right)^2. \tag{7}$$

The factor $1/2$ is introduced to simplify the subsequent derivation.
The goal of the training is to reduce this error. $R$ is therefore expanded using Eq. (4) into the node parameters of the output layer as follows:

$$R = \frac{1}{2} \sum_{k} \left(t_k - g\!\left(\sum_{j} v_{jk} y_j^{h} + b_k^{o}\right)\right)^2. \tag{8}$$

Equation (8) can be further expanded using Eq. (1) as follows:

$$R = \frac{1}{2} \sum_{k} \left(t_k - g\!\left(\sum_{j} v_{jk}\, f\!\left(\sum_{i} w_{ij} x_i + b_j^{h}\right) + b_k^{o}\right)\right)^2. \tag{9}$$
The expansion ends here if the network has one hidden layer; otherwise, $R$ can be expanded layer by layer. The algorithm is illustrated here for a one-hidden-layer network whose output layer uses the linear transfer function

$$g(x) = x. \tag{10}$$

The partial derivatives of $R$ in Eq. (8) are then

$$\frac{\partial R}{\partial v_{jk}} = -\left(t_k - y_k^{o}\right) y_j^{h}, \tag{11}$$

$$\frac{\partial R}{\partial b_k^{o}} = -\left(t_k - y_k^{o}\right). \tag{12}$$
These partial derivatives are called the gradient values of the error. The parameters are updated using the gradient values in each training iteration as follows:

$$v_{jk} \leftarrow v_{jk} - \eta \frac{\partial R}{\partial v_{jk}}, \tag{13}$$

$$b_k^{o} \leftarrow b_k^{o} - \eta \frac{\partial R}{\partial b_k^{o}}, \tag{14}$$

where $\eta$ is the learning rate, which controls the step size of each update. The results of Eqs. (13) and (14) for all the parameters can be collectively expressed in matrix form as

$$\mathbf{W}_o \leftarrow \mathbf{W}_o - \eta \nabla_{\mathbf{W}_o} R, \qquad \mathbf{b}_o \leftarrow \mathbf{b}_o - \eta \nabla_{\mathbf{b}_o} R. \tag{15}$$

The training of the other parameter matrices can be described in a similar manner. The error influences the parameter adjustment of each layer through the result deviation $(t_k - y_k^{o})$, which is propagated backward layer by layer.
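The update rules above can be sketched numerically. The following is a minimal one-hidden-layer BP network trained by full-batch gradient descent on a toy target; the layer size, learning rate, epoch count, and target function are illustrative choices, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: fit y = sin(x) on [0, pi].
X = np.linspace(0.0, np.pi, 64).reshape(-1, 1)
T = np.sin(X)

n_hidden = 16
W_h = rng.normal(scale=0.5, size=(1, n_hidden))   # input -> hidden weights
b_h = np.zeros(n_hidden)                          # hidden biases
W_o = rng.normal(scale=0.5, size=(n_hidden, 1))   # hidden -> output weights
b_o = np.zeros(1)                                 # output bias
eta = 0.05                                        # learning rate (Eqs. 13-15)

def tansig(x):
    return np.tanh(x)  # tansig is mathematically identical to tanh

for epoch in range(20000):
    # Forward pass, Eqs. (3)-(4), with a linear output layer g(x) = x.
    y_h = tansig(X @ W_h + b_h)
    y_o = y_h @ W_o + b_o
    err = y_o - T                               # result deviation (output - target)
    # Backward pass: gradients of R = 0.5 * sum(err^2), Eqs. (11)-(12).
    grad_Wo = y_h.T @ err
    grad_bo = err.sum(axis=0)
    delta_h = (err @ W_o.T) * (1.0 - y_h ** 2)  # tanh'(x) = 1 - tanh(x)^2
    grad_Wh = X.T @ delta_h
    grad_bh = delta_h.sum(axis=0)
    # Gradient-descent update, Eqs. (13)-(15), averaged over the batch.
    W_o -= eta * grad_Wo / len(X)
    b_o -= eta * grad_bo / len(X)
    W_h -= eta * grad_Wh / len(X)
    b_h -= eta * grad_bh / len(X)

max_err = np.max(np.abs(tansig(X @ W_h + b_h) @ W_o + b_o - T))
```

The same loop structure underlies MATLAB's training functions; only the update rule (here plain gradient descent) differs from more sophisticated optimizers.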
2.2. Training and Calculation
This section describes how the two networks were trained and used in an energy window. The method for dividing the resolved resonance energy range is introduced in the next section. Both the energy division and the sequential computation using two networks are designed to reduce the complexity of the training target.
A temperature is chosen as the base temperature for the cross-section of a nuclide; in this work, the base temperature is 200 K. The cross-section at this temperature, denoted as $\sigma_{\mathrm{base}}(E)$, is the basis from which the cross-section at any other temperature is obtained.
The HWN method is similar to the method proposed by Yesilyurt et al. [6], which also uses fitted parameters for on-the-fly Doppler broadening. However, the broadening coefficient in the HWN method is evaluated by a neural network instead of being stored as a large set of fitting parameters, which reduces the memory requirement.
To reduce the memory required, a network denoted as Network 1 is trained to calculate the cross-section at the base temperature from the incident energy $E$, where the output of Network 1, denoted as $\tilde{\sigma}_{\mathrm{base}}(E)$, replaces the base-temperature cross-section interpolated from the ACE data.
Network 2 is trained to calculate the broadening coefficient, defined as the ratio of the cross-section at the target temperature to the base-temperature cross-section:

$$K(E, T) = \frac{\sigma(E, T)}{\sigma_{\mathrm{base}}(E)}. \tag{18}$$
To reduce the error, the broadening coefficient actually used as the training target is defined with the Network 1 output in the denominator:

$$K(E, T) = \frac{\sigma(E, T)}{\tilde{\sigma}_{\mathrm{base}}(E)}. \tag{20}$$
The difference between Eqs. (20) and (18) is that the cross-section at the base temperature in the denominator of Eq. (20) is the value evaluated by Network 1, $\tilde{\sigma}_{\mathrm{base}}(E)$, rather than the value interpolated from the ACE data; the error of Network 1 is thereby compensated in the training target of Network 2.
By rewriting Eq. (20), the equation for on-the-fly Doppler broadening is obtained as

$$\sigma(E, T) = K(E, T)\, \tilde{\sigma}_{\mathrm{base}}(E). \tag{22}$$
Equation (22) describes the Doppler broadening process: Network 1 is first used to calculate the cross-section at the given energy $E$ and the base temperature, Network 2 is then used to calculate the broadening coefficient $K(E, T)$ for the target temperature $T$, and the product of the two gives the broadened cross-section.
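The two-network evaluation chain of Eq. (22) can be sketched as follows. Here `net1` and `net2` are hypothetical stand-ins for the trained window networks, with analytic placeholder shapes chosen only so the example runs; by construction the placeholder `net2` returns exactly 1 at the base temperature.

```python
import math

def net1(E):
    # Placeholder for Network 1: sigma_base(E) at the 200 K base temperature
    # (an assumed resonance-like shape, not real U-235 data).
    return 10.0 + 5.0 * math.exp(-((E - 6.67) ** 2) / 0.01)

def net2(E, T):
    # Placeholder for Network 2: broadening coefficient K(E, T),
    # equal to 1 at the base temperature by construction.
    return 1.0 + 0.02 * (T - 200.0) / 200.0 * math.cos(50.0 * E)

def sigma(E, T):
    """On-the-fly Doppler-broadened cross-section, Eq. (22)."""
    return net2(E, T) * net1(E)

# At the base temperature the chain reduces to Network 1 alone.
assert sigma(6.67, 200.0) == net1(6.67)
```

In the actual implementation, each energy window carries its own pair of trained networks, and the evaluation selects the pair for the window containing $E$.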
The training temperature range of 250 K to 1600 K was chosen according to the fields of study associated with each temperature range, which are listed in Table 1.

Table 1. Temperature ranges and corresponding fields of study

| Temperature range (K) | Field of study |
|---|---|
| 77 – 293.6 | Cold neutron physics |
| 293.6 – 550 | Benchmarking calculations |
| 550 – 1600 | Reactor operation |
| 1600 – 3200 | Accident conditions |
2.3. Window Division and Parameter Determination
The cross-section curve in the resolved resonance energy range of heavy nuclides at a single temperature is very complex because of the large number of resonance peaks. The addition of the temperature dimension further increases the complexity. Therefore, it is impractical to train the neural network directly using the ACE data over the entire resolved resonance energy range as the input.
Dividing the resolved resonance energy range into energy windows can significantly reduce the difficulty of training in each window. Because similar processes are carried out for each window, the data should be divided evenly according to the training difficulty. Dividing the resonant peaks equally between the different windows is an easy and effective method. Each window has the same number of maximum points that correspond mainly to the positions of the resonance peaks within the resolved resonance energy range. The edges of the windows are set as the minimum points of the cross-section curve.
Because of the Doppler broadening effect, the resonance peaks are broadened at temperatures above 0 K. At the base temperature of 200 K, the peaks are therefore smoother than at 0 K, which reduces the difficulty of the subsequent training.
The number of maximum points in each window was carefully determined. A smaller number of points per window would result in a larger number of windows and a corresponding increase in the required memory. Conversely, if a large number of peaks were included, the amount of data in each window would be extremely large, and the accuracy of the networks would decrease. The number of points was set to 15 based on experience and testing. The aforementioned process was applied to the total cross-section curve at the base temperature of 200 K.
Relatively large deviations near the boundaries of the windows were often observed during network training. To avoid this, the windows were extended along both edges, as shown in Fig. 2. The data in the extended window were used for neural network training, whereas the acquired neural network was used only within the range of the original window. The windows next to the unresolved resonance energy range and the thermal energy range were extended only in the direction of the resolved resonance energy range.
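The division rule described above can be sketched as follows: scan the local maxima of the cross-section curve and close a window at the first local minimum after every 15th peak. The synthetic cosine curve stands in for real point-wise ACE data, and the window-extension step is omitted for brevity.

```python
import numpy as np

def divide_windows(xs, peaks_per_window=15):
    """Split a cross-section curve into windows that each contain
    `peaks_per_window` local maxima, with window edges placed at the
    local minima of the curve (a sketch of the division rule above)."""
    is_max = (xs[1:-1] > xs[:-2]) & (xs[1:-1] > xs[2:])
    is_min = (xs[1:-1] < xs[:-2]) & (xs[1:-1] < xs[2:])
    max_idx = np.flatnonzero(is_max) + 1   # indices of local maxima
    min_idx = np.flatnonzero(is_min) + 1   # indices of local minima
    edges, count = [0], 0
    for i in max_idx:
        count += 1
        if count == peaks_per_window:
            later = min_idx[min_idx > i]   # first minimum after the last peak
            if later.size:
                edges.append(int(later[0]))
                count = 0
    edges.append(len(xs) - 1)
    return edges

# Synthetic curve with 39 interior maxima: two full 15-peak windows are
# closed at minima, and the remaining peaks fall into a final window.
e = np.linspace(0.0, 1.0, 10001)
xs = np.cos(2.0 * np.pi * 40.0 * e)
edges = divide_windows(xs, peaks_per_window=15)
```

For a real nuclide, `xs` would be the total cross-section at the base temperature on its ACE energy grid, and the resulting `edges` define the window boundaries reused for all reaction channels.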
The training results were greatly affected by the structure of the neural networks and the training parameters. The network parameters were determined according to the test results.
A MATLAB training function with a fast convergence speed and good accuracy was used for network training. The transfer function of the hidden layer nodes is the tansig function in MATLAB, which is defined as

$$\mathrm{tansig}(x) = \frac{2}{1 + e^{-2x}} - 1.$$
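The tansig definition above is algebraically identical to the hyperbolic tangent, which a few lines of Python can confirm:

```python
import math

def tansig(x):
    """MATLAB's tansig transfer function as defined above."""
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

# tansig(x) = (1 - e^(-2x)) / (1 + e^(-2x)) = tanh(x) for all x.
for x in (-3.0, -0.5, 0.0, 1.2, 4.0):
    assert abs(tansig(x) - math.tanh(x)) < 1e-12
```

This equivalence is why implementations outside MATLAB can simply substitute `tanh` for the hidden-layer transfer function.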
A BP network with one hidden layer is sufficient for solving many problems. Networks with multiple hidden layers may show better performance in some cases [19]. A one-hidden-layer BP network was used for Network 1 (described in Sect. 2.2) to reduce the memory requirements. For Network 2, a two-hidden-layer network was used to avoid overfitting in the training of K(E, T).
Tests were performed to determine the number of neurons in each hidden layer. The number of neurons in the hidden layer of Network 1 was determined first. The total cross-section data of U-235 at 0 K were divided into 181 windows, and 19 equally spaced windows were selected. Networks with 80, 90, 100, 110, 120, and 130 nodes in the hidden layer were trained using the data in the windows. Each network was trained with the data from each window 10 times, and the minimum value of the maximum absolute relative error was calculated. The geometric means of the error values for the 19 windows are shown in Fig. 3.
In general, the accuracy of the network increased with the number of nodes, and it was at an acceptable level when the number of nodes was 130. As shown in Fig. 3, a further increase in the number of nodes had a limited effect on the improvement of accuracy. Networks with 130 nodes in the hidden layers were used for most windows. A larger number of nodes was used in a few windows in which the relative error was extremely large.
Training and comparisons were performed to determine the number of nodes in the two hidden layers of Network 2. A reasonable maximum epoch number, training time limit, and accuracy target were set for training. Data from the second window of the U-235 total cross-section were used to evaluate networks with different combinations of node numbers. The performance was determined using the percentage of data points where the absolute relative error with respect to the input data was less than 0.1%. The results are compared in Table 2. The results indicate that the best combination has 40 nodes in the first hidden layer and 20 nodes in the second hidden layer.
Table 2. Performance (%) of networks with different numbers of nodes in the two hidden layers (rows: first hidden layer; columns: second hidden layer)

| Nodes (first \ second) | 10 | 13 | 15 | 20 | 25 |
|---|---|---|---|---|---|
| 35 | 57.1 | 55.9 | 61.1 | 58.6 | 58.5 |
| 40 | 56.9 | 59.8 | 57.0 | 64.8 | 59.6 |
| 45 | 56.1 | 57.6 | 58.5 | 59.5 | 56.8 |
| 50 | 59.2 | 57.7 | 61.6 | 56.0 | 58.1 |
| 55 | 57.7 | 61.4 | 63.4 | 59.6 | 52.6 |
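The selection metric used in Table 2 is easy to state in code: the percentage of points whose absolute relative error against the training data is below 0.1%. The following is a minimal sketch of that scoring function with synthetic inputs.

```python
import numpy as np

def score(predicted, reference):
    """Percentage of points with absolute relative error below 0.1%,
    the network-selection metric described above."""
    rel_err = np.abs(predicted - reference) / np.abs(reference)
    return 100.0 * np.mean(rel_err < 1e-3)

# Hypothetical check: exactly half the points inside tolerance scores 50%.
ref = np.ones(10)
pred = ref.copy()
pred[5:] *= 1.01            # 1% error on five of the ten points
assert score(pred, ref) == 50.0
```

In the actual study, `predicted` would be the Network 2 output for a candidate node combination and `reference` the ACE-derived training target.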
3. Numerical Tests of HWN Method
The HWN method was implemented in the Reactor Monte Carlo (RMC) code [20] for on-the-fly Doppler broadening of U-235. Data from the ENDF/B-VII.0 library were processed by NJOY [21] with a reconstruction tolerance of 0.001 to obtain the training ACE data. The resolved resonance energy range was divided into 80 energy windows, and the networks were trained for each window. Cross-section data at 25 K intervals in the range of 250–1650 K were used to calculate the training target $K(E, T)$.
In Sect. 3.1, the accuracy and memory requirements of the HWN method are demonstrated by comparing the calculated microscopic cross-sections with the ACE data. In Sect. 3.2, the results of two macroscopic tests performed to verify the accuracy and efficiency of the method are presented.
3.1. Microscopic Accuracy and Memory Requirement Comparison
The HWN method was applied to U-235 and U-238, which are representative heavy nuclides with important resonances. The total cross-section, elastic scattering cross-section, and absorption cross-section of U-235, as well as the total cross-section of U-238, were compared with the ACE data at the same temperature. The cross-section results and the absolute relative errors with respect to the ACE data are plotted in Fig. 4; the comparisons were made in energy regions with strong resonances at 300 K and 700 K. The errors are low for most data points, and the evaluated cross-sections are accurate at both temperatures. The relative errors fluctuate with energy; such fluctuations are characteristic of neural networks with many nodes.
Both the HWN method and the windowed multipole method store cross-section data in new forms that do not rely on continuous-energy point-wise data; therefore, their memory requirements are lower than that of the ACE data. The relative errors of the HWN and windowed multipole [1] methods are compared in Table 3. The results show that the maximum relative errors are similar and that the average relative errors of both methods are within 0.1%.
Table 3. Comparison of relative errors of the HWN and windowed multipole methods

| Relative error | HWN method | Windowed multipole |
|---|---|---|
| Maximum relative error | ~1% | ~1% |
| Average relative error | <0.1% | <0.1% |
The theoretical memory consumption of the network parameters and ACE cross-section data are compared. In the continuous-energy Monte Carlo code using the ACE data, the cross-sections are stored in the form of double-precision floating-point numbers. Each double-precision floating-point number occupies eight bytes of memory. Both the energy grid and cross-section values are needed for cross-section evaluation. For the total cross-section, elastic scattering cross-section, and absorption cross-section, four double-precision floating-point numbers, which occupy 32 bytes of memory, are needed for each energy point. In comparison, most of the parameters in the HWN method are stored in the form of double-precision floating-point numbers, and a few parameters are integers. The memory consumptions of the two methods are calculated using the number of parameters and the memory space needed for each parameter.
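The counting rule above can be sketched as a small calculation. The 32-bytes-per-energy-point figure for ACE data comes from the text; the per-window network parameter count below assumes fully connected layers with the node counts chosen in Sect. 2.3 (Network 1: 1-130-1; Network 2 with inputs $E$ and $T$: 2-40-20-1), which is an illustrative reconstruction rather than the exact bookkeeping of the implementation.

```python
BYTES_PER_DOUBLE = 8

def ace_bytes(n_energy_points):
    # Energy grid + total, elastic, and absorption cross-sections:
    # four doubles (32 bytes) per energy point.
    return 4 * BYTES_PER_DOUBLE * n_energy_points

def network_bytes(n_in, hidden, n_out):
    # Weights and biases of a fully connected network, stored as doubles.
    sizes = [n_in] + list(hidden) + [n_out]
    params = sum(a * b + b for a, b in zip(sizes, sizes[1:]))
    return BYTES_PER_DOUBLE * params

# One window, three reaction channels: Network 1 (1-130-1) has 391
# parameters; Network 2 (2-40-20-1) has 961.
per_window = 3 * (network_bytes(1, [130], 1) + network_bytes(2, [40, 20], 1))
```

Summing `per_window` over the 80 windows and comparing with `ace_bytes` over the resolved-range grid reproduces the kind of comparison reported in Table 4.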
The ACE data at 0 K processed by NJOY were used for comparison. If on-the-fly Doppler broadening is not introduced to the Monte Carlo code, point-wise cross-sections at more than a dozen temperatures will be needed for thermal-hydraulics coupled analysis. The method proposed by Yesilyurt et al. uses parameters that require several times the memory needed for the ACE data at 0 K. Point-wise data are also needed in the TMS method. As a result, the HWN method shows a significant reduction of the memory requirement compared to the 0 K ACE data.
The comparison results are listed in Table 4. The results show that for the three cross-sections of U-235, the HWN method could reduce the memory requirements of cross-section data in the resolved resonance energy range by 66.1% as compared with the case of the 0 K ACE data. The memory requirement reduction was 65.9% over the entire energy range.
Table 4. Theoretical memory requirements of different cross-section storage schemes

| Cross-section storage scheme | Theoretical memory requirement (MB) | Optimization ratio (%) |
|---|---|---|
| 0 K ACE data in the resolved resonance energy range | 7.38 | - |
| HWN | 2.50 | 66.1 |
| 0 K ACE data over the whole energy range | 7.40 | - |
| HWN plus 0 K ACE data outside the resolved resonance range | 2.52 | 65.9 |
The networks for each window can be used independently because the training process of each window is independent, so the scope of the method can be set easily when it is implemented in a Monte Carlo code. The 0 K ACE data within the selected windows need not be stored if the HWN method is used there; however, the speed of the cross-section evaluation in those windows decreases. The efficiency drop is quantified in Sect. 3.2. Table 4 clearly shows that the resolved resonance energy range accounts for most of the nuclear data, so the memory optimization ratio remains significant even when the HWN method is used in only some of the windows. Users can decide how much efficiency to trade for memory savings.
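The per-window selectability described above amounts to a dispatch on the window containing the incident energy. The following sketch illustrates that idea; the function and parameter names are illustrative, not the RMC implementation.

```python
def evaluate(E, T, windows, hwn_enabled, hwn_eval, ace_eval):
    """Pick the evaluation path for the window containing energy E:
    HWN networks where enabled, point-wise ACE interpolation elsewhere."""
    for idx, (e_lo, e_hi) in enumerate(windows):
        if e_lo <= E < e_hi:
            if hwn_enabled[idx]:
                return hwn_eval(idx, E, T)   # networks of this window
            return ace_eval(E, T)            # point-wise fallback
    return ace_eval(E, T)                    # outside the resolved range

# Hypothetical usage: HWN enabled only in the first of two windows.
windows = [(1.0, 10.0), (10.0, 100.0)]
enabled = [True, False]
hwn = lambda i, E, T: 1.0    # stand-in for the window networks
ace = lambda E, T: 2.0       # stand-in for ACE interpolation
assert evaluate(5.0, 600.0, windows, enabled, hwn, ace) == 1.0
assert evaluate(50.0, 600.0, windows, enabled, hwn, ace) == 2.0
```

Only the ACE data for the disabled windows need to be kept in memory, which is the source of the user-tunable memory/efficiency trade-off.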
3.2. Comparison of Macroscopic Test Results
The HWN method was used for Doppler broadening in all windows of the resolved resonance energy range in the HWN cases, while ACE data processed from the ENDF/B-VII.0 library at the same temperature were used in the ACE cases. All calculations were performed using an Intel i7-9750H CPU without parallelism.
3.2.1. Concentric spheres
The first comparison example consisted of two concentric spheres, as shown in Fig. 5. The radii of the inner and outer spheres were 6.782 cm and 11.862 cm, respectively. The inner sphere was filled with Material 1. The space between the two spheres was filled with Material 2. The outside of the outer sphere was set to vacuum. The nuclide compositions of the materials are listed in Table 5. The temperature of each material was set to 300 K.
Table 5. Nuclide compositions of the materials

| Material | Nuclide | Atomic density (10²⁴ atoms/cm³) |
|---|---|---|
| Material 1 | U-235 | 4.4917×10⁻² |
| | U-238 | 2.5993×10⁻³ |
| | U-234 | 4.9210×10⁻⁴ |
| Material 2 | U-235 | 3.4428×10⁻⁴ |
| | U-238 | 4.7470×10⁻² |
| | U-234 | 2.6299×10⁻⁶ |
The calculation parameters are listed in Table 6, and the results are presented in Table 7. The deviation in $k_{\mathrm{eff}}$ between the HWN and ACE cases is far smaller than one standard deviation, and the calculation time ratio is only 1.01, which confirms both the accuracy and the efficiency of the HWN method for this problem.
Table 6. Calculation parameters for the concentric-sphere problem

| Parameter | Value |
|---|---|
| Neutrons per cycle | 100000 |
| Inactive cycles | 400 |
| Active cycles | 1600 |
Table 7. Results of the concentric-sphere problem

| Case | $k_{\mathrm{eff}}$ | Standard deviation | Calculation time (min) | Time ratio |
|---|---|---|---|---|
| ACE | 0.995747 | 0.000048 | 597.3738 | 1.00 |
| HWN | 0.995748 | 0.000048 | 601.7461 | 1.01 |
3.2.2. PWR assembly
The second comparison example is the PWR assembly model shown in Fig. 6. The model comprised an infinitely long assembly with a 21.42 cm × 21.42 cm square cross-section. There were 264 fuel rods and 25 pipes arranged in a 17 × 17 square lattice. The fuel rods were cylinders with diameters of 0.8192 cm. Each fuel rod was surrounded by a 0.082-mm air gap and a 0.572-mm zirconium cladding. The inner and outer diameters of the pipes were 1.138 cm and 1.2294 cm, respectively. The pipes were filled with water, and their walls were zirconium. The remainder of the assembly was filled with water. The nuclide composition of the fuel is listed in Table 8. The temperature of all materials was set to 700 K.
Table 8. Nuclide composition of the fuel

| Nuclide | Atomic density (10²⁴ atoms/cm³) |
|---|---|
| U-235 | 6.9100×10⁻³ |
| U-238 | 2.2062×10⁻¹ |
| O-16 | 4.5510×10⁻¹ |
The calculation parameters are listed in Table 9, and the results are presented in Table 10. The difference in $k_{\mathrm{eff}}$ between the two cases is 100 pcm, which is approximately 1.1 times the combined standard deviation, and the calculation time ratio is 1.39.
Table 9. Calculation parameters for the PWR assembly problem

| Parameter | Value |
|---|---|
| Neutrons per cycle | 40000 |
| Inactive cycles | 400 |
| Active cycles | 1600 |
Table 10. Results of the PWR assembly problem

| Case | $k_{\mathrm{eff}}$ | Standard deviation | Calculation time (min) | Time ratio |
|---|---|---|---|---|
| ACE | 1.349355 | 0.000068 | 2174.0543 | 1.00 |
| HWN | 1.349255 | 0.000062 | 2961.4330 | 1.39 |
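The $k_{\mathrm{eff}}$ agreement in Table 10 can be checked directly from the tabulated values, combining the two standard deviations in quadrature:

```python
import math

# Values from Table 10 (ACE and HWN cases of the PWR assembly).
k_ace, sd_ace = 1.349355, 0.000068
k_hwn, sd_hwn = 1.349255, 0.000062

diff = abs(k_ace - k_hwn)                 # k_eff difference, 100 pcm
combined_sd = math.hypot(sd_ace, sd_hwn)  # one-sigma combined uncertainty
n_sigma = diff / combined_sd              # about 1.1 standard deviations
```

A difference of about 1.1 combined standard deviations is statistically unremarkable, which supports the accuracy claim made for the HWN case.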
The neutron flux spectrum of the fuel in a fuel rod adjacent to the central pipe and the neutron flux inside each fuel rod and pipe were calculated for both the ACE and HWN cases. The results are presented in Fig. 7. The blocks in Fig. 7c do not represent the actual geometry; rather, they indicate the corresponding positions.
Figure 7a shows that the neutron flux spectra of the fuel rods are the same. Figure 7b shows that the relative deviations of the fluxes in most statistical intervals are within three times the standard deviation of the ACE case. Figure 7c compares the neutron fluxes of all the fuel rods in the assembly; there is no significant difference between the ACE and HWN cases. The blank grid squares represent the pipes, whose fluxes are not shown in this figure. A numerical comparison shows that the deviation of most fuel rods and pipes is within twice the standard deviation, and that of all but a few of the remainder is within three times the standard deviation. The comparison results therefore demonstrate the accuracy of the proposed HWN method.
4. Conclusion
In this study, a hybrid windowed networks method for on-the-fly Doppler broadening was proposed and implemented in the RMC code. The resolved resonance energy range is divided into energy windows. In each window, two BP networks are trained to calculate the cross-section at the base temperature and broaden the cross-section to any temperature within the range of 250 K to 1600 K. The structures of the neural networks and training parameters are determined through calculations. Networks for the total cross-section, absorption cross-section, and elastic scattering cross-section of U-235 were trained.
Microscopic cross-section comparisons and macroscopic tests were performed to verify the utility and effectiveness of the HWN method. A comparison between the cross-sections evaluated by this method and the ACE data shows the high accuracy of the proposed method. Macroscopic tests were conducted using RMC to verify the accuracy and efficiency of the method. The calculation time ratio between the HWN method and the ACE data for the PWR assembly calculation was 1.39. If the method is used in all the windows of the resolved resonance energy range, the theoretical memory consumption for the U-235 nuclide can be reduced to 33.9% of the memory needed for ACE cross-section interpolation at 0 K; if the ACE data outside the resolved resonance energy range are also included, it is reduced to 34.1% of that of the 0 K ACE data.
The HWN method can be combined with other on-the-fly Doppler broadening methods or with linear interpolation of point-wise nuclear data. Using the HWN method, users can trade efficiency against the memory-saving requirement. If the predicted neutron flux is high in some windows, applying the method only in the remaining windows can significantly reduce the memory cost without compromising efficiency to a great extent.
The method proposed in this study should be further studied to improve its effectiveness. The calculation speed may be greatly improved by optimizing the calculation process, particularly the evaluation of the nonlinear transfer function. The accuracy of this method can be further improved by extending the training time or by choosing more suitable training parameters.
The HWN method is applicable to any heavy nuclide with a resolved resonance energy range. Because the training is performed with point-wise data, it can also be applied to nuclides for which the windowed multipole method is inapplicable, especially those that are important in reactor simulations.
The potential of the neural networks used in the HWN method for reducing the memory usage in the evaluation of complex parameters was demonstrated. The introduction of neural networks into other developed on-the-fly Doppler broadening methods may result in greater memory savings and higher accuracy.
References

1. Direct Doppler broadening in Monte Carlo simulations using the multipole representation. Ann. Nucl. Energy 64, 78-85 (2014). doi: 10.1016/j.anucene.2013.09.043
2. Analysis of BEAVRS two-cycle benchmark using RMC based on full core detailed model. Prog. Nucl. Energ. 98, 301-312 (2017). doi: 10.1016/j.pnucene.2017.04.009
3. Effect of the Target Motion Sampling temperature treatment method on the statistics and performance. Ann. Nucl. Energy 82, 217-225 (2015). doi: 10.1016/j.anucene.2014.08.033
4. The Serpent Monte Carlo code: Status, development and applications in 2013. Ann. Nucl. Energy 82, 142-150 (2015). doi: 10.1016/j.anucene.2014.08.024
5. Efficiency and accuracy evaluation of the windowed multipole direct Doppler broadening method.
6. On-the-Fly Doppler Broadening for Monte Carlo Codes. Nucl. Sci. Eng. 171, 239-257 (2012). doi: 10.13182/NSE11-67
7. On-the-fly Doppler broadening of unresolved resonance region cross sections. Prog. Nucl. Energ. 101, 444-460 (2017). doi: 10.1016/j.pnucene.2017.05.032
8. On-the-fly sampling of temperature-dependent thermal neutron scattering data for Monte Carlo simulations. Ann. Nucl. Energy 71, 411-426 (2014). doi: 10.1016/j.anucene.2014.04.028
9. Development of on-the-fly temperature-dependent cross-sections treatment in RMC code. Ann. Nucl. Energy 94, 144-149 (2016). doi: 10.1016/j.anucene.2016.02.026
10. Generation of the windowed multipole resonance data using Vector Fitting technique. Ann. Nucl. Energy 112, 30-41 (2018). doi: 10.1016/j.anucene.2017.09.042
11. Study of on-the-fly Doppler broadening in JMCT program. Acta Phys. Sin.-Ch. Ed. 65, 42-47 (2016). doi: 10.7498/aps.65.092501
12. Mathematical theory of neural networks.
13. Deconvolutional neural network for image super-resolution. Neural Networks 132, 394-404 (2020). doi: 10.1016/j.neunet.2020.09.017
14. Automated recovery of damaged audio files using deep neural networks. Digit. Invest. 30, 117-126 (2019). doi: 10.1016/j.diin.2019.07.007
15. Fitting analysis and research of measured data of SAW micro-pressure sensor based on BP neural network. Measurement 155, 107533 (2020). doi: 10.1016/j.measurement.2020.107533
16. Simultaneous fault diagnosis of wind turbine using multichannel convolutional neural networks. ISA Transactions 108, 230-239 (2021). doi: 10.1016/j.isatra.2020.08.021
17. Training of deep neural networks for the generation of dynamic movement primitives. Neural Networks 127, 121-131 (2020). doi: 10.1016/j.neunet.2020.04.010
18. Learning representations by back-propagating errors. Nature 323, 533-536 (1986). doi: 10.1038/323533a0
19. Advancement from neural networks to deep learning in software effort estimation: Perspective of two decades. Comp. Sci. Rev. 38, 100288 (2020). doi: 10.1016/j.cosrev.2020.100288
20. RMC – A Monte Carlo code for reactor core analysis. Ann. Nucl. Energy 82, 121-129 (2015). doi: 10.1016/j.anucene.2014.08.048
21. Methods for processing ENDF/B-VII with NJOY. Nucl. Data Sheets 111, 2739-2890 (2010). doi: 10.1016/j.nds.2010.11.001