Prediction of nuclear charge density distribution with feedback neural network

NUCLEAR PHYSICS AND INTERDISCIPLINARY RESEARCH


Tian-Shuai Shang
Jian Li
Zhong-Ming Niu
Nuclear Science and Techniques, Vol. 33, No. 12, Article number 153. Published in print Dec 2022; available online 06 Dec 2022.

Nuclear charge density distribution plays an important role in both nuclear and atomic physics, and the two-parameter Fermi (2pF) model is one of the most frequently used descriptions of it. In this study, a feedforward neural network is employed to learn the available 2pF model parameters of 86 nuclei, and the accuracy and precision of the learned parameters are improved by introducing A^{1/3} as an additional input of the neural network. Furthermore, the average of multiple predictions is found to be more reliable than the best single prediction, and averaging the predicted densities or averaging the predicted parameters yields essentially the same average charge density distribution. In addition, the 2pF parameters of 284 (near-)stable nuclei are predicted in this study, which provides a reference for experiments.

Keywords: Charge density distribution; Two-parameter Fermi model; Feedforward neural network approach
1 Introduction

Ever since the nucleus was discovered by Rutherford [1], its charge density distribution has been studied because it is critical for analyzing the nuclear structure. The nuclear charge density distribution provides direct information on the Coulomb energy of the nucleus and can therefore be used to calculate the charge radius. Additionally, studies of the high-momentum tail (HMT) caused by short-range correlations (SRC) found that the percentage of nucleons in the tail is closely related to the nuclear charge density distribution [2, 3]. An accurate estimation of the neutron and proton density distributions is also crucial for the study of asymmetric nuclear matter in nuclear astrophysics: the nuclear symmetry energy and its density dependence play an important role in understanding the physics of several terrestrial nuclear experiments and astrophysical observations [4, 5]. Based on configurational information entropy (CIE), the charge distributions in projectile fragmentation reactions may be good probes for determining the neutron-skin thickness of neutron-rich nuclei [6], among other characteristics such as the density dependence of the symmetry energy [7, 8]. Moreover, the charge density distribution also plays an important role in atomic physics [9, 10]. For example, if the charge density distribution in the nucleus is known, the deformation of the nucleus can be calculated and its influence on the electrons in the atom can then be determined [11, 12].

Hofstadter measured the charge density of protons in the 1950s and described the density distributions of certain nuclei based on the findings [13]. Since then, electron scattering experiments (elastic and inelastic) have become an effective method for measuring nuclear structure [14-17]. Once density distribution data are obtained, two methods can be used to describe the shape of the charge density: model-dependent analysis (e.g., two- and three-parameter Fermi models and two- and three-parameter Gaussian models) and model-independent analysis (e.g., Fourier-Bessel and sum-of-Gaussians analyses) [18]. However, the experimental data on nuclear charge density remain limited. Taking the model-dependent analysis as an example, fewer than 300 nuclei have confirmed charge density parameters [19-21], and these are mainly concentrated near the stable nuclei.

In recent decades, several microscopic models of nuclear structure have been successfully established, nearly all of which can calculate the density distribution, such as ab initio approaches (the Green's function Monte Carlo method [22], the self-consistent Green's function method [23], the coupled-cluster method [24], lattice chiral effective field theory [25], and the no-core shell model) and density functional theory (DFT) [26]. These approaches can accurately describe nuclear ground-state properties. However, as the nuclear mass number increases, the expansion of the configuration space limits the calculation range of the ab initio and shell models, and achieving both systematic coverage of the nuclear chart and an accurate description remains a significant challenge. Density functional theories, such as the Skyrme-Hartree-Fock (SHF) method [27-29] and covariant density functional theory (CDFT) [30-36], are the most widely used and effective microscopic models for studying nuclear properties. With a small number of parameters, DFT allows a very successful description of the ground-state properties of nuclei all over the nuclear chart [26]. Although its calculation range on the nuclear chart far exceeds that of the ab initio and shell models, its prediction of the charge density distribution is occasionally inaccurate.

Compared to the aforementioned microscopic theoretical models, empirical models, such as the Fermi and Gaussian models of the model-dependent analysis, are more commonly used to describe the nuclear charge density distribution, among which the two-parameter Fermi (2pF) model is one of the most widely used. The 2pF model captures the nearly constant central density of larger nuclei and the exponential decay of the density at the surface. More importantly, the 2pF model requires only two parameters to describe the nuclear charge density and is easy to use. However, only a limited number of nuclei have experimentally determined 2pF parameters [19-21], and the parameters of other nuclei need to be extrapolated from the existing experimental data. Such an extrapolation can be conveniently performed using machine learning methods.

Machine learning (ML) uses computers to simulate or reproduce human learning activities and is one of the most cutting-edge research fields in artificial intelligence (AI). In recent decades, the prodigious development of machine learning applications has impacted several fields, such as image recognition [37, 38] and language translation [39, 40]. Many algorithms have been developed to make machine learning resemble human reasoning, the core of which is the back-propagation (BP) algorithm, one of the most powerful and popular machine learning tools. Other algorithms, such as decision trees (DT), the naive Bayesian model (NBM), support vector machines (SVM), and clustering, have also been applied in several areas, providing powerful tools for particle physics [41-43] and condensed matter physics [44, 45], among others.

In the 1990s, ML with neural networks was applied to the modeling of observational data in nuclear physics [46, 47] and has since been widely used in various areas, for example to determine ground-state properties of nuclei such as the nuclear mass (binding energy), stability, separation energy, and branching ratios of radioactive decay [46-56]. Other applications include excited states [57-60], charge radii [61-63], α decay [64, 65], β decay [66-68], magnetic moments [69], nuclear reactions and cross sections [70-74], nuclear structure data [75-79], giant dipole resonances [80], β-delayed one-neutron emission probabilities [81], density functionals for nuclear systems [82], and nuclear data evaluation [83]. Among these, the feedforward neural network (FNN) is the most widely used [46, 47, 50-54, 59, 60, 63, 68]. This type of network has a strong learning ability because, with appropriately chosen hyperparameters, it can approximate essentially any function. Therefore, the FNN is adopted here to learn the 2pF experimental data and to make predictions for other nuclei.

The basic formulas of the FNN approach are presented in Sect. 2, the prediction results of the charge density distribution are discussed in Sect. 3, and the summary and perspectives are presented in Sect. 4.

2 TWO-PARAMETER FERMI MODEL AND FEEDFORWARD NEURAL NETWORK APPROACH

2.1 TWO-PARAMETER FERMI MODEL

In most cases, a two-parameter Fermi distribution
$$\rho_{2\mathrm{pF}}(r)=\frac{N_0}{1+e^{(r-c)/z}}, \qquad (1)$$
is assumed for the charge distribution. The parameter c is the half-density radius and z is the diffuseness of the nuclear surface. N_0 is the normalization factor, which satisfies
$$Z=\int_0^{\infty}\rho_{2\mathrm{pF}}(r)\,4\pi r^2\,\mathrm{d}r, \qquad (2)$$
where Z is the number of protons.
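For concreteness, the following Python sketch (not taken from the paper; the scipy-based normalization and the parameter values are illustrative assumptions) evaluates Eq. (1) and fixes N_0 numerically through Eq. (2).

```python
# A minimal sketch (not the paper's code) of the 2pF distribution, Eq. (1),
# with N0 fixed numerically so that Eq. (2) reproduces the proton number Z.
import numpy as np
from scipy.integrate import quad

def rho_2pf(r, c, z, N0=1.0):
    """Two-parameter Fermi charge density, Eq. (1)."""
    return N0 / (1.0 + np.exp((r - c) / z))

def normalization(Z, c, z, r_max=30.0):
    """Return N0 such that 4*pi * integral of rho * r^2 dr equals Z, Eq. (2)."""
    integral, _ = quad(lambda r: rho_2pf(r, c, z) * 4.0 * np.pi * r**2, 0.0, r_max)
    return Z / integral

# Example with illustrative (not experimental) parameters for a medium-mass nucleus:
c, z, Z = 4.1, 0.55, 20            # fm, fm, proton number (hypothetical values)
N0 = normalization(Z, c, z)        # e / fm^3
print(N0, rho_2pf(0.0, c, z, N0))  # normalization factor and central charge density
```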

2.2 FEEDFORWARD NEURAL NETWORK APPROACH

The feedforward neural network is a sub-area of ML. The FNN mimics the functionality of the human brain to produce outputs from the computation of the inputs. It is composed of processing units called neurons, which have adaptive synaptic weights [60]. The FNN framework is illustrated in Fig. 1: it is a multilayer neural network consisting of an input layer, hidden layers, and an output layer. The number of hidden layers can vary, and the neurons are fully connected between adjacent layers.

Fig. 1
(Color online) A schematic diagram of a neural network with four input variables, two output variables, and two hidden layers (H = 5 neurons in each hidden layer).

The output of each layer of the neural network is denoted as [a_1, a_2, a_3, ..., a_n], where a_1 is the input of the network and a_n is its output, and the numbers of neurons in each layer are labeled [N_1, N_2, N_3, ..., N_n]. For a hidden layer, the output a_i is calculated as
$$a_i=f(W_i a_{i-1}+b_i), \qquad (3)$$
where W_i is the weight matrix between the (i-1)-th and i-th layers, with shape N_i × N_{i-1}, and b_i is the bias vector of the i-th layer. The activation function f performs a nonlinear mapping of the input, which is a key reason why the FNN can fit most functions. In this study, the activation function of the hidden layers is the hyperbolic tangent,
$$\tanh(x)=\frac{\sinh x}{\cosh x}=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}. \qquad (4)$$
In the training procedure, the mean squared error (MSE) is used as the loss function,
$$\mathrm{Loss}(y_{\mathrm{tar}},y_{\mathrm{pre}})=\frac{1}{N_s}\sum_{i=1}^{N_s}\left(y_{\mathrm{tar}}-y_{\mathrm{pre}}\right)^2, \qquad (5)$$
which quantifies the difference between the model predictions y_pre and the experimental values y_tar. Here, N_s denotes the size of the training set. The learning process minimizes the loss function via a proper optimization method. A back-propagation algorithm of the Levenberg-Marquardt type [84, 85] was used to train the FNN, which modifies its weights until an acceptable error level between the predicted and desired outputs is achieved. The stochastic gradient descent (SGD) method [86], a popular alternative to the gradient descent (GD) method and one of the most widely used training algorithms, is used in this study to obtain the optimal parameters W_i and b_i of the network.
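As an illustration of Eqs. (3)-(5), a minimal NumPy sketch is given below; the layer sizes, the random initialization, and the linear output layer are assumptions made for illustration and are not specified in the text.

```python
# Minimal NumPy sketch of Eqs. (3)-(5): a forward pass through tanh hidden layers
# followed by a linear output layer (an assumption), and the mean-squared-error loss.
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 5, 5, 2]                         # N_1..N_n: input, two hidden layers, output
W = [rng.normal(0, 0.5, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
b = [np.zeros(sizes[i + 1]) for i in range(len(sizes) - 1)]

def forward(x):
    a = x                                    # a_1: network input
    for i in range(len(W) - 1):
        a = np.tanh(W[i] @ a + b[i])         # Eq. (3) with f = tanh, Eq. (4)
    return W[-1] @ a + b[-1]                 # output layer a_n

def mse(y_tar, y_pre):
    return np.mean((y_tar - y_pre) ** 2)     # Eq. (5), averaged over the samples

x = np.array([20.0, 20.0, 20.0 ** (1 / 3), 40.0 ** (1 / 3)])  # e.g., (Z, N, Z^{1/3}, A^{1/3})
print(forward(x))
```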

Because the charge radius of the nucleus can in most cases be described by R = r_0 Z^{1/3} [87], including Z^{1/3} in the input helps fit the parameter c of the 2pF model. The alternative formula R = r_0 A^{1/3}, where A is the mass number, has also been used in previous studies [13, 88, 89]. Therefore, A^{1/3} is added to the inputs, in addition to Z, N, and Z^{1/3}, to ascertain the effect of this additional input. For simplicity, FNN-I3 and FNN-I4 are used to denote the FNN approaches with x = (Z, N, Z^{1/3}) and x = (Z, N, Z^{1/3}, A^{1/3}), respectively. To predict the two 2pF parameters c and z simultaneously, an FNN with a double-hidden-layer structure is adopted.
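A hedged sketch of how an FNN-I4 network of this type could be set up and trained is shown below. It uses PyTorch with plain SGD and a hidden-layer width of H = 5 (following Fig. 1); this is an illustration under these assumptions, not the authors' implementation, which also employs a Levenberg-Marquardt back-propagation algorithm.

```python
# Sketch (assumption: PyTorch, H = 5 neurons per hidden layer as in Fig. 1) of an
# FNN-I4 model mapping x = (Z, N, Z^{1/3}, A^{1/3}) to the 2pF parameters (c, z).
import torch
import torch.nn as nn

def make_inputs(Z, N):
    A = Z + N
    return torch.tensor([Z, N, Z ** (1 / 3), A ** (1 / 3)], dtype=torch.float32)

model = nn.Sequential(
    nn.Linear(4, 5), nn.Tanh(),     # first hidden layer
    nn.Linear(5, 5), nn.Tanh(),     # second hidden layer
    nn.Linear(5, 2),                # outputs: parameters c and z
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()              # Eq. (5)

def train(x_train, y_train, epochs=2000):
    """x_train: (Ns, 4) inputs; y_train: (Ns, 2) experimental (c, z) values."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()             # back-propagation of the MSE loss
        optimizer.step()            # SGD update of the weights W_i and biases b_i
    return loss.item()

# Example input for a hypothetical nucleus with Z = 20, N = 20 (untrained output):
x_example = make_inputs(20.0, 20.0).unsqueeze(0)   # shape (1, 4)
y_example = model(x_example)                       # predicted (c, z)
```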

The experimental 2pF data are obtained from Refs. [19-21]. In total, 86 nuclei with experimental 2pF parameters are retained in the dataset. If the 2pF parameters of a nucleus have been obtained in multiple independent experiments, the most recent data are used.

3 RESULTS AND DISCUSSION

To obtain reliable results, the neural network is trained repeatedly (1000 times), and each training uses randomly divided training and validation sets, in which the training set accounts for 80% of the dataset (86 nuclei) and the validation set for 20%. Moreover, at the beginning of each training session, the network parameters are randomly reinitialized. After each training, FNN-I3 and FNN-I4 yield predictions of parameters c and z on the validation set, which are compared with the experimental results from Refs. [19-21] to obtain a mean squared error (MSE).

Table 1 lists the MSE statistics of the parameter c and z predictions of the FNN-I3 and FNN-I4 approaches for the training and validation sets. Because the result of an FNN is affected by the parameter initialization, certain training results deviate far from the experimental values. In this study, the following upper limits of the MSE are set: 0.2 fm² for parameter c and 0.005 fm² for parameter z. Note that two nuclei in the dataset (98In and 102Sb) are distant from the others on the nuclide chart and are unstable. Removing these two nuclei from the dataset changes the results by no more than 0.002 fm², with a relative error of less than 5%; these changes simply reflect the removal of data from the dataset and do not indicate any particularity of the two nuclei. Results with an MSE greater than the upper limit are eliminated, and the number of accepted predictions is shown in the last column. The standard deviation (SD) is also shown in the table, obtained as
$$\mathrm{SD}=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(y_i^{\mathrm{pre}}-y^{\mathrm{tar}}\right)^2}, \qquad (6)$$
where N denotes the number of adopted predictions, y_i^pre denotes the predicted values, and y^tar is the target value.
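The repeated-training protocol described above can be outlined as follows (a sketch, not the authors' script): build_and_train and predict are hypothetical helpers wrapping a training routine such as the PyTorch sketch above, and the acceptance thresholds are those quoted in the text.

```python
# Hedged outline of the repeated-training protocol: 1000 runs, a fresh random
# 80%/20% split and a re-initialized network in each run, and rejection of runs
# whose validation MSE exceeds 0.2 fm^2 (parameter c) or 0.005 fm^2 (parameter z).
import numpy as np

def run_experiment(x_all, y_all, n_runs=1000, seed=0):
    """x_all: (86, 4) input array; y_all: (86, 2) experimental (c, z) values."""
    rng = np.random.default_rng(seed)
    kept_mse_c, kept_mse_z = [], []
    n = len(x_all)
    for _ in range(n_runs):
        idx = rng.permutation(n)
        train_idx, val_idx = idx[: int(0.8 * n)], idx[int(0.8 * n):]
        model = build_and_train(x_all[train_idx], y_all[train_idx])  # hypothetical helper
        y_pred = predict(model, x_all[val_idx])                      # hypothetical helper
        mse_c = np.mean((y_pred[:, 0] - y_all[val_idx, 0]) ** 2)
        mse_z = np.mean((y_pred[:, 1] - y_all[val_idx, 1]) ** 2)
        if mse_c < 0.2:                     # acceptance criteria from the text
            kept_mse_c.append(mse_c)
        if mse_z < 0.005:
            kept_mse_z.append(mse_z)
    # mean and sample standard deviation over the accepted runs (cf. Table 1)
    return (np.mean(kept_mse_c), np.std(kept_mse_c, ddof=1), len(kept_mse_c),
            np.mean(kept_mse_z), np.std(kept_mse_z, ddof=1), len(kept_mse_z))
```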

Table 1
MSE statistics of parameters c and z on the training and validation sets; the datasets are randomly divided. SD represents the standard deviation calculated using Eq. (6).

Parameter c    | Input                  | Mean (fm²) | SD (fm²) | Count
Training set   | Z, N, Z^{1/3}          | 0.039      | 0.037    | 899
               | Z, N, Z^{1/3}, A^{1/3} | 0.024      | 0.031    |
Validation set | Z, N, Z^{1/3}          | 0.037      | 0.035    | 891
               | Z, N, Z^{1/3}, A^{1/3} | 0.023      | 0.028    |

Parameter z    | Input                  | Mean (fm²) | SD (fm²) | Count
Training set   | Z, N, Z^{1/3}          | 0.001      | 0.00030  | 998
               | Z, N, Z^{1/3}, A^{1/3} | 0.001      | 0.00027  |
Validation set | Z, N, Z^{1/3}          | 0.002      | 0.00027  | 998
               | Z, N, Z^{1/3}, A^{1/3} | 0.002      | 0.00024  |

The count indicates the number of results whose error is within the acceptable range (0.2 fm² for c and 0.005 fm² for z).

For parameter c, the mean MSE of FNN-I3 is clearly larger than that of FNN-I4, and the SD of FNN-I4 is smaller. Compared to FNN-I3, the mean MSE of FNN-I4 is reduced from 0.03913 fm² to 0.02408 fm² on the training sets and from 0.03655 fm² to 0.02286 fm² on the validation sets, which is a significant gain in precision for the predictions of parameter c. The nuclear charge radius is closely related to the mass and proton numbers; therefore, these results indicate that including known physical effects in the input layer is a reliable way for the FNN approach to improve the accuracy of nuclear charge density distribution predictions based on experimental data. Note that different combinations of physical quantities were tested as inputs, and parameter c is found to be strongly correlated with A^{1/3} and Z^{1/3} (the Pearson correlation coefficients of A^{1/3}-c and Z^{1/3}-c are 0.9953 and 0.9969, respectively), whereas parameter z is not sensitive to them (the Pearson correlation coefficients are -0.1625 for A^{1/3}-z and -0.1609 for Z^{1/3}-z). Therefore, parameter z is not considered in the following analysis.
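The quoted correlation check can be reproduced with a standard Pearson estimator, e.g., scipy.stats.pearsonr; in the sketch below, A13, Z13, c_exp, and z_exp are hypothetical arrays holding A^{1/3}, Z^{1/3}, and the experimental (c, z) values of the 86-nucleus dataset.

```python
# Sketch of the correlation check between the inputs and the 2pF parameters;
# A13, Z13, c_exp, z_exp are hypothetical arrays built from the dataset.
from scipy.stats import pearsonr

def input_parameter_correlations(A13, Z13, c_exp, z_exp):
    return {
        "A^{1/3}-c": pearsonr(A13, c_exp)[0],   # strong (~0.995 according to the text)
        "Z^{1/3}-c": pearsonr(Z13, c_exp)[0],   # strong (~0.997)
        "A^{1/3}-z": pearsonr(A13, z_exp)[0],   # weak (~-0.16)
        "Z^{1/3}-z": pearsonr(Z13, z_exp)[0],   # weak (~-0.16)
    }
```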

To further evaluate the prediction ability of the FNN method, the MSE statistics of parameter c for fixed training and validation sets are presented in Table 2; the columns have the same meaning as those in Table 1. Given that the value of parameter c ranges from 2.4 fm to 7.0 fm, MSEs of this size indicate that the FNN approach can be considered reliable.

Table 2
Mean squared error statistics of parameter c on the training and validation sets; the dataset is fixed.

Parameter c    | Input                  | Mean (fm²) | SD (fm²) | Count
Training set   | Z, N, Z^{1/3}          | 0.035      | 0.031    | 894
               | Z, N, Z^{1/3}, A^{1/3} | 0.024      | 0.030    |
Validation set | Z, N, Z^{1/3}          | 0.038      | 0.033    | 900
               | Z, N, Z^{1/3}, A^{1/3} | 0.028      | 0.032    |

Figure 2 presents the predicted charge density distributions of selected nuclei with fixed training and validation sets. The average charge density distributions are obtained by selecting the 100 FNN models with the best performance (i.e., the smallest loss function value) on the training set and applying them to the validation set. The error bands are obtained from the standard deviation of the density distribution values of the 100 FNN models. Figure 2 clearly shows that the error bands of FNN-I4 are significantly narrower than those of FNN-I3, indicating that FNN-I4 has a higher precision. Moreover, in most cases, the average predicted distributions of FNN-I4 are closer to the density distributions obtained from the experimental parameters than those of FNN-I3. In conclusion, FNN-I4 has a higher accuracy than FNN-I3.

Fig. 2
(Color online) Charge density distributions obtained by FNN-I3 and FNN-I4. The first row contains nuclei selected from the training set, and the remaining nuclei are selected from the validation set. The density distributions determined by the experimental parameters are indicated by the solid black lines. The density distributions obtained by the FNN-I3 and FNN-I4 methods are indicated by the blue and red hatched regions, and their mean predicted values are indicated by the blue dashed and red dotted lines, respectively.

To illustrate the rationale for using the average density distribution as the prediction result, we compare the average of multiple predictions with the results that minimize (maximize) the error on the validation set, as shown in Fig. 3. Because the 2pF model controls the charge density distribution through its parameters, the average predicted density distribution can be obtained either by averaging the predicted density curves or by averaging the predicted parameters; Fig. 3 presents the prediction results obtained using both averaging methods.

Fig. 3
(Color online) The charge density distributions from different methods. The charge distributions obtained by using the experimental parameters are denoted by the solid black lines. The distributions obtained by using the mean predicted density and mean predicted parameter are indicated by the blue and red solid lines, respectively. The distributions obtained by the parameters that minimize (maximize) the loss function are denoted by the orange (purple) dashed lines. The selection of nuclei is the same as that shown in Fig. 2.

The following useful information can be obtained from Fig. 3. First, the predicted density distributions obtained by the two averaging methods nearly coincide, which allows the prediction result to be described conveniently regardless of which method is used. Second, the prediction of the single network with the minimum error is not better than the result obtained by averaging. Because averaging significantly reduces the effect of randomness, the remainder of this study uses the averaging method to obtain the prediction results.

It is therefore worthwhile to further explore the differences between the two averaging methods. Figure 4 presents the average curves and error bands of the charge density distributions obtained by the two methods for selected nuclei (the same nuclei as in Fig. 2). Apparently, the error bands obtained by averaging the density curves are narrower than those obtained by averaging the parameters, although the average curves of the two methods nearly coincide. This is because even a small change in a parameter has a large impact on the density distribution; the uncertainty of the parameters is thus amplified by the mapping from parameters to density. Therefore, the predicted charge density distributions and error bands in the remainder of this study are obtained from the averaged density curves.
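The two averaging strategies compared here can be sketched as follows (an illustration, not the authors' code): rho_2pf and normalization refer to the earlier 2pF sketch, and params is a hypothetical (100, 2) array of predicted (c, z) pairs for a single nucleus.

```python
# Sketch of the two averaging strategies for one nucleus with proton number Z:
# (i) average the 100 predicted density curves point-by-point (std gives the error band),
# (ii) average the 100 predicted (c, z) pairs and evaluate Eq. (1) once.
import numpy as np

def average_density(params, Z, r_grid):
    """params: hypothetical (100, 2) array of predicted (c, z) values."""
    curves = np.array([rho_2pf(r_grid, c, z, normalization(Z, c, z)) for c, z in params])
    den_mean, den_band = curves.mean(axis=0), curves.std(axis=0)              # method (i)
    c_bar, z_bar = params.mean(axis=0)
    par_mean = rho_2pf(r_grid, c_bar, z_bar, normalization(Z, c_bar, z_bar))  # method (ii)
    return den_mean, den_band, par_mean
```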

Fig. 4
(Color online) The charge density distributions and error bands obtained by the average density distributions (Den) and the average parameters (Par) of multiple predictions, respectively, are indicated by the blue dashed lines (blue hatched regions) and red dotted lines (red hatched regions). The density curves determined by the experimental parameters are indicated by the solid black lines. The selection of nuclei is the same as that shown in Fig. 2.

Having verified the interpolation ability of FNN-I4, the entire dataset is now adopted as the learning set to assess the predictive power of the neural network. Because the second and fourth moments of the charge density distribution, ⟨r²⟩ and ⟨r⁴⟩, are important for understanding the nuclear structure (for example, statistical correlation analysis shows that the diffraction radius R and surface thickness σ are essentially determined by ⟨r²⟩ and ⟨r⁴⟩, especially for heavy nuclei [90, 91]), the learning effect of FNN-I4 can be evaluated through ⟨r²⟩ and ⟨r⁴⟩.

A comparison of the FNN-I4 predicted and experimental values of ⟨r²⟩ and ⟨r⁴⟩ is shown in Fig. 5(a) and Fig. 5(b), respectively. Because the neural network directly outputs the parameters of the 2pF model, the second and fourth moments are calculated approximately, yet with high precision, as follows [92]:
$$\langle r^2\rangle=\frac{4\pi N_0 c^5}{5}\left(1+\frac{10}{3}\frac{\pi^2 z^2}{c^2}+\frac{7}{3}\frac{\pi^4 z^4}{c^4}\right), \qquad (7)$$
$$\langle r^4\rangle=\frac{4\pi N_0 c^7}{7}\left(1+7\frac{\pi^2 z^2}{c^2}+\frac{49}{3}\frac{\pi^4 z^4}{c^4}+\frac{31}{3}\frac{\pi^6 z^6}{c^6}\right), \qquad (8)$$
$$N_0=\frac{3}{4\pi c^3}\left(1+\frac{\pi^2 z^2}{c^2}\right)^{-1}. \qquad (9)$$
As shown in Fig. 5, the FNN results agree exceptionally well with the experimental values, especially for light and medium-mass nuclei. The rms deviations (σ) between the experimental and FNN values of ⟨r²⟩ and ⟨r⁴⟩ are 0.7045 fm² and 42.83 fm⁴, respectively; incidentally, the rms deviation of the charge radii is 0.7041 fm. The larger errors between the predicted and experimental results for heavier nuclei are acceptable because there are few heavier nuclei in the dataset.
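Given predicted (c, z) values, Eqs. (7)-(9) can be evaluated directly; a minimal sketch (the function name is ours) is:

```python
# Second and fourth radial moments of the 2pF distribution from Eqs. (7)-(9),
# and the corresponding rms charge radius sqrt(<r^2>).
import numpy as np

def moments_2pf(c, z):
    x = (np.pi * z / c) ** 2                       # shorthand for (pi z / c)^2
    N0 = 3.0 / (4.0 * np.pi * c**3) / (1.0 + x)    # Eq. (9)
    r2 = 4.0 * np.pi * N0 * c**5 / 5.0 * (1.0 + 10.0 / 3.0 * x + 7.0 / 3.0 * x**2)                  # Eq. (7)
    r4 = 4.0 * np.pi * N0 * c**7 / 7.0 * (1.0 + 7.0 * x + 49.0 / 3.0 * x**2 + 31.0 / 3.0 * x**3)    # Eq. (8)
    return r2, r4, np.sqrt(r2)                     # <r^2> (fm^2), <r^4> (fm^4), rms radius (fm)
```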

Fig. 5
(Color online) The differences between the experimental second (a) and fourth (b) moments of the charge density distribution and FNN training results (red dot).

The distributions of the learning and prediction sets are shown in Fig. 7, where the solid blue squares represent the learning set (86 nuclei in total) and the solid red squares represent the prediction set (284 nuclei). The nuclei in the prediction set are selected as follows: taking the stable nuclei as endpoints, each isotopic chain between them is filled, the filled nuclides are included in the prediction set, and the nuclei already contained in the learning set are excluded. In Fig. 6, the charge radii of the prediction-set nuclei obtained from the FNN-I4 approach are compared with the experimental values, which are taken from Ref. [93]. Among the 284 nuclei in the prediction set, 230 have experimental charge radii.

Fig. 7
(Color online) Learning and prediction sets on the nuclear chart. The learning set is indicated by the solid blue squares (86 nuclei) and the prediction set by the solid red squares (284 nuclei). The two blue dashed lines indicate the proton and neutron drip lines; the numbers denote magic numbers.
Fig. 6
(Color online) Comparison of the predicted rms charge radii (Rpre) and experimental values (Rexp) for nuclei with available data.

Figure 6 demonstrates that nearly all of the points are close to the line y = x, which indicates that the FNN-I4 predictions are very close to the experimental values; the rms deviation (σ) between the experimental charge radii and the FNN results is 0.07693 fm. A few nuclei deviate significantly from the experimental values; they are either light nuclei near the oxygen isotope chain or heavy nuclei with A > 208. These nuclei may have physical properties that are not sufficiently represented in the current learning set, and as more experimental data become available for learning, the prediction accuracy of FNN-I4 in these regions may improve. In summary, the FNN-I4 method shows a good prediction accuracy: the deviations of most predicted nuclei are less than 0.1 fm, which indicates that the FNN-I4 method is reliable for predicting the nuclear charge distribution, and its interpolation ability is also promising.

We also applied the FNN-I4 method to predict the charge density distributions of calcium isotopes and extracted the corresponding charge radii, with the experimental charge radii taken from Ref. [93]. Of the entire calcium isotope chain, only 40Ca is included in the learning set, and the other nuclei serve as the prediction set. A selection of the results is presented in Fig. 8. The uncertainty of the charge density in the nuclear interior increases as a nucleus moves away from the nuclei in the learning set; therefore, only the predictions near 40Ca can be regarded as reliable, which limits the extrapolation ability of the FNN-I4 method. Nevertheless, most of the predicted values are in good agreement with the experimental values, except that the sudden drop of the charge radius around 48Ca is difficult to predict. Considering that no corresponding behavior exists in the current learning set, adding learning samples would help the FNN approach extract the relevant physical information, which is helpful and necessary for improving the prediction accuracy.

Fig. 8
(Color online) Charge density distributions of 38-49Ca obtained by the FNN-I4 method. The predicted charge radius (Rpre) and experimental value (Rexp) of each nucleus are also shown in the figure; the values in brackets after Rpre indicate the uncertainties. Nuclei without experimental values are marked by "—".
4 SUMMARY

In summary, we have employed the feedforward neural network approach to predict the nuclear charge density distribution, and the results obtained from the FNN are in good agreement with the experimental data. The nuclear 2pF model parameters compiled in Refs. [19-21] are studied. By adding A^{1/3} to the input layer, the machine learning method accurately describes the 2pF model parameters of the nuclei (for a randomly divided dataset, the deviation on the training set is 0.02408 fm² for parameter c and 0.00115 fm² for parameter z) and gives excellent results on the validation set (0.02286 fm² for parameter c and 0.00188 fm² for parameter z), which verifies the extrapolation ability of the FNN. Then, for nuclei without any experimental values, the charge density distributions (described by the 2pF parameters) of 284 nuclei are calculated using the FNN method. In addition, the density distributions of the calcium isotopes and the corresponding charge radii are obtained with the FNN method.

Thus far, experimental data on the ground-state density distributions of spherical nuclei remain limited. Compared to traditional theoretical methods, the neural network method not only reduces the complexity of the research and avoids complicated many-body problems, but also has a strong prediction ability at a lower computational cost. In the future, as the data available for neural networks to learn from increase, the prediction ability will also improve. Directly learning the density distributions rather than the model parameters may further improve the predictions.

References
[1] E. Rutherford, The scattering of alpha and beta particles by matter and the structure of the atom. Philosophical Magazine 21, 669-688 (1911). doi: 10.1080/14786440508637080
[2] Z. Yang, X. Shang, G. Yong, et al., Nucleon momentum distributions in asymmetric nuclear matter. Physical Review C 100, 054325 (2019). doi: 10.1103/PhysRevC.100.054325
[3] X. Shang, J. Dong, W. Zuo, et al., Exact solution of the Brueckner-Bethe-Goldstone equation with three-body forces in nuclear matter. Physical Review C 103, 034316 (2021). doi: 10.1103/PhysRevC.103.034316
[4] C. Horowitz, Neutron rich matter in the laboratory and in the heavens after GW170817. Annals of Physics 411, 167992 (2019). doi: 10.1016/j.aop.2019.167992
[5] Y. Chen, Nuclear matter and neutron star properties with the extended Nambu-Jona-Lasinio model. Chinese Physics C 43, 035101 (2019). doi: 10.1088/1674-1137/43/3/035101
[6] C.W. Ma, Y. Liu, H. Wei, et al., Determination of neutron-skin thickness using configurational information entropy. Nuclear Science and Techniques 33, 6 (2022). doi: 10.1007/s41365-022-00997-0
[7] B. Li, N. Tang, Y.H. Zhang, et al., Production of p-rich nuclei with Z=20-25 based on radioactive ion beams. Nuclear Science and Techniques 33, 55 (2022). doi: 10.1007/s41365-022-01048-4
[8] L. Li, F.Y. Wang, Y.X. Zhang, Isospin effects on intermediate mass fragments at intermediate energy heavy-ion collisions. Nuclear Science and Techniques 33, 58 (2022). doi: 10.1007/s41365-022-01050-w
[9] D. Andrae, Nuclear Charge Density and Magnetization Distributions (Springer Berlin Heidelberg, Berlin, Heidelberg, 2017), pp. 51-81. doi: 10.1007/978-3-642-40766-6_23
[10] A. Patoary, N. Oreshkina, Finite nuclear size effect to the fine structure of heavy muonic atoms. European Physical Journal D 72, 54 (2018). doi: 10.1140/epjd/e2018-80545-9
[11] L. Visscher, K. Dyall, Dirac-Fock atomic electronic structure calculations using different nuclear charge distributions. Atomic Data and Nuclear Data Tables 67, 207-224 (1997). doi: 10.1006/adnd.1997.0751
[12] D. Andrae, Finite nuclear charge density distributions in electronic structure calculations for atoms and molecules. Physics Reports 336, 413-525 (2000). doi: 10.1016/S0370-1573(00)00007-7
[13] R. Hofstadter, Electron scattering and nuclear structure. Reviews of Modern Physics 28, 214-254 (1956). doi: 10.1103/RevModPhys.28.214
[14] H.F. Ehrenberg, R. Hofstadter, U. Meyer-Berkhout, et al., High-energy electron scattering and the charge distribution of carbon-12 and oxygen-16. Physical Review 113, 666-674 (1959). doi: 10.1103/PhysRev.113.666
[15] W. Kim, J.P. Connelly, J.H. Heisenberg, et al., Ground-state charge distribution and transition charge densities of the low-lying states in 86Sr. Physical Review C 46, 1656-1666 (1992). doi: 10.1103/PhysRevC.46.1656
[16] U. Meyer-Berkhout, K.W. Ford, A.E. Green, Charge distributions of nuclei of the 1p shell. Annals of Physics 8, 119-171 (1959). doi: 10.1016/0003-4916(59)90065-X
[17] W. Nan, B. Guo, C.J. Lin, et al., First proof-of-principle experiment with the post-accelerated isotope separator on-line beam at BRIF: measurement of the angular distribution of 23Na + 40Ca elastic scattering. Nuclear Science and Techniques 32, 53 (2021). doi: 10.1007/s41365-021-00889-9
[18] Y. Chu, Theoretical investigation on elastic electron scattering from some unstable nuclei. Thesis (2011)
[19] C. De Jager, H. De Vries, C. De Vries, Nuclear charge- and magnetization-density-distribution parameters from elastic electron scattering. Atomic Data and Nuclear Data Tables 14, 479-508 (1974). doi: 10.1016/S0092-640X(74)80002-1
[20] H. De Vries, C. De Jager, C. De Vries, Nuclear charge-density-distribution parameters from elastic electron scattering. Atomic Data and Nuclear Data Tables 36, 495-536 (1987). doi: 10.1016/0092-640X(87)90013-1
[21] G. Fricke, C. Bernhardt, K. Heilig, et al., Nuclear ground state charge radii from electromagnetic interactions. Atomic Data and Nuclear Data Tables 60, 177-285 (1995). doi: 10.1006/adnd.1995.1007
[22] J. Carlson, S. Gandolfi, F. Pederiva, et al., Quantum Monte Carlo methods for nuclear physics. Reviews of Modern Physics 87, 1067-1118 (2015). doi: 10.1103/RevModPhys.87.1067
[23] W. Dickhoff, C. Barbieri, Self-consistent Green's function method for nuclei and nuclear matter. Progress in Particle and Nuclear Physics 52, 377-496 (2004). doi: 10.1016/j.ppnp.2004.02.038
[24] G. Hagen, T. Papenbrock, M. Hjorth-Jensen, et al., Coupled-cluster computations of atomic nuclei. Reports on Progress in Physics 77, 096302 (2014). doi: 10.1088/0034-4885/77/9/096302
[25] D. Lee, Lattice simulations for few- and many-body systems. Progress in Particle and Nuclear Physics 63, 117-154 (2009). doi: 10.1016/j.ppnp.2008.12.001
[26] M. Bender, P.H. Heenen, P.G. Reinhard, Self-consistent mean-field models for nuclear structure. Reviews of Modern Physics 75, 121-180 (2003). doi: 10.1103/RevModPhys.75.121
[27] W. Richter, B. Brown, Nuclear charge densities with the Skyrme Hartree-Fock method. Physical Review C 67, 034317 (2003). doi: 10.1103/PhysRevC.67.034317
[28] S. Abbas, S. Salman, S. Ebrahiem, et al., Investigation of the nuclear structure of some Ni and Zn isotopes with Skyrme-Hartree-Fock interaction. Baghdad Science Journal 19, 914-921 (2022). doi: 10.21123/bsj.2022.19.4.0914
[29] A. Abdullah, Matter density distributions and elastic form factors of some two-neutron halo nuclei. Pramana - Journal of Physics 89, 43 (2017). doi: 10.1007/s12043-017-1445-5
[30] P. Ring, Relativistic mean field theory in finite nuclei. Progress in Particle and Nuclear Physics 37, 193-263 (1996). doi: 10.1016/0146-6410(96)00054-3
[31] D. Vretenar, A. Afanasjev, G. Lalazissis, et al., Relativistic Hartree-Bogoliubov theory: static and dynamic aspects of exotic nuclear structure. Physics Reports 409, 101-259 (2005). doi: 10.1016/j.physrep.2004.10.001
[32] J. Meng, H. Toki, S. Zhou, et al., Relativistic continuum Hartree-Bogoliubov theory for ground-state properties of exotic nuclei. Progress in Particle and Nuclear Physics 57, 470-563 (2006). doi: 10.1016/j.ppnp.2005.06.001
[33] J. Li, J. Meng, Nuclear magnetic moments in covariant density functional theory. Frontiers of Physics 13, 132109 (2018). doi: 10.1007/s11467-018-0842-7
[34] J. Meng, J. Peng, S.Q. Zhang, et al., Progress on tilted axis cranking covariant density functional theory for nuclear magnetic and antimagnetic rotation. Frontiers of Physics 8, 55-79 (2013). doi: 10.1007/s11467-013-0287-y
[35] S. Shen, H. Liang, W.H. Long, et al., Towards an ab initio covariant density functional theory for nuclear structure. Progress in Particle and Nuclear Physics 109, 103713 (2019). doi: 10.1016/j.ppnp.2019.103713
[36] J. Meng, Relativistic Density Functional for Nuclear Structure (World Scientific, 2016). doi: 10.1142/9872
[37] K. He, X. Zhang, S. Ren, et al., Deep residual learning for image recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778. doi: 10.1109/cvpr.2016.90
[38] G. Huang, Z. Liu, L. van der Maaten, et al., Densely connected convolutional networks, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261-2269. doi: 10.1109/cvpr.2017.243
[39] D. Bahdanau, K. Cho, Y. Bengio, Neural machine translation by jointly learning to align and translate. arXiv:1409.0473
[40] M. Baroni, S. Bernardini, A new approach to the study of translationese: machine-learning the difference between original and translated text. Literary and Linguistic Computing 21, 259-274 (2005). doi: 10.1093/llc/fqi039
[41] P. Baldi, P. Sadowski, D. Whiteson, Searching for exotic particles in high-energy physics with deep learning. Nature Communications 5, 4308 (2014). doi: 10.1038/ncomms5308
[42] L. Pang, K. Zhou, N. Su, et al., An equation-of-state-meter of quantum chromodynamics transition from deep learning. Nature Communications 9, 210 (2018). doi: 10.1038/s41467-017-02726-3
[43] J. Brehmer, K. Cranmer, G. Louppe, et al., Constraining effective field theories with machine learning. Physical Review Letters 121, 111801 (2018). doi: 10.1103/PhysRevLett.121.111801
[44] J. Carrasquilla, R. Melko, Machine learning phases of matter. Nature Physics 13, 431-434 (2017). doi: 10.1038/nphys4035
[45] G. Carleo, M. Troyer, Solving the quantum many-body problem with artificial neural networks. Science 355, 602-606 (2017). doi: 10.1126/science.aag2302
[46] S. Gazula, J. Clark, H. Bohr, Learning and prediction of nuclear stability by neural networks. Nuclear Physics A 540, 1-26 (1992). doi: 10.1016/0375-9474(92)90191-L
[47] K. Gernoth, J. Clark, J. Prater, et al., Neural network models of nuclear systematics. Physics Letters B 300, 1-7 (1993). doi: 10.1016/0370-2693(93)90738-4
[48] Z. Niu, H. Liang, Nuclear mass predictions based on Bayesian neural network approach with pairing and shell effects. Physics Letters B 778, 48-53 (2018). doi: 10.1016/j.physletb.2018.01.002
[49] Z.M. Niu, H.Z. Liang, Nuclear mass predictions with machine learning reaching the accuracy required by r-process studies. Physical Review C 106, L021303 (2022). doi: 10.1103/PhysRevC.106.L021303
[50] J.W. Clark, K.A. Gernoth, S. Dittmar, et al., Higher-order probabilistic perceptrons as Bayesian inference engines. Physical Review E 59, 6161-6174 (1999). doi: 10.1103/PhysRevE.59.6161
[51] S. Athanassopoulos, E. Mavrommatis, K. Gernoth, et al., Nuclear mass systematics using neural networks. Nuclear Physics A 743, 222-235 (2004). doi: 10.1016/j.nuclphysa.2004.08.006
[52] J. Clark, H. Li, Application of support vector machines to global prediction of nuclear properties. International Journal of Modern Physics B 20, 5015-5029 (2006). doi: 10.1142/S0217979206036053
[53] K. Gernoth, J. Clark, Neural networks that learn to predict probabilities: global models of nuclear stability and decay. Neural Networks 8, 291-311 (1995). doi: 10.1016/0893-6080(94)00071-S
[54] N. Costiris, E. Mavrommatis, K. Gernoth, et al., Decay systematics: a global statistical model for half-lives. Physical Review C 80, 044332 (2009). doi: 10.1103/PhysRevC.80.044332
[55] X.C. Ming, H.F. Zhang, R.R. Xu, et al., Nuclear mass based on the multi-task learning neural network method. Nuclear Science and Techniques 33, 48 (2022). doi: 10.1007/s41365-022-01031-z
[56] Z.P. Gao, Y.J. Wang, H.L. Lv, et al., Machine learning the nuclear mass. Nuclear Science and Techniques 32, 109 (2021). doi: 10.1007/s41365-021-00956-1
[57] Y. Wang, X. Zhang, Z. Niu, et al., Study of nuclear low-lying excitation spectra with the Bayesian neural network approach. Physics Letters B 830, 137154 (2022). doi: 10.1016/j.physletb.2022.137154
[58] S. Akkoyun, H. Kaya, Y. Torun, Estimations of first 2+ energy states of even-even nuclei by using artificial neural networks. Indian Journal of Physics 96, 1791-1797 (2022). doi: 10.1007/s12648-021-02099-w
[59] R.D. Lasseri, D. Regnier, J.P. Ebran, et al., Taming nuclear complexity with a committee of multilayer neural networks. Physical Review Letters 124, 162502 (2020). doi: 10.1103/PhysRevLett.124.162502
[60] S. Akkoyun, N. Laouet, F. Benrachi, Improvement studies of an effective interaction for N=Z sd-shell nuclei by neural networks. arXiv:2001.08561 (2020)
[61] R. Utama, W.C. Chen, J. Piekarewicz, Nuclear charge radii: density functional theory meets Bayesian neural networks. Journal of Physics G: Nuclear and Particle Physics 43, 114002 (2016). doi: 10.1088/0954-3899/43/11/114002
[62] Y. Ma, C. Su, J. Liu, et al., Predictions of nuclear charge radii and physical interpretations based on the naive Bayesian probability classifier. Physical Review C 101, 014304 (2020). doi: 10.1103/PhysRevC.101.014304
[63] D. Wu, C. Bai, H. Sagawa, et al., Calculation of nuclear charge radii with a trained feed-forward neural network. Physical Review C 102, 054323 (2020). doi: 10.1103/PhysRevC.102.054323
[64] U. Rodriguez, C. Vargas, M. Goncalves, et al., Alpha half-lives calculation of superheavy nuclei with Qα-value predictions based on the Bayesian neural network approach. Journal of Physics G: Nuclear and Particle Physics 46, 115109 (2019). doi: 10.1088/1361-6471/ab2c86
[65] U. Banos Rodriguez, C. Zuniga Vargas, M. Goncalves, et al., Bayesian neural network improvements to nuclear mass formulae and predictions in the superheavy elements region. EPL (Europhysics Letters) 127, 42001 (2019). doi: 10.1209/0295-5075/127/42001
[66] Z. Niu, H. Liang, B. Sun, et al., Predictions of nuclear β-decay half-lives with machine learning and their impact on r-process nucleosynthesis. Physical Review C 99, 064307 (2019). doi: 10.1103/PhysRevC.99.064307
[67] N. Costiris, E. Mavrommatis, K. Gernoth, et al., Decoding β-decay systematics: a global statistical model for β⁻ half-lives. Physical Review C 80, 044332 (2009). doi: 10.1103/PhysRevC.80.044332
[68] N. Costiris, E. Mavrommatis, K. Gernoth, et al., Statistical global modeling of β-decay half-lives systematics using multilayer feedforward neural networks and support vector machines. arXiv:0809.0383 (2008)
[69] Z. Yuan, D. Tian, J. Li, et al., Magnetic moment predictions of odd-A nuclei with the Bayesian neural network approach. Chinese Physics C 45, 124107 (2021). doi: 10.1088/1674-1137/ac28f9
[70] C.W. Ma, D. Peng, H.L. Wei, et al., Isotopic cross-sections in proton induced spallation reactions based on the Bayesian neural network method. Chinese Physics C 44, 014104 (2020). doi: 10.1088/1674-1137/44/1/014104
[71] C.W. Ma, D. Peng, H.L. Wei, et al., A Bayesian-neural-network prediction for fragment production in proton induced spallation reaction. Chinese Physics C 44, 124107 (2020). arXiv:2007.15416, doi: 10.1088/1674-1137/abb657
[72] D. Peng, H.L. Wei, X.X. Chen, et al., Bayesian evaluation of residual production cross sections in proton-induced nuclear spallation reactions. Journal of Physics G: Nuclear and Particle Physics 49, 085102 (2022). doi: 10.1088/1361-6471/ac7069
[73] C.W. Ma, H.L. Wei, X.Q. Liu, et al., Nuclear fragments in projectile fragmentation reactions. Progress in Particle and Nuclear Physics 121, 103911 (2021). doi: 10.1016/j.ppnp.2021.103911
[74] C.W. Ma, X.B. Wei, X.X. Chen, et al., Precise machine learning models for fragment production in projectile fragmentation reactions using Bayesian neural networks. Chinese Physics C 46, 074104 (2022). doi: 10.1088/1674-1137/ac5efb
[75] S. Akkoyun, T. Bayram, S. Kara, et al., An artificial neural network application on nuclear charge radii. Journal of Physics G: Nuclear and Particle Physics 40, 055106 (2013). doi: 10.1088/0954-3899/40/5/055106
[76] T. Bayram, S. Akkoyun, S. Okan Kara, A study on ground-state energies of nuclei by using neural networks. Annals of Nuclear Energy 63, 172-175 (2014). doi: 10.1016/j.anucene.2013.07.039
[77] S. Akkoyun, T. Bayram, Estimations of fission barrier heights for Ra, Ac, Rf and Db nuclei by neural networks. International Journal of Modern Physics E 23, 1450064 (2014). doi: 10.1142/S0218301314500645
[78] R. Utama, J. Piekarewicz, H. Prosper, Nuclear mass predictions for the crustal composition of neutron stars: a Bayesian neural network approach. Physical Review C 93, 014311 (2016). doi: 10.1103/PhysRevC.93.014311
[79] L. Neufcourt, Y. Cao, W. Nazarewicz, et al., Bayesian approach to model-based extrapolation of nuclear observables. Physical Review C 98, 034318 (2018). doi: 10.1103/PhysRevC.98.034318
[80] X. Wang, L. Zhu, J. Su, Providing physics guidance in Bayesian neural networks from the input layer: the case of giant dipole resonance predictions. Physical Review C 104, 034317 (2021). doi: 10.1103/PhysRevC.104.034317
[81] D. Wu, C. Bai, H. Sagawa, et al., β-delayed one-neutron emission probabilities within a neural network model. Physical Review C 104, 054303 (2021). doi: 10.1103/PhysRevC.104.054303
[82] X.H. Wu, Z.X. Ren, P.W. Zhao, Nuclear energy density functionals from machine learning. Physical Review C 105, L031303 (2022). doi: 10.1103/PhysRevC.105.L031303
[83] E. Alhassan, D. Rochman, A. Vasiliev, et al., Iterative Bayesian Monte Carlo for nuclear data evaluation. Nuclear Science and Techniques 33, 50 (2022). doi: 10.1007/s41365-022-01034-w
[84] K. Levenberg, A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics 2, 164-168 (1944)
[85] D. Marquardt, An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics 11, 431-441 (1963). doi: 10.1137/0111030
[86] H. Robbins, A stochastic approximation method. Annals of Mathematical Statistics 22, 400-407 (1951)
[87] T. Chin-Yuen, Radius of nuclear charge distribution and nuclear binding energy. Acta Physica Sinica 13, 357-364 (1957)
[88] R. Hofstadter, B. Hahn, A. Knudsen, et al., High-energy electron scattering and nuclear structure determinations. II. Physical Review 95, 512-515 (1954). doi: 10.1103/PhysRev.95.512
[89] D. Yennie, D. Ravenhall, R. Wilson, Phase-shift calculation of high-energy electron scattering. Physical Review 95, 500-512 (1954). doi: 10.1103/PhysRev.95.500
[90] P. Reinhard, W. Nazarewicz, R. Ruiz, Beyond the charge radius: the information content of the fourth radial moment. Physical Review C 101, 021301 (2020). doi: 10.1103/PhysRevC.101.021301
[91] T. Naito, G. Colo, H. Liang, et al., Second and fourth moments of the charge density and neutron-skin thickness of atomic nuclei. Physical Review C 104, 024316 (2021). doi: 10.1103/PhysRevC.104.024316
[92] V. Shabaev, Finite nuclear size corrections to the energy levels of the multicharged ions. Journal of Physics B 26, 1103-1108 (1993)
[93] I. Angeli, K. Marinova, Table of experimental nuclear ground state charge radii: an update. Atomic Data and Nuclear Data Tables 99, 69-95 (2013). doi: 10.1016/j.adt.2011.12.006