
A Genetic-Algorithm-based Neural Network Approach for Radioactive Activity Prediction

LOW ENERGY ACCELERATOR, RAY AND APPLICATIONS


WANG Lei
TUO Xianguo
YAN Yucheng
LIU Mingzhe
CHENG Yi
LI Pingchuan
Nuclear Science and Techniques, Vol. 24, No. 6, Article number 060201. Published in print: 01 Dec 2013

In this paper, a genetic-algorithm-based artificial neural network (GAANN) model for radioactivity prediction is proposed and verified with measurement results from a Long-Range Alpha Detector (LRAD). GAANN integrates the approximation capability of Artificial Neural Networks (ANN) with the global optimization capability of Genetic Algorithms (GA), so that the hybrid model can, in principle, enhance generalization capability and prediction accuracy. With this model, both the number of hidden nodes and the connection weights matrix of the ANN are optimized by genetic operations. Real data sets are applied to the introduced method, and the results are discussed and compared with those of the traditional Back-Propagation (BP) neural network, showing the feasibility and validity of the proposed approach.

Long range alpha detector; Genetic algorithms; Radioactivity prediction

1 Introduction

In recent years, with the decommissioning of a large number of nuclear facilities and the increasing demand for waste treatment, the existing radioactive pipes need to be treated properly. In order to sort and treat the alpha-contaminated pipes before disposal, a radioactivity survey must be carried out to assess whether the pipes meet the limit requirements. Alpha particles released by the nuclides in the pipelines are partially absorbed by the pipe walls, which makes it difficult to detect the alpha pollution inside the pipelines by external direct testing methods. Recently, Long-Range Alpha Detector (LRAD) technology has provided an approach to obtain the alpha pollution information inside pipelines. The LRAD detection system normally consists of five units: sample detection (ion chamber and measurement chamber), air-driven part, power supply, signal acquisition, and signal processing. This technique detects alpha radioactivity indirectly by collecting the ions produced inside the pipelines by alpha particles. Thus, it can overcome defects of direct alpha detection such as the short range of alpha particles and their inability to penetrate facility walls. Many scholars have made preliminary studies confirming that distance, length, diameter, radioactivity, wind speed and air flux influence the final measurement results[1‒5]. Our statistical analysis of LRAD experimental results has shown that there is a nonlinear relationship between the testing parameters and the measurement results.

This paper puts forward a kind of improved genetic algorithm to deal with the uncertainty correction. The experimental data are collected under the following conditions: the bias voltage of the current ionization chamber is 200 V; the diameters of the carbon steel pipes are 43 mm, 48 mm, 58 mm, 66 mm and 78 mm, and their lengths are 10‒160 cm, adjustable by rotary joints; and the alpha sources are 24.05 Bq and 3200.00 Bq, used to establish the data training model and the example detection model, respectively. The results show that the improved genetic algorithm can effectively improve the prediction precision.

2 Methodology

The GAANN model has three layers, with m nodes in the input layer, h nodes in the hidden layer and n nodes in the output layer. Firstly, the model is implemented in order to determine a basic state space of the connection weights matrix. Secondly, the number of hidden nodes and the connection weights matrix are encoded into a mixed string consisting of integer and real values[7,8]. In this paper the experimental data are divided into three parts: training samples φ11, φ12, cross-validation samples φ21, φ22 and testing samples φ31, φ32. The scheme is as follows:

Step 1: Initialize the connection weights within [‒1, 1] for the training samples φ11, φ12. Adjust the weights until the desired error tolerances ε11, ε12 are reached. The maximum and minimum of the weights are denoted as umax and umin, respectively. The values of the weights are then taken within [umin + δ1, umax + δ2], where δ1, δ2 are adjustment parameters.

$\min E_i = \frac{1}{2}\sum_{k\in\varphi_i}\left[y_k(t)-\hat{y}_k(t)\right]^2 < \varepsilon_{i1}$ (1)

where i = 1, 2, 3 corresponds to the three data sets, and $\hat{y}_k(t)$, $y_k(t)$ are the desired output and the real data, respectively.
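As a rough illustration of Step 1, the following Python sketch (our own, not from the paper; the sample values are illustrative) computes the sum-of-squares error of Eq. (1) and the admissible weight interval [umin + δ1, umax + δ2]:

    import numpy as np

    def training_error(y, y_hat):
        """Sum-of-squares error of Eq. (1): E = 1/2 * sum_k [y_k(t) - y_hat_k(t)]^2."""
        y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
        return 0.5 * np.sum((y - y_hat) ** 2)

    def weight_range(weights, delta1, delta2):
        """Admissible weight interval [u_min + delta1, u_max + delta2] of Step 1."""
        u_min, u_max = float(np.min(weights)), float(np.max(weights))
        return u_min + delta1, u_max + delta2

    # Illustrative usage with hypothetical pre-trained BP weights
    w = np.random.uniform(-1.0, 1.0, size=43)
    print(training_error([0.9, 1.2], [1.0, 1.1]), weight_range(w, -0.09, 0.04))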

Step 2: Encode the connection weights and the number of hidden nodes. The hidden nodes are encoded as a binary code string, in which 1 denotes a node connected to the input and output nodes and 0 denotes no connection. The weights are encoded as a float string of length H = m×h + h + h×n + n (m is the number of input nodes, n is the number of output nodes, and h is the number of hidden nodes). Each string corresponds to a chromosome consisting of several gene sections, as tabulated in Table 1.

Step 3: Initialize a population of chromosomes. The length L of each chromosome equals G + H, where G is the length of the binary code of the hidden nodes and H is the length of the real-valued code of the connection weights.

Table 1
Schematic diagram of the encoded chromosome. Part A is encoded in binary, the other parts in real values. These values change during the training period.
Part     A          B             C             D             E
Genes    1, …, 1    0.2, …, 0.7   0.3, …, 0.1   0.2, …, 0.3   0.9, …, 0.8
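A minimal sketch of the mixed encoding of Steps 2 and 3, assuming the binary part holds one control bit per hidden node (function and variable names are ours, not the paper's):

    import numpy as np

    def init_chromosome(m, h, n, u_lo, u_hi, rng=np.random.default_rng()):
        """Mixed chromosome: h binary control genes (1 = hidden node kept, 0 = pruned)
        followed by H = m*h + h + h*n + n real-valued weight/bias genes in [u_lo, u_hi]."""
        control = rng.integers(0, 2, size=h)        # binary part (part A in Table 1)
        H = m * h + h + h * n + n                   # weights plus hidden/output biases
        weights = rng.uniform(u_lo, u_hi, size=H)   # real-valued parts (B-E in Table 1)
        return np.concatenate([control, weights])   # total chromosome length L = G + H

    # e.g. m = 5 inputs, h = 6 hidden nodes, n = 1 output, weights restricted to [-1.3, 1.0]
    chromosome = init_chromosome(5, 6, 1, -1.3, 1.0)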

Step 4: Calculate the fitness of each individual according to Eq. (2) below.

$F = 1/(1 + \min E)$ (2)

Step 5: Copy the highest-fitness individual directly into the new offspring population (elitism) and select the remaining individuals by roulette-wheel selection[9].
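A sketch of Steps 4 and 5, combining the fitness of Eq. (2) with elitist roulette-wheel selection (a minimal illustration; the helper names are ours):

    import numpy as np

    def fitness(min_E):
        """Eq. (2): F = 1 / (1 + min E)."""
        return 1.0 / (1.0 + min_E)

    def select(population, F, rng=np.random.default_rng()):
        """Step 5: keep the best individual unchanged (elitism) and fill the rest of the
        new population by roulette-wheel sampling with probability proportional to F."""
        F = np.asarray(F, dtype=float)
        best = population[int(np.argmax(F))]
        probs = F / F.sum()
        idx = rng.choice(len(population), size=len(population) - 1, p=probs)
        return [best] + [population[i] for i in idx]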

Step 6: Apply basic crossover and mutation operations to the control code; namely, if a hidden node is deleted (added) by the mutation operation, the corresponding control bit is set to 0 (1). The crossover and mutation operators for the weights are defined as follows:

Crossover operation with probability pc

$X_i^{t+1} = c_i X_i^t + (1-c_i)X_{i+1}^t$ (3)

$X_{i+1}^{t+1} = (1-c_i)X_i^t + c_i X_{i+1}^t$ (4)

where $X_i^t$, $X_{i+1}^t$ are a pair of individuals before crossover, $X_i^{t+1}$, $X_{i+1}^{t+1}$ are the pair after crossover, and $c_i$ is a random value within [0, 1].

Mutation operation with probability pm

$X_i^{t+1} = X_i^t + c_i$ (5)

where $X_i^t$ is the individual before mutation, $X_i^{t+1}$ is the individual after mutation, and $c_i$ is a random value within $(u_{\min}+\delta_1-X_i^t,\ u_{\max}+\delta_2-X_i^t)$, so that the mutated weight remains within the admissible weight range.
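A compact sketch of the weight crossover and mutation operators of Step 6 (Eqs. (3)‒(5)); the bounds u_lo and u_hi stand for u_min + δ1 and u_max + δ2, and the function names are ours:

    import numpy as np

    def crossover(x_i, x_j, pc, rng=np.random.default_rng()):
        """Eqs. (3)-(4): arithmetic crossover of two real-valued weight strings."""
        if rng.random() < pc:
            c = rng.random()                                  # c_i drawn from [0, 1]
            return c * x_i + (1 - c) * x_j, (1 - c) * x_i + c * x_j
        return x_i.copy(), x_j.copy()

    def mutate(x, pm, u_lo, u_hi, rng=np.random.default_rng()):
        """Eq. (5): add a random c_i chosen so the mutated gene stays in [u_lo, u_hi]."""
        x = x.copy()
        for k in range(len(x)):
            if rng.random() < pm:
                x[k] += rng.uniform(u_lo - x[k], u_hi - x[k])
        return x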

Step 7: Generate the new population and replace the current one. The above procedures (Steps 4‒7) are repeated until the convergence conditions (min E2 < εk2 and min E3 < εk3) are satisfied, where k = 1, 2, 3 corresponds to the three data sets.

Step 8: Decode the highest-fitness individual, obtain the corresponding number of hidden nodes and connection weights, and output the prediction results.
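Tying Steps 4‒8 together, a simplified driver loop might look as follows. This is only a sketch, not the authors' exact procedure: it reuses the helpers sketched above, applies the weight operators to the whole chromosome for brevity (the paper handles the binary control part separately), reduces the convergence test to a single threshold, and assumes a hypothetical evaluate() routine that decodes a chromosome, trains the corresponding network and returns its min E.

    import numpy as np

    def evolve(population, evaluate, pc, pm, u_lo, u_hi, eps, max_gen=50,
               rng=np.random.default_rng()):
        """Simplified GA loop covering Steps 4-8 (illustrative only)."""
        for _ in range(max_gen):
            errors = [evaluate(ch) for ch in population]       # Step 4: errors -> fitness
            F = [fitness(e) for e in errors]
            if min(errors) < eps:                              # Step 7: convergence check
                break
            parents = select(population, F, rng)               # Step 5: elitism + roulette
            children = []
            for a, b in zip(parents[0::2], parents[1::2]):     # Step 6: crossover/mutation
                c1, c2 = crossover(a, b, pc, rng)
                children += [mutate(c1, pm, u_lo, u_hi, rng),
                             mutate(c2, pm, u_lo, u_hi, rng)]
            population = children                              # Step 7: replace population
        errors = [evaluate(ch) for ch in population]
        return population[int(np.argmin(errors))]              # Step 8: best individual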

3 Model implementation and results

In order to evaluate the performance of our model, we choose the representative 239Pu strong source (3200.00 Bq) and weak source (24.05 Bq), respectively, and then establish a prediction model in which the measured distance, tube length, diameter, wind speed and air flow form the input layer, and the ionization voltage serves as the output layer. The ionization voltage, which represents the radioactivity intensity, is chosen as the predicted output value. In the experiments we find a linear relationship between radioactivity and ionization voltage. After measuring the ionization voltage, it is compared with that of the standard source, and the radioactive intensity can then be calculated.
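A worked illustration of this calibration step (our own sketch, assuming simple proportionality between activity and ionization voltage; the paper states a linear relationship but does not spell out the conversion):

    def activity_from_voltage(V, V_std, A_std):
        """Estimate the activity of an unknown source from its ionization voltage V,
        given the voltage V_std measured for a standard source of known activity A_std.
        Assumes the linear relationship passes through the origin."""
        return A_std * V / V_std

    # e.g. a hypothetical reading of half the standard-source voltage for a
    # 3200.00 Bq standard would give an estimated activity of 1600 Bq
    print(activity_from_voltage(0.5, 1.0, 3200.00))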

Table 2
Required model parameters
Name of variable              Value
Training sample φ11           128
Validation sample φ12         27
Testing sample φ13            27
Training sample φ21           105
Validation sample φ22         22
Testing sample φ23            22
Number of input nodes         6
Number of hidden nodes        6
Number of output nodes        1
Learning rate                 0.05
Training epochs               1000
Error ε11 of sample φ11       0.05
Error ε12 of sample φ12       0.10
Error ε13 of sample φ13       0.10
Error ε21 of sample φ21       0.15
Error ε22 of sample φ22       0.30
Error ε23 of sample φ23       0.30
Crossover probability pc      0.50
Mutation probability pm       0.10
Population size               20
Number of iterations          50
Training goal                 10⁻⁵
Fig. 1 Measurement and prediction values under 3200.00 Bq.

In the simulations, a three-layer BP neural network is first employed to estimate the basic state space of the connection weights. The minimum and maximum weight values obtained are ‒1.21 and 0.96, respectively. Letting δ1 = ‒0.09 and δ2 = 0.04, the range of the weights is therefore assumed to be [‒1.3, 1.0]. The number of input neurons is 5 and the number of hidden nodes is 6. The activation function from the input to the hidden layer is the sigmoid function, while that from the hidden to the output layer is the purelin (linear) function. For the proposed hybrid neural network, the system parameters listed in Table 2 are applied to training and prediction.
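For concreteness, the forward pass of such a three-layer network (sigmoid hidden layer, purelin output layer) can be sketched as follows; the weight shapes and names are illustrative, not taken from the paper:

    import numpy as np

    def forward(x, W1, b1, W2, b2):
        """Forward pass of a three-layer network: sigmoid hidden layer, purelin output.
        Shapes (illustrative): x (m,), W1 (h, m), b1 (h,), W2 (n, h), b2 (n,)."""
        hidden = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # sigmoid activation
        return W2 @ hidden + b2                          # purelin = identity (linear) output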

In order to test the model performance, we set the training error ε11 = 0.05, validation error ε12 = 0.10 and testing error ε13 = 0.10 for the 3200.00 Bq alpha source. The training results are shown in Fig. 1.

Since the measurement error is large in the weak-source environment, we set the training error ε21 = 0.15, validation error ε22 = 0.30 and testing error ε23 = 0.30 for the 24.05 Bq source, as shown in Fig. 2.

For comparison with other neural network models, such as the basic BP neural network, three types of errors commonly used in the literature are also adopted here: the mean absolute percentage error (MAPE), the maximum absolute percentage error (MAXAPE) and the minimum absolute percentage error (MINAPE).
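These three statistics can be computed from the absolute percentage errors of the predictions, for example (a minimal sketch; the array names are ours):

    import numpy as np

    def percentage_errors(y, y_hat):
        """Return (MAPE, MAXAPE, MINAPE) of the absolute percentage errors |y - y_hat| / |y|."""
        y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
        ape = np.abs((y - y_hat) / y)
        return ape.mean(), ape.max(), ape.min()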

Since the initial connection weights and thresholds are random to some extent, each of the two prediction models was run 50 times.

Fig. 2 Measurement and prediction values under 24.05 Bq.
Table 3
Error comparison of the GA-based neural network and the traditional BP neural network in the 24.05 Bq environment
                                          GAANN            Traditional BP network
Confidence interval                       0.2247‒0.2816    0.2247‒0.2527
Number falling in confidence interval     15               4
Mean error                                0.2373           0.2748
MAXAPE                                    0.6590           0.7332
MINAPE                                    0.0025           0.0048
Table 4
Error comparison of the GA-based neural network and the traditional BP neural network in the 3200.00 Bq environment
                                          GAANN            Traditional BP network
Confidence interval                       0.0667‒0.0747    0.0762‒0.0845
Number falling in confidence interval     12               11
Mean error                                0.0707           0.0740
MAXAPE                                    0.1555           0.1598
MINAPE                                    0.0098           0.0006

The 95% confidence interval over these runs was taken as the neural network ensemble result. The model parameters are listed in Table 2, and the characteristic error statistics are tabulated in Tables 3 and 4. It can be observed that the approximation capability of the proposed model is better than that of the traditional one for both weak and strong radioactive sources. For the weak radioactive source, however, the prediction error remains large. This is a shortcoming of the proposed model, which motivates us to develop a better model; the results will be reported in a future publication.
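For reference, a 95% confidence interval of the mean error over the 50 runs can be formed as follows (a sketch under the normal approximation; the paper does not specify how its intervals were obtained):

    import numpy as np

    def mean_error_ci(errors, z=1.96):
        """Approximate 95% confidence interval of the mean error over repeated runs,
        using the normal approximation (an assumption, not stated in the paper)."""
        e = np.asarray(errors, dtype=float)
        half = z * e.std(ddof=1) / np.sqrt(len(e))
        return e.mean() - half, e.mean() + half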

Fig. 3 Measurement and prediction values under 24.05 Bq.
Fig. 4 Measurement and prediction values under 3200.00 Bq.

4 Conclusion

In this paper, a GA-based neural network approach has been proposed for LRAD radioactivity survey research. The network structure is optimized and the connection weights are adjusted through the implementation of genetic operators. Experiments with LRAD data have shown that the predictive performance of the proposed model is better than that of the traditional BP neural network. However, prediction for the weak radioactive source remains relatively poor. Further improvements in model performance should consider factors such as different time windows, different prediction horizons, crossover and mutation operators, and classification of the data sets. This work is in progress.

References
1 MacArthur D W, Allander K S, Bounds J S, et al. Los Alamos National Laboratory publication LA-12199-MS, Los Alamos, 1991, 1: 6-8.
2 Rawool-Sullivan M W, Allander K S, Bounds J A, et al. LA-UR-94-3632, Washington, 1994, 1:1-7.
3 Johnson J D, Allander K S, Bounds J A, et al. Nucl Instrum Meth Phys Res A, 1994, 353: 486-488.
4 MacArthur D W, Allander K S, Bounds J A, et al. Nucl Tech (US), 1993, 102: 270-276.
5 Tuo X G, Li Z, Mu K L, et al. J Nucl Sci Technol, 2008, 1: 282-285.
6 Ahmad A L, Azid I A, Yusof A R, et al. Comput Chem Eng, 2004, 28: 2709-2715.
7 Kima G H, Yoona J E, An S H, et al. Building Environ, 2004, 39:1333-1340.
8 Ozkaya B, Demi A, Bilgili M S, Environ Modeling Software, 2007, 22: 815-822.
9 Messai N, Thomas P, Lefebvre D, et al. A neural network approach for freeway traffic flow prediction. In: Proceedings of the 2002 IEEE International Conference on Control Applications, Glasgow, Scotland, UK, September 2002, 18-20.