1 Introduction
In recent years, with the decommissioning of a large number of nuclear facilities and the increasing demand for waste treatment, existing radioactive pipes must be properly characterized and treated. To sort and treat alpha-contaminated pipes before disposal, a radioactivity survey is required to assess whether the pipes meet regulatory limits. Alpha particles released by nuclides inside the pipelines are partially absorbed by the pipe walls, which makes it difficult to detect alpha contamination inside the pipelines by external direct-measurement methods. The Long-Range Alpha Detector (LRAD) technique provides a way to obtain information on alpha contamination inside pipelines. An LRAD detection system normally consists of five units: sample detection (ion chamber and measurement chamber), air-driving part, power supply, signal acquisition, and signal processing. The technique detects alpha radioactivity indirectly by collecting the ions produced inside the pipeline by alpha particles, and thus overcomes drawbacks of direct alpha detection such as the short range of alpha particles and their inability to penetrate facility walls. Preliminary studies have confirmed that distance, pipe length, diameter, radioactivity, wind speed, and air flux all influence the measured results[1‒5]. Our statistical analysis of LRAD experimental results has shown that the relationship between the test parameters and the measured results is nonlinear.
This paper puts forward an improved genetic algorithm to correct this uncertainty. The experimental data were collected under the following conditions: the bias voltage of the current ionization chamber was 200 V; the carbon steel pipes had diameters of 43 mm, 48 mm, 58 mm, 66 mm, and 78 mm and lengths of 10‒160 cm, adjustable by rotary joints; and alpha sources of 24.05 Bq and 3200.00 Bq were used to establish the training model and the example detection model. The results show that the improved genetic algorithm can effectively improve the prediction precision.
2 Methodology
The GAANN model has three layers, with m nodes in the input layer, h nodes in the hidden layer, and n nodes in the output layer. First, the model is implemented to determine a basic state space for the connection-weight matrix. Second, the number of hidden nodes and the connection weights are encoded into a mixed string consisting of integer-valued and real-valued parts[7,8]. For each source, the experimental data are divided into three parts: training samples φ11, φ21, cross-validation samples φ12, φ22, and testing samples φ13, φ23. The scheme is as follows:
Step 1: Initialize the connection weights within [‒1, 1] and train on the training samples φ11 and φ21, adjusting the weights until the desired error tolerances ε11 and ε21 are reached. The maximum and minimum of the resulting weights are denoted umax and umin, respectively, and the values of the weights are then taken within [umin‒δ1, umax+δ2], where δ1 and δ2 are adjustment parameters.
In Eq. (1), Ei is the error on data set i, where i = 1, 2, 3 corresponds to the training, validation, and testing sets, respectively, and ŷk(t) and yk(t) denote the desired output and the actual data.
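To make Step 1 concrete, the following minimal Python sketch (the helper names are illustrative, not from the paper) computes the GA search range from a set of pre-trained weights and evaluates a per-data-set error; the exact form of Eq. (1) may differ, so the mean summed squared output error is assumed for Ei.

```python
import numpy as np

def weight_range(trained_weights, delta1, delta2):
    """GA search range [umin - delta1, umax + delta2] for the weight genes (Step 1)."""
    w = np.concatenate([np.ravel(v) for v in trained_weights])
    return w.min() - delta1, w.max() + delta2

def data_set_error(y_desired, y_actual):
    """Assumed form of the error E_i in Eq. (1): mean summed squared output error.

    Both arrays are expected with shape (num_samples, num_output_nodes).
    """
    d = np.asarray(y_desired, dtype=float) - np.asarray(y_actual, dtype=float)
    return float(np.mean(np.sum(d ** 2, axis=1)))
```

With the values reported in Sect. 3 (umin = ‒1.21, umax = 0.96, δ1 = 0.09, δ2 = 0.04), weight_range returns the interval [‒1.3, 1.0].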
Step 2: Encode the connection weights and the number of hidden nodes. The hidden nodes are encoded as a binary string: a bit of 1 indicates that the hidden node is connected to the input and output nodes, and 0 indicates no connection. The weights are encoded as a floating-point string of length H = m×h + h + h×n + n (m is the number of input nodes, n is the number of output nodes, h is the number of hidden nodes). Each string corresponds to a chromosome consisting of several gene sections, tabulated as follows:
Step 3: Initialize a population of chromosomes. The length L of each chromosome equals to G+H, where G is the length of binary code of the number of hidden nodes and H is the length of real-valued code of connection weights.
A | B | C | D | E |
---|---|---|---|---|
1, …, 1 | 0.2, …, 0.7 | 0.3, …, 0.1 | 0.2, …, 0.3 | 0.9, …, 0.8 |
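As an illustration of the encoding in Steps 2 and 3, the sketch below builds and decodes a mixed chromosome of length L = G + H. The mapping of sections A‒E to control bits, input-hidden weights, hidden thresholds, hidden-output weights, and output thresholds is our reading of the table above, and the function names are illustrative.

```python
import numpy as np

def init_chromosome(m, h, n, w_lo=-1.0, w_hi=1.0, rng=np.random):
    """Mixed chromosome: G = h binary control bits followed by H = m*h + h + h*n + n real weights."""
    control = rng.randint(0, 2, size=h).astype(float)              # section A: hidden-node control bits
    weights = rng.uniform(w_lo, w_hi, size=m * h + h + h * n + n)  # sections B-E: real-valued genes
    return np.concatenate([control, weights])

def decode_chromosome(chrom, m, h, n):
    """Split a chromosome into control bits and the four weight/threshold blocks."""
    control = chrom[:h].astype(int)
    w = chrom[h:]
    W1 = w[:m * h].reshape(m, h)                       # B: input -> hidden weights
    b1 = w[m * h:m * h + h]                            # C: hidden-node thresholds
    W2 = w[m * h + h:m * h + h + h * n].reshape(h, n)  # D: hidden -> output weights
    b2 = w[m * h + h + h * n:]                         # E: output-node thresholds
    return control, W1, b1, W2, b2
```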
Step 4: Calculate the fitness of each individual according to Eq. (2).
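Eq. (2) is not quoted above; a common choice for error-driven GA training, assumed in the short sketch below, is a fitness that increases as the training error decreases.

```python
def fitness(training_error):
    """Assumed fitness for Step 4: larger fitness for smaller training error E."""
    return 1.0 / (1.0 + training_error)
```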
Step 5: Copy the individual with the highest fitness directly into the new generation and select the remaining individuals by roulette-wheel selection[9].
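Step 5 can be sketched as follows, assuming standard fitness-proportional (roulette-wheel) selection with the best individual carried over unchanged; the helper name is illustrative.

```python
import numpy as np

def select_population(population, fitnesses, rng=np.random):
    """Elitism plus roulette-wheel selection (Step 5)."""
    fitnesses = np.asarray(fitnesses, dtype=float)
    elite = population[int(np.argmax(fitnesses))].copy()   # the best individual is copied directly
    probs = fitnesses / fitnesses.sum()                    # selection probability proportional to fitness
    idx = rng.choice(len(population), size=len(population) - 1, p=probs)
    return [elite] + [population[i].copy() for i in idx]
```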
Step 6: Apply basic crossover and mutation operations to the control code; if a hidden node is deleted (added) by the mutation operation, the corresponding control bit is set to 0 (1). The crossover and mutation operators for the weights are defined as follows:
Crossover operation with probability pc:
where Xit and Xi+1t are a pair of individuals before crossover, Xit+1 and Xi+1t+1 are the corresponding pair after crossover, and ci is a random value in [0, 1].
Mutation operation with probability pm:
where Xit is the individual before mutation, Xit+1 is the individual after mutation, and ci is a random value in (umin ‒ δ1 ‒ Xit, umax + δ2 ‒ Xit).
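The operator equations themselves are not quoted above; the sketch below assumes the standard arithmetic crossover, Xit+1 = ci·Xit + (1‒ci)·Xi+1t and Xi+1t+1 = ci·Xi+1t + (1‒ci)·Xit, together with an additive mutation whose increment keeps each weight inside [umin‒δ1, umax+δ2]; the default range uses the values from Sect. 3, and the function names are illustrative.

```python
import numpy as np

def crossover_weights(x1, x2, pc=0.5, rng=np.random):
    """Arithmetic crossover of two real-valued weight strings, applied with probability pc."""
    if rng.rand() < pc:
        c = rng.rand()                                        # ci drawn uniformly from [0, 1]
        return c * x1 + (1.0 - c) * x2, c * x2 + (1.0 - c) * x1
    return x1.copy(), x2.copy()

def mutate_weights(x, pm=0.1, lo=-1.3, hi=1.0, rng=np.random):
    """Additive mutation applied gene-wise with probability pm; mutated genes stay in [lo, hi]."""
    y = x.copy()
    mask = rng.rand(x.size) < pm
    # an increment drawn from (lo - x_i, hi - x_i) keeps x_i + c_i inside [lo, hi]
    y[mask] = x[mask] + rng.uniform(lo - x[mask], hi - x[mask])
    return y
```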
Step 7: Generate the new population and replace the current one. Steps 4‒7 are repeated until the convergence conditions (min E2 < εk2 and min E3 < εk3) are satisfied, where k = 1, 2 corresponds to the strong-source and weak-source data sets, respectively.
Step 8: Decode the individual with the highest fitness, obtain the corresponding number of hidden nodes and connection weights, and output the prediction results.
3 Model implementation and results
To evaluate the performance of the model, we chose the representative 239Pu strong source (3200.00 Bq) and weak source (24.05 Bq), respectively, and established a prediction model in which the measured distance, tube length, diameter, wind speed, and air flow form the input layer and the ionization voltage forms the output layer. The ionization voltage, which represents the radioactivity intensity, is chosen as the predicted output value. In the experiments we found a linear relationship between radioactivity and ionization voltage; after the ionization voltage is measured, it is compared with that of the standard source, from which the radioactive intensity can be calculated.
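The final conversion from ionization voltage to activity is therefore a proportional comparison with the standard source; a minimal sketch, assuming direct proportionality (zero intercept) between activity and measured voltage, is given below with an illustrative function name.

```python
def activity_from_voltage(v_measured, v_standard, a_standard):
    """Estimate the source activity (Bq) from the measured ionization voltage.

    Assumes the reported linear relationship in the proportional form A = A_std * V / V_std.
    """
    return a_standard * v_measured / v_standard
```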
Name of variables | Value |
---|---|
Training sample φ11 | 128 |
Validation sample φ12 | 27 |
Testing sample φ13 | 27 |
Training sample φ21 | 105 |
Validation sample φ22 | 22 |
Testing sample φ23 | 22 |
Number of input nodes | 6 |
Number of hidden nodes | 6 |
Number of output nodes | 1 |
Learning rate | 0.05 |
Training epochs | 1000 |
Error ε11 of sample φ11 | 0.05 |
Error ε12 of sample φ12 | 0.10 |
Error ε13 of sample φ13 | 0.10 |
Error ε21 of sample φ21 | 0.15 |
Error ε22 of sample φ22 | 0.30 |
Error ε23 of sample φ23 | 0.30 |
Crossover probability pc | 0.50 |
Mutation probability pm | 0.10 |
Population | 20 |
Iteration number | 50 |
Training goal | 10^‒5
[Fig. 1]
In the simulations, a three-layered BP neural network is first employed to estimate the basic state space of the connection weights. The minimum and maximum weights obtained are ‒1.21 and 0.96, respectively. Taking δ1 = 0.09 and δ2 = 0.04, the range of the weights is therefore assumed to be [‒1.3, 1.0]. The number of input neurons is 5 and the number of hidden nodes is 6. The activation function from the input to the hidden layer is the sigmoid function, and that from the hidden to the output layer is the purelin (linear) function. For the proposed hybrid neural network, the system parameters in Table 2 are applied to training and prediction.
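A forward pass of the resulting network, with a sigmoid hidden layer, a purelin (linear) output layer, and the chromosome control bits switching hidden nodes on or off, can be sketched as follows (function names are illustrative and reuse decode_chromosome from Sect. 2).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, control, W1, b1, W2, b2):
    """Forward pass: sigmoid hidden layer, purelin (identity) output layer.

    `control` is the binary vector from the chromosome; hidden nodes whose
    control bit is 0 are effectively removed from the network.
    """
    x = np.asarray(x, dtype=float)
    hidden = sigmoid(x @ W1 + b1) * control   # disable pruned hidden nodes
    return hidden @ W2 + b2                   # purelin: linear output
```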
To test the model performance, we set the training error ε11 = 0.05, validation error ε12 = 0.10, and testing error ε13 = 0.10 under the 3200.00 Bq alpha source. The training results are shown in Fig. 1.
Because the measurement error is larger in the weak-source environment, we set the training error ε21 = 0.15, validation error ε22 = 0.30, and testing error ε23 = 0.30 under the 24.05 Bq source; the results are shown in Fig. 2.
For comparison with other neural network models, such as the basic BP neural network, three error measures commonly used in the literature are also adopted here: the mean absolute percentage error (MAPE), the maximum absolute percentage error (MAXAPE), and the minimum absolute percentage error (MINAPE).
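For reference, the three measures can be computed as below; the standard absolute-percentage-error definitions are assumed, since the formulas are not quoted in the text.

```python
import numpy as np

def percentage_errors(y_true, y_pred):
    """Return (MAPE, MAXAPE, MINAPE) of the absolute percentage errors |y - yhat| / |y|."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ape = np.abs((y_true - y_pred) / y_true)
    return ape.mean(), ape.max(), ape.min()
```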
Because the initial connection weights and thresholds are random, each of the two prediction models was run 50 times.
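A 95% confidence interval for the mean error over the 50 runs can be sketched as below, assuming a normal approximation; the exact construction used for Tables 3 and 4 may differ.

```python
import numpy as np

def mean_error_ci(errors, z=1.96):
    """95% confidence interval of the mean error over repeated runs (normal approximation)."""
    errors = np.asarray(errors, dtype=float)
    mean = errors.mean()
    half_width = z * errors.std(ddof=1) / np.sqrt(errors.size)
    return mean - half_width, mean + half_width
```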
[Fig. 2]
GAANN | Traditional BP network |
---|---|---|
Confidence interval | 0.2247‒0.2816 | 0.2247‒0.2527 |
Number falling in confidence interval | 15 | 4
Mean error | 0.2373 | 0.2748 |
MAXAPE | 0.6590 | 0.7332 |
MINAPE | 0.0025 | 0.0048 |
GAANN | Traditional BP network | |
---|---|---|
Confidence interval | 0.0667‒0.0747 | 0.0762‒0.0845
Number falling in confidence interval | 12 | 11
Mean error | 0.0707 | 0.0740 |
MAXAPE | 0.1555 | 0.1598 |
MINAPE | 0.0098 | 0.0006 |
We took the 95% confidence interval of the 50 runs as the ensemble result of the neural network. The parameters are listed in Table 2, and the characteristic error statistics are tabulated in Tables 3 and 4. The approximation capability of the proposed model is better than that of the traditional BP network for both the weak and the strong radioactive source. For the weak source, however, the error is still large; this is a shortcoming of the proposed model, which motivates us to develop a better model whose results will be reported in a future publication.
[Fig. 3]
[Fig. 4]
4 Conclusion
In this paper, a GA-based neural network approach has been proposed for LRAD radioactivity survey research. The network structure is optimized and the connection weights are adjusted through genetic operators. Experiments with LRAD data have shown that the predictive performance of the proposed model is better than that of the traditional BP neural network; however, the prediction for the weak radioactive source is still poor. Further improvements in model performance should consider factors such as different time windows, different prediction horizons, the crossover and mutation operators, and the classification of the data sets. This work is in progress.