Introduction
Similar to X-ray imaging, neutron radiography (NR) obtains the internal structural information of an object by measuring the intensity attenuation of a neutron beam as it passes through the object. Owing to the special elemental attenuation characteristics of neutrons compared with X-rays and γ-rays, NR is an important complementary detection method in the aerospace, military, and nuclear industries [1-4]. To date, clear neutron radiographic images have mainly been obtained using reactors and large-scale accelerators with high neutron yields (e.g., >10^12 n/s). However, because of the high cost and scarcity of reactors and large-scale accelerators, compact neutron radiography (CNR) technology, which offers more flexible application scenarios, has attracted considerable research attention worldwide. Neutron radiographic images obtained via CNR usually suffer from multiple severe distortions, including mixed noise (e.g., Gaussian and Poisson noise), geometric unsharpness, and white spots induced by the limited neutron flux, the collimator size, and the imaging detector, which hinder subsequent machine-vision tasks such as defect detection and identification [5-15].
Generally, the quality of neutron radiographic images (NRIs) can be improved from both hardware and software perspectives. However, owing to miniaturization constraints and technical bottlenecks, increasing the neutron yield of a compact accelerator, or upgrading the collimator and imaging detector, is difficult once a CNR system is established. In contrast, advanced image-processing techniques can significantly improve the visual quality of NRIs without modifying the hardware setup. Therefore, designing a multi-distortion suppression method from the image-processing perspective is highly desirable to further promote the development and application of CNR.
Image-distortion suppression aims to restore high-quality images with clear details from low-quality images degraded by noise and blur. Suppression methods can generally be classified into two categories: traditional algorithms based on prior knowledge and deep learning-based algorithms. Several successful demonstrations of traditional suppression techniques were reported first [16]. In 2015, Qiao et al. proposed a noise- and blur-suppression method for NRIs that employs BM3D frames and a nonlinear variance stabilization algorithm based on the Anscombe transform [17]. They decomposed multi-distortion suppression into two sub-problems: Gaussian noise removal and deblurring. Although this method improves perceptual visual quality, its noise-suppression step may indirectly increase image blur to some extent. Subsequently, Zhao et al. proposed a robust principal component analysis (RPCA) method with sparse representation to identify and remove the white-spot distortion that typically exists in radiographic images [11]. Furthermore, the well-known software ImageJ is widely used by professionals to suppress white spots through convenient manual threshold adjustment [18]. However, traditional suppression techniques can only address simple restoration tasks with a limited number of distortion types (i.e., fewer than two), and they carry the latent risk that suppressing one distortion introduces additional, unwanted distortions. Deep learning (DL) has excellent nonlinear fitting ability; therefore, learning-based methods have significant potential for suppressing multiple distortions (e.g., noisy images with mixed blur and white spots) in an end-to-end manner. For example, DnCNN [19] utilizes convolutional neural networks (CNNs) to mitigate manifold distortions present in natural images.
Subsequently, CBDNet [20] was proposed as a two-stage multi-distortion suppression network to enhance the robustness in tackling various distortions. RIDNet [21] was proposed to handle mixed Gaussian and Poisson noise scenarios with optimal efficiency and flexibility. In addition, generative adversarial networks (GANs) [22] can generate realistic images to augment image datasets and address the issue of limited training samples. Therefore, they have attracted considerable attention recently [23].
However, because of the different distortion types of NRIs compared with natural images, existing image datasets and learning-based distortion suppression methods for natural images are not directly applicable to NRIs. Therefore, in this study, we developed a novel DL-based multi-distortion suppression network with self-built neutron radiographic image datasets to improve the visual quality of real NRIs. The proposed network learns the abstract relationship between latent clear images and degraded images with multiple distortions through end-to-end training, which can realize multi-distortion suppression in a single step without mutual interference. In addition, neutron radiographic image datasets with various types and levels of distortion were built, thereby making the proposed network suitable for NRIs.
The main contributions of this study are summarized as follows:
(1) A novel multi-distortion suppression network, which consists of four cascaded residual attention block (RAB) units and a GAN, was developed.
(2) Large-scale NRI datasets were constructed for the first time to render the proposed network suitable for NR. Mixed noise, geometric unsharpness, and white spots at different levels were randomly combined to enrich the datasets.
(3) Because of the lack of quality assessment methods for NRIs with multi-distortions, several evaluation dimensions were adopted to evaluate the performance of the proposed method, including subjective and objective metrics.
(4) Extensive experiments demonstrated that the proposed method can effectively suppress various noises, blur, and white spots existing in real NRIs and achieve state-of-the-art perceptual visual quality.
The remainder of this paper is organized as follows. Section 2 introduces the proposed multi-distortion suppression method, including the analysis of NRI degradation models, NRI dataset construction, design of multi-distortion suppression network, network training, and testing. Section 3 presents a comprehensive comparison of the results of the experiments and analyses. Finally, we summarize the conclusions drawn from this study in Section 4.
Method
A block diagram of the proposed multi-distortion suppression method for NRIs is shown in Fig. 1. The following two stages are depicted: NRI dataset construction, and design and validation of the proposed multi-distortion suppression network.
Model analysis of degraded NRIs
Owing to the physical limitations of CNR systems, low-flux NRIs typically suffer from multiple severe distortions, including noise, geometric unsharpness, and white spots. As the DL-based method requires a large number of clear and degraded images to learn the abstract relationship, we designed three types of NRI degradation models to simulate as many authentic distortion types as possible; these are expressed in the following equations.
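The first degradation model, covering blur and statistical noise, can be sketched as follows; this is a hedged reconstruction from the distortion descriptions in this section, and the symbols $k_d$, $k_g$, $\mathcal{P}$, and $n_G$ are our illustrative notation rather than the original:

```latex
% Model (1): geometric and inherent unsharpness with mixed noise (assumed form)
I_1 \;=\; \mathcal{P}\big( I \otimes k_d \otimes k_g \big) \;+\; n_G \tag{1}
```

where $I$ is the latent clear image, $k_d$ and $k_g$ are the defocus and Gaussian blur kernels, $\mathcal{P}(\cdot)$ denotes Poisson (shot) noise applied to the blurred signal, and $n_G$ is additive Gaussian noise.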
On the basis of (1), we define the second model of NRIs with white spots as follows:
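A plausible formulation of models (2) and (3) is sketched below; this is a hedged reconstruction, where $I_1$ denotes a degraded image produced by model (1) and $S_w$ and $n_\Gamma$ are our illustrative symbols for the white-spot component and gamma noise:

```latex
I_2 \;=\; I_1 \;+\; S_w \tag{2}        % white spots added on top of model (1)
I_3 \;=\; I_2 \;+\; n_\Gamma \tag{3}   % gamma noise added for special NRIs
```

Model (3) extends model (2) for the special NRIs with additional gamma noise discussed below.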
NRI dataset construction
Although there are many public image databases for DL-based image processing, most are designed for natural images. Owing to the scarcity of available neutron sources and the high cost of NR, obtaining NRIs is considerably more difficult than obtaining natural images. To the best of our knowledge, open-access NRI datasets have not been reported. Consequently, natural image datasets have been used as substitutes in NRI processing [26, 27]. However, owing to the different imaging principles and distortion types [28], existing natural image datasets cannot provide ideal results. Therefore, we built NRI datasets for the first time to enhance the performance of the proposed method for NR.
Well-captured NRIs obtained from the reactor and large-scale accelerators were chosen as ideal clear images (i.e., original images). Then, degradation models (1), (2), and (3) were randomly applied to the original images to obtain distorted images with different levels and types. Both the original and simulated distorted images were used to construct the NRI training datasets. The detailed simulation operations are as follows:
Distortion 1: Gaussian noise
Gaussian noise is the most widespread noise across all image types. We generated a set of 20 random numbers uniformly distributed in the range (0, 0.03) as the variances of the Gaussian noise. Using the MATLAB function "imnoise(I, 'gaussian', m, var_gauss)", distorted images with different levels of Gaussian noise can be obtained. Here, I denotes the original image, 'gaussian' the noise type, m the mean value, and var_gauss the variance. Not all degraded NRIs can be restored to good visual quality; thus, the parameter selection is empirical and subject to an image-screening condition.
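This step can be sketched with a NumPy analogue of MATLAB's imnoise (a sketch, not the paper's code; function and variable names are ours, and the image is assumed to be a float array in [0, 1]):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, mean=0.0, var=None):
    """Analogue of imnoise(I, 'gaussian', m, var) for a float image in [0, 1]."""
    if var is None:
        var = rng.uniform(0.0, 0.03)  # random variance in (0, 0.03), as in the text
    noisy = img + rng.normal(mean, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)   # imnoise likewise clips to the valid range

# 20 random variances, one degraded copy per variance
variances = rng.uniform(0.0, 0.03, size=20)
img = np.full((64, 64), 0.5)
degraded = [add_gaussian_noise(img, var=v) for v in variances]
```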
Distortion 2: Poisson noise
As the neutron fluence rate fluctuates significantly in time and space, NRIs inevitably suffer from Poisson-distributed noise, also called shot noise. It can be simulated using the MATLAB function "imnoise(I, 'poisson')". When the intensity of the Poisson noise becomes sufficiently large, it approaches Gaussian noise.
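Shot noise can be sketched the same way; note that the `photons` parameter below is our assumption for setting the counting statistics, whereas MATLAB's imnoise infers the scaling from the image class:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(img, photons=1000):
    """Shot-noise model: each pixel value becomes a scaled Poisson count."""
    counts = rng.poisson(img * photons)
    return np.clip(counts / photons, 0.0, 1.0)

img = np.full((64, 64), 0.5)
low_flux = add_poisson_noise(img, photons=100)      # strong shot noise
high_flux = add_poisson_noise(img, photons=100000)  # nearly noise-free (Gaussian limit)
```

The lower the photon budget, the stronger the relative fluctuations, which mirrors the limited neutron flux of CNR systems.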
Distortion 3: Gamma noise
Gamma noise is not the main distortion type in NRIs. However, to make the simulated images more consistent with real NRIs, we further used the MATLAB function "gamrnd(⋅)" to enrich the datasets. Although gamma noise is unnecessary for the majority of NRIs, a dataset with gamma noise proved highly effective for certain special NRIs.
Distortion 4: Unsharpness (i.e., blur)
Ideally, the neutron source in an NR system should be point-like and the beam parallel. However, owing to the finite size of the neutron source, the collimation ratio (i.e., L/D, where L is the length of the beam collimator and D is the diameter of the neutron source), and the distance between the object and the scintillation screen, NRIs usually exhibit geometric unsharpness. As defocus blur in an imaging system arises in a manner similar to the geometric unsharpness of NRIs [29], we used defocus blur to approximate it. To simulate this process, we randomly selected defocus blur kernels with sizes from 1 to 15, repeated 30 times, using the MATLAB functions "fspecial(⋅)" and "imfilter(⋅)".
Additionally, when a single neutron is absorbed by the scintillation screen, the light-spot dispersion generated at the screen causes inherent unsharpness, which can be characterized by a two-dimensional Gaussian function. Therefore, Gaussian blur was employed to approximate the inherent unsharpness. To simulate this process, we randomly selected Gaussian blur kernels with sizes from 1 to 15, repeated 30 times, using the MATLAB functions "fspecial(⋅)" and "imfilter(⋅)". The standard deviation of the two-dimensional Gaussian kernel was set with reference to the kernel size (e.g., 1/3 or 1/6 of the size).
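The two blur kernels can be sketched in NumPy (rough analogues of fspecial('disk') and fspecial('gaussian'); the sigma ≈ size/3 rule follows the text, and the lower bound of 0.5 is our assumption for very small kernels):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(size, sigma):
    """2-D Gaussian kernel, analogue of fspecial('gaussian', size, sigma)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def disk_kernel(size):
    """Crude defocus (disk) kernel, analogue of fspecial('disk', radius)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = (xx**2 + yy**2 <= ((size - 1) / 2.0) ** 2).astype(float)
    return k / k.sum()

# 30 random kernel sizes from 1 to 15, as in the dataset construction
sizes = rng.integers(1, 16, size=30)
kernels = [gaussian_kernel(s, sigma=max(s / 3.0, 0.5)) for s in sizes]
```

Convolving an image with these kernels (e.g., via imfilter in MATLAB) then produces the blurred training samples.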
Distortion 5: White spots
During the NR imaging process, neutron–nucleus interactions generate high-energy radiation (e.g., X-rays and γ-rays), which produces white spots upon striking the imaging detector. Furthermore, the photosensitive elements accumulate charge clouds from deposited radiation energy, which gradually diffuse and affect multiple pixels, also yielding white spots. Among the various distortions in NRIs, white spots are the most conspicuous and detrimental to image quality. In previous studies, white spots were simulated using only a few fixed shapes, which is not sufficiently realistic [11]. Therefore, we leveraged the ability of a GAN model to emulate real-world data to simulate white spots with enhanced authenticity and diversity [30]. Because a GAN requires real samples to learn their essential characteristics, we collected white-spot samples from real NRIs with multiple distortions to ensure the feature consistency of the simulated white spots. After extensive training over 14,000 iterations, the generated white spots were highly similar to real white spots and showed a remarkable level of fidelity compared with those in previous studies. The comparison results are shown in Figure 2.
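For intuition only, a crude white-spot overlay can be sketched as below. Note that this stand-in uses fixed disc shapes, exactly the kind of simplification the GAN-based simulation was introduced to avoid; all names and parameters here are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_white_spots(img, n_spots=10, max_radius=3):
    """Overlay saturated disc-shaped spots at random positions (simplified
    stand-in for GAN-generated white spots, illustration only)."""
    out = img.copy()
    h, w = out.shape
    for _ in range(n_spots):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = rng.integers(1, max_radius + 1)
        yy, xx = np.ogrid[:h, :w]
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        out[mask] = 1.0  # saturate the affected pixels
    return out

spotted = add_white_spots(np.full((64, 64), 0.3))
```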
On the basis of the aforedescribed distortion simulations, we built three multi-distortion datasets of NRIs by using data augmentation techniques. For dataset A, real and clear NRIs were first selected as the original input images, which were then processed using defocus blur, Gaussian blur, Gaussian noise, and Poisson noise according to (1). Then, dataset B was constructed according to (2) with additional white spots compared with dataset A. Finally, we constructed dataset C with additional gamma noise compared with dataset B for some special NRIs. These datasets are illustrated in Figure 3.
Multi-distortion suppression network
The overall architecture of the multi-distortion suppression network, which is based on the GAN framework, is shown in Figure 4; it mainly consists of two crucial components: the generator and the discriminator. The primary objective of the proposed network is to reconstruct latent clear NRIs from degraded input NRIs with multiple distortions. The GAN model employs the generator G to learn the true distribution P_data(x) of the input samples, while the discriminator D determines the authenticity of the generated samples. Through iterative training, G can generate samples with a high degree of fidelity.
Generator architecture
The generator mainly comprises feature-extraction, residual-unit, and upsampling-reconstruction components. Specifically, feature extraction from the degraded input images is performed using a 7×7 convolution. Subsequently, two consecutive downsampling operations (convolution and pooling) reduce the dimensionality of the feature maps. These feature maps are then fed into four cascaded RAB units to further learn their essential characteristics. To reconstruct latent clear images, two transposed convolutions are incorporated into the network for upsampling. Skip connections link feature maps of the same size from the downsampling and upsampling stages, allowing the network to accommodate inputs with varying dimensions. Finally, the output image is obtained using a 7×7 convolutional layer.
The first component of the RAB unit is the context block (CB), which addresses the reduction in image resolution caused by excessive pooling operations. The CB employs four dilated convolutions with dilation rates of 1, 2, 3, and 4 to expand the receptive field by injecting holes into the convolution kernels. This enables the network to capture broader context without further reducing the image resolution. Following the CB, two residual connections are integrated into the RAB unit. The final component of the RAB unit is the coordinate attention block (CAB), which applies two one-dimensional global pooling operations along the vertical and horizontal directions to aggregate the input features into two separate feature maps. Long-range spatial dependencies along each direction are captured by encoding direction-specific information in the input feature maps, and attention maps are then generated from the preserved position information. The two attention maps are multiplied with the input feature maps to guide the network toward the regions of interest. The CAB can thus distinguish spatial directions (i.e., coordinates) and generate coordinate-based attention maps, effectively integrating spatial cues into the attention mechanism. The output of the RAB combines the CB features, the residual connections, and the coordinate attention maps.
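The coordinate-attention step can be sketched in NumPy. This is a heavily simplified illustration: the shared 1×1 convolutions and normalization of the real CAB are omitted, and only the direction-wise pooling and reweighting are shown:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x):
    """x: feature maps of shape (C, H, W).
    Pool along each spatial axis, turn the pooled vectors into direction-aware
    attention weights, and reweight the input (1x1 convs of the real CAB omitted)."""
    pooled_h = x.mean(axis=2)  # (C, H): 1-D global pooling along the width
    pooled_w = x.mean(axis=1)  # (C, W): 1-D global pooling along the height
    att_h = sigmoid(pooled_h)  # vertical attention weights
    att_w = sigmoid(pooled_w)  # horizontal attention weights
    return x * att_h[:, :, None] * att_w[:, None, :]

x = np.random.default_rng(0).normal(size=(8, 16, 16))
y = coordinate_attention(x)
```

Because the two attention maps are factorized along H and W, each output pixel is modulated by statistics of its entire row and column, which is how the CAB captures long-range spatial dependencies cheaply.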
Discriminator architecture
The discriminator architecture consists of eight convolutional modules. In addition, a leaky ReLU activation function is employed to address the problem of the vanishing gradient. As the network depth increases, the number of learned features progressively expands, whereas the spatial dimensions of the features gradually decrease. Finally, the fake and real probabilities of the reconstructed NRIs are obtained using two fully connected layers and a sigmoid activation function.
Model training
The NRIs of the built datasets are randomly cropped into a series of 128×128 subimages as training samples. We define α and β as a degraded image and its corresponding ground truth, respectively; the training pair in each iteration is a randomly selected (α, β) couple.
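The sampling of training pairs can be sketched as follows (names are ours; α and β follow the definitions above, and the same random window is cropped from both images so that the pair stays pixel-aligned):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_training_pair(degraded, ground_truth, patch=128):
    """Crop the same random patch x patch window from a degraded image (alpha)
    and its ground truth (beta), giving one training pair per iteration."""
    h, w = ground_truth.shape
    top = rng.integers(0, h - patch + 1)
    left = rng.integers(0, w - patch + 1)
    window = (slice(top, top + patch), slice(left, left + patch))
    return degraded[window], ground_truth[window]

gt = rng.normal(size=(256, 256))
deg = gt + rng.normal(scale=0.05, size=gt.shape)  # toy degraded counterpart
alpha, beta = random_training_pair(deg, gt)
```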
Experimental results and analysis
This section details the extensive comparative experiments performed on real NRIs with multiple distortions. Because most of the prevalent learning-based image-distortion suppression algorithms are designed for some common distortions in natural images (e.g., additive white Gaussian noise and JPEG compression distortion), their application effect on NRIs with special distortion types is limited. Therefore, we selected RIDNet [21], a top-performing model for real-world noisy photograph denoising tasks, along with the well-performing CBDNet [20], to serve as comparative benchmarks for the proposed method. The experimental parameters were set to their default values as specified in the literature. All experiments were performed on a workstation equipped with an AMD 3700X CPU and an NVIDIA RTX 2080Ti GPU.
Suppression results for real NRIs with severe noise
Image denoising is the most fundamental and important task in image processing, and several noise-suppression algorithms have been proposed. We selected ImageJ [18], the most widely used software in NR, and the mainstream learning-based RIDNet [21] to verify the superiority of the proposed method. As evident from Figure 5, the proposed method exhibits the best visual quality for noise suppression.
Given the lack of ideal reference NRIs, classical full-reference image quality assessment (IQA) methods (e.g., peak signal-to-noise ratio (PSNR) [33] and gradient magnitude similarity deviation (GMSD) [34]) cannot be used to evaluate the quality of the aforedescribed suppression results. Therefore, RBNIQM, a no-reference image quality metric designed for NRIs [35], was employed to provide an objective quantitative evaluation. RBNIQM can predict the quality of NRIs with multiple distortions, including Gaussian noise, Poisson noise, and blur. Its prediction scores fall within the range 0–1, and a lower score indicates higher image quality. An objective evaluation of the suppression results in Figure 5 via RBNIQM is presented in Table 1.
Table 1. RBNIQM scores of the suppression results shown in Figure 5.

| Figure 5 | ImageJ | RIDNet | Proposed |
|---|---|---|---|
| Score | 0.3259 | 0.2651 | 0.1936 |
As evident from Table 1, the objective evaluation was consistent with the perceptual visual quality depicted in Figure 5.
Suppression results for real NRIs with severe blur
Thus far, the deblurring of NRIs remains a major challenge. Figure 6(a) shows two real NRIs: a small motor and a floppy disk drive, from top to bottom [36]. These images were obtained with a relatively small L/D value of 115 and an imaging distance close to the object's own width, leading to significant blur distortion rather than noise. The traditional steering-kernel-based Richardson–Lucy algorithm (SK-RL) [37], the BM3D frames with nonlinear variance stabilization method (BM3D frames) [17], and the learning-based RIDNet [21] and CBDNet [20] were compared with the proposed method on these two images, as shown in Figures 6(b)-(f). Notably, the suppression results in Figure 6(f) were obtained by the proposed method trained on dataset A. The visual results in Figures 6(a)-(f) indicate that the proposed method achieves the best blur-suppression performance among the five methods.
Next, we used RBNIQM to quantitatively evaluate the suppression results in Figure 6. Table 2 indicates that the objective quality evaluation is in good agreement with visual perception. In addition, the learning-based CBDNet and RIDNet show superior multi-distortion suppression compared with traditional image-processing methods such as SK-RL and BM3D frames, mainly because learning-based models can extract abstract high-level features through deep networks, rather than the low-level (e.g., time-domain or frequency-domain) features used by traditional methods. This discussion substantiates the efficacy of the proposed method in addressing distortions, particularly blur.
Table 2. RBNIQM scores of the suppression results shown in Figure 6.

| Figure 6 | (b) | (c) | (d) | (e) | (f) |
|---|---|---|---|---|---|
| Small motor | 0.1297 | 0.1235 | 0.0930 | 0.0899 | 0.0791 |
| Floppy disk drive | 0.1394 | 0.1479 | 0.1298 | 0.0924 | 0.0815 |
Suppression results for real NRIs with severe white spots
Figure 7(a) shows two other real NRIs: a large motor and a bottle, from top to bottom. Unlike the NRIs in Figure 6, which exhibit severe blur, these two images are mainly degraded by white spots. From the perspective of human perception, white-spot distortion is more salient than the various types of noise; therefore, white-spot suppression is crucial for NRIs.
ImageJ [18] is the software most widely used by professionals in the NR field for white-spot removal. In addition, SK-RL [37], the BM3D frames with nonlinear variance stabilization method (BM3D frames) [17], improved robust principal component analysis (IRPCA) [11], RIDNet [21], CBDNet [20], and the proposed method were employed to demonstrate the multi-distortion suppression effects in Figures 7(b)-(h). The suppression results in Figure 7(h), from top to bottom, were obtained by the proposed method with datasets C and B, respectively. A subjective comparison of the results in Figure 7 indicates that BM3D frames outperformed the traditional SK-RL and the classical ImageJ in terms of white-spot removal. ImageJ was nonetheless effective at white-spot removal; its main advantages are user-friendly operation and easy availability online. However, the BM3D frames filter inevitably induces additional blur, which is detrimental to subsequent defect-identification and measurement tasks. Compared with IRPCA, RIDNet, and CBDNet, the proposed method consistently shows the best visual perception in white-spot removal as well as in noise and blur suppression. Its efficacy on the first image in Figure 7 is attributed to the incorporation of degradation model (3): this image contains not only prominent white spots but also diverse noise (e.g., Gaussian, Poisson, and gamma noise). For the second image in Figure 7, the effectiveness of the proposed method can be attributed to the GAN model with the coordinate attention mechanism, which effectively boosts the network's feature-extraction capability in the regions of interest. This also helped the proposed method outperform the other learning-based methods (i.e., RIDNet and CBDNet) trained on the same datasets.
Because existing IQA methods do not consider white spots, evaluating the suppression results with respect to white spots is a significant challenge. Therefore, both subjective and objective metrics were employed to evaluate the performance of the proposed method and its counterparts. Four no-reference quality assessment methods, namely BIQAA [38], BLIINDS [39], NIQE [40], and RBNIQM, were chosen as objective metrics and compared against a subjective metric, the mean opinion score (MOS). The normalized scores predicted by the four no-reference methods are shown in Figure 8. For all of them except RBNIQM, a higher score indicates better image quality.
The black line in Figure 8 denotes the MOS trend obtained by averaging the subjective scores of two groups of evaluators: professional researchers with NR experience and researchers without relevant experience. Specifically, the NRIs processed by the different methods were first randomly shuffled, and the evaluators (25 in each group) were instructed to rank the images in ascending order of quality and assign scores incrementally starting from one. Finally, the average scores of the professional and nonprofessional groups were combined with weights of 0.7 and 0.3, respectively, to obtain the subjective quality score for each method (Table 3).
Table 3. Subjective quality scores of the suppression results shown in Figure 7.

| Groups | (b) | (c) | (d) | (e) | (f) | (g) | (h) |
|---|---|---|---|---|---|---|---|
| Professional | 2.13 | 2.85 | 3.95 | 4.35 | 4.10 | 5.86 | 6.37 |
| Nonprofessional | 2.06 | 2.54 | 3.46 | 4.28 | 4.96 | 5.75 | 5.98 |
| Integrated | 2.109 | 2.757 | 3.705 | 4.329 | 4.530 | 5.805 | 6.175 |
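For column (b), the stated 0.7/0.3 weighting reproduces the integrated score exactly:

```python
prof_b, nonprof_b = 2.13, 2.06  # professional / nonprofessional MOS for column (b)

# Integrated MOS = 0.7 * professional + 0.3 * nonprofessional
integrated_b = 0.7 * prof_b + 0.3 * nonprof_b
print(round(integrated_b, 3))   # 2.109, matching Table 3
```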
The trend lines in Figure 8 show that the objective score curves deviate severely from the subjective scores, indicating their inability to accurately assess the quality of NRIs with white spots. Although RBNIQM follows a relatively steady trend consistent with the subjective assessment, its minimal variations reveal a low sensitivity to white spots. After an extensive comparison with state-of-the-art distortion-suppression methods, we selected three representative methods, ImageJ, IRPCA, and the proposed method, to illustrate the local details of white-spot suppression (Figure 9).
Evidently, the ImageJ method fails to eliminate white spots. It can only remove the brightest pixels of the white spots and exhibits inferior performance in preserving fine details. The IRPCA method achieves satisfactory results in removing white spots but has a limited capability in preserving image details. The proposed method effectively removes white spots and noise with good detail preservation.
To further validate the efficacy of the proposed method in suppressing white spots in NRIs, the same background region (i.e., the red box in the machine image) of the NRI processed by the different distortion-suppression methods is shown as a three-dimensional (3D) grayscale distribution in Figure 10. Because the selected region is background containing no useful object information, a smoother 3D grayscale distribution is preferred. Based on the pixel distributions of the red-box region in Figures 10(b)-(h), both IRPCA and the proposed method exhibit remarkable white-spot suppression.
Without loss of generality, a real NRI obtained by a CNR system is shown in Figure 11 to further demonstrate the effectiveness of the proposed method. This image was obtained in 1972 using an A-711 D-T neutron tube with a small L/D value of 7.5. The low neutron flux of the neutron tube required a long exposure time of 90 min, yet the image quality remained inferior to that of reactor-based NRIs. ImageJ, SK-RL, and the proposed method were employed to demonstrate multi-distortion suppression performance. As evident from the comparison, ImageJ and SK-RL yield only marginal improvements over the original image, whereas the proposed method improves the image quality in terms of noise, blur, and even color restoration to a certain extent. Notably, the proposed method generalizes well to various NRIs, including thermal and fast neutron radiographic images: although the neutron energy spectra used for NR differ, the resulting images share similar characteristics (e.g., radiographic appearance) and degradation models (e.g., Gaussian noise, Poisson noise, blur, and white spots).
Guidance on the selection of datasets
As DL-based methods heavily rely on dataset design for specific targets, we herein provide recommendations for selecting an appropriate dataset for different NRIs. Dataset A was designed mainly for blur and noise and is suitable for noisy NRIs with severe blur. Building on dataset A, dataset B additionally covers the white-spot distortion type and can therefore be considered the most widely applicable dataset for suppressing multiple distortions in real NRIs. For NRIs with particularly severe distortions (e.g., additional gamma noise), we recommend dataset C to obtain remarkable results, although its training time and its recovery performance on most NRIs are less favorable than those of dataset B. Notably, the proposed method emphasizes multi-distortion suppression rather than single-distortion suppression in NRIs: by selecting the appropriate dataset, multi-distortion suppression can be realized in a single training step.
Conclusion
In this study, we devised a novel multi-distortion suppression method based on a modified generative adversarial network (GAN) to improve the visual quality of degraded NRIs. To address the lack of NRI datasets, we built a series of real NRI datasets with various types and levels of distortion for training the proposed network. In addition, a coordinate attention mechanism was incorporated in the backbone network (i.e., GAN) to promote the capability of the proposed network to learn the abstract relationship between ideally clear and degraded images. Extensive comparative experiments showed that the proposed method can effectively suppress multiple distortions existing in real NRIs and achieve state-of-the-art perceptual visual quality, thus demonstrating its application potential in neutron radiography.
References
[1] Neutron radiography of osseous tumours. Nature 230, 461-462 (1971). https://doi.org/10.1038/230461a0
[2] Contrast sensitivity in 14 MeV fast neutron radiography. Nucl. Sci. Tech. 28, 78 (2017). https://doi.org/10.1007/s41365-017-0228-5
[3] High-resolution neutron imaging of salt precipitation and water transport in zero-gap CO2 electrolysis. Nat. Commun. 13, 1-9 (2022). https://doi.org/10.1038/s41467-022-33694-y
[4] Advances in neutron radiography and tomography. J. Phys. D: Appl. Phys. 42, 243001 (2009). https://doi.org/10.1088/0022-3727/42/24/243001
[5] Simulation and optimization for a 30-MeV electron accelerator driven neutron source. Nucl. Sci. Tech. 23, 272-276 (2012). https://doi.org/10.13538/j.1001-8042/nst.23.272-276
[6] Study on moderators of small-size neutron radiography installations with neutron tube as source. Nucl. Sci. Tech. 6, 129-134 (1995).
[7] Design of a mobile neutron radiography installation based on a compact sealed tube neutron generator. Nucl. Sci. Tech. 8, 53-55 (1997).
[8] Design of a mobile neutron radiography installation based on a compact sealed tube neutron generator. Nucl. Sci. Tech. 29, 119 (2018). https://doi.org/10.1007/s41365-018-0455-4
[9] MC simulation of thermal neutron flux of large samples irradiated by 14 MeV neutrons. Nucl. Sci. Tech. 21, 11-15 (2010). https://doi.org/10.13538/j.1001-8042/nst.21.11-15
[10] Neutron imaging-detector options and practical results. Nucl. Instrum. Meth. A 531, 228-237 (2004). https://doi.org/10.1016/j.nima.2004.06.010
[11] White spots noise removal of neutron images using improved robust principal component analysis. Fusion Eng. Des. 156, 111739 (2020). https://doi.org/10.1016/j.fusengdes.2020.111739
[12] Development of a high frame rate neutron imaging method for two-phase flows. Nucl. Instrum. Meth. A 954, 161707 (2020). https://doi.org/10.1016/j.nima.2018.12.022
[13] Resolution analysis of thermal neutron radiography based on accelerator-driven compact neutron source. Nucl. Sci. Tech. 34, 76 (2023). https://doi.org/10.1007/s41365-023-01227-x
[14] Feasibility study of portable fast neutron imaging system using silicon photomultiplier and plastic scintillator array. Nucl. Tech. 44, 030403 (2021). https://doi.org/10.11889/j.0253-3219.2021.hjs.44.030403
[15] Super field of view neutron imaging by fission neutrons elicited from research reactor. Nucl. Tech. 46, 030201 (2023). https://doi.org/10.11889/j.0253-3219.2023.hjs.46.030201
[16] Self-adaptive spatial image denoising model based on scale correlation and SURE-LET in the nonsubsampled contourlet transform domain. Sci. China Inform. Sci. 57, 092106 (2014). https://doi.org/10.1007/s11432-013-4943-1
[17] Neutron radiographic image restoration using BM3D frames and nonlinear variance stabilization. Nucl. Instrum. Meth. A 789, 95-100 (2015). https://doi.org/10.1016/j.nima.2015.04.005
[18] NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671-675 (2012). https://doi.org/10.1038/nmeth.2089
[19] Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26, 3142-3155 (2017). https://doi.org/10.1109/TIP.2017.2662206
[20] Toward convolutional blind denoising of real photographs. Paper presented at the Thirty-second IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[21] RIDNet: Recursive Information Distillation Network for Color Image Denoising. Paper presented at the Thirty-second IEEE/CVF International Conference on Computer Vision Workshop.
[22] Real photographs denoising with noise domain adaptation and attentive generative adversarial network. Paper presented at the Thirty-second IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.
[23] Material decomposition of spectral CT images via attention-based global convolutional generative adversarial networks.
. Nucl. Sci. Tech. 34, 45 (2023).https://doi.org/10.1007/s41365-023-01184-5Visualising liquid water in PEM fuel cells using neutron imaging
. Fuel. Cells. 9, 499-505 (2009). https://doi.org/10.1002/fuce.200800050No-reference quality assessment for neutron radiographic image based on a deep bilinear convolutional neural network
. Nucl. Instrum. Meth. A. 1005, 165406 (2021). https://doi.org/10.1016/j.nima.2021.165406Study on no-reference quality assessment method of neutron radiographic images based on residual network
. Nucl. Tech. 44, 59-66 (2021). https://doi.org/10.11889/j.0253-3219.2021.hjs.44.070503Design of a new CCD-camera neutron radiography detector
. Nucl. Instrum. Meth. A. 399, 382-390 (1997).https://doi.org/10.1016/S0168-9002(97)00944-3Calculation and analysis of the neutron radiography spatial resolution
. Nucl. Tech. 37, 040502 (2014).https://doi.org/10.11889/j.0253-3219.2014.hjs.37.040502Generative adversarial nets
, Paper Presented at the Twenty-eighth Advances in Neural Information Processing Systems, (Coordinate Attention for Efficient Mobile Network Design
, Paper Presented at the Thirty-fourth IEEE/CVF Conference on Computer Vision and Pattern Recognition, (Adam: A method for stochastic optimization
, Paper Presented at the Tertiary International Conference on Learning Representations,(Image quality metrics: PSNR vs. SSIM
, Paper Presented at the Twentieth International Conference on Pattern Recognition, (Gradient magnitude similarity deviation: a highly efficient perceptual image quality index
. IEEE. T. Image. Process. 23, 684-695 (2014). https://doi.org/10.1109/TIP.2013.2293423A practical residual block-based no-reference quality metric for neutron radiographic images
. Nucl. Instrum. Meth. A. 1019, 165841 (2021). https://doi.org/10.1016/j.nima.2021.1658413D neutron computed tomography: requirements and applications
. Physica. B. 276, 59-62 (2000). https://doi.org/10.1016/S0921-4526(99)01254-5Kernel Regression for Image Processing and Reconstruction
. IEEE. T. Image. Process. 16, 349-366 (2007). https://doi.org/10.1109/TIP.2006.888330Blind image quality assessment through anisotropy
. J. Opt. Soc. Am. A. 24, B42-B51 (2007). https://doi.org/10.1364/JOSAA.24.000B42Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain
. IEEE. T. Image. Process. 21, 3339-3352 (2012). https://doi.org/10.1109/TIP.2012.2191563Making a “completely blind" image quality analyzer
. IEEE. Signal. Proc. Let. 20, 209-212 (2013). https://doi.org/10.1109/LSP.2012.2227726The authors declare that they have no competing interests.