
High-frequency emphasized neural network reconstruction method for in situ synchrotron radiation ultrafast computed tomography characterization

ACCELERATOR, RAY AND APPLICATIONS


Jing-Wei Li
Yu Xiao
Yong-Cun Li
Xiao-Fang Hu
Guo-Hao Du
Feng Xu
Nuclear Science and Techniques, Vol. 37, No. 3, Article number 45. Published in print Mar 2026; available online 10 Jan 2026

In the in situ synchrotron radiation computed tomography (SR-CT) characterization of ultrafast evolution processes, there is a contradiction between the evolution rate of materials and the time resolution of SR-CT characterization. Ultra-sparse angle sampling is an effective strategy for improving the time resolution; however, accurate reconstruction under sparse sampling conditions has long been a bottleneck problem. In recent years, with the development of deep learning, convolutional neural networks have shown outstanding advantages in sparse-angle CT reconstruction. However, existing approaches do not consider the expression of high-frequency details in neural networks, limiting their application in accurate SR-CT characterization. In response to this problem, a novel high-frequency information constrained deep learning network (HFIC-Net) is proposed. Additional high-frequency information constraints are added to improve the accuracy of the reconstruction results. A series of numerical reconstruction experiments is conducted to verify the new method, and the results indicate that HFIC-Net effectively improves the reconstruction quality. The new method uses only eight angle projections to achieve the reconstruction effect of the filtered back projection (FBP) method with 360 projections. The HFIC-Net results demonstrate clear boundaries and accurate detailed structures, correcting the erroneous information produced by other methods. For quantitative evaluation, the SSIM used to evaluate image structural similarity is increased from 0.1951, 0.9212, and 0.9308 for FBP, FBP-Conv, and DDC-Net, respectively, to 0.9620 for HFIC-Net. Finally, results on actual SR-CT experimental data indicate that the new method can suppress artifacts and achieve accurate reconstruction, making it suitable for the in situ SR-CT accurate characterization of ultrafast evolution processes.

Accurate SR-CT characterization; CT reconstruction; Sparse angle CT reconstruction problem; High-frequency information constraint; Deep learning
1

Introduction

Three-dimensional microstructural visualization of ultrafast evolution processes is very important for studying material mechanisms. Synchrotron radiation computed tomography (SR-CT) technology [1-3] can be used to perform in situ high-resolution characterization of internal microstructures [4-10]. Figure 1(a) presents a schematic of the in situ observation of an ultrafast evolution process aided by SR-CT. In this process, there is a contradiction between the evolution rate of materials and the time resolution of SR-CT characterization. According to the Tuy–Smith data integrity conditions, projection data must be collected continuously over the complete 180° angular range during SR-CT acquisition [11]. This process typically requires a long time; however, the microstructural evolution of materials develops rapidly. For example, in laser additive manufacturing, the molten pool evolves in a matter of milliseconds [12]. In this case, the internal microstructure changes rapidly during SR-CT acquisition, generating an incorrect reconstructed tomogram. Therefore, improving the time resolution of the in situ SR-CT characterization of ultrafast evolution processes while ensuring the accuracy of the reconstruction results is important.

Fig. 1
(Color online) (a) Schematic of the in situ observation of an ultrafast evolution process with SR-CT. Contradictions exist between the rapid evolution process and the SR-CT acquisition time resolution. (b) Filtered back projection reconstruction results under eight-angle sparse sampling conditions. (c) ART-TV reconstruction results under eight-angle sparse sampling conditions

Therefore, a contradiction exists between the evolution time of materials and the sampling time of the CT system in the CT characterization of rapid evolution. Reducing the sampling time of the CT system is an effective means of alleviating this contradiction [13]. As shown in Fig. 1(a), the CT sampling process involves rotating the sample to obtain a series of projection data within a 180° range. In this case, reducing the number of projection images (relative to, e.g., collecting 180 projection images at 1° intervals), i.e., sparse angle sampling, is an effective approach for shortening the CT sampling time [14].

However, under ultra-sparse sampling conditions, the quality of the reconstruction results obtained using traditional methods, such as the classical filtered back projection (FBP) method [15], is not satisfactory. As shown in Fig. 1(b), compared with full-angle sampling, erroneous information appears in the internal microstructure. Therefore, studying accurate reconstruction methods under ultra-sparse angle sampling conditions is necessary to improve the time resolution of the in situ SR-CT characterization of ultrafast evolution processes while ensuring the accuracy of the reconstructed results.

Taking the widely used ART-TV [16, 17] algorithm as an example, the gradient descent method is applied to improve the results obtained by ART [18]. As indicated in Fig. 1(c), the reconstruction results obtained using the ART-TV algorithm show that the internal detail structure is submerged under ultra-sparse angle sampling, and it is difficult to obtain accurate reconstructed images. In other words, the image quality degradation caused by the lack of sampling data is difficult to overcome in the ultra-sparse angle SR-CT reconstruction problem.

In recent years, deep learning has shown outstanding advantages in the field of image processing, given the development of big data and improvements in computer performance [19]. Convolutional neural networks (CNNs) have been widely used for image super-resolution reconstruction [20-24] and for solving the sparse-angle CT reconstruction problem [25-32]. Wang [33] reviewed the current status of deep learning and CT imaging technology and indicated that their effective combination can further promote the development of CT imaging. In such deep learning methods, researchers focus on optimizing the sinogram or tomogram domain using CNNs. Jin et al. proposed the FBP-Conv method [34], which uses the tomogram reconstructed by FBP as the input of the neural network and, through continuous training, makes the output as close as possible to the real label. Dong et al. [35] used a deep neural network to optimize the incomplete-angle sinogram in the sinogram domain and reconstructed it directly with FBP, achieving good results. Subsequently, some researchers started focusing on optimization ideas based on the CT reconstruction process. For example, Wang et al. [36] utilized deep neural networks to directly map the sparse angle sinogram to the tomogram; this method, referred to as the dual domain constrained network (DDC-Net), uses deep learning networks to optimize in both the sinogram and tomogram domains, achieving positive effects. Li et al. proposed Quad Net [37], which utilizes the FFC transformation to provide a global receptive field for sinogram restoration and image refinement. GloReDi [38] used intermediate-view reconstructed images to provide additional information for the images while expanding the receptive field. Considering image details could further enhance the potential application of deep neural networks in accurate SR-CT characterization.

A new reconstruction method, referred to as the high-frequency information constrained neural network (HFIC-Net), is proposed in this research to solve the problem of accurate ultra-sparse angle SR-CT characterization. An analysis of the SR-CT imaging system reveals a typical problem: the detailed information of the tomogram is often submerged in the projected sinogram. If this high-frequency information cannot be identified in the sinogram domain, the lost detail information cannot be recovered in the subsequent tomogram domain optimization. Therefore, in this study, high-frequency detail constraints are added to the CNN. The detailed information of the tomogram contains important structures, and accurate SR-CT characterization therefore has strict requirements for detailed information. Thus, we added a "high-frequency information" constraint based on the DDC-Net idea to improve the expression of detailed information. A series of numerical reconstruction experiments is conducted to verify the effectiveness of this new method, comparing HFIC-Net with FBP, FBP-Conv, and DDC-Net. The proposed method uses only eight angle projections to achieve the reconstruction effect of the FBP method with 360 projections. For a quantitative evaluation, the SSIM used to evaluate image structural similarity is increased from 0.1951, 0.9212, and 0.9308 for FBP, FBP-Conv, and DDC-Net, respectively, to 0.9620 for HFIC-Net. Finally, SR-CT experimental images are used to verify the reconstruction method. The novel HFIC-Net method can restore image details and suppress artifacts, and is therefore considered suitable for the in situ SR-CT characterization of ultrafast evolution processes.

The rest of this paper is organized as follows. In Sect. 2, the launching point and network structure of this new method are introduced. Then, in Sect. 3, the effectiveness of this method is verified using simulated and real SR-CT data. Finally, the discussion and conclusions are summarized in Sect. 4.

2

New reconstruction method

Analyzing the principle of the SR-CT imaging system is essential for developing a new ultra-sparse angle SR-CT reconstruction method. A typical problem in the SR-CT imaging system is discovered through this analysis: the detailed information of the tomogram is often submerged in the projected sinogram. If this high-frequency information is not observed in the sinogram domain, the lost details cannot be recovered in the subsequent tomogram domain optimization, which distorts the reconstruction results. Accordingly, the idea of adding high-frequency information constraints to the CNN is proposed. The model and framework of HFIC-Net are then introduced; HFIC-Net arranges CNNs in the sinogram and tomogram domains and trains them via backpropagation with gradient descent.

2.1
Launching point of developing the new method: Limitations of the current idea

Improving the accuracy of SR-CT reconstruction results is the premise for further research, given that the detailed information in the reconstruction results typically contains important structures. A detailed analysis of the CT imaging principles is necessary to develop a new ultra-sparse angle SR-CT reconstruction method. A schematic of projection acquisition conducted using SR-CT is shown in Fig. 2(a). The mathematical model for generating a projected sinogram is the line integral
$$R_L f = \int_L f(x, y)\,\mathrm{d}l,$$
where $R_L f$ and f(x, y) represent the projected integral intensity along the X-ray path L and the target to be detected, respectively. The sinogram is obtained by integrating the tomogram, and therefore, some detailed signals in the tomogram f(x, y) are buried in the sinogram. The curve shown in Fig. 2(b) is obtained by integrating the tomogram f(x, y) along the X-ray direction. Regions of interest (ROIs) are marked by red arrows in Fig. 2(a). The difference between the red and blue curves in Fig. 2(b) is whether small particles are present in the ROIs of the tomogram. The relative difference between the red and blue curves is only 1.507%, which is indeed minimal. Thus, tiny structural information in the tomogram can easily be buried in the projected sinogram.

Fig. 2
(Color online) (a) Tomogram f(x, y). The red arrow marks small particles of interest. (b) Integral curve of Fig. 2(a) along the X-ray direction. (c) Gradient of tomogram. (d) Integral curve of Fig. 2(c) along the X-ray direction. (e) Schematic of the process for extracting high-frequency information

However, accurate representation of detailed information has not been considered by current deep learning ideas. The sinogram is the integral of the tomogram along the X-ray direction, and therefore, the high-frequency information of internal details can easily be lost. Taking the DDC-Net idea as an example, the lost details cannot be recovered in the subsequent optimization in the tomogram domain if the high-frequency information cannot be observed in the sinogram domain. This leads to distorted reconstruction results, limiting the application potential in accurate SR-CT characterization. Therefore, adding high-frequency information constraints to deep neural networks is necessary to improve the representation of detailed information.

The gradient transformation of the tomogram can highlight the expression of detailed information in its sinogram. The integral curve in Fig. 2(d) reflects this characteristic. The contribution of the detailed information to the integral value is indicated by the difference between the peak values of the labeled points, which refer to the integral values with and without the small particle in the ROI. Compared with the 1.507% difference in Fig. 2(b), the difference ratio of the green marker points in Fig. 2(d) is as high as 14.23%. Therefore, we add "high-frequency information" constraints to the neural network to learn the detailed information of the tomogram.

In response to this problem, “high-frequency information” constraints are added to drive the learning direction of the neural network, as shown in Fig. 2(e).

The "high-frequency information" constraint is implemented in two steps: (1) perform a gradient transformation on the tomogram f(x, y) to obtain the gradient image G(x, y), and (2) perform the Radon transformation on the gradient image G(x, y).
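As a rough illustration (not the authors' code), the two steps can be sketched with NumPy and SciPy. The `radon` helper here is a crude rotate-and-sum parallel-beam projector, and the toy phantom reproduces the qualitative point of Fig. 2: a small internal particle perturbs the gradient-domain sinogram by a much larger relative amount than the plain sinogram.

```python
import numpy as np
from scipy.ndimage import rotate

def gradient_magnitude(img):
    """Step 1: gradient transformation of the tomogram f(x, y) -> G(x, y)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def radon(img, angles_deg):
    """Step 2 helper: crude parallel-beam Radon transform (rotate the image,
    then integrate along columns); one sinogram column per projection angle."""
    return np.stack(
        [rotate(img, a, reshape=False, order=1).sum(axis=0) for a in angles_deg],
        axis=1,
    )

def high_frequency_sinogram(tomogram, angles_deg):
    """The 'high-frequency information': Radon transform of the gradient image."""
    return radon(gradient_magnitude(tomogram), angles_deg)

# Toy check of the motivation in Fig. 2: a tiny particle changes the
# gradient-domain sinogram relatively more than the plain sinogram.
base = np.zeros((64, 64))
base[16:48, 16:48] = 1.0                # large homogeneous block
with_particle = base.copy()
with_particle[30:33, 30:33] = 0.5       # small internal particle
angles = np.linspace(0.0, 180.0, 8, endpoint=False)

plain_change = (np.abs(radon(with_particle, angles) - radon(base, angles)).sum()
                / np.abs(radon(base, angles)).sum())
hf_change = (np.abs(high_frequency_sinogram(with_particle, angles)
                    - high_frequency_sinogram(base, angles)).sum()
             / np.abs(high_frequency_sinogram(base, angles)).sum())
```

The exact percentages depend on the phantom and projector, but the relative change of the gradient-domain sinogram is consistently larger, which is the property the constraint exploits.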

In recent years, the continuous development of deep learning has provided new ideas for solving severely ill-posed problems such as ultra-sparse angle SR-CT reconstruction. The dual domain learning idea is used considering the physical process of CT reconstruction. In this context, emphasizing the expression of high-frequency information in a neural network can help improve the accuracy of the reconstruction results. Therefore, designing a neural network that considers high-frequency detail information is the primary task for alleviating the ultra-sparse angle SR-CT reconstruction problem.

Finally, a novel method suitable for accurate SR-CT characterization under ultra-sparse angle conditions is proposed. This method is referred to as HFIC-Net.

2.2
Deployment of high-frequency information constrained neural network

The HFIC-Net framework is illustrated in Fig. 3. HFIC-Net comprises two deep neural networks, G1 and G2, which drive learning in the sinogram and tomogram domains, respectively. In the sinogram domain, G1 restores the sparse angle sinogram to a high-quality sinogram: given an ultra-sparse angle sinogram $x$ as the input of HFIC-Net, the mapping G1 converts it to a fully sampled sinogram $G_1(x)$. Subsequently, $G_1(x)$ is converted to the tomogram domain through the FBP algorithm to obtain the reconstructed result $f_{\mathrm{bp}}(G_1(x))$. Next, the high-frequency information of this reconstructed tomogram is extracted. Then, the mapping G2 performs super-resolution reconstruction on the tomogram, yielding the final output $y$. Finally, the loss of HFIC-Net comprises the sinogram content loss L1, the high-frequency information loss L2, and the tomogram content loss L3; G1 and G2 are updated in reverse through gradient descent.

Fig. 3
(Color online) (a) Architecture of the proposed HFIC-Net. (b) Network structures of G1 and G2 of the HFIC-Net. G1 and G2 are five-layer deep neural networks with the same input and output size of 256 × 256 × 3

Sinogram content loss L1: In the sinogram domain, the mean square error (MSE) loss between the high-quality sinogram and the real label is used as the sinogram content loss L1. The richness of the sampled projection information directly determines the quality of the tomogram; the quality of the reconstructed tomogram can be effectively improved if the degradation of the projected sinogram can be reduced. Mathematically, the sinogram content loss of HFIC-Net can be expressed as
$$L_1=\frac{1}{N}\sum_{i=1}^{N}\left\|G_1(x_i;\theta)-\hat{x}_i\right\|_2^2,\tag{1}$$
where $x_i$, $\hat{x}_i$, $G_1$, N, and θ represent the input sparse angle projection, the real full angle projected sinogram, the mapping in the sinogram domain, the number of training data pairs, and the training parameters in the entire network, respectively.

High-frequency information loss L2: The high-frequency information loss is added to the network considering the importance of internal detail information. The MSE loss between the high-frequency information feature map and the real label is used as the high-frequency information loss. Mathematically, the high-frequency information loss of HFIC-Net can be expressed as
$$L_2=\frac{1}{N}\sum_{i=1}^{N}\left\|H\!\left(f_{\mathrm{bp}}(G_1(x_i;\theta))\right)-\hat{h}_i\right\|_2^2,\tag{2}$$
where $x_i$, $\hat{h}_i$, $G_1$, $H$, N, and θ represent the input sparse angle projection, the real high-frequency label, the mapping in the sinogram domain, the high-frequency information extraction operation, the number of training data pairs, and the training parameters in the entire network, respectively.

Tomogram content loss L3: In the tomogram domain, the mean square error between the high-quality tomogram y generated by G2 and the real label is used as the tomogram content loss. Some small errors in the sinogram domain are considerably magnified after FBP reconstruction; therefore, further improvement in the tomogram domain is necessary. Mathematically, the tomogram loss of HFIC-Net can be expressed as
$$L_3=\frac{1}{N}\sum_{i=1}^{N}\left\|G_2\!\left(f_{\mathrm{bp}}(G_1(x_i;\theta));\theta\right)-\hat{y}_i\right\|_2^2,\tag{3}$$
where $x_i$, $f_{\mathrm{bp}}$, $\hat{y}_i$, $G_2$, N, and θ represent the input sparse angle projection, the FBP reconstruction, the real tomogram, the mapping in the tomogram domain, the number of training data pairs, and the training parameters in the entire network, respectively.

The final objective of the proposed HFIC-Net is defined by combining these three losses as
$$L=\lambda_1 L_1+\lambda_2 L_2+\lambda_3 L_3,\tag{4}$$
where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the weight balance parameters of the losses.

Network structure of the HFIC-Net: As shown in Fig. 3(b), G1 and G2 are encoding–decoding neural networks. The encoding module extracts feature information from the input image. The encoder comprises five convolutional layers; the convolution kernel size is 4 × 4, the stride is 2, and the numbers of channels in the layers are 64, 128, 256, 512, and 512, respectively. The activation functions of these five convolutional layers are all ReLU, and batch normalization is performed after the activation of each layer. The decoder comprises five deconvolution layers; the decoding module recombines the acquired feature information into an image. The first four deconvolution kernels are 4 × 4 in size with a stride of 2, and the numbers of channels are 512, 256, 128, and 64; the fifth-layer kernel is 4 × 4 with a stride of 2 and 3 channels. The activation functions of these five layers are also ReLU. The output size of the HFIC-Net is the same as the input; both are 256 × 256 × 3.
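The combination of the three losses can be sketched in plain NumPy. This is a minimal illustration of Eqs. (1)–(4) over a batch of arrays, not the authors' training code; the function names and toy tensors are our own, and the weight balance parameters default to 1.

```python
import numpy as np

def mse(a, b):
    """Mean squared error over all elements of a batch."""
    return float(np.mean((a - b) ** 2))

def hfic_losses(s_pred, s_label, h_pred, h_label, y_pred, y_label,
                weights=(1.0, 1.0, 1.0)):
    """Sketch of the HFIC-Net objective: the three MSE losses and their
    weighted sum. s_* are sinograms, h_* high-frequency feature maps,
    y_* tomograms (predictions vs. real labels)."""
    L1 = mse(s_pred, s_label)   # sinogram content loss, Eq. (1)
    L2 = mse(h_pred, h_label)   # high-frequency information loss, Eq. (2)
    L3 = mse(y_pred, y_label)   # tomogram content loss, Eq. (3)
    w1, w2, w3 = weights
    return L1, L2, L3, w1 * L1 + w2 * L2 + w3 * L3

# Toy usage with dummy 2x2 "images"
L1, L2, L3, total = hfic_losses(
    np.ones((2, 2)), np.zeros((2, 2)),      # sinogram pair
    np.ones((2, 2)), np.zeros((2, 2)),      # high-frequency pair
    2 * np.ones((2, 2)), np.zeros((2, 2)),  # tomogram pair
)
```

In an actual training loop, the gradients of this total loss would be backpropagated through G2, the FBP operator, and G1 jointly.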

3

Results and discussion

The effectiveness of the proposed method is verified using a series of simulated and real SR-CT experimental data. HFIC-Net uses only eight angle projections to achieve the reconstruction effect of the FBP method with 360 projections. Meanwhile, HFIC-Net shows outstanding advantages in the accurate reconstruction of internal image details, correcting erroneous reconstruction information. The results on real SR-CT experimental data show that the proposed ultra-sparse angle CT reconstruction method can alleviate the contradiction between the evolution rate and SR-CT time resolution in ultrafast evolution processes. The new method suppresses artifacts and ensures the accuracy of the reconstruction results, making it suitable for the in situ SR-CT accurate characterization of ultrafast evolution processes.

3.1
Training configuration and performance evaluation

The HFIC-Net was verified using simulation and real experimental data. All training was conducted on an Intel(R) Core(TM) i7-8700 3.20 GHz CPU and an NVIDIA RTX 2070 GPU. All experiments were run in a Python 3.7 environment, with CUDA 10.0 and cuDNN v7.4 for acceleration, and the TensorFlow deep learning framework was used to implement the proposed method. We applied the Adam optimizer to the HFIC-Net. The learning rate was fixed at 0.0002, and the exponential decay rates for the moment estimates in the Adam optimizer were β1 = 0.5 and β2 = 0.999. The weight balance parameters of the different losses were set to 1.

If the HFIC-Net method were used for in situ CT analysis in a training environment based on an Intel(R) Core(TM) i7-8700 3.20 GHz CPU and an NVIDIA RTX 2070 GPU, it would incur the following computation cost: (1) preprocessing time for forming the dataset, i.e., the time required to generate the low-quality projected sinograms, high-frequency information constraints, and sinogram labels, which was approximately 354 s; and (2) HFIC-Net training time, which was approximately 33 h. Thus, deploying HFIC-Net adds approximately 33 h of computational cost compared with not using this method.

Quantitative parameters were adopted to evaluate the HFIC-Net. The parameters were used to evaluate the difference between the reconstructed and original images and comprise (1) the structural similarity index (SSIM), (2) the normalized mean square criterion D, and (3) the normalized average absolute distance criterion R [39, 40]. These parameters are respectively calculated as
$$\mathrm{SSIM}=\frac{(2\mu_u\mu_v+C_1)(2\sigma_{uv}+C_2)}{(\mu_u^2+\mu_v^2+C_1)(\sigma_u^2+\sigma_v^2+C_2)},\tag{5}$$
$$D=\left[\frac{\sum_{i=1}^{N}(u_i-v_i)^2}{\sum_{i=1}^{N}(u_i-\bar{u})^2}\right]^{1/2},\tag{6}$$
$$R=\frac{\sum_{i=1}^{N}\left|u_i-v_i\right|}{\sum_{i=1}^{N}\left|u_i\right|},\tag{7}$$
where $u_i$ represents the pixel value of the original image; $v_i$ represents the pixel value of the reconstructed image; $\bar{u}=\mu_u$ and $\mu_v$ represent the averages of all pixel values in an image; N represents the total number of pixels in the image; $\sigma_u$ and $\sigma_v$ represent the standard deviations; and $\sigma_{uv}$ represents the covariance. Constants $C_1$ and $C_2$ are set as in [40]. The parameter SSIM is a criterion for the structural similarity between the reconstructed and original images, with a value range of [0, 1]; the larger the value, the higher the reconstruction accuracy. Parameters D and R are used to evaluate the relative errors of the reconstruction: D is sensitive to large deviations at a few points of the reconstructed image, whereas R is sensitive to small deviations at most points. The smaller the values of D and R, the higher the reconstruction quality.
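A minimal NumPy implementation of the three criteria might look as follows. This is a sketch under stated assumptions: the global (single-window) form of SSIM is used, the R denominator is taken as the sum of absolute original pixel values, and the constants C1 and C2 are illustrative values rather than those of Ref. [40].

```python
import numpy as np

def evaluate(u, v, C1=1e-4, C2=9e-4):
    """Compute SSIM (global form), the normalized mean square criterion D,
    and the normalized average absolute distance criterion R between the
    original image u and the reconstructed image v."""
    u = np.asarray(u, dtype=float).ravel()
    v = np.asarray(v, dtype=float).ravel()
    mu_u, mu_v = u.mean(), v.mean()
    var_u, var_v = u.var(), v.var()
    cov_uv = np.mean((u - mu_u) * (v - mu_v))
    ssim = (((2 * mu_u * mu_v + C1) * (2 * cov_uv + C2))
            / ((mu_u ** 2 + mu_v ** 2 + C1) * (var_u + var_v + C2)))
    D = np.sqrt(np.sum((u - v) ** 2) / np.sum((u - mu_u) ** 2))
    R = np.sum(np.abs(u - v)) / np.sum(np.abs(u))
    return ssim, D, R

# Sanity check: a perfect reconstruction gives SSIM = 1, D = 0, R = 0.
u = np.array([[1.0, 2.0], [3.0, 4.0]])
ssim, D, R = evaluate(u, u)
```

For per-image averages over a test set (as in Tables 1 and 2), these three values would simply be averaged across the images.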

3.2
Reconstruction results of simulation data
3.2.1
Comparison with other methods based on simulation data

The effectiveness of the HFIC-Net was verified through numerical experiments with simulation data. The simulation data were a series of randomly generated particle images: 6400 randomly generated images were used as the training set, and another 100 images were generated as the testing set. A complete sinogram sampled over the 180° range was used as the label for the sinogram domain. An ultra-sparse angle sinogram of eight projections was used as the input of HFIC-Net, and the high-quality model images were used as the labels for the tomogram domain. Finally, the reconstructed tomogram was output.
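A sketch of how one such training pair might be generated is shown below. The toy sizes, the random-disk phantom, and the rotate-and-sum Radon helper are our own illustrative choices; the paper's phantom generator and projector may differ.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles_deg):
    """Crude parallel-beam Radon transform: rotate, then integrate columns."""
    return np.stack(
        [rotate(img, a, reshape=False, order=1).sum(axis=0) for a in angles_deg],
        axis=1,
    )

def random_particle_phantom(size=64, n_particles=12, rng=None):
    """Randomly generated particle image used as a tomogram-domain label."""
    rng = np.random.default_rng(rng)
    yy, xx = np.mgrid[:size, :size]
    img = np.zeros((size, size))
    for _ in range(n_particles):
        cy, cx = rng.integers(8, size - 8, size=2)
        r = rng.integers(2, 5)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = rng.uniform(0.5, 1.0)
    return img

def training_pair(size=64, n_sparse=8, n_full=180, rng=None):
    """One sample: 8-angle sparse sinogram as the network input, with the
    full 180-degree sinogram and the phantom itself as the two labels."""
    phantom = random_particle_phantom(size, rng=rng)
    sparse = radon(phantom, np.linspace(0.0, 180.0, n_sparse, endpoint=False))
    full = radon(phantom, np.linspace(0.0, 180.0, n_full, endpoint=False))
    return sparse, full, phantom

sparse, full, phantom = training_pair(rng=0)
```

Repeating `training_pair` 6400 times (with the high-frequency label derived from each phantom) would yield a dataset of the kind described above.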

The effectiveness of the new method was verified by comparing the simulation data test results with those of the most commonly used FBP method (Table 1). The SSIM of the HFIC-Net was nearly five times that of FBP with eight angles. In addition, HFIC-Net uses only eight angle projections to achieve the reconstruction effect of the FBP method with 360 projections; Fig. 4(c) and (d) appear almost identical. In the quantitative evaluation, the SSIM of image structural similarity increased from 0.1951 for the eight-angle FBP to 0.9620. Further, in terms of the image relative error, HFIC-Net was superior or equivalent to the full-angle FBP.

Table 1
Quantitative evaluation (100 testing images)
  FBP (8 angles) FBP (360 angles) HFIC-Net (8 angles)
avg.SSIM 0.1951 0.5319 0.9620
avg.D 1.2968 0.2783 0.0978
avg.R 1.2890 0.2192 0.0492
Fig. 4
(Color online) (a) Original image. (b) FBP reconstruction results at eight angles. (c) FBP reconstruction results at 360 angles. (d) HFIC-Net reconstruction results at eight angles

The reconstruction quality of the HFIC-Net was compared with that of the FBP-Conv, DDC-Net, and SART-FDTV-ASD [41] methods. The results for one of the 100 testing models with eight angles obtained using the several methods are illustrated in Fig. 5. Under the eight-angle sampling conditions, although the results of the FBP-Conv, SART-FDTV-ASD, and DDC-Net methods were considerably better than those of FBP, the details of the reconstruction results were biased. The reconstruction quality of the HFIC-Net was improved compared with those of the other methods: image details were preserved, and internal artifacts were significantly suppressed. Figure 5(g)–(k) shows the absolute differences between the results of each method and the original image to demonstrate the effect of the new method more clearly. The difference between the result of the HFIC-Net and the original image was minimal. Figure 5(l) and (m) show the profiles along the blue and red solid lines in Fig. 5(a), respectively. By visual inspection, the gray distribution of the HFIC-Net method is the closest to the original image. The results of the other algorithms indicated by the red arrows contain erroneous information, which is far from the real label. The comparison results indicate that the proposed method has advantages in detail characterization.

Fig. 5
(Color online) (a) Original image. (b)–(f) FBP, FBP-Conv, DDC-Net, SART-FDTV-ASD, and HFIC-Net reconstruction results at eight angles, respectively. (g)–(k) represent the absolute differences of (b)–(f) with respect to the original image, respectively. (l) Profiles along the blue solid line in Fig. 5(a). (m) Profiles along the red solid line in Fig. 5(a)

For a quantitative analysis, the average calculation results of the 100 testing images are listed in Table 2. The reconstruction results of HFIC-Net are better than those of FBP-Conv, DDC-Net, and SART-FDTV-ASD. The parameter SSIM increased from 0.1951, 0.7937, 0.9212, and 0.9309 for FBP, SART-FDTV-ASD, FBP-Conv, and DDC-Net, respectively, to 0.9620. The parameter D decreased from 1.2968, 0.2786, 0.1247, and 0.1448 for FBP, SART-FDTV-ASD, FBP-Conv, and DDC-Net, respectively, to 0.0978. HFIC-Net also achieved a good performance for the parameter R, which emphasizes small errors at most points.

Table 2
Quantitative evaluation (100 testing images)
  FBP FBP-Conv DDC-Net SART-FDTV-ASD HFIC-Net
avg.SSIM 0.1951 0.9212 0.9309 0.7937 0.9620
avg.D 1.2968 0.1247 0.1448 0.2786 0.0978
avg.R 1.2890 0.0465 0.0851 0.1439 0.0492

The new method shows outstanding advantages in local detail representation. The ROI marked by the red rectangle in Fig. 6(a) is enlarged in Fig. 6(b) to further demonstrate the performance of the new method. Figure 6(c)–(g) corresponds to the results of the different algorithms for the ROI. The major areas of visual difference are marked by red arrows. In Fig. 6(c), the reconstruction result of FBP contains almost no effective information; the local detail structure is submerged. In Fig. 6(d)–(f), the reconstruction results are wrong, i.e., the original small particles disappear. In Fig. 6(g), the HFIC-Net reconstruction result demonstrates clear edges and accurate structures. The comparison results indicate that the proposed method has advantages in detail characterization.

Fig. 6
(Color online) (a) Original image. (b) Local enlarged region of the original image. (c)–(g) Correspond to the results of FBP, FBP-Conv, DDC-Net, SART-FDTV-ASD, and HFIC-Net for ROI, respectively. Red arrows mark major areas of visual difference

Table 3 lists the quantitative evaluation indicators for the local details. The new method has a significant advantage in terms of SSIM, which increased from 0.3679, 0.4000, 0.4144, and 0.4247 for FBP, DDC-Net, FBP-Conv, and SART-FDTV-ASD, respectively, to 0.7318. The parameter D decreased from 1.6070, 1.4716, 1.2590, and 1.2445 for FBP, DDC-Net, SART-FDTV-ASD, and FBP-Conv, respectively, to 0.7691. The parameter R decreased from 0.2214, 0.1693, 0.1437, and 0.1343 for FBP, DDC-Net, SART-FDTV-ASD, and FBP-Conv, respectively, to 0.0737.

Table 3
Quantitative evaluation of ROI
  FBP FBP-Conv DDC-Net SART-FDTV-ASD HFIC-Net
SSIM 0.3679 0.4144 0.4000 0.4247 0.7318
D 1.6070 1.2445 1.4716 1.2590 0.7691
R 0.2214 0.1343 0.1693 0.1437 0.0737
3.2.2
Ablation Study

FBP-Conv was used as the baseline for adding components to further evaluate the effectiveness of each module in HFIC-Net. The compared configurations include the following four groups: (1) the baseline FBP-Conv; (2) the dual domain DDC-Net without high-frequency information constraints; (3) FBP-Conv constrained by only the high-frequency information in the sinogram domain; and (4) HFIC-Net.

The quantitative results in Table 4 confirm that adding high-frequency information constraints in the HFIC-Net method is beneficial for optimizing the quality of sparse angle CT reconstruction. The SSIM significantly improved compared with the baseline FBP-Conv method. Compared with not adding high-frequency information constraints, the SSIM increased from 0.9309 to 0.9620, confirming the effectiveness of the high-frequency information constraints.

Table 4
Quantitative evaluation of the ablation study
Config SSIM D R
FBP-Conv 0.9212 0.1247 0.0456
FBP-Conv+ L1 loss (DDC-Net) 0.9309 0.1448 0.0851
FBP-Conv+ L2 loss 0.9272 0.1190 0.0442
FBP-Conv + L1 loss + L2 loss (HFIC-Net) 0.9620 0.0978 0.0492

Thus, the proposed method showed superior performance in detail restoration and artifact reduction and can be considered an accurate reconstruction method.

3.3
Validation of real experimental data

The HFIC-Net method was applied to the reconstruction of actual SR-CT experimental projection data to evaluate the effectiveness of the new method in practical applications. The experiment was conducted at the BL13W1 beamline of Shanghai Synchrotron Radiation Facility (SSRF). The real experimental data comprised a series of tomograms of particle samples. The training set consisted of 4300 tomograms, and another 50 were selected as the testing set to verify the training results. The number of sparse sampling angles was 8.

Compared with the other methods, HFIC-Net also shows clear advantages. The results for one of the 50 testing models with eight angles obtained using the several methods are shown in Fig. 7. Compared with the other algorithms, the new method improves the reconstruction quality, with clear boundaries and complete structures. The reconstruction results of FBP have serious truncation artifacts under the eight-angle sampling conditions, and the detailed structural information is distorted. Compared with the FBP reconstruction results, the artifacts of the FBP-Conv, SART-FDTV-ASD, and DDC-Net methods are significantly suppressed and the visual effect is improved; unfortunately, considerable erroneous information remains in the detailed structures. The reconstruction quality of the HFIC-Net is improved compared with that obtained using the other methods: image details are preserved, and internal artifacts are significantly suppressed. Figure 7(g)–(k) shows the absolute differences between the results of each method and the original image to demonstrate the effect of the new method more clearly. Figure 7(l) and (m) show the profiles along the blue and red solid lines in Fig. 7(a), respectively. By visual inspection, the gray distribution of the HFIC-Net method is the closest to the original image. The results of the other algorithms indicated by the red arrows contain erroneous information, which is far from the real label. The comparison results demonstrate that the proposed method has advantages in detail characterization.

Fig. 7
(Color online) (a) FBP reconstruction results at 180 angles. (b)–(f) FBP, FBP-Conv, DDC-Net, SART-FDTV-ASD, and HFIC-Net reconstruction results at eight angles, respectively. (g)–(k) are the absolute differences of (b)–(f) with respect to the original image, respectively. (l) Profiles along the blue solid line in Fig. 7(a). (m) Profiles along the red solid line in Fig. 7(a)

The new method also demonstrates advantages in local detail representation. The ROI indicated by the red rectangle in Fig. 8(a) is enlarged in Fig. 8(b) to further illustrate the performance of the new method. Figures 8(c)–(g) show the results of the different algorithms for the ROI. The main differences between the results of these methods and the original image are marked by red arrows. In Fig. 8(c), the detailed structural information of the FBP reconstruction is almost completely lost. In Figs. 8(d)–(f), the reconstruction results are incorrect: the gaps between particles are missing. In Fig. 8(g), the HFIC-Net result shows clear edges and accurate structures. These comparisons demonstrate that the proposed method achieves accurate reconstruction.

Fig. 8
(Color online) (a) FBP reconstruction results at 180 angles. (b) Local enlarged region of Fig. 8(a). (c)–(g) Results of FBP, FBP-Conv, DDC-Net, SART-FDTV-ASD, and HFIC-Net at eight angles for ROI, respectively. Red arrows mark major areas of visual difference

With the rapid development of deep learning, researchers have begun to construct deep neural networks based on the CT reconstruction process to overcome the bottleneck of in situ SR-CT characterization of ultrafast evolution processes. Deep neural networks arranged in the sinogram and tomogram domains have achieved some progress. However, a good visual effect does not guarantee accurate reconstruction of a tomogram. Therefore, we added a "high-frequency information constraint," which reflects the expression of real detail information, to DDC-Net to improve the fidelity of the reconstructed results.
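
The paper describes the high-frequency information constraint only conceptually. One plausible realization, sketched here entirely under our own assumptions, is an extra mean-squared-error term computed on high-pass (Laplacian-filtered) versions of the predicted and reference tomograms, added to the ordinary tomogram-domain loss; this is an illustration of the idea, not the authors' loss function.

```python
def laplacian(img):
    """3x3 Laplacian high-pass filter with zero-padded borders."""
    h, w = len(img), len(img[0])
    def px(y, x):
        return img[y][x] if 0 <= y < h and 0 <= x < w else 0.0
    return [[4 * px(y, x) - px(y - 1, x) - px(y + 1, x)
             - px(y, x - 1) - px(y, x + 1)
             for x in range(w)] for y in range(h)]

def mse(a, b):
    """Mean squared error between two equally sized images."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2
               for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def hf_constrained_loss(pred, target, weight=0.5):
    """Tomogram MSE plus a weighted high-frequency (Laplacian) MSE term."""
    return mse(pred, target) + weight * mse(laplacian(pred), laplacian(target))
```

A reconstruction that matches the reference only in smooth regions still pays a penalty through the Laplacian term, which is exactly the pressure toward edge and detail fidelity that the constraint is meant to provide.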

The proposed HFIC-Net achieves the best scores on quantitative evaluations such as SSIM and recovers accurate structure and weak detail information under ultra-sparse angle conditions. Figure 8 shows that the HFIC-Net method accurately reconstructs the gaps between particles. This weak but important information is crucial for analyzing the mechanism of material evolution, yet other methods easily lose such details. In conclusion, the HFIC-Net method has clear advantages in image detail restoration, which is expected to improve the accuracy of SR-CT characterization under ultra-sparse angle conditions.

4 Conclusion

A novel high-frequency information constraint network, HFIC-Net, was proposed to solve the ultra-sparse angle reconstruction problem of in situ SR-CT during rapid evolution. In this method, a high-frequency information loss accounting for detailed structural constraints was added on top of the "sinogram–tomogram domain" joint optimization. The effectiveness of the new method was verified by numerical simulations and real SR-CT experimental data.

Three commonly used image quality evaluation parameters (SSIM, D, and R) were used in the numerical experiments on simulation data to evaluate the reconstruction of tomograms from eight sparsely sampled angles. The new method uses only eight angle projections to match the reconstruction quality of the FBP method with 360 projections. The quantitative results indicate that adding the high-frequency information constraint improves the structural similarity of the images.
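
The SSIM figures quoted in the abstract follow the standard structural-similarity formulation. A simplified single-window (global) variant is sketched below for orientation; the full metric averages this quantity over local sliding windows, so values from this sketch are only illustrative.

```python
def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM between two flattened images x and y."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2  # standard SSIM definition
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

# Identical images score exactly 1; dissimilar structure scores lower.
print(round(global_ssim([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]), 4))  # -> 1.0
```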

Adding the "high-frequency information" loss has a positive effect on the accurate reconstruction of HFIC-Net. In the ROI of the tomogram, the reconstruction result obtained using the DDC-Net method contains erroneous information, whereas the new method preserves complete and clear details. The new method thus has advantages in detail restoration and artifact reduction.

A test on actual experimental data was conducted to evaluate the performance of HFIC-Net in practical applications. The new method improves the quality of the tomogram and has advantages in restoring particle details and achieving accurate characterization.

In future work, HFIC-Net can be combined with current advanced deep learning approaches to provide richer information constraints for deep neural networks. In addition, its applicability in in situ experimental environments can be further evaluated, and adaptive learning can be used to extend its applicability to different imaging conditions and material sample systems.

In conclusion, the HFIC-Net proposed in this paper is suitable for the in situ SR-CT characterization of ultrafast evolution processes.

Footnote

The authors declare that they have no competing interests.