
Sparse view neutron CT 3D image reconstruction algorithm based on split Bregman method

ACCELERATOR, RAY TECHNOLOGY AND APPLICATIONS


Teng-Fei Zhu
Yang Liu
Zhi Luo
Xiao-Ping Ouyang
Nuclear Science and Techniques, Vol. 35, No. 9, Article number 152. Published in print: Sep 2024. Available online: 30 Aug 2024.

As a complement to X-ray computed tomography (CT), neutron tomography has been extensively used in nuclear engineering, materials science, cultural heritage, and industrial applications. Reconstructing the attenuation matrix in neutron tomography with a traditional analytical algorithm requires hundreds of projection views over the range of 0 to 180 degrees and typically takes several hours to complete. Such low time resolution degrades the quality of neutron imaging. Decreasing the number of projection acquisitions is an important approach to improving the time resolution of images; however, it requires efficient reconstruction algorithms. Therefore, sparse-view reconstruction algorithms for neutron tomography need to be investigated. In this study, we investigated three-dimensional (3D) reconstruction algorithms for sparse-view neutron CT scans. To enhance the reconstructed image quality of neutron CT, we propose an algorithm that uses the ordered-subset simultaneous algebraic reconstruction technique (OS-SART) to reconstruct images and the split Bregman method to solve the total variation term (SBTV). A comparative analysis of the performance of each reconstruction algorithm was performed using simulated and actual experimental data. According to the results, OS-SART-SBTV is superior to the other algorithms in terms of denoising, suppressing artifacts, and preserving detailed structural information of images.

Keywords: Neutron CT; OS-SART; Sparse-view; 3D reconstruction; Split Bregman; Total variation
1

Introduction

CT is a nondestructive technology used to inspect the internal structure of an object. Neutron CT has incomparable advantages over X-ray CT for heavy metals, isotopes, and hydrogen-containing materials. Consequently, it has several applications in the fields of fluid measurement and visualization in thermodynamics, special nuclear materials, and nuclear fuel composition [1-4]. Neutron tomography, an important branch of neutron radiography, has become increasingly significant for detecting the internal structure of samples owing to its ability to display three-dimensional (3D) information about the sample interior. The most critical aspect of neutron CT systems is the 3D reconstruction algorithm. The intensity of the neutron source is generally low, with source intensities ranging from 10^5 to 10^10 n/cm^2/s [5]. Owing to the low neutron flux, neutron CT takes several minutes to acquire projection data at a single angle, so several hours are required to obtain sufficient projection data. This long acquisition time limits neutron CT to studying static or quasistatic processes. For rapidly changing processes, such as the dynamic distribution of water in an operating fuel cell or the absorption of oxygen by LaNi5 [6-7], the rapid changes cause motion artifacts to appear in the images. Therefore, improving the quality of neutron tomography has become an important issue in image reconstruction. In general, two methods exist to enhance the quality of neutron tomography. The first is to increase the neutron intensity, thereby reducing the time required to acquire each projection, which is difficult to achieve. The second is to shorten the acquisition time of the projection data by reducing the number of projection angles, which requires a specialized reconstruction algorithm. Therefore, this study focuses on a sparse-view neutron CT reconstruction algorithm.

Sparse reconstruction is a mathematically ill-posed problem and a major challenge in image reconstruction; efficient sparse reconstruction algorithms are an important research topic in CT imaging. Neutron CT reconstruction primarily uses analytical and iterative algorithms. The filtered back-projection (FBP) algorithm requires complete projection data, and when applied to sparse reconstruction the resulting image exhibits noise and artifacts [8]. Compared with FBP, traditional iterative algorithms such as the algebraic reconstruction technique (ART) [9], simultaneous iterative reconstruction technique (SIRT) [10], and simultaneous algebraic reconstruction technique (SART) [11] have more significant advantages and achieve satisfactory results in sparse reconstruction. However, without prior information, the images reconstructed by these algorithms exhibit artifacts when the projection data are extremely sparse. In addition, deep learning has been widely used in sparse CT image reconstruction. Using deep learning frameworks such as generative adversarial networks, researchers have obtained high-quality images from sparse views [12-15]. For example, convolutional neural networks (CNNs) can improve the quality and efficiency of image reconstruction by learning a mapping from projections to images, and can also improve image quality through trained denoising networks that learn to remove noise from images [16-18]. However, deep learning still faces challenges in sparse CT image reconstruction, including the difficulty of acquiring labeled data and high computational complexity. Because of the specificity of the neutron CT application environment, neutron CT cannot obtain complete projection data for image reconstruction, and considerably less projection data are available for training deep learning models. Hence, in this study, we investigated a 3D reconstruction algorithm for sparse-view neutron CT.

Fortunately, the rapid development of compressed sensing (CS) theory has made sparse reconstruction possible [19]. According to this theory, if a signal can be represented sparsely, it can be accurately recovered using a specific sparse transformation. Inspired by CS theory, sparse-view images have been successfully reconstructed using sparsity priors. Several efficient optimization methods can flexibly reconstruct high-quality images using effective image priors, such as the non-local mean [20], wavelet frame [21], and total variation (TV) [22]. The TV-based iterative reconstruction algorithm offers significant advantages for sparse reconstruction. It preserves image edge information through the TV regularization term, suppresses noise in smooth regions while highlighting edges, and generates clear and accurate images. In addition, the TV regularization term promotes sparse solutions by concentrating changes in pixel values at a few locations, producing sparse gradients. The TV regularization term has local properties and can better handle images with different textures, edges, and regions. The algorithm has good convergence, stability, and interpretability, and parameter tuning can control the smoothness and detail retention of the reconstructed images. Therefore, the key to an iterative reconstruction algorithm is selecting the most appropriate regularization term. TV-based models produce satisfactory results for image restoration and reconstruction, and current TV-based regularization can reconstruct satisfactory images when sparsity is relatively high. However, when the projection data are sparse, these algorithms do not perform well in eliminating artifacts or suppressing noise. To address this problem, researchers have published several TV-based algorithms [23-25]. Although these approaches are effective at suppressing staircase effects, they increase image noise.
Another shortcoming of these models is the assumption that the image is piecewise constant, which destroys the structural information in the image. Therefore, it is necessary to devise a well-performing image reconstruction algorithm.

The split Bregman method is a mathematical technique for minimizing objective functions that contain an L1 regularization term. It is widely applied to L1-regularized image denoising problems because it converts the non-differentiable L1 regularization problem into L2-regularized subproblems that can be differentiated directly. The traditional minimization method consists of two steps: updating the image data fidelity term and the regularization constraint term. Starting from an initial value of 0, the split Bregman algorithm first transforms the non-differentiable L1 regularization problem and then solves the resulting objective function using the conjugate gradient algorithm. Using the Bregman distance principle, the split Bregman algorithm overcomes the difficulty of differentiating the L1 regularization term in image denoising and achieves an accurate and fast solution of the L1-regularized problem. The concept of Bregman iteration originated from functional analysis for solving extreme values of convex functions [26]. Osher et al. [27] proposed a regularization method based on the Bregman distance to improve the computational effectiveness of the Euler-Lagrange method for TV models. Although this method improves computational efficiency, it is complicated to implement. To simplify it, Zhang et al. [28] split the original TV model into two sub-models by introducing auxiliary variables and then performed alternating iterations of diffusion and shrinkage operators. Goossens et al. [29] proposed a generalized split Bregman-based regularization method for the wide application of TV minimization in image denoising and sparse-view CT reconstruction and compared it with ASD-POCS to demonstrate the feasibility of sparse-view reconstruction and noise regularization. Because of its efficiency, the split Bregman algorithm has been successfully used in various applications, such as image segmentation, image denoising, image compression, and image reconstruction.

In neutron CT systems, the quality of the neutron projection data is relatively low. The image acquisition, transmission, and receiving equipment used in a neutron CT system is imperfect, so the projection data are disturbed by different kinds of noise during acquisition, transmission, and storage; this blurs the reconstructed neutron images and causes a significant loss of edge structure information. In addition, the image processing algorithm used in a neutron CT system directly affects the reconstructed image quality; therefore, efficient image reconstruction algorithms are necessary. We propose the OS-SART-SBTV algorithm, based on the split Bregman method, to improve the quality of neutron CT images. This method can reconstruct neutron CT 3D images quickly and accurately. Compared with traditional algebraic iterative reconstruction methods, the regularization constraint term can be updated together with the data fidelity term, ensuring the convergence and speed of minimizing the objective function. According to the experimental results, the OS-SART-SBTV algorithm is more effective than traditional algebraic iterative reconstruction algorithms at increasing the quality of the reconstructed images and reducing the noise caused by insufficient projection data.

2

Methods

2.1
Low-Rank Matrix Approximation (LRMA) Algorithm

The LRMA algorithm is a method for approximating the representation of a high-dimensional matrix as a low-rank matrix [30]. In image processing, this algorithm simplifies the data by preserving their main features and structure.

We first analyze the problem of estimating a low-rank matrix X from noisy projection data:
Y = X + W, \quad Y, X, W \in \mathbb{R}^{m \times n}, (1)
where W denotes the noise model. The LRMA problem can be defined as
\arg\min_X \left\{ \Psi(X) = \frac{1}{2}\|Y - X\|_F^2 + \lambda \sum_{i=1}^{k} \phi(\sigma_i(X); a) \right\}, (2)
where k = \min(m, n), \sigma_i(X) is the i-th singular value of the matrix X, and \phi is a sparsity-inducing regularizer that may be non-convex.
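As a concrete special case of Eq. (2), choosing the convex regularizer \phi(\sigma) = \sigma (the nuclear norm) gives a closed-form solution by soft-thresholding the singular values. The NumPy sketch below illustrates this singular value thresholding; the threshold value and toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the closed-form minimizer of
    (1/2)*||Y - X||_F^2 + tau*sum_i sigma_i(X), i.e. Eq. (2) with phi(s) = s."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # soft-threshold each singular value
    return (U * s_shrunk) @ Vt

# toy example: a rank-3 matrix corrupted by small Gaussian noise
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
Y = L + 0.01 * rng.standard_normal((50, 40))
X = svt(Y, tau=1.0)  # the small noise singular values are thresholded away
```

Because the noise singular values fall below the threshold while the three signal singular values remain far above it, the estimate X is again (numerically) rank-3.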

2.2
Total Variation (TV) Algorithm

Iterative algorithms have an advantage over analytical algorithms when the projection views are limited; however, under severe under-sampling, the reconstructed image still exhibits severe artifacts even with iterative algorithms. Since CS theory was proposed, CS-based approaches have been successfully applied to eliminate artifacts. CS theory applies only to images that are sparse or that can be represented sparsely. Neutron tomography images are generally not sparse themselves, but they can be sparsely expressed through a gradient transformation, with the discrete gradient magnitude defined as
|\nabla q_{u,v,z}| = \sqrt{(q_{u,v,z}-q_{u-1,v,z})^2 + (q_{u,v,z}-q_{u,v-1,z})^2 + (q_{u,v,z}-q_{u,v,z-1})^2}, (3)
where q_{u,v,z} denotes the voxel value at position (u, v, z).

The TV of an image is the sum of the gradient magnitudes over all voxels:
\|q\|_{TV} = \sum_{u,v,z} |\nabla q_{u,v,z}| = \sum_{u,v,z} \sqrt{(q_{u,v,z}-q_{u-1,v,z})^2 + (q_{u,v,z}-q_{u,v-1,z})^2 + (q_{u,v,z}-q_{u,v,z-1})^2}. (4)

Mathematically, TV-based image reconstruction can be expressed as
\min_q \|q\|_{TV}, \quad \text{s.t. } \omega q = p, (5)
where \omega denotes the system matrix and p the measured projection data.
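Eqs. (3)-(4) can be sketched directly in NumPy; this is an illustrative implementation (boundary voxels handled by zero-padding the backward differences), not the authors' code:

```python
import numpy as np

def tv3d(q):
    """3D total variation per Eq. (4): sum over voxels of the
    backward-difference gradient magnitude of Eq. (3)."""
    du = np.pad(np.diff(q, axis=0), ((1, 0), (0, 0), (0, 0)))  # q[u]-q[u-1]
    dv = np.pad(np.diff(q, axis=1), ((0, 0), (1, 0), (0, 0)))  # q[v]-q[v-1]
    dz = np.pad(np.diff(q, axis=2), ((0, 0), (0, 0), (1, 0)))  # q[z]-q[z-1]
    return np.sqrt(du**2 + dv**2 + dz**2).sum()

flat = np.ones((8, 8, 8))           # constant volume: TV = 0
step = np.ones((8, 8, 8))
step[4:] = 2.0                      # one unit jump across an 8x8 face: TV = 64
print(tv3d(flat), tv3d(step))       # -> 0.0 64.0
```

The constant volume has zero TV, while the piecewise-constant volume pays only for its single jump surface, which is why TV regularization favors sparse-gradient (piecewise-constant) solutions.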

2.3
Fast Gradient Projection (FGP) Algorithm

Beck and Teboulle introduced the FGP algorithm to solve the TV-regularized denoising problem [31]:
\min_{x \in C} \|x - b\|_F^2 + 2\lambda\, \mathrm{TV}(x). (6)

The non-smooth nature of TV prevents Eq. (6) from being solved directly. To solve this problem, Chambolle used a gradient-based algorithm, for which a dual constrained problem is constructed. Certain notation is assumed. Let \psi denote the set of matrix pairs (f, q), with f \in \mathbb{R}^{(h-1) \times g} and q \in \mathbb{R}^{h \times (g-1)}, that satisfy
f_{i,j}^2 + q_{i,j}^2 \le 1, \quad 1 \le i \le h-1,\ 1 \le j \le g-1,
|f_{i,g}| \le 1, \quad 1 \le i \le h-1,
|q_{h,j}| \le 1, \quad 1 \le j \le g-1.

\kappa: \mathbb{R}^{(h-1) \times g} \times \mathbb{R}^{h \times (g-1)} \to \mathbb{R}^{h \times g} is a linear operator defined by
\kappa(f,q)_{i,j} = (f_{i,j} - f_{i-1,j}) + (q_{i,j} - q_{i,j-1}), \quad 1 \le i \le h,\ 1 \le j \le g,
with the boundary convention f_{0,j} = f_{h,j} = 0 and q_{i,0} = q_{i,g} = 0.

The operator \kappa^T: \mathbb{R}^{h \times g} \to \mathbb{R}^{(h-1) \times g} \times \mathbb{R}^{h \times (g-1)} adjoint to \kappa is computed as \kappa^T(o) = (f, q), where
f_{i,j} = o_{i,j} - o_{i+1,j}, \quad i = 1, \dots, h-1,\ j = 1, \dots, g,
q_{i,j} = o_{i,j} - o_{i,j+1}, \quad i = 1, \dots, h,\ j = 1, \dots, g-1.

P_C is the orthogonal projection operator onto C. Thus, when C = B_{l,u}, the box of matrices whose entries lie in [l, u], P_{B_{l,u}} is given entrywise by
P_{B_{l,u}}(o)_{i,j} = \begin{cases} l, & o_{i,j} < l, \\ o_{i,j}, & l \le o_{i,j} \le u, \\ u, & o_{i,j} > u. \end{cases}

With this notation, we obtain the dual problem of Eq. (6), which also explains the connection between the dual and primal optimal solutions. Suppose (f, q) \in \psi is the optimal solution of
\min_{(f,q) \in \psi} \left\{ h(f,q) \equiv -\|H_C(b - \lambda\kappa(f,q))\|_F^2 + \|b - \lambda\kappa(f,q)\|_F^2 \right\}. (7)

The expression for H_C(o) is
H_C(o) = o - P_C(o), \quad o \in \mathbb{R}^{h \times g}. (8)

h in Eq. (7) is a continuously differentiable function whose gradient is
\nabla h(f,q) = -2\lambda\, \kappa^T P_C(b - \lambda\kappa(f,q)), (9)
using the inner product
\langle o, \kappa(f,q) \rangle = \mathrm{Tr}(\kappa(f,q)^T o). (10)

Consequently, the problem in Eq. (6) can be transformed into
\min_{o \in C} \max_{(f,q) \in \psi} \left\{ \|o - b\|_F^2 + 2\lambda\, \mathrm{Tr}(\kappa(f,q)^T o) \right\}. (11)

Because the objective function is concave in (f, q) and convex in o, the maximum and minimum orders can be swapped:
\max_{(f,q) \in \psi} \min_{o \in C} \left\{ \|o - b\|_F^2 + 2\lambda\, \mathrm{Tr}(\kappa(f,q)^T o) \right\}. (12)

This can be reformulated as
\max_{(f,q) \in \psi} \min_{o \in C} \left\{ \|o - (b - \lambda\kappa(f,q))\|_F^2 - \|b - \lambda\kappa(f,q)\|_F^2 + \|b\|_F^2 \right\}. (13)

Therefore, the optimal o of the inner problem is
o = P_C(b - \lambda\kappa(f,q)). (14)

We can then solve the dual problem by substituting Eq. (14) into Eq. (13) and omitting the constant term:
\max_{(f,q) \in \psi} \left\{ \|P_C(b - \lambda\kappa(f,q)) - (b - \lambda\kappa(f,q))\|_F^2 - \|b - \lambda\kappa(f,q)\|_F^2 \right\}. (15)

Because our objective is to solve the dual problem of Eq. (6), whose gradient is given by Eq. (9), we introduce a gradient projection algorithm, which is commonly used for denoising problems. The norm of o \in \mathbb{R}^{h \times g} is the Frobenius norm; for (f, q) \in \mathbb{R}^{(h-1) \times g} \times \mathbb{R}^{h \times (g-1)}, it is
\|(f,q)\| = \sqrt{\|f\|_F^2 + \|q\|_F^2}. (16)
The Lipschitz constant of the gradient of h satisfies
L(h) \le 16\lambda^2. (17)

The projection onto the set \psi can be calculated simply and entrywise. For (f, q), the projection P_\psi(f,q) = (l^1, l^2) is given by
l^1_{i,j} = \begin{cases} \dfrac{f_{i,j}}{\max\{1, \sqrt{f_{i,j}^2 + q_{i,j}^2}\}}, & 1 \le i \le h-1,\ 1 \le j \le g-1, \\ \dfrac{f_{i,g}}{\max\{1, |f_{i,g}|\}}, & 1 \le i \le h-1, \end{cases}
\quad
l^2_{i,j} = \begin{cases} \dfrac{q_{i,j}}{\max\{1, \sqrt{f_{i,j}^2 + q_{i,j}^2}\}}, & 1 \le i \le h-1,\ 1 \le j \le g-1, \\ \dfrac{q_{h,j}}{\max\{1, |q_{h,j}|\}}, & 1 \le j \le g-1. \end{cases} (18)

The FGP-based image-denoising algorithm is obtained by substituting the gradient of Eq. (9), the projection onto \psi of Eq. (18), and the Lipschitz constant of Eq. (17) into the gradient projection scheme for Eq. (6).
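The dual iteration of this section can be sketched as follows. For brevity, this is the plain gradient projection (GP) variant in 2D with C taken as all of R^{h x g} (so P_C is the identity) and step size 1/(8*lam) from the Lipschitz bound; the "fast" FGP version adds a Nesterov-type momentum step on (f, q). All parameter values are illustrative assumptions:

```python
import numpy as np

def gp_tv_denoise(b, lam, n_iter=100):
    """Gradient-projection TV denoising for min ||x - b||_F^2 + 2*lam*TV(x),
    iterating on the dual variables (f, q) of Sect. 2.3 (C = R^{h x g})."""
    h, g = b.shape
    f = np.zeros((h - 1, g))
    q = np.zeros((h, g - 1))

    def kappa(f, q):
        # kappa(f,q)_{i,j} = (f_{i,j} - f_{i-1,j}) + (q_{i,j} - q_{i,j-1})
        o = np.zeros((h, g))
        o[:-1, :] += f; o[1:, :] -= f
        o[:, :-1] += q; o[:, 1:] -= q
        return o

    def kappaT(o):
        # adjoint operator: forward differences along each axis
        return o[:-1, :] - o[1:, :], o[:, :-1] - o[:, 1:]

    for _ in range(n_iter):
        o = b - lam * kappa(f, q)          # primal iterate (P_C = identity here)
        gf, gq = kappaT(o)
        f += gf / (8.0 * lam)              # dual gradient step, size 1/(8*lam)
        q += gq / (8.0 * lam)
        # project (f, q) back onto the feasible set psi, as in Eq. (18)
        nrm = np.maximum(1.0, np.sqrt(f[:, :-1] ** 2 + q[:-1, :] ** 2))
        f[:, :-1] /= nrm; q[:-1, :] /= nrm
        f[:, -1] /= np.maximum(1.0, np.abs(f[:, -1]))
        q[-1, :] /= np.maximum(1.0, np.abs(q[-1, :]))
    return b - lam * kappa(f, q)

rng = np.random.default_rng(1)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0   # piecewise-constant image
noisy = clean + 0.2 * rng.standard_normal((16, 16))
den = gp_tv_denoise(noisy, lam=0.1)
```

On this piecewise-constant test image, the denoised result lies closer to the clean image than the noisy input, while the edge at column 8 is preserved rather than smeared.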

2.4
Split Bregman Reconstruction Algorithm

CT image reconstruction can be modeled as the unconstrained convex minimization problem
\min_f \|f\|_{TV} + \frac{\mu}{2}\|Af - p\|_2^2, (19)
where \|f\|_{TV} = \sum_{s=1}^{S}\sum_{v=1}^{V} |\nabla f_{s,v}|.

The main steps for solving the image reconstruction problem of Eq. (19) with the split Bregman algorithm for TV minimization are as follows. First, auxiliary variables are introduced to transform Eq. (19) into
\min_{f,d_x,d_y,d_z} \|(d_x,d_y,d_z)\|_2 + \frac{\mu}{2}\|Af - p\|_2^2, \quad \text{s.t. } d_x = \nabla_x f,\ d_y = \nabla_y f,\ d_z = \nabla_z f, (20)
where \|(d_x,d_y,d_z)\|_2 = \sum_{s,v}\sqrt{d_{x,s,v}^2 + d_{y,s,v}^2 + d_{z,s,v}^2}. Penalty terms are subsequently introduced, converting the above equation into
\min_{f,d_x,d_y,d_z} \|(d_x,d_y,d_z)\|_2 + \frac{\mu}{2}\|Af - p\|_2^2 + \frac{\lambda}{2}\|d_x - \nabla_x f\|_2^2 + \frac{\lambda}{2}\|d_y - \nabla_y f\|_2^2 + \frac{\lambda}{2}\|d_z - \nabla_z f\|_2^2. (21)

Therefore, the Bregman iteration method can be used to solve Eq. (20):
\min_{f,d_x,d_y,d_z} \|(d_x,d_y,d_z)\|_2 + \frac{\mu}{2}\|Af - p\|_2^2 + \frac{\lambda}{2}\|d_x - \nabla_x f - b_x^k\|_2^2 + \frac{\lambda}{2}\|d_y - \nabla_y f - b_y^k\|_2^2 + \frac{\lambda}{2}\|d_z - \nabla_z f - b_z^k\|_2^2,
b_x^{k+1} = b_x^k + (\nabla_x f^{k+1} - d_x^{k+1}),
b_y^{k+1} = b_y^k + (\nabla_y f^{k+1} - d_y^{k+1}),
b_z^{k+1} = b_z^k + (\nabla_z f^{k+1} - d_z^{k+1}). (22)

Eq. (22) is usually solved in two alternating iterative steps, updating f and d:
Step 1: f^{k+1} = \arg\min_f \frac{\mu}{2}\|Af - p\|_2^2 + \frac{\lambda}{2}\|d_x^k - \nabla_x f - b_x^k\|_2^2 + \frac{\lambda}{2}\|d_y^k - \nabla_y f - b_y^k\|_2^2 + \frac{\lambda}{2}\|d_z^k - \nabla_z f - b_z^k\|_2^2.
Step 2: (d_x^{k+1}, d_y^{k+1}, d_z^{k+1}) = \arg\min_{d_x,d_y,d_z} \|(d_x,d_y,d_z)\|_2 + \frac{\lambda}{2}\|d_x - \nabla_x f^{k+1} - b_x^k\|_2^2 + \frac{\lambda}{2}\|d_y - \nabla_y f^{k+1} - b_y^k\|_2^2 + \frac{\lambda}{2}\|d_z - \nabla_z f^{k+1} - b_z^k\|_2^2.

The first step accounts for the specific structure of the CT system matrix A and is solved using the gradient descent method:
f^{k+1} = f^k - \alpha_k g^k, (23)
g^k = \mu A^T(Af^k - p) - \lambda \nabla_x^T(d_x - \nabla_x f - b_x) - \lambda \nabla_y^T(d_y - \nabla_y f - b_y) - \lambda \nabla_z^T(d_z - \nabla_z f - b_z),
where the step size \alpha_k is chosen by exact line search on this quadratic objective,
\alpha_k = \frac{(g^k)^T g^k}{(g^k)^T \left(\mu A^T A + \lambda(\nabla_x^T\nabla_x + \nabla_y^T\nabla_y + \nabla_z^T\nabla_z)\right) g^k}.

The second step can be solved by the generalized shrinkage formula:
d_x^{k+1} = \max\left(s^k - \frac{1}{\lambda}, 0\right)\frac{\nabla_x f^k + b_x^k}{s^k},
d_y^{k+1} = \max\left(s^k - \frac{1}{\lambda}, 0\right)\frac{\nabla_y f^k + b_y^k}{s^k},
d_z^{k+1} = \max\left(s^k - \frac{1}{\lambda}, 0\right)\frac{\nabla_z f^k + b_z^k}{s^k},
where s^k = |\nabla_x f^k + b_x^k| + |\nabla_y f^k + b_y^k| + |\nabla_z f^k + b_z^k|.
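The generalized shrinkage of the d-update can be written compactly. The sketch below follows the text's definition of s^k as the sum of absolute values; the arguments gx, gy, gz stand for the per-voxel quantities grad_x f^k + b_x^k, etc., and the small epsilon guarding division by zero is an implementation assumption:

```python
import numpy as np

def d_update(gx, gy, gz, lam, eps=1e-12):
    """Generalized shrinkage of Sect. 2.4: scales (gx, gy, gz) toward zero
    with threshold 1/lam, using s = |gx| + |gy| + |gz| per voxel."""
    s = np.abs(gx) + np.abs(gy) + np.abs(gz)
    scale = np.maximum(s - 1.0 / lam, 0.0) / np.maximum(s, eps)
    return gx * scale, gy * scale, gz * scale

# gradients below the threshold 1/lam are set exactly to zero,
# larger ones are shrunk toward zero by 1/lam
gx = np.array([0.05, 1.0]); gy = np.zeros(2); gz = np.zeros(2)
dx, dy, dz = d_update(gx, gy, gz, lam=10.0)   # threshold 1/lam = 0.1
```

This thresholding is what drives most gradient entries of the reconstruction to exactly zero, producing the piecewise-constant behavior of TV regularization.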

The specific process of CT reconstruction based on the split Bregman algorithm is as follows.

Step 1: Initialize k = 0, d_x^0 = d_y^0 = d_z^0 = b_x^0 = b_y^0 = b_z^0 = 0, and f^{(0)} = 0.

Step 2: Update f by the gradient descent method of Eq. (23).

Step 3: Calculate the intermediate variables d_x, d_y, d_z and b_x, b_y, b_z.

Step 4: Return to step 2 until the stop condition is reached.
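Steps 1-4 above can be illustrated end-to-end in the A = I (pure denoising) special case, where the Step-1 quadratic can be solved exactly instead of by gradient descent. The 1D sketch below is a toy under stated assumptions (dense difference operator, illustrative mu and lam), not the paper's reconstruction code:

```python
import numpy as np

def sb_tv_denoise_1d(p, mu=20.0, lam=2.0, n_iter=100):
    """Split Bregman TV denoising (Sect. 2.4 with A = I, one dimension):
    alternates an exact f-subproblem solve, scalar shrinkage, and the
    Bregman variable update of Eq. (22)."""
    n = p.size
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator, (n-1) x n
    H = mu * np.eye(n) + lam * D.T @ D    # Hessian of the Step-1 quadratic
    f = p.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(n_iter):
        f = np.linalg.solve(H, mu * p + lam * D.T @ (d - b))      # Step 1
        g = D @ f + b
        d = np.maximum(np.abs(g) - 1.0 / lam, 0.0) * np.sign(g)   # Step 2 (shrinkage)
        b = b + D @ f - d                                         # Bregman update
    return f

rng = np.random.default_rng(4)
clean = np.concatenate([np.zeros(20), np.ones(20)])   # signal with one jump
noisy = clean + 0.1 * rng.standard_normal(40)
den = sb_tv_denoise_1d(noisy)
```

The denoised signal recovers the flat segments and the single jump, ending up closer to the clean signal than the noisy input.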

2.5
OS-SART

SART is an improved iterative reconstruction algorithm proposed by Andersen and Kak in 1984. SART modifies ART by using the errors of all rays passing through a pixel at a given angle to correct its value:
f_j^{(k+1)} = f_j^{(k)} + \frac{\sum_{p_i \in p_\varphi} \lambda_k \dfrac{p_i - \sum_{n=0}^{J-1} w_{in} f_n^{(k)}}{\sum_{n=0}^{J-1} w_{in}}\, w_{ij}}{\sum_{p_i \in p_\varphi} w_{ij}}.

The OS-SART algorithm improves on SART by dividing the projections into ordered subsets S_t and updating over each subset in turn:
f_j^{(k+1)} = f_j^{(k)} + \frac{\sum_{i \in S_t} \lambda_k \dfrac{p_i - \sum_{n=0}^{J-1} w_{in} f_n^{(k)}}{\sum_{n=0}^{J-1} w_{in}}\, w_{ij}}{\sum_{i \in S_t} w_{ij}}.
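For a small dense system matrix, the OS-SART update above can be sketched directly; the interleaved subset choice, the relaxation factor fixed at 1, and the toy random system are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

def os_sart(A, p, n_subsets=4, n_iter=50, lam=1.0):
    """OS-SART sketch for a dense system matrix A (rays x voxels) and
    projections p: each subset S_t of rays applies the SART-style
    row/column-normalized update, followed by a non-negativity constraint."""
    m, n = A.shape
    f = np.zeros(n)
    subsets = [np.arange(t, m, n_subsets) for t in range(n_subsets)]
    row_sums = A.sum(axis=1)                      # sum_n w_in per ray
    for _ in range(n_iter):
        for S in subsets:
            As = A[S]
            resid = (p[S] - As @ f) / np.maximum(row_sums[S], 1e-12)
            col_sums = As.sum(axis=0)             # sum_{i in S_t} w_ij
            f += lam * (As.T @ resid) / np.maximum(col_sums, 1e-12)
            f = np.maximum(f, 0.0)                # enforce f >= 0
    return f

# toy consistent system: recover a 16-voxel phantom from 60 rays
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, (60, 16))
f_true = rng.uniform(0.0, 1.0, 16)
p = A @ f_true
f_rec = os_sart(A, p, n_subsets=4, n_iter=200)
```

Because the toy system is consistent and overdetermined, the subset updates drive the projection residual toward zero while keeping the iterate non-negative.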

2.6
Proposed algorithm

Based on the OS-SART algorithm and split Bregman method, we propose the 3D reconstruction algorithm OS-SART-SBTV for sparse-view neutron CT. The algorithm employs a two-step iteration process with OS-SART for image reconstruction and the split Bregman method for total variation denoising (Table 1).

Table 1
OS-SART-SBTV algorithm
1. While the iteration stop condition is not reached:
2.   First: OS-SART
3.   Initialize: f^0, \lambda_{OS-SART}, N_{OS-SART}, \lambda_{SBTV}, and N_{SBTV}
4.   For n = 1 to N_{OS-SART} do
5.     f_j^{(k+1)} = f_j^{(k)} + \frac{\sum_{i \in S_t} \lambda_k \frac{p_i - \sum_{n=0}^{J-1} w_{in} f_n^{(k)}}{\sum_{n=0}^{J-1} w_{in}} w_{ij}}{\sum_{i \in S_t} w_{ij}}
6.     Non-negativity constraint: if f_{OS-SART}^{(n)} < 0, set f_{OS-SART}^{(n)} = 0
7.   Second: split Bregman solving TV
8.   f^0 = f_{OS-SART}, d_x^0 = d_y^0 = d_z^0 = b_x^0 = b_y^0 = b_z^0 = 0
9.   Repeat until the stop condition is reached:
10.    f^{k+1} = \arg\min_f \frac{\mu}{2}\|Af - p\|_2^2 + \frac{\lambda}{2}\|d_x^k - \nabla_x f - b_x^k\|_2^2 + \frac{\lambda}{2}\|d_y^k - \nabla_y f - b_y^k\|_2^2 + \frac{\lambda}{2}\|d_z^k - \nabla_z f - b_z^k\|_2^2
11.    d_x^{k+1} = \max(s^k - 1/\lambda, 0)\, (\nabla_x f^k + b_x^k)/s^k
12.    d_y^{k+1} = \max(s^k - 1/\lambda, 0)\, (\nabla_y f^k + b_y^k)/s^k
13.    d_z^{k+1} = \max(s^k - 1/\lambda, 0)\, (\nabla_z f^k + b_z^k)/s^k
14.    b_x^{k+1} = b_x^k + (\nabla_x f^{k+1} - d_x^{k+1})
15.    b_y^{k+1} = b_y^k + (\nabla_y f^{k+1} - d_y^{k+1})
16.    b_z^{k+1} = b_z^k + (\nabla_z f^{k+1} - d_z^{k+1})
17.  end for
18.  end while when the stop criterion is satisfied
19.  Output the final reconstructed image f_{OS-SART-SBTV}
2.7
Quantitative Evaluation Index

Image evaluation metrics play a vital role in assessing the quality and accuracy of reconstructed images. These metrics provide objective measures to compare different algorithms and determine their performance. The following four image evaluation metrics were selected to analyze the performance of each algorithm.

The root mean square error (RMSE) measures the pixel-wise difference between two images; a smaller RMSE implies that the reconstructed image is closer to the reference and retains more of its detail. RMSE is defined as
\mathrm{RMSE} = \sqrt{\frac{\sum_{n=1}^{N}(\hat{p}_n - p_n)^2}{N}}.

The universal quality index (UQI) is a metric for evaluating image quality that quantifies the similarity between two images:
\mathrm{UQI} = \frac{2\,\mathrm{cov}(\hat{\mu}, \mu)}{\hat{\sigma}^2 + \sigma^2} \cdot \frac{2\hat{\mu}\mu}{\hat{\mu}^2 + \mu^2}.

The correlation coefficient (CC) is an evaluation metric used to quantify the correlation between two images:
\mathrm{CC} = \frac{\sum_{k=1}^{M}(u(k) - \bar{u})(u_{true}(k) - \bar{u}_{true})}{\left\{\sum_{k=1}^{M}(u(k) - \bar{u})^2 \sum_{k=1}^{M}(u_{true}(k) - \bar{u}_{true})^2\right\}^{1/2}}.

Mean structural similarity (MSSIM) is based on the concept of structural similarity and measures the degree of structural similarity and distortion between two images:
\mathrm{MSSIM} = \frac{1}{M}\sum_{i=1}^{M}[l(x_i,y_i)]^\alpha [c(x_i,y_i)]^\beta [s(x_i,y_i)]^\gamma,
where the luminance term is l(x,y) = \frac{2u_x u_y + C_1}{u_x^2 + u_y^2 + C_1}, the contrast term is c(x,y) = \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, and the structure term is s(x,y) = \frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}.
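RMSE, CC, and UQI can be computed globally in a few lines (MSSIM requires windowed local statistics and is omitted here). The sketch below is a straightforward NumPy reading of the formulas above:

```python
import numpy as np

def rmse(x, ref):
    """Root mean square error between reconstruction x and reference ref."""
    return np.sqrt(np.mean((x - ref) ** 2))

def cc(x, ref):
    """Correlation coefficient of the two images."""
    xm, rm = x - x.mean(), ref - ref.mean()
    return (xm * rm).sum() / np.sqrt((xm ** 2).sum() * (rm ** 2).sum())

def uqi(x, ref):
    """Universal quality index: correlation, luminance, and contrast factors."""
    mx, mr = x.mean(), ref.mean()
    cov = ((x - mx) * (ref - mr)).mean()
    return (2 * cov / (x.var() + ref.var())) * (2 * mx * mr / (mx ** 2 + mr ** 2))

rng = np.random.default_rng(3)
ref = rng.uniform(0.0, 1.0, (32, 32))
# identical images give the ideal scores: RMSE 0, CC 1, UQI 1
print(rmse(ref, ref), cc(ref, ref), uqi(ref, ref))
```

Comparing an image against itself yields the ideal values of each metric, a quick sanity check before applying them to reconstructed slices.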

3

Experiment

3.1
3D Digital Head Model Experiment

In this study, the performance differences among the five algorithms were comparatively analyzed using a 3D digital head model with dimensions of 256 × 256 × 256 (Fig. 1). The four iterative algorithms require parameter tuning to obtain the best image quality. The parameters of each algorithm were set as follows. OS-SART-TV: \lambda_{OS-SART} = 2.2, N_{OS-SART} = 50, \lambda_{TV} = 200, and N_{TV} = 150. OS-SART-FGPTV: \lambda_{OS-SART} = 2.5, N_{OS-SART} = 50, \lambda_{FGPTV} = 0.006, and N_{FGPTV} = 150. OS-SART-LRMA: \lambda_{OS-SART} = 2.3, N_{OS-SART} = 50, and block size 12 × 12 × 12. OS-SART-SBTV: \lambda_{OS-SART} = 2.2, N_{OS-SART} = 50, \lambda_{SBTV} = 0.0032, and N_{SBTV} = 150.

Fig. 1
Slice of the head model at z = 90

Figure 2 displays the dependence of the RMSE on the number of iterations for different numbers of projection views reconstructed using the OS-SART-SBTV algorithm. The results show that OS-SART-SBTV reaches a convergence state after a certain number of iterations. In addition, the convergence speed of the algorithm increased with the number of views, indicating that the OS-SART-SBTV algorithm can minimize the objective function and obtain a satisfactory solution under different sparse-view conditions.

Fig. 2
Relationship between RMSE and the number of iterations of OS-SART-SBTV: (a) 30, (b) 60, and (c) 60+

Figure 3 shows the images reconstructed by the five algorithms with different numbers of projection views: 10, 30, 60, and 60+. Here, 60+ denotes the reconstruction results for the projection data after adding the Poisson and Gaussian noise models. The quality of the neutron source and the inherent noise of the system equipment affect the acquired projection data; therefore, introducing Gaussian and Poisson noise into the digital head model tests the performance and stability of the algorithms. The noise model parameters were an incident photon flux of 1×10^5 for the Poisson model and a Gaussian model with mean 0 and variance 0.25 [32]. As depicted in Fig. 3, the sharpness and smoothness of the images reconstructed by the five algorithms increased progressively with the number of projection views when reconstructing the noiseless projection data. With only 10 projection views, the reconstructed image quality of all five algorithms was poor, making it impossible to differentiate their performance effectively. The reconstruction results of OS-SART-TV depicted in Fig. 3 were superior to those of the FBP algorithm; however, its reconstructed images suffered from detail loss and excessive smoothing of edge structures; in particular, when reconstructing 30 views, the image reconstructed by the OS-SART-TV algorithm was blurred. The third to fifth rows in Fig. 3 show the images reconstructed by the OS-SART-FGPTV, OS-SART-LRMA, and OS-SART-SBTV algorithms. The number of iterations for the four iterative algorithms was set to 50 for all projection views. Specifically, the reconstruction results of the OS-SART-FGPTV and OS-SART-LRMA algorithms showed clear improvements over the FBP algorithm, and both reconstructed more details than OS-SART-TV.
Compared with OS-SART-TV, OS-SART-FGPTV, and OS-SART-LRMA, the OS-SART-SBTV algorithm produced fewer image artifacts and sharper edge structures when the number of reconstruction views was 30. Compared with OS-SART-TV, the OS-SART-SBTV algorithm not only suppressed artifacts well but also reconstructed more structural information. In addition, we zoomed in on the regions of interest (ROIs) of the images reconstructed by the different algorithms, as depicted in Fig. 3. In the ROI of the FBP reconstruction, serious artifacts were present and a significant amount of detailed image information was lost. The ROI edges of the images reconstructed by OS-SART-TV are excessively smooth, resulting in blurred images. Compared with the OS-SART-FGPTV and OS-SART-LRMA algorithms, the OS-SART-SBTV algorithm displayed fewer artifacts in the ROI images and reconstructed more detailed image information and edge structure features.

Fig. 3
Slice of reconstructed images of the head model obtained using FBP, OS-SART-TV, OS-SART-FGPTV, OS-SART-LRMA, and OS-SART-SBTV at z = 90

Figure 4 shows the relationship between the number of iterations and the RMSE for the different algorithms. According to the curves in Fig. 4, the OS-SART-SBTV algorithm has the fastest convergence rate, and the RMSE value at each iteration of the OS-SART-SBTV algorithm is smaller than those of the other iterative algorithms.

Fig. 4
Relationship between RMSE and the iterations for different algorithms: (a) 30, (b) 60, and (c) 60+

Similar conclusions can be drawn from Fig. 5, which displays the error images for each reconstruction algorithm for the 30, 60, and 60+ projection views. Although the superiority of each algorithm could not be discriminated at 60 views, the OS-SART-SBTV algorithm showed clear superiority when the number of projection views was small, for example, 30 views. The error image of FBP in Fig. 5 exhibited serious artifacts and lost considerable image structure information, indicating that the FBP algorithm cannot be applied to sparse reconstruction. Compared with FBP, the four iterative algorithms performed well in reconstructing detailed image information and suppressing artifacts. The OS-SART-SBTV algorithm exhibited the least loss of detailed structural information in the error image. To further compare the performance differences between the algorithms, we enlarged the ROIs of the error images of the different algorithms (red rectangular area in Fig. 5). In the ROI of the FBP error image, several image structure features were lost and multiple artifacts appeared. Comparing the ROIs of the OS-SART-TV, OS-SART-FGPTV, OS-SART-LRMA, and OS-SART-SBTV error images shows that the OS-SART-SBTV algorithm reconstructed more detailed structures. Therefore, the OS-SART-SBTV algorithm outperformed the other iterative algorithms in terms of reconstructing detailed image structures.

Fig. 5
Slice of error images at z = 90

Figure 6 displays the vertical and horizontal profiles of the images reconstructed by FBP, OS-SART-TV, OS-SART-FGPTV, OS-SART-LRMA, and OS-SART-SBTV, comparing the reconstructed images with the reference image. As shown in Fig. 6, fluctuations were present in the profile lines of FBP, and the pixel values of its reconstructed image deviated significantly from the actual values. Although the four iterative algorithms were superior to FBP, their profiles still deviated from the real pixel values. The OS-SART-SBTV algorithm reconstructed images with pixel values closest to the real pixel values. Compared with the FBP, OS-SART-TV, OS-SART-FGPTV, and OS-SART-LRMA algorithms, the reconstructed images of the OS-SART-SBTV algorithm were closer to the reference image in both the vertical and horizontal profiles. This illustrates the excellent performance of the OS-SART-SBTV algorithm for sparse-view CT 3D reconstruction.

Fig. 6
Profile line graphs of reconstructed images by different algorithms: vertical and horizontal

Based on the above results, the FBP algorithm cannot be applied to sparse reconstructions. For further comparison, we selected the RMSE, CC, MSSIM, and UQI to illustrate the edge protection and noise suppression effects of the four iterative algorithms.

Figure 7 depicts the RMSE, CC, MSSIM, and UQI evaluation metrics for the images reconstructed by the four iterative algorithms. First, we quantitatively analyzed the images reconstructed from noiseless projection data. Figure 7a shows that the RMSE of each algorithm increased as the number of projection views decreased. For the same number of projection views, the RMSE of the OS-SART-SBTV reconstruction was the smallest; with 30 views, the RMSE of OS-SART-SBTV was 0.0246. As illustrated in Fig. 7, the CC, MSSIM, and UQI of all algorithms decreased with the number of projection views. For the same number of views, the CC, MSSIM, and UQI of OS-SART-SBTV were the largest, indicating that the algorithm reconstructed the image with the best quality. We then quantitatively analyzed the images reconstructed from the noise-containing projection data. The RMSE analysis revealed that the OS-SART-SBTV reconstruction had the lowest RMSE for the same amount of noise-containing projection data, and its CC, MSSIM, and UQI were the greatest. In conclusion, according to the reconstruction results of each algorithm, the image quality of our proposed algorithm was the highest when reconstructing the same number of projection views.

Fig. 7
Image evaluation metrics of different algorithms for reconstructing head model. (a) RMSE, (b) CC, (c) MSSIM, and (d) UQI
3.2
Real Neutron Beam Projection Experiment

We analyzed the performance of the OS-SART-SBTV algorithm in real applications by reconstructing the projection data of a clock model (Fig. 8). The clock-model neutron CT projection data were supplied by Schillinger [33] and comprise 201 equiangular neutron radiographs over a 180-degree range.

Fig. 8
Slice of clock model at z = 110

We used the five algorithms to reconstruct the clock projection data from a real neutron CT experiment to verify the superiority of our algorithm. As shown in Fig. 9, the image reconstructed by FBP contains obvious artifacts. The image reconstructed by OS-SART-TV was overly smooth, causing a relatively serious loss of detailed information. Although the OS-SART-FGPTV and OS-SART-LRMA algorithms improved the reconstructed images compared with OS-SART-TV, certain structural information was still lost in their reconstructions when the projection data were very sparse. Visual analysis revealed that OS-SART-SBTV was better than the other algorithms at eliminating artifacts and suppressing noise. In addition, the OS-SART-SBTV algorithm preserved more detailed structural information, and its reconstructed images were of the highest quality.

Fig. 9
Slices of reconstructed images at z = 110 for different algorithms
pic

The images reconstructed from the real neutron projection data by the four iterative algorithms were evaluated using the four image quality metrics (Fig. 10). OS-SART-SBTV had the lowest RMSE, indicating that its reconstruction was closest to the reference image, and the highest UQI, CC, and MSSIM, indicating the best overall performance. In summary, according to the reconstruction results, OS-SART-SBTV was the best of the compared methods at removing noise, suppressing artifacts, and reconstructing detailed structures.

Fig. 10
Image evaluation metrics of different algorithms for reconstructing clock model. (a) RMSE, (b) CC, (c) MSSIM, and (d) UQI
pic
4

Discussion

This study presents an efficient sparse-view reconstruction algorithm for neutron CT. We used an iterative algorithm because the FBP algorithm has an inherent weakness in sparse-view reconstruction that makes it unsuitable for noisy, sparse-view neutron projection data. Moreover, an iterative algorithm can improve the quality of the reconstructed image through regularization. TV-based iterative algorithms, which have clear advantages over other methods, are widely used for sparse reconstruction; for example, OS-SART-TV is superior to OS-SART in noise suppression. According to the quantitative image metrics, OS-SART-SBTV outperformed OS-SART-TV, OS-SART-FGPTV, and OS-SART-LRMA. Based on the 3D digital head model reconstructions, the proposed algorithm performed well in both visual inspection and quantitative evaluation. However, in a realistic neutron CT system, the projection data are corrupted by electronic noise and photon-counting statistics, so many reconstruction algorithms do not achieve satisfactory results in practice. We therefore further validated the performance of OS-SART-SBTV using the projection data of the clock model. The experimental results show that OS-SART-SBTV is superior to the other algorithms for sparse-view neutron CT 3D reconstruction, making it better suited to real neutron CT scanning systems. In addition, deep learning has achieved good results in sparse-view reconstruction. The DEAR model proposed in [15] not only effectively removes image artifacts but also performs well in preserving edge structure and recovering features.
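The alternating structure underlying this family of methods, an OS-SART data-fidelity pass followed by a regularization step, can be sketched as follows. This is an illustrative toy on a small dense system matrix, not the authors' implementation; the `denoise` hook stands in for the TV step (SBTV in the proposed method):

```python
import numpy as np

def os_sart(A, p, n_subsets=4, lam=0.5, n_pass=50, denoise=None):
    """Toy OS-SART: the rows of A (projection views) are split into
    ordered subsets used in turn; after each full pass an optional
    denoising hook (e.g. a TV step) regularizes the current image."""
    m, n = A.shape
    x = np.zeros(n)
    col_sum = A.sum(axis=0)                      # backprojection weights
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_pass):
        for s in subsets:
            As, ps = A[s], p[s]
            row_sum = As.sum(axis=1)             # forward-projection weights
            # Weighted backprojection of the normalized residual
            x = x + lam * (As.T @ ((ps - As @ x) / row_sum)) / col_sum
        if denoise is not None:
            x = denoise(x)                       # regularization step
    return x
```

With strictly positive weights, `lam` in (0, 2) keeps the iteration stable; the subset structure is what gives OS-SART its speedup over plain SART.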
Furthermore, combining score-based generative models (SGMs) with wavelet subnetworks has made significant progress in sparse reconstruction; such models can generate more accurate and reliable reconstructed images and will become an important research direction for sparse reconstruction [34-35].

In the study of the 3D digital head model, the proposed algorithm converged monotonically to a stable solution (Fig. 2). To obtain a convergent solution, several parameters of the algorithm must be tuned. For OS-SART-SBTV, four parameters need to be determined: λ_OS-SART, N_iter^OS-SART, λ_SBTV, and N_iter^SBTV. Varying λ_OS-SART over a small range significantly affects image quality, so adjusting it is an effective way to improve the result. N_iter^OS-SART is typically set to 50-100, because the image improves ever more slowly as the number of iterations increases; in general, about 50 iterations yield a good reconstruction. Setting λ_SBTV and N_iter^SBTV, however, requires more experience; these parameters were adjusted until the image quality was optimal.
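To illustrate the role of λ_SBTV and N_iter^SBTV, a split Bregman solver for the anisotropic TV (ROF) denoising subproblem can be sketched as follows. This is a minimal sketch under simplifying assumptions (2D slice, periodic boundaries via `np.roll`, Jacobi inner sweeps), not the authors' code; the weight `mu` and the iteration counts play the roles of the data-fidelity weight and N_iter^SBTV:

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding, the closed-form solution of the d-subproblem."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sb_tv_denoise(f, mu=2.0, lam=1.0, n_outer=30, n_inner=2):
    """Anisotropic TV denoising via split Bregman (Goldstein-Osher style):
    min_u  mu/2 ||u - f||^2 + |Dx u| + |Dy u|,
    with the split d = grad(u) enforced through Bregman variables b."""
    grad = lambda u: (np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u)
    div = lambda px, py: (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))
    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    for _ in range(n_outer):
        for _ in range(n_inner):
            # Jacobi sweep for (mu*I - lam*Laplacian) u = rhs
            rhs = mu * f + lam * div(bx - dx, by - dy)
            nbrs = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                    np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u = (rhs + lam * nbrs) / (mu + 4.0 * lam)
        gx, gy = grad(u)
        # Shrinkage (d-update) and Bregman (b-update) steps
        dx, dy = shrink(gx + bx, 1.0 / lam), shrink(gy + by, 1.0 / lam)
        bx, by = bx + gx - dx, by + gy - dy
    return u
```

A smaller `mu` smooths more aggressively; too many outer iterations with a small `mu` produces the over-smoothed, detail-poor results observed for plain OS-SART-TV.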

5

Conclusion

We proposed an efficient 3D reconstruction algorithm for sparse-view neutron CT. To enhance the quality of neutron CT images, we proposed OS-SART-SBTV, which reconstructs images with the OS-SART algorithm and solves the total variation (TV) minimization with the split Bregman method. Compared with the FBP, OS-SART-TV, OS-SART-FGPTV, and OS-SART-LRMA algorithms, OS-SART-SBTV shows clear advantages in both visual and quantitative evaluation. Based on the reconstruction results for the digital head model and the real neutron projection data, OS-SART-SBTV preserved detailed information, reduced noise, and suppressed artifacts better than the other algorithms. Therefore, OS-SART-SBTV is well suited to sparse-view neutron CT reconstruction. In the future, the OS-SART-SBTV algorithm will be further validated in additional applications.

References
1. R. Zboray, I. Mor, V. Dangendorf et al., High-frame rate imaging of two-phase flow in a thin rectangular channel using fast neutrons. Appl. Radiat. Isot. 90, 122-131 (2014). https://doi.org/10.1016/j.apradiso.2014.03.023
2. T. Hibiki, K. Mishima, M. Matsubayashi, Application of high-frame-rate neutron radiography with a steady thermal neutron beam to two-phase flow measurements in a metallic rectangular duct. Nucl. Technol. 110, 422-435 (1995).
3. P.A. Hausladen, M.A. Blackston, E. Brubaker et al., Demonstration of Emitted-Neutron Computed Tomography to Count Fuel Pins (United States, 2012)
4. H. Lan, T. Song, Z. Luo et al., Isotope-sensitive imaging of special nuclear materials using computer tomography based on scattering nuclear resonance fluorescence. Phys. Rev. Appl. 16, 054048 (2021). https://doi.org/10.1103/PhysRevApplied.16.054048
5. A.H. Andersen, Algebraic reconstruction in CT from limited views. IEEE Trans. Med. Imaging 8, 50-55 (1989). https://ieeexplore.ieee.org/document/20361
6. I. Manke, C. Hartnig, M. Grünerbel et al., Quasi-in situ neutron tomography on polymer electrolyte membrane fuel cell stacks. Appl. Phys. Lett. 90, 184101 (2007). https://doi.org/10.1063/1.2734171
7. B.M. Wood, K. Ham, D.S. Hussey et al., Real-time observation of hydrogen absorption by LaNi5 with quasi-dynamic neutron tomography. Nucl. Instrum. Methods Phys. Res. B 324, 95-101 (2014). https://doi.org/10.1016/j.nimb.2013.10.052
8. X. Pan, E.Y. Sidky, M. Vannier, Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? Inverse Probl. 25, 123009 (2009). https://doi.org/10.1088/0266-5611/25/12/123009
9. R. Gordon, R. Bender, G.T. Herman, Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 29, 471-481 (1970). https://doi.org/10.1016/0022-5193(70)90109-8
10. P. Gilbert, Iterative methods for the three-dimensional reconstruction of an object from projections. J. Theor. Biol. 36, 105-117 (1972). https://doi.org/10.1016/0022-5193(72)90180-4
11. A.H. Andersen, A.C. Kak, Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm. Ultrason. Imaging 6, 81-94 (1984). https://doi.org/10.1016/0161-7346(84)90008-7
12. W.W. Wu, D.L. Hu, C. Niu et al., DRONE: Dual-domain residual-based optimization network for sparse-view CT reconstruction. IEEE Trans. Med. Imaging 40, 3002-3014 (2021). https://doi.org/10.1109/TMI.2021.3078067
13. W. Wu, D. Hu, K. An et al., A high-quality photon-counting CT technique based on weight adaptive total-variation and image-spectral tensor factorization for small animals imaging. IEEE Trans. Instrum. Meas. 70, 1-14 (2021). https://doi.org/10.1109/TIM.2020.3026804
14. W.W. Wu, D.L. Hu, C. Niu et al., Deep learning based spectral CT imaging. Neural Networks 144, 342-358 (2021). https://doi.org/10.1016/j.neunet.2021.08.026
15. W. Wu, X. Guo, Y. Chen et al., Deep embedding-attention-refinement for sparse-view CT reconstruction. IEEE Trans. Instrum. Meas. 72, 1-11 (2023). https://doi.org/10.1109/TIM.2022.3221136
16. X. Guo, P. He, X. Lv et al., Material decomposition of spectral CT images via attention-based global convolutional generative adversarial network. Nucl. Sci. Tech. 34, 45 (2023). https://doi.org/10.1007/s41365-023-01184-5
17. Y.J. Ma, Y. Ren, P. Feng et al., Sinogram denoising via attention residual dense convolutional neural network for low-dose computed tomography. Nucl. Sci. Tech. 32, 41 (2021). https://doi.org/10.1007/s41365-021-00874-2
18. X.Y. Guo, L. Zhang, Y.X. Xing, Study on analytical noise propagation in convolutional neural network methods used in computed tomography imaging. Nucl. Sci. Tech. 33, 77 (2022). https://doi.org/10.1007/s41365-022-01057-3
19. E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489-509 (2006). https://doi.org/10.1109/TIT.2005.862083
20. Y. Zhang, Y. Xi, Q. Yang et al., Spectral CT reconstruction with image sparsity and spectral mean. IEEE Trans. Comput. Imaging 2, 510-523 (2016). https://doi.org/10.1109/TCI.2016.2609414
21. O. Rioul, M. Vetterli, Wavelets and signal processing. IEEE Signal Process. Mag. 8, 14-38 (1991). https://doi.org/10.1109/79.91217
22. E.Y. Sidky, C.M. Kao, X. Pan, Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. J. Xray Sci. Technol. 14, 119-139 (2006). https://doi.org/10.48550/arXiv.0904.4495
23. A. Cai, L. Wang, H. Zhang et al., Edge guided image reconstruction in linear scan CT by weighted alternating direction TV minimization. J. Xray Sci. Technol. 22, 335-349 (2014). https://doi.org/10.3233/XST-140429
24. Y. Guo, L. Zeng, J. Wang et al., Image reconstruction method for exterior circular cone-beam CT based on weighted directional total variation in cylindrical coordinates. J. Inverse Ill-Pose. P. 28, 155-172 (2020). https://doi.org/10.1515/jiip-2019-0012
25. Y. Wang, Z. Qi, A new adaptive-weighted total variation sparse-view computed tomography image reconstruction with local improved gradient information. J. Xray Sci. Technol. 26, 957-975 (2018). https://doi.org/10.3233/XST-180412
26. L.M. Bregman, The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200-217 (1967). https://doi.org/10.1016/0041-5553(67)90040-7
27. S. Osher, M. Burger, D. Goldfarb et al., An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4, 460-489 (2005). https://doi.org/10.1137/040605412
28. Y. Wang, J. Yang, W. Yin, Y. Zhang, A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 1, 248-272 (2008).
29. B. Vandeghinste, B. Goossens, J. De Beenhouwer et al., Split-Bregman-based sparse-view CT reconstruction. Proc. Fully 3D 11, 431-434 (2011).
30. A. Parekh, I.W. Selesnick, Enhanced low-rank matrix approximation. IEEE Signal Process. Lett. 23, 493-497 (2016). https://doi.org/10.1109/LSP.2016.2535227
31. A. Beck, M. Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18, 2419-2434 (2009). https://doi.org/10.1109/TIP.2009.2028250
32. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE and John Wiley & Sons, Bellingham, WA, 2009)
33. B. Schillinger, A.E. Craft, A freeware path to neutron computed tomography. Phys. Procedia 88, 348-353 (2017). https://doi.org/10.1016/j.phpro.2017.06.047
34. W.W. Wu, Y.Y. Wang, Q.G. Liu et al., Wavelet-improved score-based generative model for medical imaging. IEEE Trans. Med. Imaging 43, 966-979 (2023). https://doi.org/10.1109/TMI.2023.3325824
35. B. Guan, C.L. Yang, L. Zhang et al., Generative modeling in sinogram domain for sparse-view CT reconstruction. IEEE Trans. Radiat. Plasma Med. Sci. 8, 195-207 (2023). https://doi.org/10.1109/TRPMS.2023.3309474
Footnote

The authors declare that they have no competing interests.