Figures 1-4: 1) A dimensionally suitable can was found for building the phantom. 2) Candles melting in a water bath. 3) Graphite and aluminum sticks crossed and embedded in the melted candle wax. 4) The candle wax during the solidification phase.
Author: Salla-Maaria Latva-Äijö
Software: Matlab
Data: The open data set 3D cross
Download Package: LevelSetMethodBlogShare
Literature: [1] E. Niemi, M. Lassas, A. Kallonen, L. Harhanen, K. Hämäläinen and S. Siltanen, Dynamic multi-source X-ray tomography using a spacetime level set method, Journal of Computational Physics 291, pp. 218–237, 2015. [2] E. Niemi, M. Lassas and S. Siltanen, Dynamic x-ray tomography with multiple sources, in: 8th International Symposium on Image and Signal Processing and Analysis (ISPA), Sept. 2013, pp. 618–621. Documentation of the experimental data can be found on arXiv.
Experimental measurements
A simple target changing in time was prepared by setting aluminum and graphite sticks at different angles relative to each other. The target was stabilized with candle wax. When the target rotates and we observe its cross sections along the z-axis, the cross sections of the sticks move frame by frame. The dynamic phantom was measured in the X-ray laboratory of the University of Helsinki, which was built by Alexander Meaney. The tube voltage was set to 50 kV and the object was rotated by one degree after each projection image. A 3D volume video was reconstructed with the FDK algorithm of the ASTRA toolbox for Matlab. It serves as a ground-truth reconstruction.
In real-life applications, the aim is to minimize the X-ray dose to the target. One way of doing this is to collect a sparse data set, where X-ray images are taken from fewer directions. The level set method can tolerate this kind of highly incomplete data, so we downsampled the data, picking only every sixth column from the original sinogram. The resulting data could have been produced by measuring only 60 projection angles.
Figures 5-9: 5) The X-ray setup in the industrial mathematics laboratory at the University of Helsinki. Photograph by Markus Juvonen. 6) The final phantom on the imaging platform. 7) One example of the total of 360 projection images taken of the target. 8-9) An example of one 360-degree sinogram (in color and in black and white) from one slice of the produced data.
Principle of the spacetime level set method
Level set methods take the attenuation to be given by the level set function itself inside the level set and zero outside it. We model our two-dimensional target by a non-negative X-ray attenuation function f(u) and add the time variable t as a third dimension, u = u(x, y, t), to enforce regularity also in the temporal direction.
We find the level set function u as a minimizer of the functional
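Schematically, and consistent with the description below, the functional has the form (see [1] for the precise definition)

```latex
F_n(u) = \| A f(u) - m \|_2^2 + \alpha \sum_{|\beta| \le n} \| \partial^\beta u \|_2^2 ,
\qquad f(u) = \max(u, 0),
```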
where A is an operator modeling 2D Radon transforms measured at several times, β = (β_{1}, β_{2}, β_{3}) is a multi-index with |β| = β_{1} + β_{2} + β_{3}, and α>0 is a regularization parameter. For the special case n = 1, F_{n} is equivalent to non-negativity constrained Tikhonov regularization.
In the case n = 2 we would like to minimize the functional F_{2}, but for simplicity we drop the mixed derivatives from it. The resulting discretized functional to be minimized is
Because this functional is non-differentiable due to the singularity of f at zero, we smooth out the singularity using the approximation
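One standard smoothing of this kind, and the one assumed here (the exact form used in [1] may differ slightly), replaces |u| in max(u, 0) = (u + |u|)/2 by a smooth surrogate:

```latex
f_\delta(u) = \frac{u + \sqrt{u^2 + \delta^2}}{2} \;\approx\; \max(u, 0).
```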
before applying the Barzilai-Borwein gradient algorithm for minimization. We use δ = 10^{-2} in the numerical computations. We approximate the second derivatives of u with respect to x, y and t using the central difference approximations
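As a sketch (in Python rather than Matlab, with our own array names), the central difference approximation of a second derivative reads:

```python
import numpy as np

def second_diff(u, h, axis):
    """Central difference approximation of the second derivative along
    one axis: (u[i+1] - 2*u[i] + u[i-1]) / h**2 (periodic at the ends)."""
    return (np.roll(u, -1, axis) - 2.0 * u + np.roll(u, 1, axis)) / h**2

# Example on a spacetime array u(x, y, t) with u = t^2 along the time axis;
# the spacing h_t tunes the amount of temporal regularization.
t = np.linspace(0.0, 1.0, 101)
h_t = t[1] - t[0]
u = np.tile(t**2, (5, 5, 1))
u_tt = second_diff(u, h_t, axis=2)   # equals 2 away from the wrap-around ends
```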
where the spacing h_{t} adjusts the amount of regularization in the temporal direction. We start the Barzilai-Borwein algorithm from the initial guess u_{0} = 0, choose the first step size to be λ_{0} = 0.0001 and iterate as
where the step size λ_{l} is given by
where the Jacobian matrix of the smoothed attenuation function enters as a diagonal matrix, and the gradient of the regularization term consists of second-difference terms; see [1] for the explicit expressions.
Here we apply the negative boundary condition u = −1 on the boundary of Ω rather than the zero condition, since we would like the level set function u to be negative outside the level set.
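The Barzilai-Borwein step-size rule itself can be illustrated on a toy quadratic problem (a Python sketch with our own variable names; the package minimizes the smoothed level set functional instead):

```python
import numpy as np

# Minimize the toy quadratic 0.5*u^T Q u - b^T u with Barzilai-Borwein steps.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda u: Q @ u - b

u = np.zeros(2)                         # initial guess u_0 = 0, as in the post
lam = 1e-4                              # first step size lambda_0
g = grad(u)
for _ in range(100):
    u_new = u - lam * g                 # gradient step with the current step size
    g_new = grad(u_new)
    if np.linalg.norm(g_new) < 1e-12:   # converged: avoid a 0/0 step size
        u, g = u_new, g_new
        break
    s, y = u_new - u, g_new - g
    lam = (s @ s) / (s @ y)             # Barzilai-Borwein (BB1) step size
    u, g = u_new, g_new
```

The BB step size uses only successive iterates and gradients, which is why the method needs no line search.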
Figures 10-13: Examples of reconstructions made with the level set method, on the left in color and on the right in gray scale. By changing the choice of the level, we can highlight different features in the reconstructions. In 10 and 12 the candle wax is also visible, but in 11 and 13 only the shapes of the moving sticks can be seen.
Authors: Kristian Bredies, Tatiana Bubba and Samuli Siltanen.
Software: Tested on Matlab 2018a.
Data: The open data set lotusroot.
Download package as zip file: CodesTGV
Literature: Kristian Bredies, Recovering piecewise smooth multichannel images by minimization of convex functionals with Total Generalized Variation penalty, Lecture Notes in Computer Science, 8293:44– 77, 2014.
Please read also the posts Total variation regularization for X-ray tomography and Total variation regularization for X-ray tomography – Experimental data for an introduction to TV tested on simulated and real data. A summary of the experimental data can be found on arXiv.
The usual Total Variation (TV) seminorm [1] is defined as
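For a (smooth) image f on a domain Ω, the seminorm reads

```latex
\mathrm{TV}(f) = \int_\Omega | \nabla f(x) | \, dx .
```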
TV is usually a suitable model for images with edges. However, natural images are often piecewise smooth due to shading, for instance. To overcome the limits of TV, one can consider higher-order derivatives. For example
where S^{d×d} denotes the set of symmetric matrices. The limitation here is that solutions of variational problems with a TV^{2} penalty cannot have jump discontinuities, so object boundaries inevitably become blurry.
The idea behind Total Generalized Variation (TGV) [2] is to incorporate smoothness information on different scales by combining first- and second-order derivatives:
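In its second-order form [2], with two weights here denoted α₀ and α₁, TGV reads

```latex
\mathrm{TGV}^2_{\alpha}(f) = \min_{w} \;
\alpha_1 \int_\Omega | \nabla f - w | \, dx
+ \alpha_0 \int_\Omega | \mathcal{E}(w) | \, dx ,
```

where the minimum is taken over vector fields w and E(w) = (∇w + ∇wᵀ)/2 denotes the symmetrized derivative.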
In this way, TGV can measure smooth regions as well as jump discontinuities in a convex manner.
In the following we compare the reconstructions obtained by minimizing the penalty functional:
for different values of the parameters α and β. Here, A is the measurement matrix, m the experimental data, f a discretization of the object to reconstruct, α the regularization parameter and ι_{≥0} the indicator function of the nonnegative orthant. The discretization of TGV is carried out by finite differences, as described in detail in [3].
For the solution of the minimization problems above, any numerical algorithm for convex-concave saddle point problems can be used. The algorithm implemented in the downloadable package above is a primal-dual ascent-descent method with primal extragradient, as described in the famous 2011 paper by Chambolle and Pock [4]. The bulk of the code is in tomo_tgv.m. The script recon_tgv.m can be used to set up the experiment: loading the data and setting the values of the different parameters (the parameters α and β and the maximum number of iterations).
In the following computational examples we use the 256×256 lotus root dataset obtainable from here (documentation in arXiv). This dataset uses 120 projection directions. We set the maximum number of iterations equal to 10000 and β = 2, and we vary the regularization parameter α in the range {10^{−4}, 10^{−5}, 10^{−6}} (see Figure 1). As always, the value of the regularization parameter is chosen to balance the data mismatch term and the prior information given by the TGV penalty.
Figure 1: Reconstructions with 10000 iterations and β = 2. Panels: true object, and reconstructions with α = 10^{−4}, α = 10^{−5} and α = 10^{−6}.
We can also play around with the value of β, keeping the regularization parameter α fixed (and the maximum number of iterations equal to 10000), to see how this affects the reconstructions. For instance, we can fix α = 10^{−5} and vary β in {1.5, 2, 2.5} (see Figure 2).
Figure 2: Reconstructions with 10000 iterations and α = 10^{−5}. Panels: true object, and reconstructions with β = 1.5, β = 2 and β = 2.5.
References
[1] L. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60, 259–268 (1992).
[2] K. Bredies, K. Kunisch, and T. Pock, Total Generalized Variation, SIAM Journal on Imaging Sciences 3, 492–526 (2010).
[3] K. Bredies, Recovering piecewise smooth multichannel images by minimization of convex functionals with Total Generalized Variation penalty, Lecture Notes in Computer Science 8293, 44–77 (2014).
[4] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision 40, 120–145 (2011).
Authors: Kristian Bredies, Tatiana Bubba and Samuli Siltanen.
Software: Tested on Matlab 2018a.
Data: The open data set lotusroot.
Download package as zip file: CodesTV
Literature: L. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60, 259–268 (1992).
Please read also the post Total variation regularization for X-ray tomography for an introduction on TV tested on simulated data. A summary of the experimental data can be found on arXiv.
The usual Total Variation (TV) seminorm [1] is defined as
TV is usually a suitable model for images with edges, and it has become a very popular tool in image processing and inverse problems due to peculiar features that cannot be realized with smooth regularizers [2].
In the following we compare the reconstructions obtained by minimizing the penalty functional for different values of the regularization parameter α. Here, A is the measurement matrix, m the experimental data, f a discretization of the object to reconstruct, and ι_{≥0} the indicator function of the nonnegative orthant. The discretization of the TV formulas is carried out by finite differences, as described in detail in [3].
For the solution of the minimization problem above, any numerical algorithm for convex-concave saddle point problems can be used. The algorithm implemented in the downloadable package above is a primal-dual ascent-descent method with primal extragradient, as described in the famous 2011 paper by Chambolle and Pock [4]. The bulk of the code is in tomo_tv.m. The script recon_tv.m can be used to set up the experiment: loading the data and setting the values of the different parameters (the regularization parameter and the maximum number of iterations).
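The package implements this primal-dual method for the full tomography functional; as a minimal illustration of the same Chambolle-Pock iteration, here is a 1D TV denoising sketch in Python (the toy problem and variable names are ours):

```python
import numpy as np

def tv_denoise_1d(g, alpha, tau=0.25, sigma=0.5, iters=500):
    """Chambolle-Pock iteration for min_f 0.5*||f - g||^2 + alpha*||D f||_1,
    with D the forward-difference operator. Since ||D||^2 <= 4,
    tau*sigma*||D||^2 < 1 guarantees convergence."""
    D  = lambda f: np.diff(f)                                       # forward differences
    Dt = lambda p: np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))  # adjoint of D
    f, f_bar = g.copy(), g.copy()
    p = np.zeros(len(g) - 1)
    for _ in range(iters):
        # dual ascent step + projection onto the box {|p_i| <= alpha}
        p = np.clip(p + sigma * D(f_bar), -alpha, alpha)
        f_old = f
        # primal descent step: proximal map of 0.5*||f - g||^2
        f = (f - tau * Dt(p) + tau * g) / (1.0 + tau)
        f_bar = 2.0 * f - f_old                                     # primal extragradient
    return f

# Denoise a noisy step function
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
rec = tv_denoise_1d(noisy, alpha=0.3)
```

The extragradient step f_bar = 2f − f_old is what distinguishes this scheme from a plain alternating ascent-descent and is essential for its convergence guarantee.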
In the following computational examples we use the 256×256 lotus root dataset obtainable from here (documentation in arXiv). This dataset uses 120 projection directions. We set the maximum number of iterations equal to 10000 and the regularization parameter α = 10^{−5} (see Figure 1).
Figure 1: Reconstructions with α = 10^{−5}. Panels: true object, and reconstructions after 1000 and 10000 iterations.
As always, the value of the regularization parameter is chosen to balance the data mismatch term and the prior information given by the TV penalty. We can play around with the value of α (still keeping the maximum number of iterations equal to 10000) to see how this affects the reconstructions (see Figure 2).
Figure 2: Reconstructions with α = 10^{−4} (top) and α = 10^{−6} (bottom). Panels in each row: true object, and reconstructions after 1000 and 10000 iterations.
Unfortunately, it can also happen that TV reconstructs undesired edges: this artifact is called the staircasing effect. It is due to the fact that the model assumption of TV is an image that is piecewise constant up to a discontinuity set. However, natural images are often piecewise smooth, due to shading for instance. Stay tuned for the next post to see how this can be overcome.
References
[1] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60, 259–268 (1992).
[2] M. Burger and S. Osher, A guide to the TV zoo (chapter 1 in M. Burger and S. Osher, Level-Set and PDE-based Reconstruction Methods, Springer, 2013).
[3] K. Bredies, Recovering piecewise smooth multichannel images by minimization of convex functionals with Total Generalized Variation penalty, Lecture Notes in Computer Science 8293, 44–77 (2014).
[4] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision 40, 120–145 (2011).
Author: Juho Rimpeläinen.
Software: Tested on both Matlab R2017a and Octave (version 4.2.1)
Data: The open walnut data set
Download package as zip file: code_cwds.zip
Literature: It is recommended you check out the previous posts about X-ray tomography. For more information on the reconstruction algorithm see ^{[1]}. The CWDS method was introduced in ^{[2]}.
We study CT imaging by minimizing the following penalty functional:
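Schematically, with A the measurement matrix and m the data, the functional is of the form

```latex
\tfrac{1}{2} \| A f - m \|_2^2 + \mu \| W f \|_1 .
```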
Here W denotes the digital implementation of a two-dimensional wavelet transform and μ is the regularization parameter. The functional can be minimized using the iterative soft-thresholding algorithm^{[1]}, namely by the iterations:
with the soft-thresholding operator S_{μ}, which applies the following function to each component of the vector:
Additionally, we make sure that each pixel of the iterate is positive simply by setting the negative elements to zero. Note, however, that it is uncertain how enforcing nonnegativity this way affects the convergence of the algorithm. For this reason, it might be preferable to use for example the primal-dual fixed point algorithm^{[3]}, which allows easy nonnegativity constraints. We also encourage the reader to check out ^{[4]} for a useful generalization for the iterative soft-thresholding algorithm. However, we deem the iterative soft thresholding algorithm sufficient for demonstrating the method for selecting the regularization parameter.
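The soft-thresholding operator itself is a one-liner; a sketch in Python (variable names are ours):

```python
import numpy as np

def soft_threshold(x, mu):
    """Componentwise soft thresholding: shrink |x| by mu,
    mapping everything with |x| <= mu to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

# In the iterative soft-thresholding algorithm this operator is applied to
# the wavelet coefficients of the gradient-updated iterate on every step,
# followed by the inverse wavelet transform and the nonnegativity projection.
x = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
shrunk = soft_threshold(x, 1.0)
```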
The method for selecting the parameter μ is based on the assumption that an a priori estimate of the sparsity level of the object being imaged is available. This could be obtained, for example, from a set of previously computed reconstructions. Here we assume that the estimate is expressed as the ratio of nonzero wavelet coefficients of the reconstruction. On each iteration of the algorithm we can easily measure the current ratio of nonzero coefficients, and we then adjust μ so that the desired ratio is reached. This could be achieved with a proportional-integral-derivative (PID)^{[5]} controller. However, even simple integral control seems to be sufficient: at each iteration μ is updated using
where α is a non-negative tuning parameter, y^{(i)} is the ratio of nonzero coefficients at the i-th iteration, and y_{prior} is the a priori estimate of the ratio. The parameter α is a step size: too large a value results in oscillatory behavior in the ratio of nonzero coefficients as well as in the values of μ, whereas if the value is too small, reaching the desired ratio takes a long time. There are probably countless ways to select α, but here we consider a very simple and intuitive approach. The basic idea is to first select a too-large value for α and decrease it each time the difference e = y^{(i)} − y_{prior} changes sign. This way we can ensure (at least to some degree) that the desired ratio of nonzero coefficients is reached in reasonable time. Our implementation of this idea updates α as follows
This simple tuning strategy has some drawbacks. First, it requires a reasonable guess for the initial α: sufficiently large, but preferably not too large, as it can then take a very long time to decrease it to a good level. We solve this by computing the mean of the largest wavelet coefficients of the back-projection reconstruction and using some fraction of that mean as the initial value. The reasoning is that the back-projection reconstruction should contain most of the major features of the object, so it can be used to find a somewhat reasonable range for μ, which in turn helps us find a large enough α for the scheme above to work. Additionally, our method might not always perform well: if α decreases too quickly, one can end up in a situation where y^{(i)} is still far from y_{prior} but α is so small that μ changes very slowly.
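One concrete variant of this control loop, halving α on each sign change and using a toy plant exp(−μ) in place of the ratio measured from the actual iterate, can be sketched as:

```python
import numpy as np

y_prior = 0.32                 # desired ratio of nonzero wavelet coefficients
mu, alpha = 0.0, 2.0           # deliberately too large initial tuning parameter
ratio = lambda m: np.exp(-m)   # toy stand-in for the measured ratio y^(i)

e_prev = 0.0
for _ in range(300):
    e = ratio(mu) - y_prior    # error: measured minus desired ratio
    if e * e_prev < 0.0:       # sign change -> decrease the tuning parameter
        alpha *= 0.5
    mu += alpha * e            # integral control update of mu
    e_prev = e
```

In the real code the ratio comes from counting nonzero wavelet coefficients of the current iterate; the control logic is the same.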
Next let us briefly inspect the code (download links above) before looking at the results. The bulk of the code is in the function cwds.m; it contains the reconstruction algorithm as well as the implementation of the control scheme. The function Smu_wavelet_oper.m implements the operator S_{μ} defined above, and it in turn uses the functions wavetrans2D.m, wavetrans2D_inv.m and Smu.m, which implement the wavelet transform, the inverse wavelet transform and S_{μ}. Finally, the file test.m can be used to set up the experiment: loading the data, setting the values of the different parameters, and defining the orthonormal wavelet basis used^{[6]}.
In the following computational examples we use the 328×328 walnut dataset obtainable from here (documentation in arXiv). We use the desired sparsity level y_{prior} = 0.32. As the initial value for α, we take 10% of the mean of the n(1 − y_{prior}) largest wavelet coefficients of the back-projection reconstruction, where n is the total number of wavelet coefficients. In the first example we use 30 projection directions. Running the file test.m produces the following figures.
Let us also look at the reconstruction from more complete data. This is easily obtained by setting Nang = 120
in test.m. The figures can be seen below. The results are fairly similar to the ones above, apart from the obvious improvement in reconstruction quality; however, the results are obtained slightly faster (113 iterations vs. 151 iterations), and the peak in the value of μ is lower. This is due to a better initial value for α, obtained from the more complete data.
Summary: Electrical Impedance Tomography (EIT) aims to recover the internal electrical conductivity of a physical body from electrode measurements of voltages and currents at the boundary. EIT has applications in medical imaging, underground prospecting, and nondestructive testing. The image reconstruction problem of EIT is a nonlinear and severely ill-posed inverse problem. The D-bar method is a non-iterative regularized reconstruction method based on low-pass filtering a nonlinear Fourier transform. This page discusses the necessary changes that need to be made if one uses the D-bar method for electrode data. We provide Matlab routines implementing the D-bar method for experimental EIT data from the KIT4 system at the University of Eastern Finland.
Authors: Andreas Hauptmann and Janne Tamminen
Software: Matlab, including PDE toolbox.
Download package as zip file: KIT4_Dbar_recon.zip
Literature: This post builds on the previous blog entry for simulated data. We strongly advise reading that entry and the references therein first. A summary of the experimental data can be found on arXiv.
The D-bar algorithm assumes continuum boundary data, and as such it is not readily applicable to real measurement data from electrodes. We present here the pre-processing and the minor modifications needed to obtain a reconstruction from experimental data with the D-bar algorithm. The key issues we need to take care of depend on the measurement system, in this case the Kuopio Impedance Tomograph 4 (KIT4). These issues are:
1.) The KIT4 system supports only pairwise current injection: one electrode is assigned a positive current and another a negative one. Nevertheless, the measurement is done over all electrodes at the same time.
2.) We cannot measure the complex part, and hence we need to use a real-valued cosine/sine basis for synthesizing the Neumann-to-Dirichlet (ND) and Dirichlet-to-Neumann (DN) maps. This also leads to changes in the routines that compute the boundary integrals.
In the following we will discuss these two issues separately and then adjust the D-bar codes for simulated data (see previous post) to work with the experimental data from the KIT4 system:
The KIT4 system has 16 electrodes in total, which means we can obtain at most 15 linearly independent current patterns. Due to the system restriction we can only apply pairwise injection patterns (one negative, one positive); in this example we use an adjacent current pattern. That means that in each current injection two neighboring electrodes are activated, as illustrated below:
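In matrix form, the adjacent patterns can be sketched as follows (Python; the actual data processing is done in the Matlab scripts of the package):

```python
import numpy as np

N = 16                        # number of electrodes in the KIT4 system
J = np.zeros((N, N - 1))      # one column per injection pattern
for j in range(N - 1):
    J[j, j] = 1.0             # positive current on electrode j
    J[j + 1, j] = -1.0        # negative current on the adjacent electrode j+1

# Each column injects current into one adjacent electrode pair; the currents
# in every column sum to zero and the 15 patterns are linearly independent.
```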
The reconstruction procedure is started in the main script comp01_KIT4_recons.m by loading the measurement data; from there all other scripts and functions are called. Next we discuss how to transform the measurements from the obtained adjacent current pattern to a Neumann-to-Dirichlet map in the cosine/sine basis.
Given the measured voltages from the adjacent current pattern, we need to perform a change of basis. This is necessary to represent the ND map with respect to an orthonormal basis on the boundary of our domain (the unit circle); see for instance Chapters 12.7 and 13.3 of [MS2012]. Since we can only measure the voltage potential (and not the phase), we can only represent the measurements in a real-valued basis. In particular, we choose a cosine/sine basis for N = 16 electrodes
with
The change of basis is performed in the script comp02_ND_buildFromKIT4.m, by calling the function transfrom_Adj2Trig.m for the measurements with the target and for those with an empty tank (the background conductivity). After the change of basis we can compute the inner products to obtain the ND matrix with respect to the cosine/sine basis. We then invert the ND matrices in comp03_DN_build.m to obtain the difference DN matrix needed for solving the boundary integral equations next.
The computation of the boundary integral equation and the scattering transform follows essentially the previous post, except that we need to change the orthonormal basis functions to the cosine/sine basis as explained above. This is done in the routines for the boundary integral equation in comp04_psi_BIE.m and then for the scattering transform in comp05_tBIE_psi.m.
Now we can solve the D-bar equation to obtain our reconstruction. In general the cut-off radius R for the scattering data t(k) is smaller with measured data, hence we choose R = 4 here. For additional stability we threshold the scattering data if its values are too large: for a threshold of C = 25 we set the real and imaginary parts (independently) to zero if |Re(t(k))| > C or |Im(t(k))| > C, respectively.
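This thresholding step can be sketched as (Python; variable names are ours):

```python
import numpy as np

def threshold_scattering(t, C=25.0):
    """Set real and imaginary parts of t(k) independently to zero
    where their magnitude exceeds the threshold C."""
    re = np.where(np.abs(t.real) > C, 0.0, t.real)
    im = np.where(np.abs(t.imag) > C, 0.0, t.imag)
    return re + 1j * im

out = threshold_scattering(np.array([30.0 + 5.0j, -3.0 - 40.0j, 10.0 + 10.0j]))
# -> [0.+5.j, -3.+0.j, 10.+10.j]
```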
The reconstruction is then performed in comp06_Dbarsolve.m. Below you can see reconstructions for a few examples. Note that the orientation differs from the summary on arXiv; this is done so that electrode 1 is at the angle θ = 0 and the numbering of the angles follows the mathematical (anti-clockwise) direction.
The first example presented here consists of three conductive pipes, see below. Since the metal is a perfect conductor, the inner part is treated as if the pipes were solid. In principle the conductivity value of the inclusion would be infinite, but since this cannot be resolved (numerically or in the measurement) we simply obtain large conductivity values in our reconstruction.
The second example consists of a conductive pipe and a resistive (solid) tube; the tube is a perfect resistor and hence has conductivity value 0. Its reconstruction is accordingly much smaller and closer to zero.
We have included in the zip archive for this post all the measured data mentioned in the documentation (arXiv). You can run the different experiments in the main script comp01_KIT4_recons.m by changing the experiment number ex, and the individual setup by changing the version ver. The experiment numbers for the examples above are:
pumpkin: ex 7, ver 2
three pipes: ex 3, ver 5
pipe and tube: ex 4, ver 3
Summary: Electrical Impedance Tomography (EIT) aims to recover the internal electrical conductivity of a physical body from measurements of voltages and currents at the boundary of the body. EIT has applications in medical imaging, underground prospecting, and nondestructive testing. The image reconstruction problem of EIT is a nonlinear and severely ill-posed inverse problem. The D-bar method is a non-iterative regularized reconstruction method based on low-pass filtering a nonlinear Fourier transform. This page contains Matlab routines implementing the D-bar method for simulated EIT data.
Authors: Jennifer Mueller, Samuli Siltanen and Janne Tamminen.
Software: Matlab, including PDE toolbox (probably works also with Octave).
Download package as zip file: DbarEIT_Matlab
Literature: Chapters 12-15 of the book Linear and nonlinear inverse problems with practical applications by Jennifer L Mueller and Samuli Siltanen (SIAM 2012). Below we refer to this book by [MS2012].
How can one compute regularized reconstructions from EIT data, given that the inverse problem is both nonlinear and extremely sensitive to modelling errors and measurement noise? One can of course use variational regularization: write a functional consisting of a data discrepancy term and a regularization term, and minimize it using an iterative algorithm such as the Gauss-Newton method. While this is a reasonable approach, there are two drawbacks: the iteration may get stuck in a local minimum without you noticing, and it is computationally expensive to solve a direct problem in every iteration.
The D-bar method bypasses these problems: it is a direct method based on computing a nonlinear Fourier transform of the conductivity from EIT data and then inverting the transform. There is no need to solve any direct problem. Regularization is provided by low-pass filtering in the nonlinear frequency domain. The reconstruction is blurry because of the low-pass filter, but this can be helped in post-processing by anisotropic diffusion, frequency-domain filling, or deep learning.
The motivation for this blog post comes from the following medical imaging scenario, where a row of electrodes is attached around the chest of a patient:
We aim to image the heart and lungs of the patient using EIT. We think of maintaining a given voltage potential at each electrode and measuring the resulting electric current through the electrodes. This measurement is repeated for several voltage patterns. Note that practical EIT devices typically feed currents and measure voltages; however, in this blog post we consider voltage-to-current measurements for mathematical convenience. This is not a huge deal since one can always switch between the two data types computationally.
Assuming that there are 32 electrodes in total, we can use at most 31 linearly independent voltage patterns, since one of the electrodes serves as the ground potential to which the other potentials are compared. We use trigonometric voltage patterns, approximated by continuous sine curves at the boundary:
Furthermore, we approximate the above three-dimensional (3D) situation with a two-dimensional computational model. Our virtual patient is modelled by a two-dimensional (2D) disc with varying electrical conductivity inside. This 2D approximation, while obviously incorrect, works surprisingly well in practice.

Next we show how to simulate the above kind of voltage-to-current data and how to recover the inner conductivity from the boundary measurements using the D-bar method.
Our simulated conductivity is defined in the file heartNlungs.m, and you can plot it using the routine DbarEIT01_heartNlungs_plot.m. The result should look something like this:
Here the background conductivity is 1, the heart is filled with blood and has higher conductivity 2, and the lungs are filled with air and have lower conductivity 0.5.
Wondering about the white color at the bottom of the colorbar above? It is for creating the white background in Matlab. Check out this page to learn more.
We simulate the data by first computing the current-to-voltage map using the Finite Element Method (FEM) and then inverting it. In what follows, the voltage-to-current map is called the DN map (for Dirichlet-to-Neumann), and the current-to-voltage map is called the ND map (for Neumann-to-Dirichlet).
FEM is an efficient method for solving elliptic partial differential equations. The solution of the conductivity equation is given as a linear combination of basis functions that are piecewise linear in a triangular mesh. The mesh is constructed by the routine DbarEIT02_mesh_comp.m containing the parameter Nrefine. Here are plots of the triangle meshes with Nrefine=0 and Nrefine=2:
In practice it is a good idea to use Nrefine=5 for accurate results. The mesh is saved to a file called data/mesh.mat. Note that the routine DbarEIT02_mesh_comp.m creates a subdirectory called data. If the subdirectory data already exists, Matlab will show a warning which you can safely ignore.
The simulation of the continuum-model ND map involves solving Neumann problems of the form
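In the continuum model these are conductivity equation problems of the form (σ denotes the conductivity and ν the outward unit normal)

```latex
\nabla \cdot ( \sigma \nabla u ) = 0 \ \text{ in } \Omega, \qquad
\sigma \frac{\partial u}{\partial \nu} = \phi_n \ \text{ on } \partial\Omega, \qquad
\int_{\partial \Omega} u \, ds = 0 ,
```

where the zero-mean condition fixes the otherwise free additive constant in u.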
containing a Fourier basis function defined by
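A natural choice on the unit circle, and the one assumed here, is

```latex
\phi_n(\theta) = \frac{1}{\sqrt{2\pi}} \, e^{i n \theta}, \qquad n \neq 0 .
```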
Both the Neumann condition and the conductivity are given to Matlab's PDE toolbox routines as user-defined functions. Let us see how to do that. First of all, the user-defined routines need to define the functions at appropriate points of the FEM mesh. Here are some important points:
The conductivity is constant inside each triangle, so it is naturally specified at the centers of triangles shown in (a). This is implemented in the routine FEMconductivity.m. The solution produced by FEM is linear inside each triangle, so the gradient is constant inside each triangle. Therefore it is appropriate to specify the Neumann data at the midpoints (c) of boundary segments. This is done in routine BoundaryData.m. Finally, we need to find the trace of the solution for constructing the ND map; this can be done simply by picking the values of the FEM solution at the vertices (b) located at the boundary.
The routine DbarEIT03_ND_comp.m uses FEM to compute a matrix approximation to the ND map and saves it to the file data/ND.mat. See Chapter 13.2 of [MS2012] for the mathematical details of this computation. Note: you may see some warnings in Matlab when running DbarEIT03_ND_comp.m, but you can safely ignore them.
We need to construct a grid in the complex k-plane for the evaluation of the complex-valued scattering transform t(k), also known as nonlinear Fourier transform. Run the routine DbarEIT05_Kvec_comp.m. You should see something like this:
The next step is to compute the traces of Complex Geometric Optics (CGO) solutions by solving a boundary integral equation. This is done in the routine DbarEIT05_psi_BIE_comp.m, which calls the function solveBIE.m. The traces are then fed to a boundary integral formula evaluating the scattering transform. The evaluation is done in the routine DbarEIT06_tBIE_comp.m. This is what the scattering transform (also known as the nonlinear Fourier transform) looks like:
Now we are ready to reconstruct the conductivity. The solution of the D-bar equation is based on a generalized version of Vainikko's method, which is organized into the files GV_grids.m, GV_project.m, GV_prolong.m and DB_oper.m.
Run DbarEIT08_tBIErecon_comp.m, and you should see this image:
The left image shows the original conductivity distribution, and the right image shows the reconstruction using R=4. The colors (and thus numerical values) are directly comparable. Note that the reconstruction underestimates the value at the “heart.”
You can try a bigger frequency cutoff radius as well. Open the file DbarEIT07_tBIErecon_comp.m and change R=4 to R=6, say. The result looks like this:
This time the reconstruction slightly overestimates the conductivity of the “heart”; this is a typical feature of the truncated nonlinear Fourier transform.
When you use larger cutoff frequencies R, you may need more accuracy in the computational grid. To achieve that, change M = 8 to M = 9 or even higher. You can monitor the change in the reconstruction as M grows; when the change from M to M+1 becomes negligible, you can stop increasing M. Note that a higher M increases memory demands and computation time.
Summary: Tomography means reconstructing the internal structure of a physical body using X-ray images of the body taken from different directions. Mathematically, the problem is to recover a non-negative function f(x) from a collection of line integrals of f. Here we show how to solve the tomography problem using Total Variation (TV) regularization, a reconstruction method for ill-posed inverse problems introduced by Rudin, Osher and Fatemi in 1992. TV regularization is edge-preserving: it favors piecewise smooth reconstructions.
Author: Samuli Siltanen.
Software: Matlab (probably works also with Octave).
Download package as zip file: SparseTomoTV
Literature: Jennifer L Mueller and Samuli Siltanen: Linear and nonlinear inverse problems with practical applications, SIAM 2012.
Please read first the post Simple simulation of X-ray tomography. There you will see how to simulate tomographic data of a simple digital phantom in a way that avoids the dreaded inverse crime.
In this post we study Total Variation (TV) regularization. We look for the minimizer of this penalty functional:
Here L_H and L_V are matrices implementing horizontal and vertical differences of neighboring pixels. This is called anisotropic TV, since the horizontal and vertical derivatives are treated separately. It is more common to use isotropic TV, based on the sum over pixels of the Euclidean length of the gradient of f. We discuss the anisotropic form because it allows us to rewrite the functional in a standard Quadratic Programming (QP) form using a simple trick.
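The distinction can be made concrete; this Python/NumPy sketch evaluates both TV flavours for a small image using forward differences (the boundary handling in the actual L_H and L_V matrices of the download package may differ):

```python
import numpy as np

def tv_values(f):
    # Forward differences of neighboring pixels (no wrap-around).
    dh = np.diff(f, axis=1)   # horizontal differences
    dv = np.diff(f, axis=0)   # vertical differences
    aniso = np.abs(dh).sum() + np.abs(dv).sum()
    # Isotropic TV: sum of the Euclidean lengths of the gradient vectors.
    iso = np.sqrt(dh[:-1, :] ** 2 + dv[:, :-1] ** 2).sum()
    return aniso, iso

f = np.zeros((4, 4))
f[1:3, 1:3] = 1.0             # small square "inclusion"
aniso, iso = tv_values(f)
print(aniso, iso)
```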
Numerical minimization of total variation regularization functionals is an active area of research, and the last ten years have seen tremendous progress. The method we discuss here is not the fastest. But hey, it works, and if you want something faster, check out the Flexible Primal-Dual Toolbox by Hendrik Dirks, or build upon the Chambolle-Pock algorithm whose Matlab implementation is offered by Gabriel Peyre here.
The trick behind our approach is to write
with non-negative vectors in the decomposition:
Now we can reduce our original unconstrained minimization problem into a standard QP form with both equality and inequality constraints. More precisely, we want to minimize
where 1 is the vector with all elements equal to one. This can be written in the QP form
Above we used these notations:
Note that in the new formulation the minimum point is searched in a 5n-dimensional space instead of the original n-dimensional situation, making the minimization problem harder. Well, you win some, you lose some.
The equality constraints:
The inequality constraints, including non-negativity of the attenuation coefficient:
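The formula images are missing above; the following is an educated reconstruction of the standard form of this trick, pieced together from the surrounding description (not copied from the original post):

```latex
L_H f = u_H^+ - u_H^-, \qquad L_V f = u_V^+ - u_V^-, \qquad
u_H^\pm \ge 0, \; u_V^\pm \ge 0,
```

so that $\|L_H f\|_1 + \|L_V f\|_1 = \mathbf{1}^T(u_H^+ + u_H^- + u_V^+ + u_V^-)$. Stacking the unknowns as $y = [f;\, u_H^+;\, u_H^-;\, u_V^+;\, u_V^-] \in \mathbb{R}^{5n}$ gives the QP

```latex
\min_{y \ge 0} \; y^T \begin{bmatrix} A^T A & 0 \\ 0 & 0 \end{bmatrix} y
+ \begin{bmatrix} -2 A^T m \\ \alpha \mathbf{1} \end{bmatrix}^T y
\quad \text{subject to} \quad
\begin{bmatrix} L_H & -I & I & 0 & 0 \\ L_V & 0 & 0 & -I & I \end{bmatrix} y = 0 .
```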
Exercise 1. Show that the two minimization problems above are equivalent.
OK, now it’s time to start computing. First run the routines tomo1_RadonMatrix_comp.m and tomo2_NoCrimeData_comp.m. Then set alpha to 0.1 in the routine tomo6_TV_comp.m and run it. You should see this:
On the other hand, if you choose a really large regularization parameter, for example take alpha to be 200, you get a picture like this:
Exercise 2. Write a for-loop where the number alpha ranges from 0.1 to 1000. You might want to use a logarithmic value vector such as 10.^[-1:.2:3]. For each value of alpha, calculate the relative error in the reconstruction using the Matlab commands
>> orig = RectanglePhantom(50);
>> rel_error = norm(recon(:)-orig(:))/norm(orig(:));
The computation of the reconstruction error is possible since in this simulated example we know the ground truth. In practical imaging we do not have that luxury. What is the alpha value that gives the minimal relative error?
Exercise 3. The maximal attenuation value in our digital phantom is 1. Modify the routine tomo6_TV_comp.m by adding a new inequality constraint limiting the reconstruction to values smaller than or equal to one (take care to apply the constraint only to the appropriate elements of the vector y). Then repeat the procedure of Exercise 2. What is the alpha value that gives the minimal relative error? Is the minimal error here smaller than in Exercise 2?
Summary: Tomography means reconstructing the internal structure of a physical body using X-ray images of the body taken from different directions. Mathematically, the problem is to recover a non-negative function f(x) from a collection of line integrals of f. Here we show how to solve the tomography problem using truncated singular value decomposition (TSVD), a basic reconstruction method for linear ill-posed inverse problems. We keep things simple by working at a low resolution (50×50 tomographic images) so that we can actually build the system matrix A and compute the singular value decomposition (SVD) of A.
Author: Samuli Siltanen.
Software: Matlab (probably works also with Octave).
Download package as zip file: SparseTomoTSVD
Literature: Jennifer L Mueller and Samuli Siltanen: Linear and nonlinear inverse problems with practical applications, SIAM 2012. Below we refer to this book by [MS2012].
We start by designing a digital test phantom. It consists of four rectangles with various X-ray attenuation coefficients. The attenuation is constant inside each rectangle and zero outside the rectangles. The Matlab routine RectanglePhantom.m constructs the phantom at resolution MxM; the number M is given as an argument. The phantom is designed so that if you increase the resolution by taking a larger M, the resulting pixel image gives a more accurate approximation of the same piecewise constant function defined in the unit square.
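The resolution-independence can be illustrated in Python/NumPy; the rectangles and attenuation values below are invented for the sketch and are not those of RectanglePhantom.m:

```python
import numpy as np

def rect_phantom(M, rects=((0.2, 0.6, 0.2, 0.5, 1.0),
                           (0.5, 0.8, 0.6, 0.9, 0.5))):
    # Each rectangle: (x0, x1, y0, y1, attenuation) in the unit square.
    # Sampling the same continuous function at pixel centers, any M.
    x = (np.arange(M) + 0.5) / M
    X, Y = np.meshgrid(x, x)
    f = np.zeros((M, M))
    for x0, x1, y0, y1, a in rects:
        f[(X >= x0) & (X < x1) & (Y >= y0) & (Y < y1)] += a
    return f

f50 = rect_phantom(50)
f100 = rect_phantom(100)
# The mean (integral over the unit square) agrees at both resolutions.
print(f50.mean(), f100.mean())
```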
You can take a look at the 50×50 phantom by running the routine RectanglePhantom_plot.m. It looks like this:
Next we construct the measurement matrix A for the linear tomographic model m=Af. Here the vertical vector f contains the pixel values of the unknown image numbered column-wise according to the Matlab standard (going from image to vertical vector in Matlab: f=f(:);). Since the phantom has size 50×50 = 2500, the matrix A has 2500 columns.
We consider 15 tomographic projection directions distributed with equal steps over 360 degrees.
We construct A in the following “brute-force” way:
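The code listing appears to be missing here. The brute-force idea is to apply the forward projector to every unit basis image and store each result as one column of A. A Python sketch with a toy projector (row and column sums) standing in for Matlab's radon:

```python
import numpy as np

def toy_project(im):
    # Toy stand-in for radon: stack row sums and column sums.
    return np.concatenate([im.sum(axis=0), im.sum(axis=1)])

M = 10
n = M * M
m_len = toy_project(np.zeros((M, M))).size
A = np.zeros((m_len, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = 1.0                                      # j-th unit basis vector
    # order='F' numbers pixels column-wise, as Matlab's f(:) does
    A[:, j] = toy_project(e.reshape(M, M, order='F'))

# By linearity, A applied to the vectorized image reproduces the projection.
f = np.arange(n, dtype=float).reshape(M, M)
print(np.allclose(A @ f.flatten(order='F'), toy_project(f)))  # True
```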
What is the number of rows in matrix A, or equivalently, the dimension of the measurement vector m? The answer comes from the internal workings of the “radon.m” routine in Matlab. For a 50×50 image, one projection direction has 75 data points. Since we have 15 projection directions, the length of the measurement vector m is 15*75=1125.
The Matlab routine for constructing the matrix A is called tomo1_RadonMatrix_comp.m. It also computes the SVD of A in the form A=UDV^T, where U and V are orthogonal matrices and D is a diagonal matrix with the nonnegative singular values located along the diagonal, ordered from largest to smallest. The matrices A, U, V and D are saved into the file RadonMatrix_50_15.mat.
You can take a look at the matrix using tomo1_RadonMatrix_plot.m. In this image you see the nonzero elements of the 1125×2500 matrix A marked as blue dots:
Now it is time to simulate tomographic data. The simplest way to do this in Matlab would be to write f=RectanglePhantom(50) and then drop f to vertical vector form with the command f=f(:). Then we could simulate data by writing m=A*f. However, this approach commits the so-called inverse crime, where the same measurement model is used both for data simulation and for reconstruction. Inverse crime leads to unrealistically good reconstructions and therefore yields unreliable results.
To avoid inverse crime, we introduce some modelling error to the data. We actually create a higher-resolution phantom f2=RectanglePhantom(100), simulate data from that, and downsample the data so that it corresponds to that of a 50×50 target. All of this is implemented in the file tomo2_NoCrimeData_comp.m. There you can also add your favourite amount of simulated white noise to the data by changing the parameter “noiselevel.”
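The same recipe in a Python sketch, again with a toy projector in place of radon. The pairwise averaging used for downsampling and the noise model are assumptions; tomo2_NoCrimeData_comp.m may differ in both:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_project(im):
    # Toy stand-in for the X-ray transform: row sums and column sums.
    return np.concatenate([im.sum(axis=0), im.sum(axis=1)])

f2 = rng.random((100, 100))                 # higher-resolution phantom
data_fine = toy_project(f2)                 # simulate with the finer model
# Downsample to the resolution of a 50x50 model by averaging
# adjacent detector bins (assumption about the downsampling scheme).
data = 0.5 * (data_fine[0::2] + data_fine[1::2])

noiselevel = 0.01                           # relative noise amplitude
noisy = data + noiselevel * np.max(np.abs(data)) * rng.standard_normal(data.size)
print(data.size, noisy.size)
```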
The noisy 75×15 sinogram is called mncn, where m stands for “measurement,” nc for “no crime,” and n for “noisy,” and saved to file tomo2_NoCrime_50_15.mat along with other useful variables.
Running tomo2_NoCrimeData_plot(50,15) gives you a plot comparing the inverse-crime and no-inverse-crime sinograms:
Exercise 1. Try what happens when the Moore-Penrose pseudoinverse approach is applied to the ill-posed problem of tomography. Write first
>> MPrecon = A\mncn(:);
to compute a least-squares solution, and then convert the result to image form by
>> MPrecon = reshape(MPrecon,50,50);
Then write
>> imagesc([RectanglePhantom(50),MPrecon])
and see if the result looks like the original phantom. If the picture looks strange, you can restrict the colour range by writing instead
>> imagesc([RectanglePhantom(50),MPrecon],[0,1])
Discuss what you see.
Exercise 2. Add a non-negativity constraint to the reconstruction approach of Exercise 1. Write
>> MPrecon_nn = lsqnonneg(A,mncn(:));
and then continue as in Exercise 1. Is the non-negative reconstruction better than the one in Exercise 1?
The above exercises demonstrated that solving ill-posed problems requires regularized solution techniques, such as TSVD. For the theory behind TSVD, see Chapter 4 of [MS2012]. This is the formula for TSVD:
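The formula image is missing above. With the SVD A = UDV^T, singular values d_1 ≥ d_2 ≥ … and columns u_j, v_j of U and V, the truncated SVD reconstruction presumably takes the standard form:

```latex
f_{\mathrm{TSVD}} \;=\; \sum_{j=1}^{r_\alpha} \frac{u_j^T m}{d_j}\, v_j .
```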
The idea of TSVD is to use only the first r_alpha singular values in the reconstruction. Here you can see a plot of the singular values of the matrix A:
So it seems that around index 800 the singular values start to get smaller, and somewhere near index 900 there is a sudden drop in their size. It is the huge size difference between the first and last singular values that causes the instability in the least-squares inversion discussed above.
We first try TSVD reconstruction with 20 largest singular values. This is done by running the routine tomo3_TSVD_comp.m with the parameter value r_alpha = 20. The reconstruction looks like this:
We can use more than 20 singular values. This is the result of the TSVD algorithm using the 100 largest singular values (with the parameter value r_alpha = 100):
However, if we use too many singular values, the reconstruction breaks down. Here is an example with 900 singular values:
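The breakdown is easy to reproduce on a small synthetic problem (a generic illustration, not the tomography matrix of this post): with rapidly decaying singular values, using all of them amplifies the noise enormously, while truncation keeps the error bounded.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a small ill-conditioned matrix with known, rapidly
# decaying singular values 1, 0.1, 0.01, ...
n = 12
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = 10.0 ** -np.arange(n)
A = U @ np.diag(d) @ V.T

f_true = rng.standard_normal(n)
m = A @ f_true + 1e-6 * rng.standard_normal(n)   # slightly noisy data

def tsvd_recon(A, m, r):
    # Truncated SVD reconstruction using the r largest singular values.
    U, d, Vt = np.linalg.svd(A)
    coeff = (U.T @ m)[:r] / d[:r]
    return Vt[:r].T @ coeff

err_truncated = np.linalg.norm(tsvd_recon(A, m, 4) - f_true)
err_full = np.linalg.norm(tsvd_recon(A, m, n) - f_true)
print(err_truncated < err_full)   # truncation wins on noisy data
```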
Exercise 3. Write a for-loop where the number r_alpha ranges from 1 to 1125 (r_alpha is the number of largest singular values used in the TSVD reconstruction). For each value of r_alpha, calculate the relative error in the reconstruction using the Matlab commands
>> orig = RectanglePhantom(50);
>> rel_error = norm(recon(:)-orig(:))/norm(orig(:));
The computation of the reconstruction error is possible since in this simulated example we know the ground truth. In practical imaging we do not have that luxury. What is the number of singular values that gives the minimal relative error? Plot the singular values of A and discuss the location where the reconstruction error is minimal.
Exercise 4. The columns of the matrix V are called singular vectors. They are the building blocks of any reconstruction using TSVD (why?). Here is how to plot the first singular vector as an image:
>> imagesc(reshape(V(:,1),50,50));
Plot a series of singular vectors in this way, for example columns 1, 100, 200, 300, …, of V. Discuss the change in the appearance of the singular vectors as you go on with the process. What does the change mean for the TSVD reconstructions?
Next we move beyond truncated SVD. Tikhonov regularization is a classical reconstruction methodology for ill-posed inverse problems. See the Wikipedia page and Chapter 5 of [MS2012]. One method for computing Tikhonov regularized solutions is filtering the singular values:
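The filter formula image is missing above. In the same SVD notation, Tikhonov regularization with parameter α presumably corresponds to replacing the factors 1/d_j of TSVD by filtered factors:

```latex
f_\alpha \;=\; \sum_{j} \frac{d_j}{d_j^2 + \alpha}\,(u_j^T m)\, v_j .
```

As α → 0 the factor tends to 1/d_j, while singular values with d_j² ≪ α are strongly damped.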
Exercise 5. Make a copy of the file tomo3_TSVD_comp.m and call it tomo4_Tikhonov_comp.m. Modify the file so that the resulting reconstruction is Tikhonov regularized and done with the above filtering of the singular values. Also, create a file tomo4_Tikhonov_plot.m that plots the reconstruction.
Exercise 6. Compute Tikhonov regularized solutions with a variety of values of the regularization parameter alpha. Which value of alpha gives the lowest relative error in the reconstruction? Is the relative error smaller or larger than the best one you got with TSVD?
Exercise 7. Make a video showing how the Tikhonov regularized reconstructions change as alpha ranges from very small values (minimum for example 0.0000001) to really large values (maximum for example 10000000). Make sure that each frame has the same colormap so that the values are comparable from frame to frame.
Further reading: Total Variation regularization for X-ray tomography.
Author of this post: Samuli Siltanen (samuli.siltanen “at” helsinki.fi)
This blog aims to help everyone interested in inverse problems and their computational solution. We offer open software and links to open datasets so that anyone can easily try different reconstruction methods on both simulated and measured data.
Inverse problems are about interpreting indirect measurements: there is an object we want to see or understand, but we cannot image or measure it directly. However, we do have measurement data that is related to the object of interest but needs further processing to extract the information. Inverse problems arise for example in medical imaging, underground prospecting, remote sensing and nondestructive testing.
A classical example of an inverse problem is medical X-ray tomography. Several two-dimensional X-ray images are taken of a patient along different projection directions. The inverse problem is to reconstruct the three-dimensional structure inside the patient from all those two-dimensional projection images. For more information about X-ray tomography, see Wikipedia and this page.
Another example is the nonlinear inverse problem of Electrical Impedance Tomography (EIT). There one feeds electric currents into a conductive body using electrodes, measures the resulting voltages, and aims to recover the electrical conductivity distribution inside the body. The EIT image formation problem is severely ill-posed, that is, sensitive to modelling errors and measurement noise. The image above shows a phantom measured at the University of Eastern Finland (left), a reconstruction using the D-bar method (center, computation by Andreas Hauptmann) and a reconstruction using Bayesian inversion (right, computation by Ville Kolehmainen). The EIT data is openly available at the page https://www.fips.fi/EIT_dataset.php, and we will later publish instructions in the FIPS Computational Blog (https://blog.fips.fi/) showing how to reconstruct the phantom. For more information about EIT, see Wikipedia and this page.
We hope you enjoy the blog and find it useful! Please send feedback either by commenting on the blog posts or by sending email to the address samuli.siltanen “at” helsinki.fi.
The Finnish Inverse Problems Society (FIPS) wants to increase awareness and technical skills about inverse problems worldwide. This blog is part of the public outreach efforts of FIPS.