**Summary:** Tomography means reconstructing the internal structure of a physical body using X-ray images of the body taken from different directions. Mathematically, the problem is to recover a non-negative function f(x) from a collection of line integrals of f. Here we show how to solve the tomography problem using Total Variation (TV) regularization, a reconstruction method for ill-posed inverse problems introduced by Rudin, Osher and Fatemi in 1992. TV regularization is edge-preserving: it favors piecewise smooth reconstructions.

**Author: Samuli Siltanen.**

**Software:** Matlab (probably works also with Octave).

**Download** package as zip file: **SparseTomoTV**

**Literature:** Jennifer L Mueller and Samuli Siltanen: *Linear and nonlinear inverse problems with practical applications,* SIAM 2012.

Please read first the post **Simple simulation of X-ray tomography**. There you will see how to simulate tomographic data of a simple digital phantom in a way that avoids the dreaded *inverse crime*.

In this post we study the following Total Variation (TV) regularization. We look for the minimizer of this penalty functional:

$$\|Af - m\|_2^2 + \alpha\left(\|L_H f\|_1 + \|L_V f\|_1\right),$$

where $A$ is the tomography matrix, $m$ is the measured data, and $\alpha>0$ is the regularization parameter.

Here L_H and L_V are matrices implementing horizontal and vertical differences of neighboring pixels. This is called *anisotropic TV* since the horizontal and vertical derivatives are treated separately. It is more common to use *isotropic TV* based on the 1-norm of the Euclidean length of the gradient of f. We discuss the anisotropic form because we can rewrite the functional in a standard Quadratic Programming (QP) form using a simple trick.
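To make the difference matrices concrete, here is a small sketch (in Python with scipy rather than Matlab; the Kronecker-product construction is one standard way to build such matrices for a column-major pixel vector, and is an illustration rather than the construction used in the downloadable package):

```python
import numpy as np
import scipy.sparse as sp

def diff_matrices(n):
    """Horizontal and vertical first-difference matrices for an n-by-n image
    stored as a column-major vector of length n^2 (Matlab-style f(:))."""
    D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    I = sp.identity(n)
    L_V = sp.kron(I, D)   # differences of vertically neighboring pixels
    L_H = sp.kron(D, I)   # differences of horizontally neighboring pixels
    return L_H, L_V

def anisotropic_tv(f, L_H, L_V):
    """Anisotropic total variation ||L_H f||_1 + ||L_V f||_1."""
    return np.abs(L_H @ f).sum() + np.abs(L_V @ f).sum()

n = 4
L_H, L_V = diff_matrices(n)
flat = np.ones(n * n)                            # constant image: TV = 0
step = np.kron([0.0, 1.0], np.ones(2))           # column values 0,0,1,1
img = np.tile(step, (n, 1)).flatten(order="F")   # vertical edge of height 1
print(anisotropic_tv(flat, L_H, L_V))  # 0.0
print(anisotropic_tv(img, L_H, L_V))   # n jumps of size 1 -> 4.0
```

A constant image has zero TV, while a sharp vertical edge contributes one unit per row it crosses; that is exactly the edge-preserving behavior mentioned in the summary.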

Numerical minimization of total variation regularization functionals is an active area of research, and the last ten years have seen tremendous progress. The method we discuss here is not the fastest. But hey, it works, and if you want something faster, check out the **Flexible Primal-Dual Toolbox** by Hendrik Dirks, or build upon the Chambolle-Pock algorithm whose Matlab implementation is offered by Gabriel Peyre **here**.

The trick behind our approach is to write

$$L_H f = u_H^{+} - u_H^{-}, \qquad L_V f = u_V^{+} - u_V^{-}$$

with non-negative vectors $u_H^{\pm} \geq 0$, $u_V^{\pm} \geq 0$ in the decomposition. Since $\alpha > 0$, at the minimum each entry has at most one nonzero part, and then $\|L_H f\|_1 = \mathbf{1}^T(u_H^{+} + u_H^{-})$, and similarly for the vertical differences.
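The decomposition is just the split of each entry into its positive and negative parts. A quick numerical check of the two identities behind the trick (Python sketch, variable names mine):

```python
import numpy as np

v = np.array([3.0, -2.0, 0.0, -5.0])
v_plus = np.maximum(v, 0.0)    # positive part
v_minus = np.maximum(-v, 0.0)  # negative part

# The two identities behind the trick:
assert np.allclose(v_plus - v_minus, v)          # v = v+ - v-
assert np.allclose(v_plus + v_minus, np.abs(v))  # |v| = v+ + v- (elementwise)

# Hence the 1-norm becomes a linear expression in (v+, v-):
print((v_plus + v_minus).sum())  # ||v||_1 = 10.0
```

Note that the minimizer could in principle pick any split $v = u^{+} - u^{-}$ with $u^{\pm} \geq 0$, but because the penalty charges for $\mathbf{1}^T(u^{+} + u^{-})$, the cheapest feasible split is the disjoint-support one shown above, for which the penalty equals the 1-norm.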

Now we can reduce our original unconstrained minimization problem into a standard QP form *with both equality and inequality constraints*. More precisely, we want to minimize

$$\|Af - m\|_2^2 + \alpha\,\mathbf{1}^T\left(u_H^{+} + u_H^{-} + u_V^{+} + u_V^{-}\right),$$

where **1** is the vector with all elements equal to one. This can be written in the QP form

$$\min_y\ \tfrac{1}{2}\,y^T H y + c^T y.$$

Above we used these notations:

$$y = \begin{bmatrix} f \\ u_H^{+} \\ u_H^{-} \\ u_V^{+} \\ u_V^{-} \end{bmatrix}, \qquad H = \begin{bmatrix} 2A^T A & 0 \\ 0 & 0 \end{bmatrix}, \qquad c = \begin{bmatrix} -2A^T m \\ \alpha\mathbf{1} \\ \alpha\mathbf{1} \\ \alpha\mathbf{1} \\ \alpha\mathbf{1} \end{bmatrix}.$$

(The QP objective equals the functional above up to the constant $\|m\|_2^2$, which does not affect the minimizer.)
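The agreement between the two objectives can be checked numerically. The following Python sketch uses toy sizes and random stand-in matrices (all names and sizes are mine, for illustration only), and verifies that the QP objective matches the TV functional up to the constant ‖m‖²:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 4                        # toy sizes: n pixels, k measurements
A = rng.standard_normal((k, n))
m = rng.standard_normal(k)
alpha = 0.7
L_H = rng.standard_normal((n, n))  # stand-ins for the difference matrices
L_V = rng.standard_normal((n, n))

f = rng.standard_normal(n)
# disjoint-support decomposition of L_H f and L_V f
uHp, uHm = np.maximum(L_H @ f, 0), np.maximum(-(L_H @ f), 0)
uVp, uVm = np.maximum(L_V @ f, 0), np.maximum(-(L_V @ f), 0)

y = np.concatenate([f, uHp, uHm, uVp, uVm])   # the 5n-dimensional variable
H = np.zeros((5 * n, 5 * n))
H[:n, :n] = 2 * A.T @ A                       # quadratic term acts on f only
c = np.concatenate([-2 * A.T @ m, alpha * np.ones(4 * n)])

qp_value = 0.5 * y @ H @ y + c @ y
tv_value = np.sum((A @ f - m) ** 2) + alpha * (
    np.abs(L_H @ f).sum() + np.abs(L_V @ f).sum())
# the two differ only by the constant ||m||^2
assert np.isclose(qp_value + m @ m, tv_value)
```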

Note that in the new formulation the minimum point is sought in a *5n-dimensional space* instead of the original n-dimensional one, making the minimization problem harder. Well, you win some, you lose some.

The equality constraints:

$$\begin{bmatrix} L_H & -I & I & 0 & 0 \\ L_V & 0 & 0 & -I & I \end{bmatrix} y = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

The inequality constraints, including non-negativity of the attenuation coefficient:

$$y \geq 0, \qquad \text{that is,} \quad f \geq 0 \ \text{ and } \ u_H^{\pm} \geq 0,\ u_V^{\pm} \geq 0.$$
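In Matlab these constraints would typically be handed to quadprog as Aeq with beq = 0 and as a lower bound lb = 0. The Python sketch below (random stand-in matrices, names mine) assembles the equality-constraint block matrix and checks that a decomposition built from an arbitrary non-negative f is feasible:

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
L_H = rng.standard_normal((n, n))   # stand-ins for the true difference matrices
L_V = rng.standard_normal((n, n))
I, O = np.eye(n), np.zeros((n, n))

# Equality constraints  L_H f = uH+ - uH-,  L_V f = uV+ - uV-,  i.e.  Aeq y = 0:
Aeq = np.block([[L_H, -I, I, O, O],
                [L_V, O, O, -I, I]])

# A feasible point: take any f >= 0 and split its differences into +/- parts.
f = rng.random(n)
uHp, uHm = np.maximum(L_H @ f, 0), np.maximum(-(L_H @ f), 0)
uVp, uVm = np.maximum(L_V @ f, 0), np.maximum(-(L_V @ f), 0)
y = np.concatenate([f, uHp, uHm, uVp, uVm])

assert np.allclose(Aeq @ y, 0)   # equality constraints hold
assert np.all(y >= 0)            # inequality constraints y >= 0 hold
```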

### Exercise 1:

*Show that the two minimization problems above are equivalent.*

OK, now it’s time to start computing. Run first the routines *tomo1_RadonMatrix_comp.m* and *tomo2_NoCrimeData_comp.m*. Then set the value of alpha to 0.1 in the routine *tomo6_TV_comp.m* and run it. You should see this:

On the other hand, if you choose a really large regularization parameter, for example take alpha to be 200, you get a picture like this:

### Exercise 2:

*Write a for-loop where the number alpha ranges from 0.1 to 1000. You might want to use a logarithmic value vector such as 10.^[-1:.2:3]. For each value of alpha, calculate the relative error in the reconstruction using the Matlab commands*

*>> orig = RectanglePhantom(50);*

*>> rel_error = norm(recon(:)-orig(:))/norm(orig(:));*

*The computation of the reconstruction error is possible since in this simulated example we know the ground truth. In practical imaging we do not have that luxury. What is the alpha value that gives the minimal relative error?*
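The loop structure of the exercise can be sketched as follows. This is a Python illustration with a toy Tikhonov solve standing in for the actual TV reconstruction step (which in the exercise is produced by *tomo6_TV_comp.m*); the ground truth and matrix here are random, so the resulting numbers mean nothing beyond demonstrating the sweep:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
orig = rng.random(10)                       # known ground truth
m = A @ orig + 0.05 * rng.standard_normal(30)

alphas = 10.0 ** np.linspace(-1, 3, 21)     # Matlab's 10.^[-1:.2:3]
errors = []
for alpha in alphas:
    # placeholder reconstruction: Tikhonov solve (A'A + alpha I) recon = A'm;
    # in the exercise this step is the TV reconstruction instead
    recon = np.linalg.solve(A.T @ A + alpha * np.eye(10), A.T @ m)
    errors.append(np.linalg.norm(recon - orig) / np.linalg.norm(orig))

best = alphas[int(np.argmin(errors))]
print(f"best alpha = {best:.3g}, rel. error = {min(errors):.3f}")
```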

### Exercise 3:

*The maximal attenuation value in our digital phantom is 1. Modify the routine tomo6_TV_comp.m by adding a new inequality constraint limiting the reconstruction to values smaller than or equal to one (take care to apply the constraint only to the appropriate elements of vector y). Then repeat the procedure of Exercise 2. What is the alpha value that gives the minimal relative error? Is the minimal error here smaller than in Exercise 2?*
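As a hint on the bookkeeping (not a full solution): only the first n elements of y correspond to the attenuation f, so the upper bound must act on those alone. A Python sketch of building the bound vectors, assuming a 50-by-50 phantom so n = 2500:

```python
import numpy as np

n = 2500                      # 50 x 50 phantom, pixels stacked as f(:)
lb = np.zeros(5 * n)          # y >= 0, as before
ub = np.full(5 * n, np.inf)   # no upper bound on the difference parts...
ub[:n] = 1.0                  # ...but f <= 1 on the first n elements only

print(lb.shape, ub.shape)
```

In Matlab the corresponding vectors would be passed to quadprog as the lb and ub arguments.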
