As explained in Section 2, diffuse fluorescence tomography can be treated as a linear problem in the limit that the absorption associated with the fluorophores is small relative to the absorption caused by tissue chromophores such as hemoglobin.

For each combination of FEM mesh and detection geometry, a Jacobian matrix *A* is calculated from solutions to the diffusion equation; see Eq. (8). Then, using a basis for the kernel of *A* obtained from the SVD, the projection of *μ*_{a,f} onto the kernel is computed. As explained below, this projection is directly associated with information loss that cannot be recovered using the standard least-squares regularization techniques presented in this work. This nonrecoverable portion of *μ*_{a,f}, i.e., the projection of *μ*_{a,f} onto the kernel of the Jacobian, is displayed in the bottom row of the corresponding figures, whereas the portion of *μ*_{a,f} that lies in the orthogonal complement of the kernel (i.e., information that is in principle available for image reconstruction using the standard techniques described below in Section 4) is displayed in the middle row.

It is remarked that the portion of *μ*_{a,f} that lies in the orthogonal complement of the kernel (i.e., the middle row of the corresponding figures) could also be obtained by computing a data vector *y*_{data} = *Aμ*_{a,f} and then performing image reconstruction via the SVD on this simulated data vector. The projection of *μ*_{a,f} onto the kernel (i.e., the information loss) could then be computed as the difference between the original *μ*_{a,f} and the reconstruction. Note that this last statement applies only when no errors are introduced in the data vector prior to SVD reconstruction. For example, using different meshes for forward modeling and for SVD image reconstruction would lead to an additional type of information loss due to the propagation of discretization and model-mismatch errors into the reconstructed images.
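The two equivalent routes to the kernel projection described above can be sketched numerically. The following is a minimal illustration in NumPy, assuming a random matrix as a stand-in for the Jacobian *A* (20 measurements, 60 nodes, i.e., underdetermined) and a random vector for *μ*_{a,f}; the SVD supplies an orthonormal basis of Ker(*A*), and the minimum-norm reconstruction from noise-free data recovers exactly the complementary part:

```python
import numpy as np

# Toy stand-in for the Jacobian A and for mu_{a,f} (hypothetical sizes).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))
mu = rng.standard_normal(60)

# Rows of Vt beyond the rank of A form an orthonormal basis of Ker(A).
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > s[0] * 1e-12))      # numerical rank
V_ker = Vt[r:].T

mu_ker = V_ker @ (V_ker.T @ mu)        # nonrecoverable part (kernel projection)
mu_rec = np.linalg.pinv(A) @ (A @ mu)  # SVD reconstruction from y_data = A mu

# Noise-free reconstruction recovers exactly mu minus its kernel projection,
# so the information loss equals the difference mu - mu_rec.
```

Both `mu - mu_rec` and `mu_ker` coincide here, confirming that the difference between the original vector and its reconstruction is precisely the kernel projection (in the error-free case only).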

In order to quantify the amount of information lost through projection onto the kernel, we show, as a function of mesh density, the scaled error between the target image and the recovered image, computed using the equation

‖*μ*_{a,f} − *μ̂*_{a,f}‖ / ‖*μ*_{a,f}‖,

where *μ̂*_{a,f} denotes the reconstructed image, so that *μ*_{a,f} − *μ̂*_{a,f} is the projection of *μ*_{a,f} onto the kernel of *A*. The norm ‖·‖ is the standard Euclidean norm, that is, the square root of the sum of the squares of the values at all nodes. The mesh density (in units of nodes per millimeter) is calculated as the square root of the number of nodes divided by the area of the phantom.

As proposed here, the displayed errors thus measure the size of the nonrecoverable information as a proportion of the size of the true solution.
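The scaled-error metric above can be sketched directly. In this illustration (again with a hypothetical random stand-in for the Jacobian), the noise-free minimum-norm reconstruction recovers only the component of the true vector orthogonal to the kernel, so the scaled error equals the relative size of the kernel projection:

```python
import numpy as np

def scaled_error(mu_true, mu_rec):
    """Euclidean norm of the error, scaled by the norm of the target image."""
    return np.linalg.norm(mu_true - mu_rec) / np.linalg.norm(mu_true)

# Toy underdetermined setup: 30 measurements, 90 nodes (assumed sizes).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 90))
mu_true = rng.standard_normal(90)

# Reconstruction from noise-free data; the residual is the kernel part.
mu_rec = np.linalg.pinv(A) @ (A @ mu_true)
mu_ker = mu_true - mu_rec
```

The quantity `scaled_error(mu_true, mu_rec)` then coincides with `norm(mu_ker) / norm(mu_true)`, the proportion of the true solution that is nonrecoverable.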

Our numerical experiments, for those imaging geometries where the number of measurements is small compared to the number of nodes in the mesh, show that the forward model *A* in FT projects a significant portion of the optical-properties vector onto its kernel, and that this projection grows as the spatial resolution of the light-transport mesh increases. This projection, regarded as a vector *ν*_{0} ∈ Ker(*A*) in the canonical decomposition *μ* = *ν*_{0} + *ν*^{⊥}, has a large relative magnitude (often between 1/3 and 1/2 of the optical-properties vector) and is unrecoverable via certain canonical regularization reconstruction methods. We therefore term this projection "information loss." Our numerical experiments using a diffuse light-propagation model provide explicit examples demonstrating this loss of information in actual fluorescence reconstructions using an imaging geometry similar to what is commonly used in small-animal FMT [17]. We note that our numerical experiments also include detection geometries with a large enough number of source–detector pairs, and a small enough number of nodes in the FEM mesh, that the reconstruction problem is no longer underdetermined. These correspond to the portions of the graphs that are flat and lie on the horizontal axis; i.e., no reconstruction error is observed for these experiments. We have included these experiments in this exposition to demonstrate that, in order to increase the resolution in FT reconstruction, not only is a fine mesh necessary, but in addition the detection geometry must provide a large enough number of measurements to avoid information loss through projection into the kernel.

In what follows we examine mathematically the meaning of the projection to the kernel of the forward model and explain why this projection results in information loss. By analyzing certain standard regularization methods, we demonstrate that projection into the kernel of the FT forward operator always results in reconstructed images where the information contained in the kernel has been lost and thus cannot be recovered.

We begin with the following elementary example. A restricted form of the underdetermined least-squares inversion method can be formulated as follows: for a measurement vector *y* ∈ Ran(*A*), the so-called least-squares solution *μ*_{LS} is given by

*μ*_{LS} = arg min { ‖*μ*‖² : *Aμ* = *y* };

that is, out of the collection of all possible solutions of *Aμ* = *y*, we choose for the solution *μ*_{LS} the one that minimizes the square of the norm. Suppose now that a particular true optical-properties distribution *μ*_{true} is known; that is, "true" has the meaning that, for a particular domain, the optical properties have been exactly measured. Under the forward model *A* the observation *y*_{meas} = *Aμ*_{true} is calculated, and then, to test the efficacy of the least-squares inversion method under the assumption of this model *A*, a reconstruction is performed by calculating the least-squares solution *μ*_{LS} from the observation *y*_{meas}. Observe first that *μ*_{true} decomposes uniquely via the direct sum [Section 2.C, Eq. (9)] into *μ*_{true} = *μ*_{true,0} + *μ*_{true}^{⊥}, where *μ*_{true,0} lies in the kernel of *A* and *μ*_{true}^{⊥} is in the orthogonal complement of the kernel of *A*. Likewise, observe that *any* solution *μ* satisfying *Aμ* = *y*_{meas} must be of the form *μ* = *μ*_{0} + *μ*^{⊥}, where *μ*^{⊥} is in Ker(*A*)^{⊥} and *μ*_{0} is in Ker(*A*). Note that, for any choice of solution *μ* in the set of all possible solutions, the projection of *μ* into Ker(*A*)^{⊥} must be identical to *μ*_{true}^{⊥}, for otherwise *A* would not act in a one-to-one fashion on Ker(*A*)^{⊥}. Using the fact that *μ*_{0} and *μ*_{true}^{⊥} are orthogonal to each other for any choice of *μ*_{0} ∈ Ker(*A*), among all such solutions *μ* the minimum-norm solution is *μ*_{LS} = *μ*_{true}^{⊥}. It is thus observed that the projection *μ*_{true,0} of *μ*_{true} onto the kernel of *A* cannot be reconstructed in this setting, and in this sense *μ*_{true,0} must be considered information that is "lost" by the forward model *A* under this form of the method of least squares.
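The elementary example can be checked numerically. Assuming a random stand-in for *A* (25 measurements, 75 unknowns), the minimum-norm least-squares solution computed from noise-free data coincides with *μ*_{true}^{⊥}, and the discarded component is exactly the kernel part *μ*_{true,0}:

```python
import numpy as np

# Hypothetical underdetermined forward model and true distribution.
rng = np.random.default_rng(3)
A = rng.standard_normal((25, 75))
mu_true = rng.standard_normal(75)

# Orthogonal decomposition mu_true = mu_0 + mu_perp via the pseudoinverse:
mu_perp = np.linalg.pinv(A) @ (A @ mu_true)   # component in Ker(A)^perp
mu_0 = mu_true - mu_perp                      # component in Ker(A)

# lstsq returns the minimum-norm solution of A mu = y_meas, which is
# exactly mu_perp; the kernel component mu_0 is the "lost" information.
y_meas = A @ mu_true
mu_ls = np.linalg.lstsq(A, y_meas, rcond=None)[0]
```

Note that `numpy.linalg.lstsq` returns the minimum-norm solution for underdetermined systems, matching the restricted least-squares formulation above.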

In fact, for a given forward model, the information associated with vectors contained in the kernel cannot be retrieved under several common choices of regularization. In support of this assertion, we consider a number of standard regularization techniques and demonstrate the phenomenon of information loss in optical tomography from first principles.

A. Tikhonov Regularization

More generally, let *A*: ℝ^{N_ν} → ℝ^{N_m} be a linear transformation. The method of Tikhonov regularization (see [29,30]) seeks to mediate between fitting the observed measurements *y*_{meas} and fidelity to prior knowledge of some set of characteristics (e.g., size or smoothness) of the true solution *μ*. In its simplest form, Tikhonov regularization, for *α* ∈ (0, ∞), is the solution *μ*_{α} of the operator equation (Theorem 2.5, [30])

(*A*^{T}*A* + *α**I*)*μ*_{α} = *A*^{T}*y*_{meas}, (10)

which is derived as the solution to the minimization problem

*μ*_{α} = arg min_{μ} ‖*Aμ* − *y*_{meas}‖² + *α*‖*μ*‖². (11)

We work with Eq. (10), which can be rearranged as

*μ*_{α} = *A*^{T}[(*y*_{meas} − *Aμ*_{α})/*α*]. (12)

Recall that the range of the transpose *A*^{T} is the orthogonal complement of the kernel of *A* (e.g., see [30], Appendix A). Thus, the right-hand side of Eq. (12) is an element of the orthogonal complement of the kernel of *A*, implying that *μ*_{α} must lie in Ker(*A*)^{⊥}. Thus, Tikhonov regularization, as realized in Eq. (11), cannot reconstruct that part of any solution that lies in the kernel.
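This property is easy to verify numerically. The sketch below, assuming a random stand-in Jacobian and data vector, solves the Tikhonov operator equation directly and checks that the solution has no component along an SVD basis of the kernel:

```python
import numpy as np

# Hypothetical underdetermined model (20 x 50) and measurement vector.
rng = np.random.default_rng(4)
A = rng.standard_normal((20, 50))
y_meas = rng.standard_normal(20)
alpha = 1e-2                       # illustrative regularization parameter

# Tikhonov solution of (A^T A + alpha I) mu_alpha = A^T y_meas.
n = A.shape[1]
mu_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_meas)

# mu_alpha lies in Ran(A^T) = Ker(A)^perp: its coefficients against an
# orthonormal basis of Ker(A) are numerically zero.
U, s, Vt = np.linalg.svd(A)
V_ker = Vt[len(s):].T              # A has full row rank here, dim Ker(A) = 30
```

The check holds for any *α* > 0; the regularization parameter changes which element of Ker(*A*)^{⊥} is selected but never introduces a kernel component.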

Two more examples of popular reconstruction techniques that produce solutions in the orthogonal complement of the kernel of *A* are now provided.

B. Truncated Singular Value Decomposition

A brief overview of the SVD of the matrix *A* has been presented in Subsection 2.C.2. The SVD can also be used as a reconstruction technique. This method can be summarized as one that uses geometric information about the matrix *A* to project the measurement vector *y*_{meas} into the range of *A*. One then solves the equation *Aμ* = Proj(*y*_{meas}) in a chosen subspace of the orthogonal complement of the kernel of *A* and uses this value as the reconstructed solution of *Aμ* = *y*_{meas}.

A short description of this method is provided. Recall from our previous discussion of the SVD of *A* that there exist orthogonal matrices *U* and *V* and a diagonal matrix Σ such that

*A* = *U*Σ*V*^{T}, (13)

with

- *U* ∈ ℝ^{N_m×N_m} and *V* ∈ ℝ^{N_ν×N_ν};
- Σ ∈ ℝ^{N_m×N_ν} is a matrix in diagonal form whose diagonal elements satisfy σ_{1} ≥ σ_{2} ≥ … ≥ σ_{p} > 0.

The focus remains on the underdetermined case (*N*_{m} < *N*_{ν}). Thus, the index value *p*, which denotes the position of the smallest nonzero singular value σ_{p}, is strictly less than *N*_{ν}; for ease of exposition we will assume that *p* = *N*_{m}. Via the Rank Theorem (Theorem 2.1), the kernel of *A* is a nontrivial subspace of ℝ^{N_ν}. In the general implementation of the truncated singular value inversion method, an index *k* ≤ *p* is chosen that denotes the number of positive singular values to be used in the reconstruction; once again, for ease of exposition, we choose *k* = *p*. Since the columns of *U* form a basis for ℝ^{N_m}, we have *y*_{meas} = ∑_{i=1}^{N_m} ⟨*y*_{meas}, *u*_{i}⟩ *u*_{i}. A consequence of Eq. (13) is that *Aν*_{j} = σ_{j}*u*_{j} and *A*^{T}*u*_{i} = σ_{i}*ν*_{i} (see Corollary 6.2 in Appendix A). Expressing *μ* in terms of the basis {*ν*_{j}} and *y*_{meas} in terms of the basis {*u*_{i}}, and taking the inner product of both sides of *Aμ* = *y*_{meas} with *u*_{i}, it is found that the truncated singular value solution is

*μ*_{TSVD} = ∑_{i=1}^{p} (⟨*y*_{meas}, *u*_{i}⟩/σ_{i}) *ν*_{i},

which is in the orthogonal complement of the kernel of *A* since, as observed in Proposition 6.1 in Appendix A, {*ν*_{p+1}, …, *ν*_{N_ν}} is a basis of the kernel of *A*.
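The truncated singular value solution can be sketched in a few lines. In this illustration (random stand-in Jacobian, noise-free consistent data), the choice *k* = *p* is made as in the exposition above; choosing a smaller *k* would simply truncate further:

```python
import numpy as np

# Hypothetical underdetermined model (15 x 45) with consistent data.
rng = np.random.default_rng(5)
A = rng.standard_normal((15, 45))
y_meas = A @ rng.standard_normal(45)

# mu_tsvd = sum_{i=1}^{k} (<y_meas, u_i> / sigma_i) v_i.  For ease of
# exposition, k = p (all positive singular values are retained).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = len(s)
mu_tsvd = Vt[:k].T @ ((U[:, :k].T @ y_meas) / s[:k])

# The solution is a combination of v_1, ..., v_k only, so it lies in
# Ker(A)^perp: the kernel basis vectors v_{p+1}, ..., v_{N_nu} never appear.
```

With *k* = *p* and consistent data, this reproduces the minimum-norm solution, again containing no kernel component.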

C. Landweber–Fridman Iteration

Landweber–Fridman iteration is a reconstruction scheme based on fixed-point iteration. The key point is that a contracting map on a closed and complete space has a unique fixed point, and the computed fixed point is then taken to be the reconstruction. Using the normal equations

*A*^{T}*Aμ* = *A*^{T}*y*_{meas}

on *y*_{meas} = *Aμ*, where *P* is the projection of *y*_{meas} onto the range of *A* in ℝ^{N_m}, one can show that the affine map

*T*(*μ*) = *μ* + *β**A*^{T}(*P y*_{meas} − *Aμ*)

is contracting for any choice of *β* such that 0 < *β* < 2/σ_{1}², where σ_{1} is the largest singular value in the SVD of *A*. If we start at *μ*^{0} = 0, it is an easy observation that *μ*_{k+1} = *T*(*μ*_{k}) is in Ker(*A*)^{⊥} for all *k*. As Ker(*A*)^{⊥} is a vector subspace of ℝ^{N_ν}, the fixed point will be in Ker(*A*)^{⊥}.
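The iteration is easily sketched. With a random stand-in Jacobian and consistent data (so the projection *P* acts as the identity on *y*_{meas}), every iterate starting from zero is a sum of vectors in Ran(*A*^{T}) = Ker(*A*)^{⊥}, and the limit is the minimum-norm solution:

```python
import numpy as np

# Hypothetical underdetermined model (10 x 30) with consistent data.
rng = np.random.default_rng(6)
A = rng.standard_normal((10, 30))
y_meas = A @ rng.standard_normal(30)

# Step size within the convergence range 0 < beta < 2 / sigma_1^2.
sigma1 = np.linalg.svd(A, compute_uv=False)[0]
beta = 1.0 / sigma1**2

# Landweber-Fridman: mu_{k+1} = mu_k + beta * A^T (y_meas - A mu_k).
mu = np.zeros(30)                  # start at mu^0 = 0
for _ in range(20000):
    mu = mu + beta * (A.T @ (y_meas - A @ mu))

# Each update adds an element of Ran(A^T) = Ker(A)^perp, so the kernel
# component of mu stays zero and the limit is the minimum-norm solution.
mu_min_norm = np.linalg.pinv(A) @ y_meas
```

The iteration count here is generous for illustration; in practice convergence is monitored via the residual ‖*Aμ*_{k} − *y*_{meas}‖.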

The above reconstruction methods do not represent a complete list of all methods for which the projection onto the kernel is unrecoverable; another example is Kaczmarz iteration as implemented in the algebraic reconstruction technique [30].