In biosciences, fluorescence microscopy is an extremely useful and important method for studying living organisms. As one of the implementations of fluorescence microscopy, confocal fluorescence microscopy can be used to study live cells and analyse the response of the cells to external stimuli. Confocal microscopy has several advantages over traditional widefield microscopy. The main advantage is the ability to produce in-focus images of thick specimens via elimination or reduction of background information outside of the focal plane and the ability to control the depth of field (within the accuracy of an Airy disk size) (Inoué, 2006). Despite the advantages over widefield microscopy, confocal images contain imperfections, for example, aberrations due to a nonideal optical pathway, residual out-of-focus light, noise from detector electronics, etc. (Shaw, 2006).

In this paper we focus on enhancement of microscope images by deconvolution (Cannell *et al.*, 2006). Each microscope alters the appearance of specimens in a specific way. Image formation can be described by the mathematical operation of convolution, where the ‘true’ image is convolved with the distortion effects of the microscope. Deconvolution is a method to reverse the aberrations caused by convolution, that is, to remove the distortions of the optical train and the contributions from out-of-focus objects, and, with regularization enabled, to reduce the noise originating from detector electronics. Deconvolution takes into account the microscope optics and the nature of the noise. Therefore, it is a method that can efficiently enhance both widefield and confocal microscopy images. It can considerably improve image contrast and reduce noise in microscope images.
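As a rough illustration of this forward model (the square object, Gaussian stand-in PSF and image sizes below are invented for the example; a real PSF is determined by the microscope optics), image formation can be sketched with NumPy/SciPy as:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# 'True' object: a bright square on a dark background (invented example data)
true_img = np.zeros((64, 64))
true_img[28:36, 28:36] = 100.0

# Stand-in PSF: a normalized Gaussian blob
x = np.arange(-7, 8)
g = np.exp(-x**2 / (2 * 2.0**2))
psf = np.outer(g, g)
psf /= psf.sum()                 # PSF integrates to one

# Image formation: convolution with the PSF, then photon-counting noise
blurred = np.clip(fftconvolve(true_img, psf, mode="same"), 0, None)
observed = rng.poisson(blurred).astype(float)
```

Deconvolution then attempts to recover `true_img` from `observed` given knowledge of `psf`.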

Several deconvolution algorithms have been proposed for three-dimensional (3D) microscopy: noniterative algorithms such as the regularized inverse-filtering algorithm (Preza *et al.*, 1992), the nearest-neighbour algorithm and the Wiener filtering algorithm (Shaw & Rawlins, 1991a); iterative algorithms such as the Richardson–Lucy (RL) algorithm (Richardson, 1972; Lucy, 1974), the Jansson–van Cittert algorithm (Agard, 1984; Abdelhak & Sedki, 1992), the Carrington algorithm (Carrington *et al.*, 1995), the constrained Tikhonov–Miller algorithm (van Kempen *et al.*, 1997), the Fourier-wavelet regularized algorithm (Neelamani *et al.*, 2004) and the expectation maximization algorithm (Conchello, 1998; Preza & Conchello, 2004); and blind deconvolution algorithms (Holmes, 1992; Avinash, 1996; Markham & Conchello, 1999). Noniterative methods are usually the fastest but do not provide optimal image quality, especially in the presence of noise (Cannell *et al.*, 2006). The particular choice of deconvolution algorithm depends on the user's requirements (should the deconvolved image be pleasant to the viewer's eye or be quantitatively as correct as possible) and on computational resources and limitations (Cannell *et al.*, 2006; Sun *et al.*, 2009).

In this paper, we analyse the RL iterative algorithm, which is derived for Poisson noise (Richardson, 1972; Lucy, 1974). The assumption of Poisson noise is adequate for confocal microscopes because they use photodetection devices such as avalanche photodiodes to count the photons emitted from specimens. Because of the quantum nature of light, the number of detected photons is a Poisson process whose variance is equal to the mean of the counted photons.

The RL algorithm is commonly used for telescope and microscope image enhancement (Dey *et al.*, 2006). An undesired property of the RL algorithm is that, in the presence of noise, the deconvolution process converges to a solution dominated by the noise (Dey *et al.*, 2004). One option to circumvent this is to prefilter the images (Cannell *et al.*, 2006). Another is to introduce regularization terms such as Tikhonov–Miller (van Kempen & van Vliet, 2000) or maximum entropy (de Monvel *et al.*, 2001, 2003) into the RL algorithm. Algorithms based on Tikhonov–Miller regularization are often used for deconvolving 3D images; they avoid noise amplification but perform poorly near object edges. Alternatively, to increase the sharpness of object borders and obtain smooth homogeneous areas, total variation (TV) regularization is often applied in the RL algorithm (Dey *et al.*, 2004). However, regularization terms contain unknown parameters that must be carefully chosen to achieve deconvolution results that are as close as possible to the ‘true’ image. Some regularized algorithms provide means to determine how much regularization to use in each restoration step (Sun *et al.*, 2009; Liao *et al.*, 2009). In this paper, we introduce a method to estimate the regularization parameter for the regularized RL deconvolution algorithm.
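For illustration, the multiplicative per-pixel factor that TV regularization contributes to the RL update of Dey *et al.* (2004) can be sketched with finite differences (the function name and discretization details are our own; this is a sketch, not a reference implementation):

```python
import numpy as np

def tv_factor(estimate, lam, eps=1e-8):
    """Per-pixel factor 1 / (1 - lam * div(grad u / |grad u|)) entering
    the TV-regularized RL update; rough finite-difference sketch."""
    gy, gx = np.gradient(estimate)
    norm = np.sqrt(gx**2 + gy**2 + eps)     # eps avoids division by zero
    div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
    return 1.0 / np.maximum(1.0 - lam * div, eps)
```

The regularized iteration multiplies the usual RL estimate by this factor; choosing the weight `lam` well is exactly the regularization-parameter problem this paper addresses.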

All iterative deconvolution algorithms have to deal with the problem of stopping the iteration process. Provided that the iteration converges, the most natural, and in fact also the most popular, stopping criteria are based on detecting a stationary state of the iteration process, for example by computing the relative change between subsequent estimates and specifying a stopping threshold (Dey *et al.*, 2004, 2006; Sun *et al.*, 2009). Surprisingly, as we show in this work, such stopping criteria turn out to be suboptimal: the converged estimate may be less accurate (when compared with the ‘true’ image) than some of the intermediate estimates. A better stopping criterion is therefore needed to improve the quantitative results of iterative deconvolution algorithms.
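A common form of such a stationarity-based criterion measures the relative change between successive estimates (the threshold value below is illustrative, not prescribed by the cited works):

```python
import numpy as np

def relative_change(prev, curr):
    """Relative change between successive deconvolution estimates."""
    return np.linalg.norm(curr - prev) / np.linalg.norm(prev)

# Illustrative use inside an iteration loop:
# if relative_change(previous_estimate, estimate) < 1e-4:
#     break   # iteration considered stationary
```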

For image restoration by deconvolution, both commercial and open source computer programs are available. Commercial image restoration software gives good results in image enhancement and is easy to use, but, as a drawback, it is expensive and, because of its closed source development policy, does not support testing alternative deconvolution algorithms. Several open source software libraries exist that implement various deconvolution algorithms (Peterson, 2010b). For example, the Clarity Deconvolution Library (Quammen, 2007) (GPL license) is a C/C++ library that currently implements Wiener filtering (Shaw & Rawlins, 1991b), Jansson–van Cittert iterative (Agard, 1984) and maximum likelihood iterative (Richardson, 1972; Lucy, 1974) algorithms with a symmetric point spread function (PSF); COSMOS (Valdimarsson & Preza, 2007) is a C++ library (GPL, the successor of the XCOSM software) that currently implements depth-variant expectation maximization (Preza & Conchello, 2004), linear least squares (Preza *et al.*, 1992), linear maximum *a posteriori* (Preza *et al.*, 1993), Jansson–van Cittert (Agard, 1984) and expectation maximization (Conchello, 1998) algorithms; Deconv is a C++ library (GPL) that currently implements maximum likelihood-Landweber, -conjugate gradient and -expectation maximization iterative deconvolution algorithms (Sun *et al.*, 2009). For a scientist who prefers to focus on solving scientific problems, this variety of software and algorithms makes it difficult to decide which algorithm is most suitable for particular image data and available computational resources. Therefore, a software platform is needed that supports testing and comparing different deconvolution algorithms and their implementations in a unified manner for a variety of microscopy image file formats. For this, we use the Python programming language, which is becoming an increasingly popular choice for scientific computing because of its many features that are attractive to scientists: Python has a very clean and easy-to-learn syntax, supports a high-level object-oriented programming paradigm, and is easy to extend. High-quality scientific computing packages for Python have emerged within the last 10 years (Oliphant, 2007; Jones *et al.*, 2001) and well-developed tools exist for interfacing existing C/C++ and Fortran libraries to Python (Beazley, 2003; Peterson, 2009).

The aims of this work are: (1) to work out a practical method for using deconvolution algorithms, in particular, to find good estimates of the regularization parameters and to establish a robust criterion for stopping the iteration process that gives the closest result to the ‘true’ image rather than merely detecting stationarity of the deconvolution process; (2) to develop an open source software package that allows testing different deconvolution algorithms and at the same time is easy to use in practice.