Energy-based source localization is an important problem in wireless sensor networks (WSNs) that has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear least-squares (NLS) methods, have been reported. Relevant review papers on localization in WSNs exist, e.g., for distance-based localization; however, energy-based source localization receives little coverage in them. Energy-based methods are specially designed for WSNs, whose sensors have limited capabilities. This paper aims to give a comprehensive review of the different algorithms for energy-based single and multiple source localization problems, discuss their merits and demerits and point out possible future research directions.
Wireless sensor networks (WSNs) are often used for monitoring tasks, such as detection, classification, localization and tracking of one or more targets in two-dimensional (2D) or three-dimensional (3D) sensor fields [1,2,3,4,5]. Due to the limited resources for sensing, communication and computation in WSNs, energy-efficient collaborative signal processing algorithms are needed [4,6]. The localization problem has received considerable attention recently [7,8,9,10]. Source localization in WSNs is an important problem encountered in many indoor and outdoor applications [2,5,7,11,12,13,14].
There has been a rich history of published work that attempts to solve the source localization problem in WSNs. These solutions fall into three categories, in which the localization solutions are based on three different types of physical measurements: (1) time of arrival (TOA) [15,16,17,18,19,20] or time difference of arrival (TDOA) [21,22,23,24,25]; (2) direction of arrival (DOA) [4,26,27]; and (3) received signal strength (RSS) or energy [1,7,11,12,28,29,30,31,32,33,34,35,36,37,38].
Among the three types of physical measurements, TOA and TDOA require high-precision hardware for timing purposes. Usually, in passive source localization, the propagation time is unknown; hence, TDOA-based methods are chosen instead of TOA-based ones. TDOA, however, introduces a time synchronization problem, which requires extra calibration. Furthermore, these time-based methods require precise acquisition of the phases of the signals arriving at different nodes. However, some signals tend to be narrowband, making precise phase acquisition difficult in a noisy environment.
DOA-based methods need sensors equipped with antenna subarrays, e.g., microphone subarrays in acoustic networks, each capable of independently detecting the source signal and producing a DOA bearing estimate. The crossings of these bearing estimates are then combined to produce an estimate of the most likely source location. DOA-based methods can be used for narrowband signals. In [27], a DOA-based collaborative acoustic environmental monitoring system is developed. An embedded networked sensing box (ENSBox) platform is used, which can measure the DOA of bird calls, etc. The method in this paper can be used to localize actual animals in their natural habitat. However, deploying many ENSBox nodes can be prohibitively costly.
Compared to the TDOA and the DOA methods, RSS or energy-based approaches are attractive because they are widely applicable to WSNs, do not require additional hardware and can reuse the existing wireless infrastructure. Hence, in this paper, we are more interested in source localization using RSS or energy measurements in acoustic sensor networks.
For acoustic sources, a wideband signal model may be more appropriate, since acoustic signals are unmodulated and may occupy a wide bandwidth. In many cases, we are interested in locating the source in the near-field [4]. Here, the term "near-field" means the sensors are close to the source. In [4,26], the authors explained acoustic source localization and beamforming problems well. In their work, maximum likelihood-based methods were used to solve the DOA source localization problem using near-field wideband acoustic signals. Recently, in [39], by using distributed asynchronous sensors, acoustic source localization was achieved using the TDOA technique. Another good work on acoustic source localization is [40], in which the authors presented a low-complexity method for acoustic event detection and localization based on a change detection statistical framework. Two possible implementation approaches based on the efficient cumulative sum (CUSUM) algorithm [41], namely CUSUM-FT (fixed time) and CUSUM-ML (maximum likelihood), were presented and discussed. For the source localization step, the TDOA technique was adopted. A real implementation was conducted to demonstrate the validity of the proposed CUSUM method in a rectangular room of dimensions 7 m × 6 m. For noisy and reverberant environments, a wide range of acoustic source localization methods based on steered-response power (SRP) was proposed in [42,43,44], e.g., the steered response power-phase transform (SRP-PHAT) algorithm in [43] and a modified SRP-PHAT in [42].
The methods for acoustic source localization presented above can be categorized as signal-based methods, which are different from energy-based ones. Energy estimates of the source are obtained at each sensor via averaging of the data samples; these estimates are fused either in a centralized (transmitted to a fusion center) or decentralized fashion to form the final localization estimate. For some scenarios, signal-based methods may offer improved performance versus energy-based methods, since the information conveyed in all samples is directly exploited (without averaging), but at the expense of larger transmission resources, e.g., wireless bandwidth [37]. The energy-based model was first presented in [7,29] and is derived from the acoustic RSS model. In essence, the energy of the signal is the average of the RSS measurements over the time window $[t-T/2,t+T/2]$. An energy-based approach for acoustic source localization is an appropriate choice, since the acoustic energy emitted by the sources usually varies slowly. As such, the acoustic energy time series can be sampled at a much lower rate compared with the raw acoustic time series [7]. Therefore, less data will be transmitted to the fusion center via the often congested wireless communication channels. This reduces the energy consumption for data transmissions at the individual sensor node and saves communication bandwidth over shared wireless channels.
This paper provides a survey on energy-based source localization in acoustic sensor networks. Most of the works in the literature focus on the single source localization case. In this paper, an overview of different methods, including weighted least-squares (WLS) [28,29,35,37], semidefinite programming (SDP) [30,45], second-order cone programming (SOCP) [46] and projection-onto-convex-sets (POCS) [38], will be provided. Furthermore, two methods for the energy-based multiple-source localization problem, i.e., the multiresolution (MR) search method and an efficient expectation maximization (EM) method, will be introduced.
The paper is organized as follows. Section 2 briefly introduces the energybased sensing model. Section 3 and Section 4 present the algorithms for the single and multiple source localization problems, respectively. Section 5 concludes the paper and points out possible future research directions.
The model below is widely used for acoustic signals [7,28,29,47]. The acoustic signal received by a sensor at time instant n can be expressed as:

$z(n)=\gamma \frac{P(n)}{d(n)}+\epsilon(n), \qquad (1)$

where $z(n)$ represents the acoustic intensity, γ is the gain factor of the sensor, $P(n)$ denotes the intensity of the source signal measured at a location at a distance of 1 m from the source, $d(n)$ is the distance between the sensor and the source and $\epsilon(n)$ is modeled as zero-mean additive white Gaussian noise (AWGN) with variance $\zeta^2$.
The energy-based model was first presented in [7,29,48], which is derived from the acoustic RSS Equation (1). In essence, the energy is the average of the intensity measurements over the time window $[t-T/2,t+T/2]$.
Assume that $\epsilon(n)$ and $P(n)$ are uncorrelated, such that:
Then, one may represent the acoustic energy as (setting $g={\gamma}^{2}$, and $S\left(n\right)=E\left\{{P}^{2}\left(n\right)\right\}$):
In practice, the expectation is realized using the ensemble average over a time window $T=M/f_s$, where M is the number of sample points used for estimating the acoustic energy received by the sensor during the time interval T and $f_s$ is the sampling frequency. The average energy measurement over the time window $[t-T/2,t+T/2]$, denoted by $y(t)$, is given by:
Then, according to the result in Equation (2), Equation (3) can be rewritten as:
The squared background noise $\epsilon^2(n)$ in Equation (3) follows a $\chi^2$ distribution, so the averaged noise term w has mean $E\{\epsilon^2(n)\}=\zeta^2$ and variance $2\zeta^4/M$. If M is sufficiently large ($M\gg 30$), according to the central limit theorem, w can be approximated well by a normal distribution, namely $w\sim N(\zeta^2,2\zeta^4/M)$.
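To make the sensing model concrete, the following Python sketch generates one energy measurement with signal term $gS/d^2$ plus the Gaussian-approximated noise w described above; the source and sensor positions, S, g, ζ and M are illustrative assumptions, not values from the cited works.

```python
import numpy as np

# Sketch of the energy-based sensing model: y = g*S/d^2 + w, with the averaged
# noise w ~ N(zeta^2, 2*zeta^4/M). All numeric values are illustrative.
rng = np.random.default_rng(0)

def energy_measurement(source, sensor, S=5000.0, g=1.0, zeta=1.0, M=1000):
    """One average energy reading over a window of M samples."""
    d2 = float(np.sum((np.asarray(source) - np.asarray(sensor)) ** 2))
    w = rng.normal(zeta ** 2, np.sqrt(2.0 / M) * zeta ** 2)  # CLT approximation
    return g * S / d2 + w - zeta ** 2  # background mean zeta^2 subtracted

y = energy_measurement([25.0, 25.0], [20.0, 20.0])
print(y)  # close to g*S/d^2 = 5000/50 = 100
```

With $M=1000$, the noise standard deviation is only about 0.045, which is why a single averaged reading already sits very close to the noiseless value.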
This section covers the single source localization case for energy-based methods. Considering implementation issues, source localization algorithms can be categorized into three types: centralized, sequential and fully-distributed, as shown in Figure 1. In a centralized algorithm, as shown in Figure 1a, the sensors need to transmit their data to a central point or fusion center for source location estimation. In a sequential or a fully-distributed method, however, no fusion center is required. As shown in Figure 1b, in the sequential algorithms, the measurements of the sensors are processed sequentially through the network, so a specified data transmission path is needed; path planning in such networks is a major issue. Furthermore, the convergence rate is low when the sensor density is high, and data transmission becomes unreliable when some of the nodes fail. In a fully-distributed algorithm, as shown in Figure 1c, at each time step, the sensors exchange their estimates with their one-hop neighbors. All of the sensors update their estimates simultaneously and are finally able to reach consensus on a possible minimizer asymptotically. The fully-distributed algorithms avoid the path planning problem of the sequential distributed methods and improve the robustness of the network.
Centralized methods: The first work on energy-based source localization was done by Li and Hu. In their papers [28,29], the energy-ratio least-squares (ER-LS) method is proposed. The first step of this method is to form energy ratios to eliminate the unknown parameter $S(t)$. Then, in the second step, the location of the source is found by minimizing a nonlinear least-squares cost function. This method is an approximate solution of maximum likelihood estimation (MLE) for single source localization. Another method reported in their work is the two-dimensional CPA (closest point of approach). In general, the CPA method searches for the sensor with the maximum energy reading in the single source situation. However, the estimation accuracy of this method highly depends on the density of the sensors. In [35], a two-stage algebraic closed-form solution was presented. The first stage computes the source location together with an auxiliary variable using the weighted least-squares method. The second stage exploits the relationship between the source location and the auxiliary variable to improve the location estimate. In [30,37], weighted direct least-squares formulations were presented that provide a trade-off between performance and computational complexity. In [45,46,49], the source localization problem was formulated using semidefinite programming and second-order cone programming methods.
Sequential methods: In [34], Rabbat and Nowak proposed a distributed implementation of the incremental gradient (IG) algorithm to solve the nonlinear least-squares problem. However, in their work, the source energy $S(t)$ is assumed to be known. Later, in [32], they proposed using a kernel averaging estimator for robust source localization. In [38], Blatt and Hero first formulated source localization as a convex feasibility problem (CFP) and proposed a projection-onto-convex-sets (POCS) method. This method can be implemented in a distributed manner. However, the convergence of the method is verified only by simulation, and no rigorous analysis is provided. An incremental optimization algorithm for maximum likelihood-based source localization was proposed in [36].
Fully-distributed methods: In [5,13], the authors formulated the source localization problem as a convex feasibility problem (CFP) and proposed a fully-distributed method for it. In the proposed method, sensor nodes only need to communicate with one-hop neighbors and update their estimates simultaneously based on a projection algorithm. Finally, they are able to reach consensus on a possible minimizer asymptotically. The proposed method has low complexity and converges quickly to a global optimum.
In the following, we will give more details about the algorithms for the energybased single source localization problem.
In this part, we give a brief introduction to the existing methods for centralized source localization. For energy-based source localization using weighted least-squares methods, we mainly introduce the algebraic closed-form solutions proposed in [28,29,35]. Interested readers are referred to [30,37] for the alternative methods therein.
Let the source be located at an unknown coordinate pair $\theta ={[x,y]}^{T}$. We assume that there are N sensor nodes performing sensing based on energy detection. The location of the ith sensor is denoted by ${r}_{i}={[{x}_{i},{y}_{i}]}^{T},i=1,2,...,N$.
Here, we reproduce the energybased sensing model for convenience.
where $d_i=\parallel\theta-r_i\parallel$ is the Euclidean distance between the ith sensor and the source. For simplicity, let us assume the mean $\zeta^2$ has been subtracted from Equation (5), so that $\epsilon_i$ follows a zero-mean Gaussian distribution with variance $\sigma_i^2=2\zeta^4/M$.
In [29], the authors formulated energy-based source localization as an unconstrained least-squares problem. Firstly, the energy ratio $\kappa_{ij}$ of the ith and the jth sensors can be computed as follows:
Note that for $0<\kappa_{ij}\ne 1$, all of the possible source coordinates θ that satisfy Equation (6) must reside on a two-dimensional hypersphere described by the equation:
where the center ${c}_{ij}$ and the radius ${\rho}_{ij}$ of this hypersphere associated with sensor i and j are given by:
This hypersphere is called a source location hypersphere.
Consider two hyperspheres based on Equation (7):
They are formed from the sensor pairs (i, 0) and (j, 0). Subtracting one from the other and canceling the term ${\parallel\theta\parallel}^{2}$ yields a hyperplane equation:
Substituting the definitions in Equation (8), the above equation simplifies to:
which is a linear hyperplane equation with:
Then, the linear least-squares cost function can be written as:
Given the coefficients, a solution of θ can be found in closed form since there is no constraint imposed in Equation (13) [29].
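As an illustration, the following Python sketch implements the ER-LS idea above: form energy ratios, build the source location hyperspheres of Equation (8), subtract them pairwise to obtain hyperplanes and solve the unconstrained linear least-squares problem of Equation (13). The sensor layout, gains and source energy are illustrative assumptions, and noiseless energies are used so that the closed-form recovery can be checked directly.

```python
import numpy as np

# Sketch of the energy-ratio least-squares (ER-LS) closed form of [29], under
# simplifying assumptions (known gains g_i, noiseless energies for clarity).
# The sensor layout, source position and source energy are illustrative.

def erls_locate(sensors, y, g):
    """Closed-form source estimate from energy readings y at known sensors."""
    sensors = np.asarray(sensors, dtype=float)
    # kappa_{i0}^2 = d_i^2/d_0^2 = (g_i*y_0)/(g_0*y_i), with reference sensor 0
    k2 = (g[1:] * y[0]) / (g[0] * y[1:])
    # Center and squared radius of each source location hypersphere, Eq. (8)
    centers = (sensors[1:] - k2[:, None] * sensors[0]) / (1.0 - k2)[:, None]
    radii2 = k2 * np.sum((sensors[1:] - sensors[0]) ** 2, axis=1) / (1.0 - k2) ** 2
    # Subtract hypersphere equations pairwise to obtain linear hyperplanes
    a = 2.0 * (centers[1:] - centers[0])
    b = (radii2[0] - radii2[1:]
         + np.sum(centers[1:] ** 2, axis=1) - np.sum(centers[0] ** 2))
    # Unconstrained linear least squares, cf. Eq. (13)
    theta, *_ = np.linalg.lstsq(a, b, rcond=None)
    return theta

sensors = [[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0], [20.0, 5.0]]
g = np.ones(5)
source = np.array([12.0, 23.0])
y = 5000.0 * g / np.sum((np.asarray(sensors) - source) ** 2, axis=1)
theta = erls_locate(sensors, y, g)
print(theta)  # recovers [12, 23] in the noiseless case
```

With noisy readings, the same linear system is solved in the least-squares sense, which is exactly where the accuracy limitations discussed next come from.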
The unconstrained least-squares solution mentioned above is closed-form and computationally attractive. However, it is not able to reach the Cramér–Rao bound (CRB). In [35], a two-stage closed-form least-squares method was proposed, which can achieve the CRB as the signal-to-noise ratio (SNR) tends to infinity.
From Equation (5), we can see that the measurement $y_i(t)$ is highly nonlinearly related to the unknown source location θ and source energy S. Generally, we take energy ratios to remove the dependency on S. Without loss of generality, we choose Sensor 1 as the reference. Then, the ratios of the energy measurements with respect to the reference can be written as:
where $\left({\epsilon}_{i}{d}_{i}^{2}\right)/\left(S{g}_{i}\right)\ll 1$ is assumed to hold, which means the signaltonoise ratio in the energy measurements should be large enough [35].
Expanding the right-hand side and ignoring the second-order noise term yields:
The first term is the true value of $q_{i1}$, and the second term is noise. If we collect $q_{i1},\forall i=2,...,N$ to form the vector $\mathbf{q}=\mathbf{q}^{\circ}+\Delta\mathbf{q}=[q_{21},q_{31},...,q_{N1}]^{T}$, then the covariance matrix $\mathbf{Q}$ of $\mathbf{q}$ has elements:
After obtaining the formulation in Equation (15), the source localization problem is solved by the two-stage closed-form solution proposed in [35]. The basic idea of the two-stage least-squares method is that, in the first stage, θ and $d_1^2$ are treated as independent unknown variables, and a source estimate is obtained using the standard weighted least-squares method. Then, in the second stage, the relationship between θ and $d_1^2$ is exploited to improve the result. The details of the algorithm can be found in [35].
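The first stage of this idea can be sketched as follows: treat θ and $d_1^2$ as independent unknowns and solve a linear system built from the energy ratios $q_{i1}=d_1^2/d_i^2$. The second, correction stage of [35] is omitted here, and the layout, gains and source energy are illustrative assumptions.

```python
import numpy as np

# First stage of the two-stage least-squares idea of [35] (sketch): theta and
# the auxiliary variable d_1^2 are treated as independent unknowns. Noiseless
# energies and an illustrative geometry are used; the correction stage is omitted.

def stage1_estimate(sensors, y, g):
    sensors = np.asarray(sensors, dtype=float)
    r1 = sensors[0]                               # Sensor 1 is the reference
    q = (y[1:] * g[0]) / (y[0] * g[1:])           # q_i1 = d_1^2 / d_i^2
    # ||theta - r_i||^2 = d_1^2/q_i1 minus ||theta - r_1||^2 = d_1^2 gives:
    # 2*theta.(r_i - r_1) + d_1^2*(1/q_i1 - 1) = ||r_i||^2 - ||r_1||^2
    A = np.hstack([2.0 * (sensors[1:] - r1), (1.0 / q - 1.0)[:, None]])
    b = np.sum(sensors[1:] ** 2, axis=1) - np.sum(r1 ** 2)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2], sol[2]                        # theta estimate, d_1^2 estimate

sensors = [[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0], [20.0, 5.0]]
g = np.ones(5)
source = np.array([12.0, 23.0])
y = 5000.0 * g / np.sum((np.asarray(sensors) - source) ** 2, axis=1)
theta, d1sq = stage1_estimate(sensors, y, g)
print(theta, d1sq)  # [12, 23] and d_1^2 = 673 in the noiseless case
```

In the full method, the system is weighted by the covariance $\mathbf{Q}$ of the ratios, and the second stage enforces $d_1^2=\parallel\theta-r_1\parallel^2$ to refine the estimate.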
In [37], the authors observed that the errors that perturb the least-squares equations (based on the energy model) are not i.i.d. zero-mean Gaussian random variables; the effective noise term in Equation (5) is not white, but colored. In [37], a weighted one-step least-squares (WOS) method and a weighted direct least-squares (WD) method are proposed to solve the formulated energy-based source localization problem. The WD method yields the same location estimate as WOS but offers lower computational complexity. Similar to the two-stage weighted least-squares method stated above, a correction technique can be used to improve the location estimate by incorporating the relationship between the two dependent unknown variables, which are assumed independent in the first stage. The WD with correction (WDC) method can attain the CRB in the case of a white source.
For the centralized methods, in [45,46,49], the authors formulated the energy-based source localization problem in different ways. For example, the problem was solved using the semidefinite programming (SDP) method in [45] and second-order cone programming (SOCP) relaxation in [46].
Denoting $\Theta=\theta\theta^{T}$ and relaxing this constraint into $\Theta\succeq\theta\theta^{T}$, the source localization problem can be formulated as the following SDP:
subject to:
As we can see from Equation (18), the constraint in the optimization problem of Equation (17) (the initial maximum-likelihood (ML) problem) is an approximation of the exact condition $\Theta=\theta\theta^{T}$. Hence, the SDP is an approximate solution to the ML estimate of the source location. Motivated by the idea of reducing complexity while keeping the performance at an acceptable level, the nonconvex ML-based localization problem is also approximated by a convex second-order cone programming (SOCP) optimization problem, which can be solved very efficiently by interior point methods [50]. The basic idea of SOCP is similar to the SDP formulation. Both SOCP and SDP can be considered approximate ML solutions.
The SDP method performs well in terms of localization accuracy. However, its main disadvantage is that it can only be used in centralized implementations.
Practical systems require quantization of the measurements before transmission. In [51,52,53], source localization using quantized sensor signal energy readings was considered. Different quantization strategies, e.g., the equally distance-divided quantizer (EDQ) [51], were proposed for energy-aware source localization. In [52], two heuristic quantization design methods were proposed for maximum-likelihood (ML) target localization. In [53], the authors considered Byzantine attacks on the location estimation task, where each sensor uses a binary quantization scheme to send binary data to the fusion center.
In these target localization methods with quantized signal energy, the location estimators adopt simple or improved grid search strategies.
In this part, we introduce the existing sequential distributed methods and the fully-distributed projection methods.
Distributed algorithms include the sequential and fully-distributed methods. In the literature, the distributed algorithms are mostly formulated as a convex feasibility problem (CFP). The mathematical formulation of the CFP is as follows.
Suppose that ${C}_{1},...,{C}_{N}$ are closed convex subsets in a Hilbert space with intersection C:
Convex feasibility problem (CFP): Find some point x in C.
We call the CFP consistent if $C\ne \varnothing $ and inconsistent otherwise. Projection-based methods are well studied and can be used for solving the CFP.
In this section, we first formulate the source localization problem as a CFP. Then, we present the sequential and fully-distributed algorithms as solutions. Note that in the distributed algorithms, the source energy S is assumed known a priori.
The maximum likelihood estimator (MLE) is found by solving the nonlinear leastsquares problem when the noise is Gaussian:
where ${f}_{i}\left(\theta \right)={\left[{y}_{i}{g}_{i}\frac{S}{{d}_{i}^{2}}\right]}^{2}$. Clearly, ${f}_{i}\left(\theta \right)$ attains its minimum of zero on the circle:
However, due to the observation noise, the source may not lie on the circles defined in Equation (20), but within a ring area, i.e., $\sqrt{\frac{g_i S}{y_i+\zeta\sigma_i}}\le\parallel\theta-r_i\parallel\le\sqrt{\frac{g_i S}{y_i-\zeta\sigma_i}}$, if a $\zeta\sigma$ region of the noise distribution is adopted, where ζ is a constant (generally, $\zeta=1$–3). This makes the source localization problem a nonconvex optimization problem, which is generally difficult to handle. To overcome this difficulty, a relaxation needs to be used. We define a convex set, which is a circular (disk) area:
where only the outer ring boundary is considered. In this paper, we set $\zeta=1$. Intuitively, the source may or may not reside in $\mathcal{D}_i$. If the source is inside $\mathcal{D}_i,\forall i$, then the source localization problem can be solved by letting the estimate be a point in the intersection of the sets $\mathcal{D}_i,i=1,2,...,N$. That is,
However, due to the observation noise, the feasibility problem may turn out to be inconsistent, i.e., the intersection $\mathcal{D}$ may be empty. An illustration of the consistent and inconsistent cases is given in Figure 2.
Since the convex feasibility problem may turn out to be inconsistent, the localization problem can be reformulated as finding a point $\theta^{\ast}$ that minimizes the sum of the squared distances to the convex sets $\mathcal{D}_i$:
where, for a closed convex set $\mathcal{S}\subseteq R^2$ and vector $x\in R^2$, $\mathcal{P}_{\mathcal{S}}(x)$ is the orthogonal projection of x onto $\mathcal{S}$. That is,
For the source localization problem, the projection operator in Equation (24) has a closedform expression given as:
It can be easily checked that if the problem is consistent, i.e., $\mathcal{D}={\bigcap}_{i=1}^{N}\mathcal{D}_i\ne\varnothing$, then ${\sum}_{i=1}^{N}\parallel\widehat{\theta}-\mathcal{P}_{\mathcal{D}_i}(\widehat{\theta})\parallel=0$, where $\widehat{\theta}\in G$ and $G={\bigcap}_{i=1}^{N}\mathcal{D}_i$.
The sequential projection method, also termed POCS (projection onto convex sets), is a cyclic algorithm [38]. From Figure 1b, we can see that the data are processed across the sensors in sequence. The method was applied to the energy-based source localization problem in [38]. The update rule of the sequential projection method is as follows:
where $\{\lambda(k)\}$ is a sequence of relaxation parameters satisfying, for all k, $\epsilon_1\le\lambda(k)\le 2-\epsilon_2$ for some $\epsilon_1,\epsilon_2>0$, and $\tau(k)=k \bmod N$. The pseudocode of the sequential projection method is given by Algorithm 1.
If $\mathcal{D}={\bigcap}_{i=1}^{N}{\mathcal{D}}_{i}\ne \varnothing $, any sequence $\theta \left(k\right),k\ge 0$ converges to a point in $\mathcal{D}$ [54].
Algorithm 1: Sequential projection method 

From the above theorem, we can see that the POCS algorithm has good convergence performance in the consistent case of the CFP. However, the convergence behavior of POCS in the inconsistent case is generally unsatisfactory. A new relaxation sequence $\{\lambda(k)\}_{k\ge 0}$ needs to be used, which is given by:
By using the relaxation sequence stated above, the POCS will converge to a point in G, which has been verified by simulation in [38], but no theoretical analysis is provided.
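A minimal Python sketch of the sequential projection (POCS) update for the disk sets $\mathcal{D}_i$ is given below, with a fixed relaxation parameter $\lambda(k)=1$ (the consistent-case setting, rather than the decaying sequence above). The source energy S is assumed known, as in the distributed formulations, and all numerical values are illustrative assumptions.

```python
import numpy as np

# Sketch of the sequential projection (POCS) method for the disk sets D_i,
# with relaxation lambda(k) = 1. Source energy S is assumed known; the
# layout, S and noise margin sigma are illustrative assumptions.

def project_disk(x, center, radius):
    """Closed-form orthogonal projection onto the disk ||x - center|| <= radius."""
    v = x - center
    n = np.linalg.norm(v)
    return x if n <= radius else center + radius * v / n

def pocs(sensors, y, S, g, sigma, iters=200, lam=1.0):
    """Cycle through the sensors, projecting onto one disk per step."""
    sensors = np.asarray(sensors, dtype=float)
    radii = np.sqrt(g * S / np.maximum(y - sigma, 1e-12))  # outer radius, zeta = 1
    theta = sensors.mean(axis=0)                           # arbitrary initial point
    for k in range(iters):
        i = k % len(sensors)                               # tau(k) = k mod N
        theta = theta + lam * (project_disk(theta, sensors[i], radii[i]) - theta)
    return theta

sensors = [[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0]]
g = np.ones(4); S = 5000.0; sigma = 0.05
source = np.array([12.0, 23.0])
y = S * g / np.sum((np.asarray(sensors) - source) ** 2, axis=1)
theta = pocs(sensors, y, S, g, sigma)
print(theta)  # a point in the intersection of the disks, near the source
```

In the inconsistent case, the same loop would instead use the decaying relaxation sequence described above so that the iterates settle at a minimizer of the summed squared distances.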
Incremental subgradient optimization method: For the sequential distributed algorithms, the authors of [36] proposed an incremental optimization algorithm called the normalized incremental subgradient algorithm. Since a simple subgradient operator is adopted in their algorithm, it is hard to avoid local optima. The localization performance is low, even though the authors prove the convergence of the proposed algorithm.
One issue of the sequential distributed algorithms is that a specified data transmission path is required, and path planning in a large sensor network is challenging. Furthermore, the convergence rate is low when the sensor density is high, and data transmission becomes unreliable when some of the nodes fail. Hence, the robustness of such networks is low.
To avoid the path planning problem of the sequential distributed methods and improve the robustness of the network, a fully-distributed method is useful: no fusion center is required, and sensor nodes only need to communicate with their closest neighbors, which reduces the probability of congestion around the sink nodes and increases the robustness of the network against node failures or unpredictable switches to sleep mode. Note that data transmission path planning is not needed for a fully-distributed method.
Before introducing the method, we give a brief introduction to the diffusion network model. Let us represent the network as an undirected graph defined by $\mathcal{G}:=(\mathcal{N},\mathcal{E})$, where $\mathcal{N}:=\{1,...,N\}$ is the node set and $\mathcal{E}\subseteq\mathcal{N}\times\mathcal{N}$ is the edge set. If node k and node l can communicate with each other directly, we denote the undirected link by $(k,l)\in\mathcal{E}$.
If the CFP of the source localization problem is consistent, a fullydistributed protocol that can be used for source localization can be found in [55]. It works as follows: sensor i at time step $k+1$ generates its estimate according to the following protocol:
where ${w}_{j}^{i}\left(k\right),i=1,...,N,j=1,...,N$ denote the weights at time step k; ${\theta}^{i}\left(0\right),i=1,...,N$ are arbitrary.
(Network connectivity) The network is connected, i.e., there exists a direct or indirect path between any two nodes.
(Weighting rule) $W(k)$, whose ith row is the vector $[w_1^i(k),...,w_N^i(k)]$, is an $N\times N$ doubly-stochastic weighting matrix with the following properties:
In real applications, it is hard to know whether the CFP of the source localization problem is consistent or not. In this case, a new distributed localization protocol needs to be designed:
where $\beta(k)$ is the relaxation sequence and $w_j^i(k)$ is the same as in Equation (28). We can see that if $\beta(k)=1$ for all $k\ge 0$, then protocol Equation (30) is the same as Equation (28). The relaxation sequence $\beta(k)$ plays an important role in the convergence of the method. Next, we study the case when $\beta(k)\ne 1$. The details are given below.
${\sum}_{k=1}^{\infty}\beta(k)(1-\beta(k))=\infty,\quad \beta(k+1)\le\beta(k),\quad {\lim}_{k\to+\infty}\beta(k)=0.$
If Assumptions 2 to 4 hold, then by using the proposed fullydistributed protocol Equation (30), the location estimates of all sensors will converge to the optimal minimizer of Equation (23) [5].
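The following Python sketch illustrates one natural instance of the consensus-plus-projection protocol of Equation (28) on a small ring network: each sensor mixes its neighbors' estimates with doubly-stochastic weights and then projects onto its own disk $\mathcal{D}_i$. The exact update in [55] may differ in details; the topology, weights and numerical values are illustrative assumptions.

```python
import numpy as np

# Sketch of a consensus-plus-projection protocol in the spirit of Eq. (28):
# mix neighbors' estimates with doubly-stochastic weights, then project onto
# the local disk D_i. Topology, weights and values are illustrative assumptions.

def project_disk(x, center, radius):
    v = x - center
    n = np.linalg.norm(v)
    return x if n <= radius else center + radius * v / n

def distributed_localize(sensors, y, S, g, sigma, W, iters=300):
    sensors = np.asarray(sensors, dtype=float)
    radii = np.sqrt(g * S / np.maximum(y - sigma, 1e-12))
    thetas = sensors.copy()                 # arbitrary initial estimates
    for _ in range(iters):
        mixed = W @ thetas                  # consensus step over one-hop links
        thetas = np.array([project_disk(mixed[i], sensors[i], radii[i])
                           for i in range(len(sensors))])
    return thetas

# Ring network of 4 sensors with symmetric, doubly-stochastic weights.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
sensors = [[0.0, 0.0], [40.0, 0.0], [40.0, 40.0], [0.0, 40.0]]
g = np.ones(4); S = 5000.0; sigma = 0.05
source = np.array([12.0, 23.0])
y = S * g / np.sum((np.asarray(sensors) - source) ** 2, axis=1)
thetas = distributed_localize(sensors, y, S, g, sigma, W)
print(thetas)  # all rows (sensor estimates) agree on a point near the source
```

Because each iterate $\theta^i$ always lies in $\mathcal{D}_i$, any consensus point is automatically in the intersection $\mathcal{D}$, which is the consistent-case behavior guaranteed by the assumptions above.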
Ring-based problem formulation: In [56], similar to the problem formulation shown in Figure 2, the authors formulate the localization problem as the computation of the intersection of a group of sensing rings, as shown in Figure 3. In this work, the nonconvex problem is converted into two weighted convex optimization problems.
Similarly, the authors also consider two cases: the consistent case and the inconsistent case. For the consistent case, the problem can be formulated as finding a point in the intersection of the sensing rings. However, due to the presence of noise, there is still some possibility that the source is located outside of the rings.
In [56], the authors propose a distributed protocol (refer to Equation (8) in [56]), which is the same as Equation (30). In this work, by alternating the computation of the projections onto the inner and outer disks of each sensor and using protocol Equation (30), the estimates of all sensors converge to the optimal solution of the designed ring-intersection optimization problem.
The projection-based methods stated above are a good choice for real implementation in a centralized or distributed manner. However, it is hard to characterize their accuracy theoretically, since they only consider bounded noise cases.
In the above, we have provided a detailed literature review of the energy-based single source localization problem. As we can see, plenty of algorithms have been proposed, including centralized, sequential and fully-distributed ones. In this subsection, we give a brief summary of the single source case. For single source localization, ML-based algorithms, weighted least-squares, SDP, SOCP, subgradient and distributed projection methods, among others, can be used.
Generally speaking, centralized algorithms have low robustness to network failures. In these algorithms, the fusion node is required to collect all measurements from the others, and hence, communication congestion may occur during data transmission. In the sequential algorithms, the sensor nodes compute the source location sequentially, so the network robustness is still low. The fully-distributed algorithms have the advantages of robustness and easy implementation. Regarding localization accuracy, since the projection-based distributed algorithms use only part of the Gaussian noise information in their derivation, it is hard to evaluate and compare their accuracy theoretically. However, simulation results in [5,56] have shown that the localization accuracy of projection-based methods is comparable to that of ML-based centralized algorithms.
In terms of localization accuracy, centralized algorithms, e.g., weighted least-squares, grid search and SDP, are generally better than POCS and its fully-distributed version [5,56]. To compare all of the methods fairly, we give an example test environment. N sensors are placed randomly and uniformly in a region of interest sized 50 m × 50 m. In the simulations, we vary the noise level and the number of sensors in the field. The comparison results are shown in Figure 4 and Figure 5. In the figures, ML represents the grid search (with a 1 m × 1 m grid), and LS represents the unconstrained least-squares method.
From the results, we can see that the SDP and WDC methods outperform the POCS- and LS-based ones. The POCS method ignores the noise term, even though global optimization of its feasibility formulation can be achieved. LS uses a general linear least-squares strategy that does not exploit the relationship between the unknown parameters.
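To make the behavior of POCS concrete, the following is a minimal sketch (not the exact algorithm of [5,56]) of projection onto convex sets for a single source, assuming each energy reading has already been converted into a disc constraint $\parallel \mathbf{r}-{\mathbf{r}}_{i}\parallel \le {R}_{i}$; the sensor layout, the slack added to the radii and the function names are illustrative assumptions.

```python
import numpy as np

def pocs_localize(sensors, radii, x0, n_sweeps=200):
    """Cyclically project the estimate onto each sensor's disc.

    Each disc {x : ||x - c_i|| <= R_i} is convex; the noise term is
    ignored, so the method only seeks a point in the intersection.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_sweeps):
        for c, R in zip(sensors, radii):
            v = x - c
            dist = np.linalg.norm(v)
            if dist > R:                 # outside the disc: move onto its boundary
                x = c + R * v / dist
    return x

# Illustrative setup: ranges derived from noiseless data, with a small slack
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
source = np.array([3.0, 4.0])
radii = np.linalg.norm(sensors - source, axis=1) + 0.1   # slack keeps the intersection nonempty
estimate = pocs_localize(sensors, radii, x0=np.array([25.0, 25.0]))
```

Because the noise term is dropped, any point in the intersection of the discs is accepted, which is consistent with the accuracy gap relative to SDP and WDC noted above.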
Regarding computational complexity, the grid search has complexity $\mathcal{O}\left({N}^{3}\right)$. For the WDC-based methods, the complexity is reported to be on the order of $\mathcal{O}\left({N}^{2}\right)$. SDP- and SOCP-based methods usually have a higher computational complexity, both on the order of $\mathcal{O}\left({N}^{3.5}\right)$. POCS-based methods have the lowest computational complexity, on the order of $\mathcal{O}\left(N\right)$, although the total cost also depends on the number of iterations needed for POCS to converge.
For the centralized methods, all of the sensor measurements have to be transmitted to a fusion center or to one of the sensors. For a fixed quantization level of the energy estimates ${y}_{i}$ and the sensor coordinates, conveying the information from the sensors to a fusion center requires the transmission of $\mathcal{O}\left(N\right)$ bits over a distance of $\mathcal{O}\left(1\right)$ per bit [38]. For POCS-based methods, by contrast, $\mathcal{O}\left(N\right)$ bits are transmitted over a distance of $\mathcal{O}({\log}^{2}L/L)$ per bit. Hence, the communication burden grows linearly with the number of sensors in the centralized approaches and sublinearly ($\sqrt{L}\log L$) in the decentralized approach.
Generally speaking, it has been shown that in sensor networks, if the application and the sensor architecture permit, it is much more energy efficient to perform distributed local processing than central processing that requires extensive communication [4]. Of course, not all algorithms admit distributed/decentralized processing; it has been reported that POCS [38] and WDC [37] can be run in a distributed way.
As reported in this survey, most of the work in the literature concerns algorithm development, where computer simulations are normally used to verify algorithm performance in terms of complexity, accuracy, power consumption, etc. Regarding real implementation, the SDP algorithm is hard to deploy in a large sensor network, since it can only be run in a centralized way. Distributed convex optimization methods, such as POCS, are easier to implement in a centralized, sequential or fully-distributed manner, and POCS scales to large sensor networks. LS-based methods, e.g., WDC, can also be implemented in larger sensor networks than SDP and can likewise be run in a fully-distributed manner. Hence, for real implementations, POCS-related methods are highly recommended due to their high robustness to network failures.
In this section, we introduce methods to solve the energy-based multiple source localization problem. The multiple source localization problem is more difficult, since each sensor measurement is a superposition of the energy signals emitted by multiple sources. The number of unknown variables is quite large, and hence, a sufficient number of sensors should be deployed to take the measurements.
We consider the signal model of [7]. We assume that a total of N sensors are deployed and that K static acoustic sources are present in the sensor field, where the number of sources K is known. A fusion center collects the sensors' measurement data and runs the source localization algorithm. The sensor locations, denoted by ${\mathbf{r}}_{i}={[{x}_{i},{y}_{i}]}^{T},i=1,...,N$, are known to the fusion center. The source locations, denoted by ${\mathbf{r}}_{\mathbf{s}k}={[{x}_{\mathbf{s}k},{y}_{\mathbf{s}k}]}^{T},k=1,...,K,$ are unknown. The signal energy received by sensor i at time t is:
where ${d}_{ik}=\parallel {\mathbf{r}}_{i}-{\mathbf{r}}_{\mathbf{s}k}\parallel $ is the Euclidean distance between the ith sensor and the kth source; α denotes the unknown decay factor, with typical values in the range from two to four; ${S}_{k}\left(t\right)$ is the signal energy one meter away from the kth source; ${g}_{i}$ is the gain factor of the ith sensor; ${w}_{i}\left(t\right)$ is Gaussian measurement noise with mean μ and variance ${\sigma}^{2}$, i.e., ${w}_{i}\left(t\right)\sim N(\mu ,{\sigma}^{2})$.
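As a concrete illustration, this measurement model can be simulated in a few lines; the sensor layout, source positions, gains and noise level below are illustrative assumptions, not values from [7].

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def received_energy(sensors, sources, S, g, alpha=2.0, mu=0.0, sigma=0.01):
    """y_i = g_i * sum_k S_k / d_ik^alpha + w_i, with w_i ~ N(mu, sigma^2)."""
    # d[i, k] = Euclidean distance between sensor i and source k
    d = np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=2)
    clean = g * (S / d ** alpha).sum(axis=1)
    return clean + rng.normal(mu, sigma, size=len(sensors))

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
sources = np.array([[2.0, 3.0], [8.0, 7.0]])     # K = 2 acoustic sources
S = np.array([100.0, 100.0])                     # signal energies at 1 m from each source
g = np.ones(len(sensors))                        # unit sensor gains
y = received_energy(sensors, sources, S, g)
```

Each entry of `y` superposes the attenuated energies of both sources, which is exactly what makes the multiple source problem harder than the single source one.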
In the following, the time t is omitted for brevity. We define the following matrix notations.
where ${\epsilon}_{i}=({w}_{i}-\mu )/\sigma \sim N(0,1),i=1,2,...,N$, are independent Gaussian random variables and ${(\cdot)}^{T}$ denotes the transpose of a matrix.
Using these matrices, Equation (31) can be rewritten as:
In addition, we denote:
where ${R}_{ik}={g}_{i}\frac{{S}_{k}}{{d}_{ik}^{\alpha}},i=1,...,N,k=1,...,K.$
Then, the joint probability density function of $\mathbf{Y}$ can be expressed as:
where:
and α is the decay factor.
The negative loglikelihood function is proportional to the quadratic form:
which can be expressed as:
where ${\mathbf{J}}^{-1}={\mathbf{H}}^{T}\mathbf{H}$ and $\mathbf{J}=diag\underset{N}{\underbrace{\{{\sigma}^{2},...,{\sigma}^{2}\}}}.$
A straightforward method to find a solution that maximizes the likelihood function is exhaustive search. However, the computational cost is extremely high, especially when there are multiple sources. For example, let there be K sources and q grid points to be searched in each dimension. Then, the total number of search points in a two-dimensional sensor field is ${q}^{2K}$. While this computational load may be feasible for a desktop computer, it is likely to be excessive for low power sensor nodes with limited computing capabilities.
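For intuition, an exhaustive grid search for the simplest case $K=1$ can be sketched as below; the grid resolution, field size and noiseless data are illustrative assumptions. For K sources, the same loop would have to visit all ${q}^{2K}$ joint grid points, which is what makes the search intractable.

```python
import numpy as np

def grid_search(sensors, y, S, alpha=2.0, grid=None):
    """Exhaustive q^2 search for one source: minimize the squared energy misfit."""
    if grid is None:
        grid = np.linspace(0.0, 10.0, 101)       # q = 101 points per dimension
    best, best_cost = None, np.inf
    for gx in grid:
        for gy in grid:
            d = np.linalg.norm(sensors - np.array([gx, gy]), axis=1)
            d = np.maximum(d, 1e-6)              # guard a grid point lying on a sensor
            cost = np.sum((y - S / d ** alpha) ** 2)
            if cost < best_cost:
                best, best_cost = np.array([gx, gy]), cost
    return best

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])
S0 = 100.0                                       # source energy at 1 m
d_true = np.linalg.norm(sensors - source, axis=1)
y = S0 / d_true ** 2                             # noiseless measurements, g_i = 1
estimate = grid_search(sensors, y, S0)
```

With noiseless data and the source on a grid point, the minimum misfit is attained exactly at the true location.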
In [7], a multiresolution (MR) search method is proposed to mitigate the exponentially-growing computational complexity. Among several choices, a logarithmic MR search strategy examines only w points in each dimension per iteration, where $q={w}^{m}$, with m being the number of iterations. In each iteration, only ${w}^{2K}$ grid points need to be searched. The next iteration then confines the search to the neighborhood of the current best solution by subdividing the coarser mesh around it into w subdivisions and repeating the search. After m iterations, the MR method searches at a grid size equal to that of the exhaustive search; however, the total number of search points is $m\times {w}^{2K}$ rather than ${q}^{2K}={w}^{2mK}$.
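A minimal sketch of such a logarithmic MR search for a single source could look as follows; the zoom rule, the parameters and the toy cost function are illustrative assumptions, not the exact scheme of [7].

```python
import numpy as np

def mr_search(cost, center, half_width, w=5, m=4):
    """Multiresolution search: a w x w grid per iteration, m zoom-in iterations.

    Total points examined: m * w**2, versus (w**m)**2 for an exhaustive
    search at the same final resolution.
    """
    c = np.asarray(center, dtype=float)
    h = float(half_width)
    for _ in range(m):
        xs = np.linspace(c[0] - h, c[0] + h, w)
        ys = np.linspace(c[1] - h, c[1] + h, w)
        candidates = [np.array([x, y]) for x in xs for y in ys]
        c = min(candidates, key=cost)            # best point on the current coarse grid
        h /= w                                   # zoom into its neighborhood
    return c

# Toy unimodal cost with its minimum at (3, 4); a real run would evaluate the
# negative log-likelihood of the energy measurements instead.
estimate = mr_search(lambda p: (p[0] - 3.0) ** 2 + (p[1] - 4.0) ** 2,
                     center=[5.0, 5.0], half_width=5.0)
```

For a unimodal cost, this finds the minimum with only $m\times {w}^{2}$ evaluations; on a multimodal likelihood, the coarse early grids can lock onto the wrong basin, which is the local-minimum risk noted next.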
Obviously, due to the coarser search grid in the earlier iterations, the MR method may be trapped in a local minimum and yield an inferior solution.
In order to further reduce the computational complexity and improve the localization performance, an efficient expectation-maximization (EM) algorithm for multiple source localization is proposed in [11].
Basic idea: The EM algorithm decomposes the aggregate energy signal into individual components and then estimates the corresponding location parameters separately for each source. The algorithm starts with an initialization based on a sequential dominant-source (SDS) scheme described in the next subsection. Each iteration consists of an E-step, an M-step and a decay factor update step. In the E-step, we decompose the received energies of the sensors to obtain the hidden data, which represent the signal energy each sensor receives from a single source. In the M-step, a search method is used to obtain the optimal source location estimate; in this work, an incremental parameterized search refinement scheme, i.e., the β-parameterized search method, is used. After obtaining the source location estimates, the source energies can be computed accordingly. Then, using the estimated source energy and location information, the decay factor is obtained with the normalized incremental gradient method. The details of the implementation of the algorithm can be found in [11].
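The decomposition idea can be sketched as follows for $K=2$ sources with known energies and decay factor. The equal residual splitting in the E-step and the grid-based M-step are simplifying assumptions for illustration; the SDS initialization, β-parameterized refinement and decay-factor update of [11] are omitted.

```python
import numpy as np

def em_localize(sensors, y, S, init, alpha=2.0, n_iter=15, grid=None):
    """EM-style decomposition: split the summed energies, then fit each source."""
    if grid is None:
        grid = np.linspace(0.0, 10.0, 51)
    K = len(S)
    est = np.array(init, dtype=float)
    pts = np.array([[gx, gy] for gx in grid for gy in grid])
    # Distances from every candidate grid point to every sensor, computed once
    dg = np.maximum(np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2), 1e-6)
    for _ in range(n_iter):
        # E-step: model energies per source, plus an equal share of the residual
        d = np.maximum(np.linalg.norm(sensors[:, None, :] - est[None, :, :], axis=2), 1e-6)
        R = S / d ** alpha                                   # shape (N, K)
        e = R + (y - R.sum(axis=1))[:, None] / K             # decomposed energies
        # M-step: independent single-source grid fit on each decomposed signal
        for k in range(K):
            cost = ((e[:, k][None, :] - S[k] / dg ** alpha) ** 2).sum(axis=1)
            est[k] = pts[np.argmin(cost)]
    return est

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0],
                    [5.0, 0.0], [0.0, 5.0], [10.0, 5.0], [5.0, 10.0]])
truth = np.array([[2.0, 3.0], [8.0, 7.0]])
S = np.array([100.0, 100.0])
d_true = np.linalg.norm(sensors[:, None, :] - truth[None, :, :], axis=2)
y = (S / d_true ** 2).sum(axis=1)                            # noiseless, g_i = 1
est = em_localize(sensors, y, S, init=[[2.2, 3.2], [7.8, 6.8]])
```

The key saving is that each M-step searches over a two-dimensional grid per source rather than the ${q}^{2K}$-point joint grid of the exhaustive approach.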
To the best of our knowledge, not much work has been done on the energy-based multiple source localization problem, as summarized in Table 1. The main reason is that the formulated problem is hard to solve, especially in an optimal way. The MR search-based method can be seen as a direct way to solve the problem: exhaustive search in a multiresolution manner is adopted to minimize the negative log-likelihood function in Equation (36). Clearly, this kind of search method has a high computational complexity.
Instead, the improved EM algorithm proposed in [11] first decomposes the energy that the sensors receive from the multiple sources and then applies the incremental search method to each single-source component. Since the search effort is spent on only a single source at a time, computing resources are greatly saved.
In this paper, a comprehensive review of energy-based single and multiple source localization algorithms was given. First, we gave a brief introduction to the existing work on centralized, sequential and fully-distributed algorithms for the single source localization problem. Generally speaking, most of the existing algorithms are approximate ML estimation methods, for example, the SDP methods, POCS, WLS, etc. Simulation results were provided to compare these existing methods from different viewpoints, such as computational complexity, communication issues and implementation issues. For the multiple source localization problem, the MR search method and the EM algorithm were introduced briefly. Intuitively, the MR search method works, but its computational complexity is high; the EM algorithm performs well if the initialization is good enough.
As for field experiments on energy-based approaches, not much work has actually been done in the literature. In [57], source energy is calculated from real corrupted or noisy signals collected from acoustic sources, such as a car, a helicopter and speech, and simulation experiments are then conducted using ML-based methods.
Measurement noise and the real communication channel will differ from the assumptions in this paper. To improve the localization accuracy in real field implementations, the noise model and the energy decay factor need to be considered carefully. Model uncertainty, data uncertainty and environment uncertainty are three key issues to be incorporated into the problem formulation for real implementations. The EM algorithm can perhaps be used to estimate the unknown parameters, especially the energy decay factor [58].
Time synchronization and the errors it causes may be a bigger challenge than in signal-based approaches, since for the energy-based approaches, the time duration used for averaging has to be consistent among the sensors. The processing speeds of the sensors' computing processors also differ, which makes it challenging to localize and track sources moving at higher speeds.
For future research directions, the following points are worth noting:
This work is supported by the National Natural Science Foundation of China (NSFC Grants No. 61673055 and No. 61673056).
W.M. wrote the paper. W.X. supervised the work and revised the paper.
The authors declare no conflict of interest.