Journal of Inequalities and Applications

J Inequal Appl. 2017; 2017(1): 59.
Published online 2017 March 9.
PMCID: PMC5344962

# An eigenvalue localization set for tensors and its applications

## Abstract

A new eigenvalue localization set for tensors is given and proved to be tighter than those presented by Li et al. (Linear Algebra Appl. 481:36-53, 2015) and Huang et al. (J. Inequal. Appl. 2016:254, 2016). As an application of this set, new bounds for the minimum eigenvalue of ℳ-tensors are established and proved to be sharper than some known results. Compared with the results obtained by Huang et al., the advantage of our results is that a tighter eigenvalue localization set for tensors and sharper bounds for the minimum eigenvalue of ℳ-tensors are obtained without considering the selection of nonempty proper subsets S of N = {1, 2, …, n}. Finally, numerical examples are given to verify the theoretical results.

Keywords: ℳ-tensors, nonnegative tensors, minimum eigenvalue, localization set

## Introduction

For a positive integer n, n ≥ 2, N denotes the set {1, 2, …, n}, and ℂ (respectively, ℝ) denotes the set of all complex (respectively, real) numbers. We call $\mathcal{A}=(a_{i_1\cdots i_m})$ a complex (real) tensor of order m and dimension n, denoted $\mathcal{A}\in\mathbb{C}^{[m,n]}$ ($\mathbb{R}^{[m,n]}$), if

$a_{i_1\cdots i_m}\in\mathbb{C}\ (\mathbb{R}),$

where $i_j\in N$ for j = 1, 2, …, m. 𝒜 is called reducible if there exists a nonempty proper index subset $\mathbb{J}\subset N$ such that

$a_{i_1 i_2\cdots i_m}=0,\quad \forall i_1\in\mathbb{J},\ \forall i_2,\ldots,i_m\notin\mathbb{J}.$

If 𝒜 is not reducible, then we call 𝒜 irreducible [3].

Given a tensor $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{C}^{[m,n]}$, if there are $\lambda\in\mathbb{C}$ and $x=(x_1,x_2,\ldots,x_n)^T\in\mathbb{C}^n\setminus\{0\}$ such that

$\mathcal{A}x^{m-1}=\lambda x^{[m-1]},$

then λ is called an eigenvalue of 𝒜 and x an eigenvector of 𝒜 associated with λ, where $\mathcal{A}x^{m-1}$ is an n-dimensional vector whose ith component is

$(\mathcal{A}x^{m-1})_i=\sum_{i_2,\ldots,i_m\in N}a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}$

and

$x^{[m-1]}=(x_1^{m-1},x_2^{m-1},\ldots,x_n^{m-1})^T.$
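To make these definitions concrete, here is a minimal NumPy sketch (the function names are ours, for illustration only) that evaluates $\mathcal{A}x^{m-1}$ and $x^{[m-1]}$ for a tensor stored as an m-way array:

```python
import numpy as np

def tensor_apply(A, x):
    """(A x^{m-1})_i = sum over i2,...,im of a_{i i2...im} x_{i2} ... x_{im}."""
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x  # contract the trailing index with x, m-1 times in total
    return y

def power_vec(x, m):
    """x^{[m-1]} = (x_1^{m-1}, ..., x_n^{m-1})^T, taken componentwise."""
    return x ** (m - 1)
```

With these two helpers, λ is an eigenvalue of 𝒜 with eigenvector x exactly when `tensor_apply(A, x)` equals `λ * power_vec(x, A.ndim)`.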

If λ and x are all real, then λ is called an H-eigenvalue of 𝒜 and x an H-eigenvector of 𝒜 associated with λ; see [4, 5]. Moreover, the spectral radius ρ(𝒜) of 𝒜 is defined as

ρ(𝒜) = max {|λ|:λ ∈ σ(𝒜)},

where σ(𝒜) is the spectrum of 𝒜, that is, σ(𝒜) = {λ:λ is an eigenvalue of 𝒜}; see [3, 6].

A real tensor 𝒜 is called an ℳ-tensor if there exist a nonnegative tensor ℬ and a positive number α > ρ(ℬ) such that 𝒜 = αℐ − ℬ, where ℐ is the unit tensor with entries

$\delta_{i_1\cdots i_m}=\begin{cases}1 & \text{if } i_1=\cdots=i_m,\\ 0 & \text{otherwise}.\end{cases}$

Denote by τ(𝒜) the minimal value of the real part of all eigenvalues of an ℳ-tensor 𝒜. Then τ(𝒜) > 0 is an eigenvalue of 𝒜 with a nonnegative eigenvector. If 𝒜 is irreducible, then τ(𝒜) is the unique eigenvalue with a positive eigenvector [7–9].

Recently, many people have focused on locating eigenvalues of tensors and using the resulting eigenvalue inclusion theorems to determine the positive definiteness of an even-order real symmetric tensor, or to give lower and upper bounds for the spectral radius of nonnegative tensors and the minimum eigenvalue of ℳ-tensors. For details, see [1, 2, 10–14].

In 2015, Li et al. [1] proposed the following Brauer-type eigenvalue localization set for tensors.

### Theorem 1

[1], Theorem 6

Let $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{C}^{[m,n]}$. Then

$\sigma(\mathcal{A})\subseteq\Delta(\mathcal{A})=\bigcup_{i,j\in N,\ j\neq i}\Delta_{ij}(\mathcal{A}),$

where

$\Delta_{ij}(\mathcal{A})=\bigl\{z\in\mathbb{C}: |(z-a_{i\cdots i})(z-a_{j\cdots j})-a_{ij\cdots j}a_{ji\cdots i}|\le|z-a_{j\cdots j}|\,r_{ij}(\mathcal{A})+|a_{ij\cdots j}|\,r_{ji}(\mathcal{A})\bigr\},$

$r_i(\mathcal{A})=\sum_{\delta_{ii_2\cdots i_m}=0}|a_{ii_2\cdots i_m}|,\qquad r_{ij}(\mathcal{A})=\sum_{\delta_{ii_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}|a_{ii_2\cdots i_m}|=r_i(\mathcal{A})-|a_{ij\cdots j}|.$
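For concreteness, $r_i(\mathcal{A})$ and $r_{ij}(\mathcal{A})$ reduce to simple absolute-value sums over the ith slice of the tensor. A minimal sketch, with our own helper names, assuming 𝒜 is stored as an m-way NumPy array:

```python
import numpy as np

def r(A, i):
    """r_i(A): sum of |a_{i i2...im}| over all tuples except the diagonal (i,...,i)."""
    return np.abs(A[i]).sum() - abs(A[(i,) * A.ndim])

def r_off(A, i, j):
    """r_{ij}(A) = r_i(A) - |a_{i j...j}|."""
    return r(A, i) - abs(A[(i,) + (j,) * (A.ndim - 1)])
```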

To reduce the computational cost, Huang et al. [2] presented an S-type eigenvalue localization set by breaking N into disjoint subsets S and $\bar{S}$, where $\bar{S}$ is the complement of S in N.

### Theorem 2

[2], Theorem 3.1

Let $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{C}^{[m,n]}$, S be a nonempty proper subset of N, and $\bar{S}$ be the complement of S in N. Then

$\sigma(\mathcal{A})\subseteq\Delta^{S}(\mathcal{A})=\Bigl(\bigcup_{i\in S,\ j\in\bar{S}}\Delta_{ij}(\mathcal{A})\Bigr)\cup\Bigl(\bigcup_{i\in\bar{S},\ j\in S}\Delta_{ij}(\mathcal{A})\Bigr).$

Based on Theorem 2, Huang et al. [2] obtained the following lower and upper bounds for the minimum eigenvalue of ℳ-tensors.

### Theorem 3

[2], Theorem 3.6

Let $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{R}^{[m,n]}$ be an ℳ-tensor, S be a nonempty proper subset of N, and $\bar{S}$ be the complement of S in N. Then

$\min\Bigl\{\min_{i\in S}\max_{j\in\bar{S}}L_{ij}(\mathcal{A}),\ \min_{i\in\bar{S}}\max_{j\in S}L_{ij}(\mathcal{A})\Bigr\}\le\tau(\mathcal{A})\le\max\Bigl\{\max_{i\in S}\min_{j\in\bar{S}}L_{ij}(\mathcal{A}),\ \max_{i\in\bar{S}}\min_{j\in S}L_{ij}(\mathcal{A})\Bigr\},$

where

$L_{ij}(\mathcal{A})=\frac{1}{2}\Bigl\{a_{i\cdots i}+a_{j\cdots j}-r_{ij}(\mathcal{A})-\bigl[(a_{i\cdots i}-a_{j\cdots j}-r_{ij}(\mathcal{A}))^2-4a_{ij\cdots j}r_j(\mathcal{A})\bigr]^{\frac{1}{2}}\Bigr\}.$
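The quantity $L_{ij}(\mathcal{A})$ can be evaluated mechanically from the entries of 𝒜. A sketch (the function name is ours), assuming 𝒜 is stored as an m-way NumPy array:

```python
import numpy as np

def L_ij(A, i, j):
    """L_{ij}(A) as in Theorem 3."""
    m = A.ndim
    aii, ajj = A[(i,) * m], A[(j,) * m]
    a_ijj = A[(i,) + (j,) * (m - 1)]
    r_i = np.abs(A[i]).sum() - abs(aii)   # r_i(A)
    r_j = np.abs(A[j]).sum() - abs(ajj)   # r_j(A)
    r_ij = r_i - abs(a_ijj)               # r_{ij}(A) = r_i(A) - |a_{ij...j}|
    disc = (aii - ajj - r_ij) ** 2 - 4 * a_ijj * r_j
    return 0.5 * (aii + ajj - r_ij - np.sqrt(disc))
```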

The main aim of this paper is to give a new eigenvalue inclusion set for tensors and to prove that this set is tighter than those in Theorems 1 and 2 without considering the selection of S. We then use this set to obtain new lower and upper bounds for the minimum eigenvalue of ℳ-tensors and prove that the new bounds are sharper than those in Theorem 3.

## Main results

Now, we give a new eigenvalue inclusion set for tensors and compare it with those in Theorems 1 and 2.

### Theorem 4

Let $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{C}^{[m,n]}$. Then

$\sigma(\mathcal{A})\subseteq\Delta^{\cap}(\mathcal{A})=\bigcup_{i\in N}\bigcap_{j\in N,\ j\neq i}\Delta_{ij}(\mathcal{A}).$

### Proof

For any λ ∈ σ(𝒜), let $x=(x_1,\ldots,x_n)^T\in\mathbb{C}^n\setminus\{0\}$ be an associated eigenvector, i.e.,

$\mathcal{A}x^{m-1}=\lambda x^{[m-1]}.$
1

Let $|x_p|=\max\{|x_i| : i\in N\}$. Then $|x_p|>0$. For any $j\in N$, $j\neq p$, from (1) we have

$\lambda x_p^{m-1}=\sum_{\delta_{pi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}a_{pi_2\cdots i_m}x_{i_2}\cdots x_{i_m}+a_{p\cdots p}x_p^{m-1}+a_{pj\cdots j}x_j^{m-1}$

and

$\lambda x_j^{m-1}=\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{pi_2\cdots i_m}=0}a_{ji_2\cdots i_m}x_{i_2}\cdots x_{i_m}+a_{j\cdots j}x_j^{m-1}+a_{jp\cdots p}x_p^{m-1},$

equivalently,

$(\lambda-a_{p\cdots p})x_p^{m-1}-a_{pj\cdots j}x_j^{m-1}=\sum_{\delta_{pi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}a_{pi_2\cdots i_m}x_{i_2}\cdots x_{i_m}$
2

and

$(\lambda-a_{j\cdots j})x_j^{m-1}-a_{jp\cdots p}x_p^{m-1}=\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{pi_2\cdots i_m}=0}a_{ji_2\cdots i_m}x_{i_2}\cdots x_{i_m}.$
3

Solving for $x_p^{m-1}$ from (2) and (3), we get

$\bigl((\lambda-a_{p\cdots p})(\lambda-a_{j\cdots j})-a_{pj\cdots j}a_{jp\cdots p}\bigr)x_p^{m-1}=(\lambda-a_{j\cdots j})\sum_{\delta_{pi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}a_{pi_2\cdots i_m}x_{i_2}\cdots x_{i_m}+a_{pj\cdots j}\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{pi_2\cdots i_m}=0}a_{ji_2\cdots i_m}x_{i_2}\cdots x_{i_m}.$

Taking absolute values and using the triangle inequality yields

$|(\lambda-a_{p\cdots p})(\lambda-a_{j\cdots j})-a_{pj\cdots j}a_{jp\cdots p}|\,|x_p|^{m-1}\le|\lambda-a_{j\cdots j}|\,r_{pj}(\mathcal{A})|x_p|^{m-1}+|a_{pj\cdots j}|\,r_{jp}(\mathcal{A})|x_p|^{m-1}.$

Furthermore, by |xp| > 0, we have

$|(\lambda-a_{p\cdots p})(\lambda-a_{j\cdots j})-a_{pj\cdots j}a_{jp\cdots p}|\le|\lambda-a_{j\cdots j}|\,r_{pj}(\mathcal{A})+|a_{pj\cdots j}|\,r_{jp}(\mathcal{A}),$

which implies that $\lambda\in\Delta_{pj}(\mathcal{A})$. By the arbitrariness of j, we have $\lambda\in\bigcap_{j\in N,\ j\neq p}\Delta_{pj}(\mathcal{A})$, and hence $\lambda\in\bigcup_{i\in N}\bigcap_{j\in N,\ j\neq i}\Delta_{ij}(\mathcal{A})$. The conclusion follows.
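The proof suggests a direct membership test for $\Delta^{\cap}(\mathcal{A})$: z belongs to the set precisely when, for some i, z ∈ $\Delta_{ij}(\mathcal{A})$ for every j ≠ i. A sketch (function names ours), with 𝒜 stored as an m-way NumPy array:

```python
import numpy as np

def in_Delta_ij(z, A, i, j):
    """Test z in Delta_{ij}(A), the set from Theorem 1."""
    m = A.ndim
    aii, ajj = A[(i,) * m], A[(j,) * m]
    a_ijj = A[(i,) + (j,) * (m - 1)]
    a_jii = A[(j,) + (i,) * (m - 1)]
    r_ij = np.abs(A[i]).sum() - abs(aii) - abs(a_ijj)  # r_{ij}(A)
    r_ji = np.abs(A[j]).sum() - abs(ajj) - abs(a_jii)  # r_{ji}(A)
    lhs = abs((z - aii) * (z - ajj) - a_ijj * a_jii)
    return lhs <= abs(z - ajj) * r_ij + abs(a_ijj) * r_ji

def in_Delta_cap(z, A):
    """z in Delta^cap(A) iff some row index i works for every j != i."""
    n = A.shape[0]
    return any(all(in_Delta_ij(z, A, i, j) for j in range(n) if j != i)
               for i in range(n))
```

For the unit tensor ℐ, for example, this test accepts z = 1 (its only eigenvalue) and rejects every other point.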

Next, a comparison theorem is given for Theorems 1, 2 and 4.

### Theorem 5

Let $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{C}^{[m,n]}$ and S be a nonempty proper subset of N. Then

$\Delta^{\cap}(\mathcal{A})\subseteq\Delta^{S}(\mathcal{A})\subseteq\Delta(\mathcal{A}).$

### Proof

By Theorem 3.2 in [2], $\Delta^{S}(\mathcal{A})\subseteq\Delta(\mathcal{A})$, so only $\Delta^{\cap}(\mathcal{A})\subseteq\Delta^{S}(\mathcal{A})$ needs to be proved. Let $z\in\Delta^{\cap}(\mathcal{A})$. Then there exists some $i_0\in N$ such that $z\in\Delta_{i_0 j}(\mathcal{A})$ for all $j\in N$, $j\neq i_0$. Let $\bar{S}$ be the complement of S in N. If $i_0\in S$, then taking $j\in\bar{S}$ gives $z\in\bigcup_{i\in S,\ j\in\bar{S}}\Delta_{ij}(\mathcal{A})\subseteq\Delta^{S}(\mathcal{A})$. If $i_0\in\bar{S}$, then taking $j\in S$ gives $z\in\bigcup_{i\in\bar{S},\ j\in S}\Delta_{ij}(\mathcal{A})\subseteq\Delta^{S}(\mathcal{A})$. The conclusion follows.

### Remark 1

Theorem 5 shows that the set $\Delta^{\cap}(\mathcal{A})$ in Theorem 4 is tighter than those in Theorems 1 and 2, that is, $\Delta^{\cap}(\mathcal{A})$ captures all eigenvalues of 𝒜 more precisely than $\Delta(\mathcal{A})$ and $\Delta^{S}(\mathcal{A})$.

In the following, we give new lower and upper bounds for the minimum eigenvalue of ℳ-tensors.

### Theorem 6

Let $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{R}^{[m,n]}$ be an irreducible ℳ-tensor. Then

$\min_{i\in N}\max_{j\neq i}L_{ij}(\mathcal{A})\le\tau(\mathcal{A})\le\max_{i\in N}\min_{j\neq i}L_{ij}(\mathcal{A}).$

### Proof

Let $x=(x_1,x_2,\ldots,x_n)^T$ be a positive eigenvector of 𝒜 corresponding to τ(𝒜), i.e.,

$\mathcal{A}x^{m-1}=\tau(\mathcal{A})x^{[m-1]}.$
4

(I) Let $x_q=\min\{x_i : i\in N\}$. For any $j\in N$, $j\neq q$, we have by (4) that

$\tau(\mathcal{A})x_q^{m-1}=\sum_{\delta_{qi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}a_{qi_2\cdots i_m}x_{i_2}\cdots x_{i_m}+a_{q\cdots q}x_q^{m-1}+a_{qj\cdots j}x_j^{m-1}$

and

$\tau(\mathcal{A})x_j^{m-1}=\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{qi_2\cdots i_m}=0}a_{ji_2\cdots i_m}x_{i_2}\cdots x_{i_m}+a_{j\cdots j}x_j^{m-1}+a_{jq\cdots q}x_q^{m-1},$

equivalently,

$(\tau(\mathcal{A})-a_{q\cdots q})x_q^{m-1}-a_{qj\cdots j}x_j^{m-1}=\sum_{\delta_{qi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}a_{qi_2\cdots i_m}x_{i_2}\cdots x_{i_m}$
5

and

$(\tau(\mathcal{A})-a_{j\cdots j})x_j^{m-1}-a_{jq\cdots q}x_q^{m-1}=\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{qi_2\cdots i_m}=0}a_{ji_2\cdots i_m}x_{i_2}\cdots x_{i_m}.$
6

Solving for $x_q^{m-1}$ from (5) and (6), we get

$\bigl((\tau(\mathcal{A})-a_{q\cdots q})(\tau(\mathcal{A})-a_{j\cdots j})-a_{qj\cdots j}a_{jq\cdots q}\bigr)x_q^{m-1}=(\tau(\mathcal{A})-a_{j\cdots j})\sum_{\delta_{qi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}a_{qi_2\cdots i_m}x_{i_2}\cdots x_{i_m}+a_{qj\cdots j}\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{qi_2\cdots i_m}=0}a_{ji_2\cdots i_m}x_{i_2}\cdots x_{i_m}.$

From Theorem 2.1 in [9], we have $\tau(\mathcal{A})\le\min_{i\in N}a_{i\cdots i}$, and, since all off-diagonal entries of the ℳ-tensor 𝒜 are nonpositive, the above equality can be rewritten as

$\bigl((a_{q\cdots q}-\tau(\mathcal{A}))(a_{j\cdots j}-\tau(\mathcal{A}))-|a_{qj\cdots j}||a_{jq\cdots q}|\bigr)x_q^{m-1}=(a_{j\cdots j}-\tau(\mathcal{A}))\sum_{\delta_{qi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}|a_{qi_2\cdots i_m}|x_{i_2}\cdots x_{i_m}+|a_{qj\cdots j}|\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{qi_2\cdots i_m}=0}|a_{ji_2\cdots i_m}|x_{i_2}\cdots x_{i_m}.$

Hence, since $x_{i_2}\cdots x_{i_m}\ge x_q^{m-1}$,

$\bigl((a_{q\cdots q}-\tau(\mathcal{A}))(a_{j\cdots j}-\tau(\mathcal{A}))-|a_{qj\cdots j}||a_{jq\cdots q}|\bigr)x_q^{m-1}\ge(a_{j\cdots j}-\tau(\mathcal{A}))\sum_{\delta_{qi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}|a_{qi_2\cdots i_m}|x_q^{m-1}+|a_{qj\cdots j}|\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{qi_2\cdots i_m}=0}|a_{ji_2\cdots i_m}|x_q^{m-1}.$

From xq > 0, we have

$(a_{q\cdots q}-\tau(\mathcal{A}))(a_{j\cdots j}-\tau(\mathcal{A}))-|a_{qj\cdots j}||a_{jq\cdots q}|\ge(a_{j\cdots j}-\tau(\mathcal{A}))\sum_{\delta_{qi_2\cdots i_m}=0,\ \delta_{ji_2\cdots i_m}=0}|a_{qi_2\cdots i_m}|+|a_{qj\cdots j}|\sum_{\delta_{ji_2\cdots i_m}=0,\ \delta_{qi_2\cdots i_m}=0}|a_{ji_2\cdots i_m}|=(a_{j\cdots j}-\tau(\mathcal{A}))r_{qj}(\mathcal{A})+|a_{qj\cdots j}|r_{jq}(\mathcal{A}),$

and, since $r_{jq}(\mathcal{A})+|a_{jq\cdots q}|=r_j(\mathcal{A})$,

$(a_{q\cdots q}-\tau(\mathcal{A}))(a_{j\cdots j}-\tau(\mathcal{A}))-(a_{j\cdots j}-\tau(\mathcal{A}))r_{qj}(\mathcal{A})-|a_{qj\cdots j}|r_j(\mathcal{A})\ge0,$

that is,

$\tau(\mathcal{A})^2-\bigl(a_{q\cdots q}+a_{j\cdots j}-r_{qj}(\mathcal{A})\bigr)\tau(\mathcal{A})+a_{q\cdots q}a_{j\cdots j}-a_{j\cdots j}r_{qj}(\mathcal{A})+a_{qj\cdots j}r_j(\mathcal{A})\ge0.$

Solving for τ(𝒜) gives

$\tau(\mathcal{A})\le\frac{1}{2}\Bigl\{a_{q\cdots q}+a_{j\cdots j}-r_{qj}(\mathcal{A})-\bigl[(a_{q\cdots q}-a_{j\cdots j}-r_{qj}(\mathcal{A}))^2-4a_{qj\cdots j}r_j(\mathcal{A})\bigr]^{\frac{1}{2}}\Bigr\}=L_{qj}(\mathcal{A}).$

By the arbitrariness of j, we have $\tau(\mathcal{A})\le\min_{j\neq q}L_{qj}(\mathcal{A})$. Furthermore, we have

$\tau(\mathcal{A})\le\max_{i\in N}\min_{j\neq i}L_{ij}(\mathcal{A}).$

(II) Let $x_p=\max\{x_i : i\in N\}$. Similarly to (I), we have

$\tau(\mathcal{A})\ge\min_{i\in N}\max_{j\neq i}L_{ij}(\mathcal{A}).$

The conclusion follows from (I) and (II).

Similar to the proof of Theorem 3.6 in [2], we can extend the results of Theorem 6 to a more general case.

### Theorem 7

Let $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{R}^{[m,n]}$ be an ℳ-tensor. Then

$\min_{i\in N}\max_{j\neq i}L_{ij}(\mathcal{A})\le\tau(\mathcal{A})\le\max_{i\in N}\min_{j\neq i}L_{ij}(\mathcal{A}).$
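Computationally, the bounds of Theorem 7 require only the n(n−1) scalars $L_{ij}(\mathcal{A})$. A self-contained NumPy sketch (the function name is ours), with 𝒜 stored as an m-way array:

```python
import numpy as np

def tau_bounds(A):
    """Lower and upper bounds on tau(A) from Theorem 7."""
    m, n = A.ndim, A.shape[0]

    def L(i, j):
        aii, ajj = A[(i,) * m], A[(j,) * m]
        a_ijj = A[(i,) + (j,) * (m - 1)]
        r_ij = np.abs(A[i]).sum() - abs(aii) - abs(a_ijj)  # r_{ij}(A)
        r_j = np.abs(A[j]).sum() - abs(ajj)                # r_j(A)
        disc = (aii - ajj - r_ij) ** 2 - 4 * a_ijj * r_j
        return 0.5 * (aii + ajj - r_ij - np.sqrt(disc))

    lower = min(max(L(i, j) for j in range(n) if j != i) for i in range(n))
    upper = max(min(L(i, j) for j in range(n) if j != i) for i in range(n))
    return lower, upper
```

Applied to the tensor of Example 2 below, both bounds evaluate to 4, matching τ(𝒜) = 4.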

By Theorems 3, 6 and 7, Theorem 2.1 in [9] and Theorem 4 in [13], the following comparison theorem is obtained easily.

### Theorem 8

Let $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{R}^{[m,n]}$ be an ℳ-tensor, S be a nonempty proper subset of N, and $\bar{S}$ be the complement of S in N. Then

$\min_{i\in N}R_i(\mathcal{A})\le\min_{i,j\in N,\ j\neq i}L_{ij}(\mathcal{A})\le\min\Bigl\{\min_{i\in S}\max_{j\in\bar{S}}L_{ij}(\mathcal{A}),\ \min_{i\in\bar{S}}\max_{j\in S}L_{ij}(\mathcal{A})\Bigr\}\le\min_{i\in N}\max_{j\neq i}L_{ij}(\mathcal{A})\le\max_{i\in N}\min_{j\neq i}L_{ij}(\mathcal{A})\le\max\Bigl\{\max_{i\in S}\min_{j\in\bar{S}}L_{ij}(\mathcal{A}),\ \max_{i\in\bar{S}}\min_{j\in S}L_{ij}(\mathcal{A})\Bigr\},$

where $R_i(\mathcal{A})=\sum_{i_2,\ldots,i_m\in N}a_{ii_2\cdots i_m}$.

### Remark 2

Theorem 8 shows that the bounds in Theorem 7 are sharper than those in Theorem 3, Theorem 2.1 of [9] and Theorem 4 of [13] without considering the selection of S, which is the advantage of our results.

## Numerical examples

In this section, two numerical examples are given to verify the theoretical results.

### Example 1

Let $\mathcal{A}=(a_{ijk})\in\mathbb{R}^{[3,4]}$ be an irreducible ℳ-tensor with slices defined as follows:

$\mathcal{A}(:,:,1)=\begin{pmatrix}62&-3&-4&-2\\-4&-2&-2&-1\\-3&-1&-3&-3\\-3&-3&-2&-2\end{pmatrix},\qquad \mathcal{A}(:,:,2)=\begin{pmatrix}0&-4&-3&-3\\-1&28&-2&-2\\-1&-2&-2&-4\\-2&-2&-3&-1\end{pmatrix},$

$\mathcal{A}(:,:,3)=\begin{pmatrix}-2&-1&-2&-1\\-1&-1&-1&-2\\-2&-4&63&-4\\-4&-4&-2&-2\end{pmatrix},\qquad \mathcal{A}(:,:,4)=\begin{pmatrix}-4&-2&-2&-1\\-1&-2&-3&-1\\-2&-3&-3&-2\\-2&-2&-4&61\end{pmatrix}.$

By Theorem 2.1 in [9], we have

$2=\min_{i\in N}R_i(\mathcal{A})\le\tau(\mathcal{A})\le\min\Bigl\{\max_{i\in N}R_i(\mathcal{A}),\ \min_{i\in N}a_{i\cdots i}\Bigr\}=28.$
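These numbers are easy to reproduce; reading the four slices of Example 1 row by row (our transcription of the same data):

```python
import numpy as np

# The four slices A(:,:,k) of the order-3, dimension-4 tensor, read row by row
A = np.zeros((4, 4, 4))
A[:, :, 0] = [[62, -3, -4, -2], [-4, -2, -2, -1], [-3, -1, -3, -3], [-3, -3, -2, -2]]
A[:, :, 1] = [[0, -4, -3, -3], [-1, 28, -2, -2], [-1, -2, -2, -4], [-2, -2, -3, -1]]
A[:, :, 2] = [[-2, -1, -2, -1], [-1, -1, -1, -2], [-2, -4, 63, -4], [-4, -4, -2, -2]]
A[:, :, 3] = [[-4, -2, -2, -1], [-1, -2, -3, -1], [-2, -3, -3, -2], [-2, -2, -4, 61]]

R = A.sum(axis=(1, 2))                        # R_i(A) = sum over i2, i3 of a_{i i2 i3}
diag = np.array([A[i, i, i] for i in range(4)])
lower = R.min()                               # min_i R_i(A)
upper = min(R.max(), diag.min())              # min{max_i R_i(A), min_i a_{i...i}}
```

This yields `lower = 2` and `upper = 28`, in agreement with the display above.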

By Theorem 4 in [13], we have

$\tau(\mathcal{A})\ge\min_{i,j\in N,\ j\neq i}L_{ij}(\mathcal{A})=2.3521.$

By Theorem 3, the bounds depend on the choice of S; by Theorem 8, for every nonempty proper subset S of N they are no sharper than the bounds below.

By Theorem 7, we have

3.6685 ≤ τ(𝒜) ≤ 19.7199.

In fact, τ(𝒜) = 14.4049. Hence, this example verifies Theorem 8 and Remark 2, that is, the bounds in Theorem 7 are sharper than those in Theorem 3, Theorem 2.1 of [9] and Theorem 4 of [13] without considering the selection of S.

### Example 2

Let $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[4,2]}$ be an ℳ-tensor with entries defined as follows:

$a_{1111}=6,\qquad a_{1222}=-1,\qquad a_{2111}=-2,\qquad a_{2222}=5,$

and all other $a_{ijkl}=0$. By Theorem 7, we have

4 ≤ τ(𝒜) ≤ 4.

In fact, τ(𝒜) = 4.

## Conclusions

In this paper, we give a new eigenvalue inclusion set for tensors and prove that this set is tighter than those in [1, 2]. As an application, we obtain new lower and upper bounds for the minimum eigenvalue of ℳ-tensors and prove that the new bounds are sharper than those in [2, 9, 13]. Compared with the results in [2], the advantage of our results is that, without considering the selection of S, we obtain a tighter eigenvalue localization set for tensors and sharper bounds for the minimum eigenvalue of ℳ-tensors.

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 11361074, 11501141), the Foundation of Guizhou Science and Technology Department (Grant No. [2015]2073) and the Natural Science Programs of Education Department of Guizhou Province (Grant No. [2016]066).

## Footnotes

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

## Contributor Information

Jianxing Zhao, zjx810204@163.com.

Caili Sang, sangcl@126.com.

## References

1. Li CQ, Chen Z, Li YT. A new eigenvalue inclusion set for tensors and its applications. Linear Algebra Appl. 2015;481:36–53. doi: 10.1016/j.laa.2015.04.023.
2. Huang ZG, Wang LG, Xu Z, Cui JJ. A new S-type eigenvalue inclusion set for tensors and its applications. J. Inequal. Appl. 2016;2016:254. doi: 10.1186/s13660-016-1200-3.
3. Chang KQ, Zhang T, Pearson K. Perron-Frobenius theorem for nonnegative tensors. Commun. Math. Sci. 2008;6:507–520. doi: 10.4310/CMS.2008.v6.n2.a12.
4. Qi LQ. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005;40:1302–1324. doi: 10.1016/j.jsc.2005.05.007.
5. Lim LH. Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing. 2005. Singular values and eigenvalues of tensors: a variational approach; pp. 129–132.
6. Yang YN, Yang QZ. Further results for Perron-Frobenius theorem for nonnegative tensors. SIAM J. Matrix Anal. Appl. 2010;31:2517–2530. doi: 10.1137/090778766.
7. Ding WY, Qi LQ, Wei YM. ℳ-tensors and nonsingular ℳ-tensors. Linear Algebra Appl. 2013;439:3264–3278. doi: 10.1016/j.laa.2013.08.038.
8. Zhang LP, Qi LQ, Zhou GL. ℳ-tensors and some applications. SIAM J. Matrix Anal. Appl. 2014;35:437–452. doi: 10.1137/130915339.
9. He J, Huang TZ. Inequalities for ℳ-tensors. J. Inequal. Appl. 2014;2014:114. doi: 10.1186/1029-242X-2014-114.
10. Li CQ, Li YT, Kong X. New eigenvalue inclusion sets for tensors. Numer. Linear Algebra Appl. 2014;21:39–50. doi: 10.1002/nla.1858.
11. Li CQ, Li YT. An eigenvalue localization set for tensor with applications to determine the positive (semi-)definiteness of tensors. Linear Multilinear Algebra. 2016;64(4):587–601. doi: 10.1080/03081087.2015.1049582.
12. Li CQ, Jiao AQ, Li YT. An S-type eigenvalue location set for tensors. Linear Algebra Appl. 2016;493:469–483. doi: 10.1016/j.laa.2015.12.018.
13. Zhao JX, Sang CL. Two new lower bounds for the minimum eigenvalue of ℳ-tensors. J. Inequal. Appl. 2016;2016 doi: 10.1186/s13660-016-1210-1.
14. He J. Bounds for the largest eigenvalue of nonnegative tensors. J. Comput. Anal. Appl. 2016;20(7):1290–1301.
