Front Neurorobot. 2017; 11: 45.

Published online 2017 September 1. doi: 10.3389/fnbot.2017.00045

PMCID: PMC5585159

Edited by: Shuai Li, Hong Kong Polytechnic University, Hong Kong

Reviewed by: Weibing Li, University of Leeds, United Kingdom; Yinyan Zhang, Hong Kong Polytechnic University, Hong Kong; Dechao Chen, Sun Yat-sen University, China; Ke Chen, Tampere University of Technology, Finland

*Correspondence: Lin Xiao, Email: xiaolin860728@163.com

Received 2017 May 30; Accepted 2017 August 11.

Copyright © 2017 Ding, Xiao, Liao, Lu and Peng.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

To obtain the online solution of complex-valued systems of linear equations with higher precision and a higher convergence rate, a new neural network based on the Zhang neural network (ZNN) is investigated in this paper. First, this new neural network for complex-valued systems of linear equations is proposed and theoretically proved to converge within finite time. Then, illustrative results show that the new neural network model achieves higher precision and a higher convergence rate than the gradient neural network (GNN) model and the ZNN model. Finally, the proposed method is applied to robot control via the complex-valued system of linear equations, and the simulation results verify the effectiveness and superiority of the new neural network for complex-valued systems of linear equations.

Today, complex-valued systems of linear equations have been applied in many fields (Duran-Diaz et al., 2011; Guo et al., 2011; Subramanian et al., 2014; Hezari et al., 2016; Zhang et al., 2016; Xiao et al., 2017a). In mathematics, a complex-valued system of linear equations can be written as

$$\mathit{Az}\left(t\right)=b\in {\mathbb{C}}^{n},$$

(1)

where $A\in {\mathbb{C}}^{n\times n}$ and $b\in {\mathbb{C}}^{n}$ are the complex-valued coefficients, and $z\left(t\right)\in {\mathbb{C}}^{n}$ is the complex-valued vector to be computed. Xiao et al. (2015) proposed a fully complex-valued gradient neural network (GNN) to solve such a complex-valued system of linear equations. However, the corresponding error norm usually converges to zero only after a very long time. To increase the convergence rate, a kind of neural network called the Zhang neural network (ZNN) was proposed to make the lagging error converge to 0 exponentially (Zhang and Ge, 2005; Zhang et al., 2009). However, in Xiao (2016) and Xiao et al. (2017b), Xiao pointed out that the original ZNN model cannot converge to 0 within finite time, and that its real-time computation capability may be limited (Marco et al., 2006; Li et al., 2013; Li and Li, 2014; Xiao, 2015). Therefore, Xiao (2016) presented a new design formula, which converges to 0 within finite time, for time-varying matrix inversion.

Considering that a complex variable can be written as the combination of its real and imaginary parts, we have *A*=*A*_{re}+*jA*_{im}, *b*=*b*_{re}+*jb*_{im}, and *z*(*t*)=*z*_{re}(*t*)+*jz*_{im}(*t*), where $j=\sqrt{-1}$ denotes the imaginary unit. Therefore, equation (1) can be rewritten as

$$\left[{A}_{\text{re}}+{\mathit{jA}}_{\text{im}}\right]\left[{z}_{\text{re}}\left(t\right)+{\mathit{jz}}_{\text{im}}\left(t\right)\right]={b}_{\text{re}}+{\mathit{jb}}_{\text{im}}\in {\mathbb{C}}^{n},$$

(2)

where ${A}_{\text{re}}\in {\mathbb{R}}^{n\times n}$, ${A}_{\text{im}}\in {\mathbb{R}}^{n\times n}$, ${z}_{\text{re}}\in {\mathbb{R}}^{n}$, ${z}_{\text{im}}\in {\mathbb{R}}^{n}$, ${b}_{\text{re}}\in {\mathbb{R}}^{n}$, and ${b}_{\text{im}}\in {\mathbb{R}}^{n}$. Equating the real parts and the imaginary parts on the two sides of the equation (Zhang et al., 2016), we have

$$\left\{\begin{array}{l}{A}_{\text{re}}{z}_{\text{re}}\left(t\right)-{A}_{\text{im}}{z}_{\text{im}}\left(t\right)={b}_{\text{re}}\in {\mathbb{R}}^{n},\\ {A}_{\text{im}}{z}_{\text{re}}\left(t\right)+{A}_{\text{re}}{z}_{\text{im}}\left(t\right)={b}_{\text{im}}\in {\mathbb{R}}^{n}.\end{array}\right.$$

(3)

Thus, equation (3) can be expressed in the compact matrix form

$$\left[\begin{array}{cc}{A}_{\text{re}}& -{A}_{\text{im}}\\ {A}_{\text{im}}& {A}_{\text{re}}\end{array}\right]\left[\begin{array}{c}{z}_{\text{re}}\left(t\right)\\ {z}_{\text{im}}\left(t\right)\end{array}\right]=\left[\begin{array}{c}{b}_{\text{re}}\\ {b}_{\text{im}}\end{array}\right]\in {\mathbb{R}}^{2n}.$$

(4)

We can write equation (4) as

$$\mathit{Cx}\left(t\right)=e\in {\mathbb{R}}^{2n},$$

(5)

where $C=\left[\begin{array}{cc}{A}_{\text{re}}& -{A}_{\text{im}}\\ {A}_{\text{im}}& {A}_{\text{re}}\end{array}\right]$, $x\left(t\right)=\left[\begin{array}{c}{z}_{\text{re}}\left(t\right)\\ {z}_{\text{im}}\left(t\right)\end{array}\right]$, and $e=\left[\begin{array}{c}{b}_{\text{re}}\\ {b}_{\text{im}}\end{array}\right]$. Now the complex-valued system of linear equations can be computed in the real domain. In this situation, most methods for solving real-valued systems of linear equations can be used (Zhang and Ge, 2005; Zhang et al., 2009; Guo et al., 2011). For example, a gradient neural network (GNN) can be designed to solve such a real-valued system. The GNN model can be directly presented as follows (Xiao et al., 2015):

$$\dot{x}\left(t\right)=-\mathrm{\gamma}{C}^{\text{T}}\left(\mathit{Cx}\left(t\right)-e\right),$$

(6)

where the design parameter γ>0 is employed to adjust the convergence rate of the GNN model. Zhang et al. (2016) used a recurrent neural network to solve complex-valued quadratic programming problems. Hezari et al. (2016) solved a class of complex symmetric systems of linear equations using an iterative method. However, the above-mentioned neural networks cannot converge to the desired solution within finite time. Considering that the complex-valued system of linear equations can be transformed into a real-valued one, a new neural network can be derived from the new design formula proposed by Xiao for solving the complex-valued system of linear equations (Xiao et al., 2015). In addition, the new neural network possesses a finite-time convergence property.
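The transformation of equations (4)-(5) and the GNN dynamics of equation (6) can be sketched numerically as follows; the small matrix *A*, the vector *b*, the step size, and the simulated duration are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative 2x2 complex system A z = b (not from the paper).
A = np.array([[2 + 1j, 1 + 0j],
              [1 - 1j, 3 + 0j]])
b = np.array([1 + 2j, 2 - 1j])

# Equations (4)-(5): real block system C x = e.
C = np.block([[A.real, -A.imag],
              [A.imag,  A.real]])
e = np.concatenate([b.real, b.imag])

# GNN model (6), integrated by forward Euler.
gamma, dt = 5.0, 1e-3
x = np.zeros(4)                              # zero initial state
for _ in range(5000):                        # 5 s of simulated time
    x += dt * (-gamma * C.T @ (C @ x - e))

z = x[:2] + 1j * x[2:]                       # back to complex form
print(np.linalg.norm(A @ z - b))             # residual norm
```

Because GNN model (6) only converges exponentially, the residual shrinks toward zero but never reaches it exactly in finite time.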

In recent years, research on robots has become a hot topic (Khan et al., 2016a,b; Zanchettin et al., 2016; Guo et al., 2017), and neural networks have been successfully applied in the robotics domain (He et al., 2016; Jin and Li, 2016; Woodford et al., 2016; Jin et al., 2017; Xiao, 2017). However, the application of the new design method for the complex-valued system of linear equations in robotics has not been reported. To the best of our knowledge, this is the first work to propose a neural network that converges within finite time for solving the complex-valued system of linear equations, and the first to apply it in the robotics domain.

The rest of this paper is organized into four sections. Section 2 proposes a finite-time recurrent neural network (FTRNN) to deal with the complex-valued system of linear equations and gives its convergence analysis in detail. Section 3 presents computer-simulation results to substantiate the theoretical analysis and the superiority of the proposed model. Section 4 presents the application to robotic motion tracking. Finally, conclusions are drawn in Section 5. Before ending this section, the main contributions of the current work are summarized as follows.

- The research object focuses on complex-valued systems of linear equations in the complex domain, which are quite different from the previously investigated real-valued systems of linear equations in the real domain.
- A new finite-time recurrent neural network is proposed and investigated for solving complex-valued systems of linear equations in the complex domain. In addition, it is theoretically proved to converge within finite time.
- Theoretical analyses and simulative results are presented to show the effectiveness of the proposed finite-time recurrent neural network. In addition, a five-link planar manipulator is used to validate its applicability.

Considering that the complex-valued system of linear equations can be computed in the real domain, the error function *E*(*t*) of the traditional ZNN can be defined as

$$E\left(t\right)=\mathit{Cx}\left(t\right)-e\in {\mathbb{R}}^{2n}.$$

(7)

Then, according to the design formula $\dot{E}\left(t\right)=-\mathrm{\gamma}\text{\Phi}\left(E\left(t\right)\right)$, the original ZNN model can be presented as

$$C\dot{x}\left(t\right)=-\mathrm{\gamma}\text{\Phi}\left(\mathit{Cx}\left(t\right)-e\right),$$

(8)

where Φ(·) denotes an activation-function array, and γ>0 is used to adjust the convergence rate. In this paper, the new design formula of Xiao (2016) for *E*(*t*) is directly employed and written as follows:

$$\frac{\text{d}E\left(t\right)}{\text{d}t}=-\mathrm{\gamma}\text{\Phi}\left({\rho}_{1}E\left(t\right)+{\rho}_{2}{E}^{j/f}\left(t\right)\right),$$

(9)

where the design parameters satisfy *ρ*_{1}>0 and *ρ*_{2}>0, and *f* and *j* are positive odd integers satisfying *f* >*j*. Then we have

$$C\dot{x}\left(t\right)=-\mathrm{\gamma}\text{\Phi}\left({\rho}_{1}\left(\mathit{Cx}\left(t\right)-e\right)+{\rho}_{2}{\left(\mathit{Cx}\left(t\right)-e\right)}^{j/f}\right).$$

(10)

To simplify the formula, the linear activation function is adopted for Φ(·). Then we have

$$\frac{\text{d}E\left(t\right)}{\text{d}t}=-\mathrm{\gamma}\left({\rho}_{1}E\left(t\right)+{\rho}_{2}{E}^{j/f}\left(t\right)\right),$$

(11)

and

$$C\dot{x}\left(t\right)=-\mathrm{\gamma}\left({\rho}_{1}\left(\mathit{Cx}\left(t\right)-e\right)+{\rho}_{2}{\left(\mathit{Cx}\left(t\right)-e\right)}^{j/f}\right),$$

(12)

which is called the finite-time recurrent neural network (FTRNN) model for solving the complex-valued system of linear equations online. In addition, for design formula (11) and FTRNN model (12), we have the following two theorems that establish their finite-time convergence properties.
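FTRNN model (12) can be sketched with a simple forward-Euler integration; the constant coefficient matrix *C*, the vector *e*, all gains, and the helper name `frac_pow` are illustrative assumptions, not the paper's settings. The fractional power *E^{j/f}* with odd *j* and *f* preserves the sign of each element, which is realized here as sign(*E*)·|*E*|^{j/f}:

```python
import numpy as np

def frac_pow(v, j=1, f=5):
    # element-wise signed fractional power, valid for odd j and f
    return np.sign(v) * np.abs(v) ** (j / f)

# Illustrative constant system C x = e (not from the paper).
C = np.array([[2.0, 1.0],
              [1.0, 3.0]])
e = np.array([1.0, 2.0])
gamma, rho1, rho2, dt = 5.0, 1.0, 1.0, 1e-4

Cinv = np.linalg.inv(C)          # C is constant and nonsingular here
x = np.zeros(2)                  # zero initial state
for _ in range(50000):           # 5 s of simulated time
    E = C @ x - e
    # FTRNN model (12): C x_dot = -gamma*(rho1*E + rho2*E^(j/f))
    x += dt * (Cinv @ (-gamma * (rho1 * E + rho2 * frac_pow(E))))

print(np.linalg.norm(C @ x - e))
```

With forward Euler, the residual chatters at a small level set by the step size rather than reaching exactly zero; the continuous-time model converges in finite time as Theorem 1 below states.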

**Theorem 1.**
*The error function E*(*t*) *of design formula (11) converges to zero within finite time t _{u} regardless of its randomly generated initial error E*(0),

$${t}_{u}=\frac{f}{\mathrm{\gamma}{\rho}_{1}\left(f-j\right)}\mathrm{ln}\frac{{\rho}_{1}{h}_{M}{\left(0\right)}^{\left(f-j\right)/f}+{\rho}_{2}}{{\rho}_{2}},$$

*where h _{M}*(0) *denotes the largest element of the initial error E*(0).

Proof. For design formula (11), we have

$$\frac{\text{d}E\left(t\right)}{\text{d}t}=-\left(\mathrm{\gamma}{\rho}_{1}E\left(t\right)+\mathrm{\gamma}{\rho}_{2}{E}^{j/f}\left(t\right)\right).$$

(13)

To analyze the dynamic response of equation (13), the above differential equation can be rewritten as

$${E}^{-j/f}\left(t\right)\diamond\frac{\text{d}E\left(t\right)}{\text{d}t}+\mathrm{\gamma}{\rho}_{1}{E}^{\left(f-j\right)/f}\left(t\right)=-\mathrm{\gamma}{\rho}_{2},$$

(14)

where the operator $\diamond$ denotes the Hadamard (element-wise) product, defined as

$$W\diamond S=\left[\begin{array}{cccc}{W}_{11}{S}_{11}& {W}_{12}{S}_{12}& \cdots & {W}_{1n}{S}_{1n}\\ {W}_{21}{S}_{21}& {W}_{22}{S}_{22}& \cdots & {W}_{2n}{S}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ {W}_{m1}{S}_{m1}& {W}_{m2}{S}_{m2}& \cdots & {W}_{\mathit{mn}}{S}_{\mathit{mn}}\end{array}\right]\in {\mathbb{R}}^{m\times n}.$$

Now let us define $Y\left(t\right)={E}^{\left(f-j\right)/f}\left(t\right)$, and we have

$$\frac{\text{d}Y\left(t\right)}{\text{d}t}=\frac{f-j}{f}{E}^{-j/f}\left(t\right)\diamond\frac{\text{d}E\left(t\right)}{\text{d}t}.$$

Thus, the differential equation (14) is equivalent to the following first-order differential equation:

$$\frac{\text{d}Y\left(t\right)}{\text{d}t}+\frac{f-j}{f}\mathrm{\gamma}{\rho}_{1}Y\left(t\right)=-\frac{f-j}{f}\mathrm{\gamma}{\rho}_{2}I.$$

(15)

This is a typical first-order linear differential equation, whose solution is

$$Y\left(t\right)=\left(\frac{{\rho}_{2}}{{\rho}_{1}}I+Y\left(0\right)\right)\mathrm{exp}\left(-\frac{f-j}{f}\mathrm{\gamma}{\rho}_{1}t\right)-\frac{{\rho}_{2}}{{\rho}_{1}}I.$$

(16)

So we have

$${E}^{\left(f-j\right)/f}\left(t\right)=\left(\frac{{\rho}_{2}}{{\rho}_{1}}I+{E}^{\left(f-j\right)/f}\left(0\right)\right)\mathrm{exp}\left(-\frac{f-j}{f}\mathrm{\gamma}{\rho}_{1}t\right)-\frac{{\rho}_{2}}{{\rho}_{1}}I,$$

(17)

and

$$E\left(t\right)={\left[\left(\frac{{\rho}_{2}}{{\rho}_{1}}I+{E}^{\left(f-j\right)/f}\left(0\right)\right)\mathrm{exp}\left(-\frac{f-j}{f}\mathrm{\gamma}{\rho}_{1}t\right)-\frac{{\rho}_{2}}{{\rho}_{1}}I\right]}^{f/\left(f-j\right)}.$$

(18)

From equation (18), we can see that the error *E*(*t*) converges to 0 at the time *t _{u}*, which satisfies

$$\left(\frac{{\rho}_{2}}{{\rho}_{1}}I+{E}^{\left(f-j\right)/f}\left(0\right)\right)\mathrm{exp}\left(-\frac{f-j}{f}\mathrm{\gamma}{\rho}_{1}{t}_{u}\right)-\frac{{\rho}_{2}}{{\rho}_{1}}I=0.$$

(19)

Considering that each element of *E*(*t*) has the same dynamics, the convergence time of the *ik*th element is

$${t}_{\mathit{ik}}=\frac{f}{\mathrm{\gamma}{\rho}_{1}\left(f-j\right)}\mathrm{ln}\frac{{\rho}_{1}{h}_{\mathit{ik}}^{\left(f-j\right)/f}\left(0\right)+{\rho}_{2}}{{\rho}_{2}},$$

(20)

where *h _{ik}*(0) denotes the *ik*th element of the initial error *E*(0). Defining *h _{M}*(0)=max(*h _{ik}*(0)), the overall convergence time is thus bounded by

$${t}_{u}=\frac{f}{\mathrm{\gamma}{\rho}_{1}\left(f-j\right)}\mathrm{ln}\frac{{\rho}_{1}{h}_{M}{\left(0\right)}^{\left(f-j\right)/f}+{\rho}_{2}}{{\rho}_{2}}.$$

According to the above analysis, we can conclude that the error *E*(*t*) converges to 0 within the finite time *t _{u}* regardless of its initial value *E*(0). The proof is thus completed. □
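The bound of Theorem 1 can be checked numerically in the scalar case; the parameter values and the stopping threshold below are illustrative assumptions:

```python
import numpy as np

gamma, rho1, rho2, f, j = 1.0, 1.0, 1.0, 5, 1
h0 = 2.0                          # scalar initial error E(0)

# Upper bound t_u from Theorem 1.
t_u = f / (gamma * rho1 * (f - j)) * np.log(
    (rho1 * h0 ** ((f - j) / f) + rho2) / rho2)

# Integrate the scalar design formula (11) until the error numerically vanishes.
E, t, dt = h0, 0.0, 1e-5
while E > 1e-6:
    E += dt * (-gamma * (rho1 * E + rho2 * E ** (j / f)))
    t += dt

print(t, t_u)                     # measured convergence time vs. bound
```

For a scalar error, the bound is tight: the measured time essentially coincides with *t _{u}*, and for a vector error every element vanishes no later than *t _{u}*.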

**Theorem 2.**
*The state vector x*(*t*) *of FTRNN model (12) converges to the theoretical solution of (5) within finite time t _{u} regardless of its randomly generated initial state x*(0),

$${t}_{u}\in \left[\frac{f}{\mathrm{\gamma}{\rho}_{1}\left(f-j\right)}\mathrm{ln}\frac{{\rho}_{1}{h}_{L}{\left(0\right)}^{\left(f-j\right)/f}+{\rho}_{2}}{{\rho}_{2}},\;\frac{f}{\mathrm{\gamma}{\rho}_{1}\left(f-j\right)}\mathrm{ln}\frac{{\rho}_{1}{h}_{M}{\left(0\right)}^{\left(f-j\right)/f}+{\rho}_{2}}{{\rho}_{2}}\right],$$

*where h _{L}*(0) *and h _{M}*(0) *denote the smallest and largest elements of the initial error, respectively.*

Proof. Let *x*_{(}_{FT}_{)}(*t*) represent the solution of the FTRNN model (12), *x*_{(}_{org}_{)}(*t*) represent the theoretical solution of the equation (5), and $\tilde{x}\left(t\right)$ represent the difference between *x*_{(}_{FT}_{)}(*t*) and *x*_{(}_{org}_{)}(*t*). Then, we can obtain

$$\tilde{x}(t)={x}_{(\mathit{FT})}(t)-{x}_{(\mathit{org})}(t)\in {\mathbb{R}}^{2n}.$$

(21)

Equation (21) can be written as

$${x}_{(\mathit{FT})}(t)=\tilde{x}(t)+{x}_{(\mathit{org})}(t)\in {\mathbb{R}}^{2n}.$$

(22)

Substituting the above equation into FTRNN model (12), we have

$$C(\dot{\tilde{x}}(t)+{\dot{x}}_{(\mathit{org})}(t))=-\gamma \left({\rho}_{1}(C(\tilde{x}(t)+{x}_{(\mathit{org})}(t))-e)+{\rho}_{2}{(C(\tilde{x}(t)+{x}_{(\mathit{org})}(t))-e)}^{j/f}\right).$$

(23)

Considering that *Cx*_{(}_{org}_{)}(*t*)−*e*=0 and $C{\dot{x}}_{\left(\mathit{org}\right)}\left(t\right)=0$, the above equation can be simplified to

$$C\dot{\tilde{x}}\left(t\right)=-\mathrm{\gamma}\left({\rho}_{1}C\tilde{x}\left(t\right)+{\rho}_{2}{\left(C\tilde{x}\left(t\right)\right)}^{j/f}\right).$$

Furthermore, considering $E\left(t\right)=C\left(\tilde{x}\left(t\right)+{x}_{\left(\mathit{org}\right)}\left(t\right)\right)-e$ and *Cx*_{(}_{org}_{)}(*t*)−*e*=0, we have $E\left(t\right)=C\tilde{x}\left(t\right)$. Writing $\tilde{E}\left(t\right)=C\tilde{x}\left(t\right)$, the above differential equation becomes

$$\frac{\text{d}\tilde{E}\left(t\right)}{\text{d}t}=-\mathrm{\gamma}\left({\rho}_{1}\tilde{E}\left(t\right)+{\rho}_{2}{\tilde{E}}^{j/f}\left(t\right)\right).$$

(24)

Then, following the same derivation as for equation (20), we have

$${\tilde{t}}_{\mathit{ik}}=\frac{f}{\mathrm{\gamma}{\rho}_{1}\left(f-j\right)}\mathrm{ln}\frac{{\rho}_{1}{\tilde{h}}_{\mathit{ik}}^{\left(f-j\right)/f}\left(0\right)+{\rho}_{2}}{{\rho}_{2}},$$

(25)

where ${\tilde{t}}_{\mathit{ik}}$ denotes the upper bound of the convergence time of the *ik*th element of $\tilde{E}\left(t\right)$, and ${\tilde{h}}_{\mathit{ik}}\left(0\right)$ denotes the *ik*th element of the initial error $\tilde{E}\left(0\right)$.

Let us define ${\tilde{h}}_{M}=\text{max}\left({\tilde{h}}_{\mathit{ik}}\left(0\right)\right)$ and ${\tilde{h}}_{L}=\text{min}\left({\tilde{h}}_{\mathit{ik}}\left(0\right)\right)$ with *i, k*=1, 2, …, *n*. Then, for all possible *i* and *k*, we have

$$\frac{f}{\mathrm{\gamma}{\rho}_{1}\left(f-j\right)}\mathrm{ln}\frac{{\rho}_{1}{\tilde{h}}_{L}^{\left(f-j\right)/f}\left(0\right)+{\rho}_{2}}{{\rho}_{2}}\leqslant {\tilde{t}}_{\mathit{ik}}\leqslant \frac{f}{\mathrm{\gamma}{\rho}_{1}\left(f-j\right)}\mathrm{ln}\frac{{\rho}_{1}{\tilde{h}}_{M}^{\left(f-j\right)/f}\left(0\right)+{\rho}_{2}}{{\rho}_{2}}.$$

The above inequality shows that $\tilde{x}\left(t\right)={x}_{\left(\mathit{FT}\right)}\left(t\right)-{x}_{\left(\mathit{org}\right)}\left(t\right)$ converges to 0 within finite time regardless of its initial value. In other words, the state *x*_{(FT)}(*t*) of FTRNN model (12) converges to the theoretical solution *x*_{(}_{org}_{)}(*t*) of model (5) within finite time regardless of its randomly generated initial state *x*(0). The proof is thus completed. □

In this section, a numerical example is carried out to show the superiority of FTRNN model (12) over GNN model (6) and ZNN model (8). The design parameters *f* and *j* can be any positive odd integers satisfying *f* >*j*; in this paper, we choose *f* =5 and *j*=1. In addition, to substantiate the superiority of FTRNN model (12), we choose the same complex-valued matrix *A* and vector *b* as those of Xiao et al. (2015). Then we have

$$A=\left[\begin{array}{cccc}-0.7597+0.6503j& -0.8391-0.5440j& 0.2837-0.9589j& 1\\ 0.7597+0.6503j& -0.8391+0.5440j& -0.2837-0.9589j& 1\\ 0.7597-0.6503j& -0.8391-0.5440j& -0.2837+0.9589j& 1\\ 0-1.0000j& -1.0000& 0+1.0000j& 1\end{array}\right].$$

So we have

$${A}_{\text{re}}=\left[\begin{array}{cccc}-0.7597& -0.8391& 0.2837& 1\\ 0.7597& -0.8391& -0.2837& 1\\ 0.7597& -0.8391& -0.2837& 1\\ 0& -1.0000& 0& 1\end{array}\right],$$

and

$${A}_{\text{im}}=\left[\begin{array}{cccc}0.6503& -0.5440& -0.9589& 0\\ 0.6503& 0.5440& -0.9589& 0\\ -0.6503& -0.5440& 0.9589& 0\\ -1.0000& 0& 1.0000& 0\end{array}\right].$$

Now the randomly generated vector *b*=[1.0000, 0.2837+0.9589*j*, 0.2837−0.9589*j*, 0]^{T} in Xiao et al. (2015) is employed in this paper. The theoretical solution of the complex-valued system of linear equations is *z*_{(}_{org}_{)}=[−0.4683−0.2545*j*, 1.2425+0.3239*j*, −0.6126+0.0112*j*, 1.5082+0.4683*j*]^{T}. Then, according to equation (5), we have

$$C=\left[\begin{array}{cccccccc}-0.7597& -0.8391& 0.2837& 1& -0.6503& 0.5440& 0.9589& 0\\ 0.7597& -0.8391& -0.2837& 1& -0.6503& -0.5440& 0.9589& 0\\ 0.7597& -0.8391& -0.2837& 1& 0.6503& 0.5440& -0.9589& 0\\ 0& -1.0000& 0& 1& 1.0000& 0& -1.0000& 0\\ 0.6503& -0.5440& -0.9589& 0& -0.7597& -0.8391& 0.2837& 1\\ 0.6503& 0.5440& -0.9589& 0& 0.7597& -0.8391& -0.2837& 1\\ -0.6503& -0.5440& 0.9589& 0& 0.7597& -0.8391& -0.2837& 1\\ -1.0000& 0& 1.0000& 0& 0& -1.0000& 0& 1\end{array}\right],$$

and *e*=[1.0000, 0.2837, 0.2837, 0, 0, 0.9589, −0.9589, 0]^{T}. The theoretical solution can thus be rewritten in real form as *x*_{(}_{org}_{)}=[−0.4683, 1.2425, −0.6126, 1.5082, −0.2545, 0.3239, 0.0112, 0.4683]^{T}.
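As a sketch, the example can be reproduced by assembling *C* and *e* from the given *A* and *b* via equations (4)-(5) and solving *Cx*=*e* with a direct solver as a reference; using NumPy here is our own choice, not the paper's simulation setup:

```python
import numpy as np

# The matrix A and vector b stated in the text.
A = np.array([
    [-0.7597 + 0.6503j, -0.8391 - 0.5440j,  0.2837 - 0.9589j, 1],
    [ 0.7597 + 0.6503j, -0.8391 + 0.5440j, -0.2837 - 0.9589j, 1],
    [ 0.7597 - 0.6503j, -0.8391 - 0.5440j, -0.2837 + 0.9589j, 1],
    [         -1.0000j, -1.0000,                     1.0000j, 1]])
b = np.array([1.0000, 0.2837 + 0.9589j, 0.2837 - 0.9589j, 0])

# Equations (4)-(5): real 8x8 block system C x = e.
C = np.block([[A.real, -A.imag],
              [A.imag,  A.real]])
e = np.concatenate([b.real, b.imag])

x = np.linalg.solve(C, e)          # direct reference solution of (5)
z = x[:4] + 1j * x[4:]             # back to complex form
print(np.round(z, 4))              # compare with z_(org) quoted in the text
```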

First, a zero initial complex-valued state $z\left(0\right)\in {\mathbb{C}}^{4}$ is generated, which is transformed into the real-valued state $x\left(0\right)\in {\mathbb{R}}^{8}$ in the real domain. To facilitate comparison, we choose the design parameter γ=5 and γ=500, respectively.

Now GNN model (6), ZNN model (8), and FTRNN model (12) are applied to solve this complex-valued system of linear equations. The output trajectories of the corresponding neural-state solutions are displayed in Figures 1–3. As seen from these three figures, the output trajectories of the neural-state solutions all converge to the theoretical solution, but at different rates. By comparison, FTRNN model (12) clearly has the fastest convergence.

Output trajectories of neural states *x*(*t*) synthesized by GNN model (6) with γ=5. **(A)** Elements of the real part of *x*(*t*), **(B)** elements of the imaginary part of *x*(*t*).

Output trajectories of neural states *x*(*t*) synthesized by FTRNN model (12) with γ=5. **(A)** Elements of the real part of *x*(*t*), **(B)** elements of the imaginary part of *x*(*t*).

Output trajectories of neural states *x*(*t*) synthesized by ZNN model (8) with γ=5. **(A)** Elements of the real part of *x*(*t*), **(B)** elements of the imaginary part of *x*(*t*).

To directly show the solution process of these three neural-network models, the evolution of the corresponding residual errors, measured by the norm ||*E*(*t*)||_{2}, is plotted in Figure 4 under γ=5 and γ=500. The results in Figure 4A are consistent with those of Figures 1–3. In addition, Figure 4B shows that the convergence speeds of GNN model (6), ZNN model (8), and FTRNN model (12) all improve as the value of γ increases.

Output trajectories of residual functions ||*E*(*t*)||_{2} synthesized by different neural-network models with **(A)** γ=5 and **(B)** γ=500.

We can now conclude that, compared with GNN model (6) and ZNN model (8), FTRNN model (12) is the most effective for solving the complex-valued system of linear equations.

In this section, a five-link planar manipulator is used to validate the applicability of the finite-time recurrent neural network (FTRNN) (Zhang et al., 2011). It is well known that the kinematics equations of the five-link planar manipulator at the position level and at the velocity level are, respectively, written as follows (Xiao and Zhang, 2013, 2014a,b, 2016; Xiao et al., 2017c):

$$r\left(t\right)=f\left(\theta \left(t\right)\right)$$

(26)

$$\dot{r}\left(t\right)=J\left(\theta \right)\dot{\theta}\left(t\right)$$

(27)

where *θ* denotes the joint-angle vector of the five-link planar manipulator, *r*(*t*) denotes the end-effector position vector, *f* (·) stands for a smooth non-linear mapping function, and $J\left(\theta \right)=\partial f\left(\theta \right)/\partial \theta \in {\mathbb{R}}^{m\times n}$ denotes the Jacobian matrix.

To realize the motion tracking of this five-link planar manipulator, the inverse kinematics equation has to be solved. Specifically, equation (27) can be seen as a system of linear equations when the end-effector motion tracking task is allocated [i.e., $\dot{r}(t)$ is known and $\dot{\theta}(t)$ needs to be solved]. Thus, we can use the proposed FTRNN model (12) to solve this system of linear equations. Then, based on the design process of FTRNN model (12), we obtain the following dynamic model for tracking control of the five-link planar manipulator [based on the formulation of equation (27)]:

$$C\dot{x}\left(t\right)=-\mathrm{\gamma}\left({\rho}_{1}\left(\mathit{Cx}\left(t\right)-e\right)+{\rho}_{2}{\left(\mathit{Cx}\left(t\right)-e\right)}^{j\u2215f}\right),$$

where *C*=*J*, $x=\dot{\theta}$, and $e=\dot{r}\left(t\right)$.

In the simulation experiment, a square path (with radius 1 m) is allocated for the five-link planar manipulator to track. Besides, the initial state of the manipulator is set as *θ*(0)=[*π*/4, *π*/4, *π*/4, *π*/4, *π*/4]^{T}, γ=10^{3}, and the task duration is 20 s. The experimental results are shown in Figures 5 and 6. From the results shown in these two figures, we can see that the five-link planar manipulator completes the square-path tracking task successfully.
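The velocity-level tracking scheme of equation (27) can be sketched as follows. As a simplified stand-in for FTRNN model (12), each step here solves *J*(*θ*)*θ̇*=*ṙ* with the Moore–Penrose pseudoinverse, since *J* is 2×5 for this planar arm while model (12) assumes a square coefficient matrix. The link lengths, the circular reference path, the step size, and the helper names `fkine`/`jacobian` are all illustrative assumptions, not the paper's experiment settings:

```python
import numpy as np

L = np.array([1.0, 0.8, 0.6, 0.5, 0.4])      # five link lengths (assumed)

def fkine(theta):
    # planar forward kinematics: end-effector position r = f(theta)
    phi = np.cumsum(theta)                   # absolute link angles
    return np.array([np.sum(L * np.cos(phi)), np.sum(L * np.sin(phi))])

def jacobian(theta):
    # 2x5 Jacobian: dx/dtheta_i = -sum_{k>=i} L_k sin(phi_k), dy analogous
    phi = np.cumsum(theta)
    Jx = np.array([-np.sum(L[i:] * np.sin(phi[i:])) for i in range(5)])
    Jy = np.array([ np.sum(L[i:] * np.cos(phi[i:])) for i in range(5)])
    return np.vstack([Jx, Jy])

theta = np.full(5, np.pi / 4)                # theta(0) = [pi/4, ..., pi/4]
p0 = fkine(theta)
dt, T, R = 1e-3, 5.0, 0.5                    # circle of radius 0.5 (assumed)
omega = 2 * np.pi / T
for k in range(int(T / dt)):
    t = k * dt
    # desired end-effector velocity along the circular path
    r_dot = R * omega * np.array([-np.sin(omega * t), np.cos(omega * t)])
    # least-norm solution of J(theta) theta_dot = r_dot
    theta += dt * (np.linalg.pinv(jacobian(theta)) @ r_dot)

err = np.linalg.norm(fkine(theta) - p0)      # end effector returns after one period
print(err)
```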

Simulative results synthesized by FTRNN model (12) when the end-effector of the five-link planar manipulator tracks the square path. **(A)** Motion trajectories of the manipulator, **(B)** actual and desired paths, **(C)** position error, **(D)** velocity error.

In this paper, a finite-time recurrent neural network (FTRNN) for complex-valued systems of linear equations in the complex domain has been proposed and investigated. To the best of our knowledge, this is the first neural network model that converges within finite time while solving the complex-valued system of linear equations online, and the first application of such an FTRNN model to robotic path tracking via the solution of a system of linear equations. The simulation experiments show that the proposed FTRNN model is more effective than the GNN model and the ZNN model for the complex-valued system of linear equations in the complex domain.

LD: experiment preparation, publication writing; LX: experiment preparation, data processing, publication writing; BL: technical support, data acquisition, publication review; RL: supervision of data processing, publication review; HP: manuscript revision.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer, YZ, and handling editor declared their shared affiliation.

**Funding.** This work was supported by the National Natural Science Foundation of China under grants 61503152, 61363073, 61563017, 61662025, and 61561022, the Natural Science Foundation of Hunan Province, China under grants 2016JJ2101 and 2017JJ3258, and the Research Foundation of Jishou University, China under grants 2017JSUJD031, 2015SYJG034, JGY201643, and JG201615.

- Duran-Diaz I., Cruces S., Sarmiento-Vega M. A., Aguilera-Bonet P. (2011). Cyclic maximization of non-gaussianity for blind signal extraction of complex-valued sources. Neurocomputing 74, 2867–2873. doi: 10.1016/j.neucom.2011.03.031
- Guo D., Nie Z., Yan L. (2017). The application of noise-tolerant ZD design formula to robots’ kinematic control via time-varying nonlinear equations solving. IEEE Trans. Syst. Man Cybern. Syst. doi: 10.1109/TSMC.2017.2705160
- Guo D., Yi C., Zhang Y. (2011). Zhang neural network versus gradient-based neural network for time-varying linear matrix equation solving. Neurocomputing 74, 3708–3712. doi: 10.1016/j.neucom.2011.05.021
- He W., Chen Y., Yin Z. (2016). Adaptive neural network control of an uncertain robot with full-state constraints. IEEE Trans. Cybern. 46, 620–629. doi: 10.1109/TCYB.2015.2411285
- Hezari D., Salkuyeh D. K., Edalatpour V. (2016). A new iterative method for solving a class of complex symmetric system of linear equations. Numer. Algorithms 73, 1–29. doi: 10.1007/s11075-016-0123-x
- Jin L., Li S. (2016). Distributed task allocation of multiple robots: a control perspective. IEEE Trans. Syst. Man Cybern. Syst. doi: 10.1109/TSMC.2016.2627579
- Jin L., Li S., Xiao L., Lu R., Liao B. (2017). Cooperative motion generation in a distributed network of redundant robot manipulators with noises. IEEE Trans. Syst. Man Cybern. Syst. doi: 10.1109/TSMC.2017.2693400
- Khan M., Li S., Wang Q., Shao Z. (2016a). Formation control and tracking for co-operative robots with non-holonomic constraints. J. Intell. Robot. Syst. 82, 163–174. doi: 10.1007/s10846-015-0287-y
- Khan M., Li S., Wang Q., Shao Z. (2016b). CPS oriented control design for networked surveillance robots with multiple physical constraints. IEEE Trans. Comput. Aided Des. Integr. Circuit Syst. 35, 778–791. doi: 10.1109/TCAD.2016.2524653
- Li S., Chen S., Liu B. (2013). Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function. Neural Process. Lett. 37, 189–205. doi: 10.1007/s11063-012-9241-1
- Li S., Li Y. (2014). Nonlinearly activated neural network for solving time-varying complex Sylvester equation. IEEE Trans. Cybern. 44, 1397–1407. doi: 10.1109/TCYB.2013.2285166
- Marco M., Forti M., Grazzini M. (2006). Robustness of convergence in finite time for linear programming neural networks. Int. J. Circuit Theory Appl. 34, 307–316. doi: 10.1002/cta.352
- Subramanian K., Savitha R., Suresh S. (2014). A complex-valued neuro-fuzzy inference system and its learning mechanism. Neurocomputing 123, 110–120. doi: 10.1016/j.neucom.2013.06.009
- Woodford G. W., Pretorius C. J., Plessis M. C. D. (2016). Concurrent controller and simulator neural network development for a differentially-steered robot in evolutionary robotics. Rob. Auton. Syst. 76, 80–92. doi: 10.1016/j.robot.2015.10.011
- Xiao L. (2015). A finite-time convergent neural dynamics for online solution of time-varying linear complex matrix equation. Neurocomputing 167, 254–259. doi: 10.1016/j.neucom.2015.04.070
- Xiao L. (2016). A new design formula exploited for accelerating Zhang neural network and its application to time-varying matrix inversion. Theor. Comput. Sci. 647, 50–58. doi: 10.1016/j.tcs.2016.07.024
- Xiao L. (2017). Accelerating a recurrent neural network to finite-time convergence using a new design formula and its application to time-varying matrix square root. J. Franklin Inst. 354, 5667–5677. doi: 10.1016/j.jfranklin.2017.06.012
- Xiao L., Liao B., Zeng Q., Ding L., Lu R. (2017a). “A complex gradient neural dynamics for fast complex matrix inversion,” in International Symposium on Neural Networks (Springer), 521–528.
- Xiao L., Liao B., Jin J., Lu R., Yang X., Ding L. (2017b). A finite-time convergent dynamic system for solving online simultaneous linear equations. Int. J. Comput. Math. 94, 1778–1786. doi: 10.1080/00207160.2016.1247436
- Xiao L., Liao B., Li S., Zhang Z., Ding L., Jin L. (2017c). Design and analysis of FTZNN applied to real-time solution of nonstationary Lyapunov equation and tracking control of wheeled mobile manipulator. IEEE Trans. Ind. Inf. doi: 10.1109/TII.2017.2717020
- Xiao L., Meng W. W., Lu R. B., Yang X., Liao B., Ding L. (2015). “A fully complex-valued neural network for rapid solution of complex-valued systems of linear equations,” in International Symposium on Neural Networks 2015, Lecture Notes in Computer Science, Vol. 9377, 444–451.
- Xiao L., Zhang Y. (2013). Acceleration-level repetitive motion planning and its experimental verification on a six-link planar robot manipulator. IEEE Trans. Control Syst. Technol. 21, 906–914. doi: 10.1109/TCST.2012.2190142
- Xiao L., Zhang Y. (2014a). Solving time-varying inverse kinematics problem of wheeled mobile manipulators using Zhang neural network with exponential convergence. Nonlinear Dyn. 76, 1543–1559. doi: 10.1007/s11071-013-1227-7
- Xiao L., Zhang Y. (2014b). A new performance index for the repetitive motion of mobile manipulators. IEEE Trans. Cybern. 44, 280–292. doi: 10.1109/TCYB.2013.2253461
- Xiao L., Zhang Y. (2016). Dynamic design, numerical solution and effective verification of acceleration-level obstacle-avoidance scheme for robot manipulators. Int. J. Syst. Sci. 47, 932–945. doi: 10.1080/00207721.2014.909971
- Zanchettin A. M., Ceriani N. M., Rocco P., Ding H., Matthias B. (2016). Safety in human-robot collaborative manufacturing environments: metrics and control. IEEE Trans. Autom. Sci. Eng. 13, 882–893. doi: 10.1109/TASE.2015.2412256
- Zhang Y., Ge S. S. (2005). Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Trans. Neural Netw. 16, 1447–1490. doi: 10.1109/TNN.2005.857946
- Zhang Y., Shi Y., Chen K., Wang C. (2009). Global exponential convergence and stability of gradient-based neural network for online matrix inversion. Appl. Math. Comput. 215, 1301–1306. doi: 10.1016/j.amc.2009.06.048
- Zhang Y., Xiao L., Xiao Z., Mao M. (2016). Zeroing Dynamics, Gradient Dynamics, and Newton Iterations. Boca Raton: CRC Press.
- Zhang Y., Yang Y., Tan N., Cai B. (2011). Zhang neural network solving for time-varying full-rank matrix Moore-Penrose inverse. Computing 92, 97–121. doi: 10.1007/s00607-010-0133-9

Articles from Frontiers in Neurorobotics are provided here courtesy of **Frontiers Media SA**
