


Comput Intell Neurosci. 2017; 2017: 8942394.

Published online 2017 July 6. doi: 10.1155/2017/8942394

PMCID: PMC5518498

Research Unit of Industrial Systems Study and Renewable Energy (ESIER), National Engineering School of Monastir (ENIM), 5019 Monastir, Tunisia

*Mourad Turki: Email: mouradessturki@yahoo.fr

Academic Editor: Paolo Gastaldo

Received 2017 March 18; Revised 2017 May 15; Accepted 2017 May 23.

Copyright © 2017 Mourad Turki and Anis Sakly.

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A new method based on cuckoo search (CS) is used to extract and learn the Takagi–Sugeno (T–S) fuzzy model. In the proposed method, each particle (cuckoo) of CS encodes the rule structure, in terms of the number of rules and the selected rules, together with the antecedent and consequent parameters of the T–S fuzzy model, and all of these are learned simultaneously. The optimized T–S fuzzy model is validated on three examples: a nonlinear plant modelling problem, a Box–Jenkins nonlinear system identification problem, and the identification of a nonlinear system; the obtained results are compared with existing results of other methods. The proposed CS method yields an optimal T–S fuzzy model with a smaller number of rules.

To control any system, it is necessary to obtain an accurate model of it; however, in many cases there is not enough information to build an acceptable mathematical model, and modelling techniques based on input–output data must be used instead.

Fuzzy models are widely used because of their excellent performance in modelling nonlinear systems and their ease of implementation. A fuzzy model is constructed from a rule base formed from the inputs and outputs of a system [1].

The Takagi–Sugeno (T–S) fuzzy model is a type of fuzzy model that gives a local linear representation of a nonlinear system. Such models can approximate a wide class of nonlinear systems and are therefore considered powerful tools for the modelling and control of complex dynamic systems.

A T–S fuzzy model is most useful when it achieves high accuracy with a small number of rules, but the majority of works in the literature produce a large number of rules.

Many works on T–S fuzzy models, especially of the discrete-time type, exist in the literature, such as [2, 3]. Optimizing a T–S fuzzy model amounts to determining both the structure and the parameters of the model. Methods used to tune the antecedent and consequent parameters include clustering algorithms [4] and linear least squares [5–7]. The design of a fuzzy model is a search problem in which each point represents a possible fuzzy model with a different structure and parameters [1, 8]. To obtain optimal fuzzy models, many evolutionary algorithms (EAs), such as genetic algorithms (GAs) [9], genetic programming [10], evolutionary programming [11], evolution strategies [12], and differential evolution (DE) [13], have been used [14]. These methods tune the parameters while the structure is predefined.

However, the structure and the parameters of the model are linked and should be optimized simultaneously. Accordingly, in [1] the authors presented the optimization of the rule structure, where all the information is encoded into a chromosome.

Particle swarm optimization (PSO) is a metaheuristic algorithm recently used in many domains [15]. PSO has been used to elicit fuzzy models, such as in [16], where it optimizes the structure, the number of membership functions, and the singleton consequent parameters. In [17], the results of PSO and GA are compared for the method of [16], with a fixed number of rules and membership functions, on the same simulation example.

In [20], the structure of the fuzzy model is identified using an online fuzzy clustering method and the parameters are optimized by PSO. In [21], the fuzzy model is extracted using PSO combined with the recursive least-squares method. Ant colony optimization and PSO were used to obtain T–S fuzzy models in [22]. Lin [23] used an immune-based symbiotic PSO to obtain T–S models for skin color detection. In [24], Niu et al. used a multiswarm PSO to tune fuzzy system parameters. In [25], subtractive clustering is used to extract a set of fuzzy rules, and a PSO variant called CPSO is applied to find the optimal membership functions and consequent parameters. In [19], CRPSO is employed to tune all parameters of the fuzzy models. In [26], a GA with a new encoding scheme is used to learn the T–S fuzzy model from data.

DE and quantum-inspired differential evolution are utilized to learn the T–S fuzzy model in [27]. T–S fuzzy models have also been developed for modelling industrial systems, such as the moving grate biomass furnace in [28].

Another metaheuristic, the hunting search (HUS), is used in [18] to determine the parameters of the T–S fuzzy model.

A recent metaheuristic algorithm, the cuckoo search (CS), was proposed in 2009 by Yang and Deb. The CS algorithm has given better results than other metaheuristics such as PSO and GA. CS has been used to train neural networks [29] and in reliability optimization [30]. In [31], CS is used to tune the parameters of a Sugeno-type fuzzy logic controller for liquid level control. In [32], a fuzzy controller is optimized with CS in the case study of the computer numerical control of a steam condenser. In [33], a CS-based prediction of students' academic performance is proposed. In [34, 35], cuckoo search is used for the order reduction of high-order systems. In [36], a Hammerstein model trained by the cuckoo search algorithm is proposed to identify nonlinear systems. In [37], CS is used for structural damage identification.

The goal and motivation of this contribution are to obtain a T–S model with a minimum number of rules by using the CS method. The objective is a model of low complexity, easy to implement in embedded systems, with minimal errors, which demonstrates its efficiency and precision.

This paper describes the use of CS to obtain, simultaneously, the optimal structure in terms of the number of rules and the premise and consequent parameters of the T–S model, exploiting the advantages of CS in optimization, which in many examples in the literature outperforms other metaheuristics. The extracted optimal T–S fuzzy models are compared with other methods on the same examples in terms of MSE and number of rules.

This paper is organized as follows. Section 2 describes the structure of the T–S fuzzy system. The CS algorithm is introduced in Section 3. Section 4 explains the encoding scheme of the T–S model. Section 5 presents the different metaheuristic algorithms. Section 6 presents the application examples with results and discussions. Finally, conclusions are given in Section 7.

The T–S fuzzy model presented in [38] is given by the following rule base.

*Rule i*. If *x*_{1} is *A*_{i}^{1} and … and *x*_{NI} is *A*_{i}^{NI}, then

$$y_i = \alpha_i^0 + \alpha_i^1 x_1 + \cdots + \alpha_i^{N_I} x_{N_I},$$

(1)

where *i* = 1,…, *N*_{R}, *N*_{R} is the number of rules, *x* = [*x*_{1},…, *x*_{NI}] represents the input, *N*_{I} is the size of input, *α*_{i}^{0}, *α*_{i}^{1},…, *α*_{i}^{NI} are the consequent parameters, *y*_{i} is the *i*th fuzzy rule output, and *A*_{i}^{j} is a fuzzy subset.

The output *ŷ* of the model is obtained as follows:

$$\widehat{y} = \frac{\sum_{i=1}^{N_R} \omega_i y_i}{\sum_{i=1}^{N_R} \omega_i},$$

(2)

where *ω*_{i} of the *i*th rule is computed as

$$\omega_i = \prod_{j=1}^{N_I} \mu\left(A_i^j\right)$$

(3)

with *μ*(*A*_{i}^{j}) being the membership grade of *A*_{i}^{j}, characterized by a Gaussian function:

$$\mu_{A_i^j}\left(x_j\right) = \exp\left(-\frac{1}{2}\left(\frac{x_j - c_i^j}{\sigma_i^j}\right)^2\right).$$

(4)

*c*_{i}^{j} and *σ*_{i}^{j} are, respectively, the mean and the deviation of the MF. The premise parameters *c*_{i}^{j} and *σ*_{i}^{j} are adjusted [20].
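As an illustration of how (1)–(4) combine, the following Python sketch (our own helper, not the authors' code; all names are ours) evaluates a T–S fuzzy model for one input vector:

```python
import numpy as np

def ts_output(x, centers, sigmas, alphas):
    # x: (N_I,) input vector; centers/sigmas: (N_R, N_I) Gaussian parameters;
    # alphas: (N_R, N_I + 1) consequent parameters [a^0, a^1, ..., a^{N_I}].
    mu = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)  # membership grades, Eq. (4)
    w = mu.prod(axis=1)                                # rule firing strengths, Eq. (3)
    y_rules = alphas[:, 0] + alphas[:, 1:] @ x         # linear rule outputs, Eq. (1)
    return float(w @ y_rules / w.sum())                # weighted average, Eq. (2)
```

For a single rule centered exactly on the input, the firing strength is 1 and the model output reduces to that rule's consequent.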

CS was developed by Yang and Deb in 2009 [39–41]. It imitates the brood parasitism of cuckoos and uses Lévy flights, which are more efficient than simple random walks [32].

A specificity of CS is that cuckoos lay their eggs in the nests of host birds, and some cuckoos can mimic the appearance of the host's eggs.

As a result, the number of abandoned eggs is reduced and the cuckoos' reproductivity increases [42].

CS has been applied to many optimization problems. In [40, 43], it is demonstrated that Lévy flights perform better than random walks in the CS algorithm.

In the CS algorithm, a solution is represented by an egg in a nest, and a new solution is represented by a cuckoo egg. The goal is to use the new cuckoos (solutions) to replace the worst solutions. In the classical algorithm, each nest holds one egg; however, the algorithm can be extended to more complex problems [40, 43].

In the cuckoo search, the rules are as follows:

- Every cuckoo lays a single egg and drops it in a randomly chosen nest.
- The best nests (solutions) are carried over to the next generations.
- The number of host nests is fixed, and a host can detect an alien egg with probability *p*_{a} ∈ [0,1]. In that event, the host bird either throws the egg away or abandons the nest and builds a new one elsewhere [43].

The CS algorithm is described in Pseudocode 1 [43].

In our work, *f*(*x*) is the MSE and nests *x*_{i} are possible T–S fuzzy models with cuckoos being the parameters of the T–S fuzzy model.

This algorithm uses a combination of a local random walk and the global random walk, controlled by parameter *p*_{a}. The local random walk can be written as

$$x_i^{t+1} = x_i^t + \alpha s \otimes H\left(p_a - \epsilon\right) \otimes \left(x_j^t - x_k^t\right),$$

(5)

where *x*_{j}^{t} and *x*_{k}^{t} are two different solutions, *H*(*u*) is the Heaviside step function, *ε* is a random number drawn from a uniform distribution, and *s* is the step size. The global random walk is carried out using Lévy flights:

$$H(u) = \begin{cases} 0, & \text{if } u < 0,\\ 1, & \text{if } u \ge 0,\end{cases} \qquad x_i^{t+1} = x_i^t + \alpha L\left(s, \lambda\right),$$

(6)

where

$$L\left(s, \lambda\right) = \frac{\lambda\,\Gamma(\lambda)\sin\left(\pi\lambda/2\right)}{\pi} \cdot \frac{1}{s^{1+\lambda}}$$

(7)

with

$$\Gamma(\lambda) = \int_0^{+\infty} t^{\lambda - 1} e^{-t}\,dt.$$

(8)

Here, *α* > 0 is the step size scaling factor, which should be related to the scale of the problem of interest [44]. We use Lévy flights to generate new candidate T–S fuzzy models in the next generation.
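The two walks of (5)–(8) can be sketched in Python as follows. This is an illustrative sketch, not the authors' code: the Lévy step is drawn with Mantegna's algorithm, the sampling scheme commonly used in CS implementations, and scaling the global step by the distance to the current best nest follows common practice rather than the text above.

```python
import math
import numpy as np

def levy_step(dim, lam=1.5, rng=None):
    # Mantegna's algorithm: a heavy-tailed random step whose magnitude
    # follows the Levy distribution L(s, lambda) of Eq. (7).
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def local_walk(x, x_j, x_k, pa=0.25, alpha=1.0, rng=None):
    # Eq. (5): x + alpha * s (x) H(pa - eps) (x) (x_j - x_k), elementwise.
    rng = rng or np.random.default_rng()
    s = rng.random(x.size)                    # step sizes
    gate = (pa - rng.random(x.size)) >= 0     # Heaviside gate H(pa - eps)
    return x + alpha * s * gate * (x_j - x_k)

def global_walk(x, best, alpha=0.01, lam=1.5, rng=None):
    # Eq. (6): x + alpha * Levy(s, lambda), scaled toward the best nest.
    return x + alpha * levy_step(x.size, lam, rng) * (x - best)
```

With *p*_{a} = 0 the local-walk gate is always closed and the solution is left unchanged, which matches the Heaviside gating in (5).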

Recent studies have extended cuckoo search: Walton et al. formulated a modified cuckoo search algorithm [45], and Yang and Deb extended it to multiobjective optimization [39].

In this paper, the fuzzy system is represented by a particle formed by the premise and consequent parameters together with the labels used to choose the rules that construct the fuzzy system [20]. The fuzzy model's particle is presented in Figure 1.

In Figure 1, each rule *i* is formed by premise and consequent parameters and the label.

Figure 2 shows that the particle *i* is given by a vector composed of the premise parameters *σ*_{i}^{1}, *c*_{i}^{1},…, *σ*_{i}^{NI}, *c*_{i}^{NI}, consequent parameters *α*_{i}^{0}, *α*_{i}^{1},…, *α*_{i}^{NI}, and the label *l*_{i} of all rules.

The fuzzy rules are in fact selected using the labels: if *l*_{i} > 0, then rule *i* is selected, where *i* = 1,…, *N*_{max} is the index of the rule. The T–S fuzzy system is composed of the active rules.
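A plausible decoding of this particle layout can be sketched as follows. The exact ordering of *σ*, *c*, *α*, and the label inside the flat vector is our assumption (only Figure 2 defines it), so treat this as an illustration:

```python
import numpy as np

def decode_particle(p, n_inputs, n_max):
    # Assumed per-rule layout: [sigma_1, c_1, ..., sigma_NI, c_NI,
    #                           a_0, a_1, ..., a_NI, label].
    per_rule = 2 * n_inputs + (n_inputs + 1) + 1
    rules = np.asarray(p, dtype=float).reshape(n_max, per_rule)
    labels = rules[:, -1]
    active = rules[labels > 0]                 # rule i is kept iff l_i > 0
    sigmas = active[:, 0:2 * n_inputs:2]       # Gaussian widths
    centers = active[:, 1:2 * n_inputs:2]      # Gaussian means
    alphas = active[:, 2 * n_inputs:-1]        # consequent parameters
    return sigmas, centers, alphas
```

Decoding a two-rule particle where only the first label is positive yields a one-rule T–S system, which is how CS prunes the rule base down from *N*_{max}.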

The CS algorithm used to elicit the T–S fuzzy model proceeds as follows:

(1) Encoding all the premise and consequent parameters of all rules, with a predefined maximum number of rules *N*_{max}.

(2) Defining a fitness function and the bounds of parameters.

(3) Randomizing an initial swarm of nests: initializing all the parameters of the particles representing fuzzy models within the chosen lower and upper bounds. Every nest is a fuzzy model with a different structure and parameters.

(4) Calculating the fitness of the initial nests, which is the MSE given by this equation:

$$\mathrm{MSE} = \frac{1}{n}\sum_{k=1}^{n}\left(y_{\mathrm{ref}}(k) - y(k)\right)^2.$$

(9)

*n* is the number of input prototypes, *y*_{ref}(*k*) is the desired output, and *y*(*k*) is the model output.

(5) Using the CS to search the optimal T–S fuzzy model.

Get a fuzzy model (cuckoo) FM_{1} randomly, by using Lévy flights, from the swarm of nests (T–S fuzzy models generated randomly), calculate its fitness, and select another fuzzy model (nest) FM_{2} randomly among the *n* nests of the swarm.

If MSE(FM_{1}) < MSE(FM_{2}), replace FM_{2} by the new solution FM_{1}; otherwise proceed to the next step.

A fraction *p*_{a} of the worst fuzzy models (each nest is a fuzzy model in our work) are abandoned and new ones are built.

Test the stopping criterion: if it is satisfied, keep the best fuzzy model (the optimal premise and consequent parameters and number of rules); otherwise return to Step (5).
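Steps (1)–(5) can be condensed into a minimal Python loop. This is an illustrative sketch rather than the authors' implementation: each nest is a flat parameter vector (which would encode one candidate T–S model), and plain Gaussian steps stand in for the Lévy flights for brevity.

```python
import numpy as np

def mse(y_ref, y):
    # Fitness of a nest, Eq. (9).
    return float(np.mean((np.asarray(y_ref) - np.asarray(y)) ** 2))

def cuckoo_search(fitness, n_nests, dim, lb, ub, pa=0.25, n_gen=100, rng=None):
    rng = rng or np.random.default_rng(0)
    # Step (3): random initial swarm of nests within the bounds.
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fit = np.array([fitness(n) for n in nests])      # Step (4)
    for _ in range(n_gen):                           # Step (5)
        # Get a cuckoo by a random step from a random nest.
        i = rng.integers(n_nests)
        cuckoo = np.clip(nests[i] + 0.1 * rng.standard_normal(dim), lb, ub)
        # Replace a random nest j only if the cuckoo has lower MSE.
        j = rng.integers(n_nests)
        if fitness(cuckoo) < fit[j]:
            nests[j], fit[j] = cuckoo, fitness(cuckoo)
        # Abandon a fraction pa of the worst nests and rebuild them.
        n_bad = int(pa * n_nests)
        if n_bad:
            worst = np.argsort(fit)[-n_bad:]
            nests[worst] = rng.uniform(lb, ub, (n_bad, dim))
            fit[worst] = [fitness(n) for n in nests[worst]]
    best = int(np.argmin(fit))
    return nests[best], float(fit[best])
```

Because replacement only happens when the candidate is better and abandonment targets the worst nests, the best fitness never worsens across generations.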

There are many metaheuristic algorithms in the literature, such as particle swarm optimization (PSO), the cooperative random learning PSO (CRPSO), the hunting search (HUS), genetic algorithms (GA), and differential evolution (DE).

Particle swarm optimization (PSO) imitates the movement of bird flocks or fish schools looking for food [19]. The search for the optimal solution is governed by two equations:

$$\begin{aligned} v(t+1) &= w\,v(t) + c_1 r_1\left[p - x(t)\right] + c_2 r_2\left[p_g - x(t)\right],\\ x(t+1) &= x(t) + v(t+1), \end{aligned}$$

(10)

where *x* is the position of a particle, *v* is the velocity, *w* is the inertia weight, *c*_{1} and *c*_{2} are constants, *r*_{1} and *r*_{2} are random numbers between 0 and 1, *p* is the best position of the particle, and *p*_{g} is the best position of all particles in the swarm.
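Equation (10) translates directly into code; a minimal sketch (our own helper, not from the paper):

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=None):
    # Eq. (10): inertia term plus a cognitive pull toward the particle's
    # own best p and a social pull toward the swarm's best p_g.
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new
```

A quick sanity check: a particle already sitting at both its personal and the global best, with zero velocity, does not move.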

Another version of PSO is the cooperative random learning PSO (CRPSO), which uses subswarms; its velocity equation is given as follows:

$$v_j(t+1) = w\,v_j(t) + c_1 r_1\left[p_j - x_j(t)\right] + c_2 r_2\left[p_g(j) - x_j(t)\right] + c_3 r_3\left[p_g(r) - x_j(t)\right],$$

(11)

where *c*_{3} is a constant, *r*_{3} is a random number, *j* is the index of the subswarm, *r* is a random integer between 1 and the number of subswarms, and *p*_{g} is the global best position of all subswarms [19].

The hunting search (HUS) algorithm imitates the social behavior of animals catching prey, in the way wolves hunt. The algorithm is based on approaching the leader, which has the best position in the group, and on reorganizing when the hunters are close to each other but still cannot find the optimal solution [18].

The search for a new solution obeys this equation:

$$x_i = x_i + r \cdot \mathrm{NML} \cdot \left(x_i^l - x_i\right),$$

(12)

where *r* is a random number between 0 and 1, NML is the maximum number of movements toward the leader, and *x*_{i}^{l} is the position of leader in the *i*th variable.

Another step is the reorganization of the hunters, which is given by this equation:

$$x_i = x_i^l \pm \mathrm{rand} \cdot \left(\max\left(x_i\right) - \min\left(x_i\right)\right) \cdot \alpha \cdot \exp\left(-\beta \cdot \mathrm{EN}\right),$$

(13)

where EN is the number of past reorganizations and *α* and *β* are positive constants; the aim is to avoid falling into a local minimum and to obtain a globally optimal solution.
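Equations (12) and (13) can be sketched as follows (an illustration with assumed parameter values, not the authors' code):

```python
import numpy as np

def hus_move(x, leader, nml=1.0, rng=None):
    # Eq. (12): move each hunter toward the leader's position.
    rng = rng or np.random.default_rng()
    r = rng.random(x.shape)
    return x + r * nml * (leader - x)

def hus_reorganize(leader, lo, hi, en, alpha=1.0, beta=0.1, rng=None):
    # Eq. (13): scatter hunters around the leader; the factor
    # exp(-beta * EN) shrinks the spread as reorganizations accumulate.
    rng = rng or np.random.default_rng()
    sign = rng.choice([-1.0, 1.0], size=leader.shape)
    spread = (hi - lo) * alpha * np.exp(-beta * en)
    return leader + sign * rng.random(leader.shape) * spread
```

After many reorganizations the exponential factor vanishes, so the pack collapses onto the leader, which is exactly the concentration behavior described above.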

The genetic algorithm (GA) is a random search technique that imitates natural evolution with the Darwinian survival-of-the-fittest approach. In this algorithm, the variables are represented as genes in a chromosome, and the chromosomes are evaluated according to their fitness values. Chromosomes with better fitness are found through the three basic operations of GA: selection, crossover, and mutation [46].

The algorithm of GA is described as follows:

- Initialization of the population of chromosomes.
- Evaluation of each element of the population by computing its fitness function.
- Selection of chromosomes.
- Generation of new chromosomes from the selected ones using the GA operations of crossover and mutation.
- Test of the stopping criterion: if satisfied, the parameters are kept; otherwise return to Step (2).

Differential evolution (DE) is a search algorithm similar to GA; it deals with a real-coded population and devises its own crossover and mutation in the real space [13]. DE creates *x*^{0}, a mutated form of an individual *x*, using the vector difference of two randomly picked individuals *x*^{∗} and *x*^{°}:

$$x^0 = x + \gamma\left(x^{\ast} - x^{\circ}\right),$$

(14)

where *γ* is a scaling factor between 0 and 2. Then, crossover is applied between an individual member of the population and the mutated vector *x*^{0}, and the best element is kept at the last iteration.
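Equation (14) and the subsequent crossover can be sketched as follows; the binomial crossover rate `cr` is a standard DE ingredient that the text does not specify, so its value here is illustrative:

```python
import numpy as np

def de_mutate(x, x_a, x_b, gamma=0.8):
    # Eq. (14): mutant = x + gamma * (x_a - x_b).
    return x + gamma * (x_a - x_b)

def de_crossover(target, mutant, cr=0.9, rng=None):
    # Binomial crossover: each gene comes from the mutant with prob. cr;
    # one gene is forced from the mutant so the trial differs from the target.
    rng = rng or np.random.default_rng()
    mask = rng.random(target.shape) < cr
    mask[rng.integers(target.size)] = True
    return np.where(mask, mutant, target)
```

Each trial vector therefore mixes genes from exactly two sources, the target and the mutant, which is what the selection step then compares by fitness.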

In this section, the optimized T–S models are used to identify three systems: a nonlinear plant modelling problem, the Box–Jenkins gas furnace benchmark, and a nonlinear system identification problem.

The performance of CS is compared with other metaheuristic algorithms. The parameters used in the examples are presented in Table 1.

In [1, 5, 6], the nonlinear dynamic plant given by the following nonlinear difference equation has been modelled:

$$y(k) = g\left(y(k-1), y(k-2)\right) + u(k),$$

(15)

where

$$g\left(y(k-1), y(k-2)\right) = \frac{y(k-1)\,y(k-2)\,\left(y(k-1) - 0.5\right)}{1 + y^2(k-1) + y^2(k-2)}.$$

(16)

The aim of this application is to identify the nonlinear component *g*(*y*(*k* − 1), *y*(*k* − 2)) using the presented fuzzy model with the CS algorithm. The example has two inputs and one output. The simulated data consist of 400 points generated from the plant model: 200 data points are computed using a random input signal *u*(*k*) uniformly distributed in [−1.5, 1.5], and the other 200 data points are computed using a sinusoidal input signal *u*(*k*) = sin(2*πk*/25) [1, 5, 6].
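The data-generation procedure just described can be sketched in Python (the random seed, function names, and zero initial conditions are our assumptions):

```python
import numpy as np

def g(y1, y2):
    # Nonlinear component of Eq. (16).
    return y1 * y2 * (y1 - 0.5) / (1.0 + y1**2 + y2**2)

def make_data(n=400, seed=0):
    # Simulate Eq. (15): 200 points with u(k) uniform in [-1.5, 1.5],
    # then 200 points with u(k) = sin(2*pi*k/25).
    rng = np.random.default_rng(seed)
    u = np.concatenate([rng.uniform(-1.5, 1.5, n // 2),
                        np.sin(2 * np.pi * np.arange(n // 2) / 25)])
    y = np.zeros(n + 2)
    for k in range(n):
        y[k + 2] = g(y[k + 1], y[k]) + u[k]
    # Fuzzy-model inputs are (y(k-1), y(k-2)); the target is g itself.
    X = np.column_stack([y[1:-1], y[:-2]])
    t = g(X[:, 0], X[:, 1])
    return X, t
```

The resulting `X` and `t` are exactly the input–output pairs on which the MSE of (9) would be evaluated.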

The number of generations is set to 500, and each run was repeated 50 times on a Pentium Core 2 Duo (1.8 GHz CPU, 2 GB RAM) personal computer in the same computing environment (MATLAB 2007a). The consequent parameters are chosen in [−10, 10] and the widths of the premise parameters in [0, 5].

As in [1, 5, 6], we select *y*(*k* − 1) and *y*(*k* − 2) as inputs in order to predict the nonlinear component *g*(*y*(*k* − 1), *y*(*k* − 2)), and the maximum number of rules is 5. Table 2 gives the best results of 50 experimental trials.

According to Table 2, the CS method gives the best results: a smaller mean number of rules than the HUS method in [18], and a better mean MSE and standard deviation (Std) in both the training and testing stages compared with the other methods. The CS method also achieves good performance with fewer evaluations than the results in [19]: there, 1000 generations and 20 particles are used with the CRPSO algorithm, and 2000 generations and 30 particles with PSO, GA, and DE, whereas in our work we use only 500 generations and 20 particles.

The optimal fuzzy model gives an MSE of 8 × 10^{−4} in training and 4 × 10^{−4} in testing.

Figure 3 shows the target and model outputs in the training and testing stages of the optimal model, and the errors between them can be seen in Figure 4. As shown in Figure 3, the CS method reproduces the output with small errors.

The target output and model output for nonlinear plant modelling problem: (a) training stage and (b) testing stage.

The Box–Jenkins gas furnace data [1, 11, 34–36] were recorded from a combustion process of a methane–air mixture [44]. The data set originally consists of 296 data points [*y*(*t*), *u*(*t*)]. The input *u*(*t*) is the gas flow rate, and the output *y*(*t*) is the carbon dioxide (CO_{2}) concentration. The aim is to elicit a model that predicts *y*(*t*) from these data. The first step is to determine the appropriate inputs. The initial candidate inputs are *y*(*t* − 1), *y*(*t* − 2), *y*(*t* − 3), *y*(*t* − 4), *u*(*t* − 1), *u*(*t* − 2), *u*(*t* − 3), *u*(*t* − 4), *u*(*t* − 5), and *u*(*t* − 6), and the output is *y*(*t*). Following many studies in the literature, *y*(*t* − 1) and *u*(*t* − 4) are chosen as inputs. All the coefficients of the consequent parameters are chosen in [−100, 100], and the width of the Gaussian function is limited to [0, 10].

All the simulations are executed 50 times. The mean number of rules, the mean, and standard deviations of the MSE are listed in Table 3.

From Table 3, we conclude that CS has the minimum mean MSE compared to PSO, HUS, and DE and a smaller standard deviation of the MSE than PSO, CRPSO, HUS, and DE. The mean number of rules of CS is much smaller than that of HUS, PSO, CPSO, GA, and DE. In conclusion, the CS-based method can give a fuzzy model with fewer rules. The optimal fuzzy model found by CS over 50 runs has an MSE of 0.139 and 3 rules.

Figure 5 shows the target and the model outputs and Figure 6 gives the errors between them. The optimal fuzzy model can identify the output with small errors.

Table 4 gives the parameters of the optimal fuzzy model.

The third example used for identification, given by Narendra and Parthasarathy, is described by the following difference equation [47]:

$$y(k+1) = \frac{y(k)}{1 + y^2(k)} + u^3(k).$$

(17)

The input *u*(*k*) is given by this equation:

$$u(k) = \sin\left(\frac{2\pi k}{25}\right).$$

(18)

The output of this equation depends nonlinearly on both its past values and the input. The 200 training input patterns are randomly generated in the interval [−1, 1] using (17).

The aim of this application is to predict *y*(*k*) using this approach, with *y*(*k* − 1) and *u*(*k* − 1) chosen as inputs.
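Under the stated setup, the training pairs for this example can be generated as follows (a sketch; the seed, names, and zero initial state are our assumptions):

```python
import numpy as np

def simulate(n=200, seed=0):
    # Simulate Eq. (17) with random inputs u(k) in [-1, 1], producing
    # pairs ((y(k-1), u(k-1)) -> y(k)) as in the third example.
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1.0, 1.0, n)
    y = np.zeros(n + 1)
    for k in range(n):
        y[k + 1] = y[k] / (1.0 + y[k] ** 2) + u[k] ** 3
    X = np.column_stack([y[:-1], u])   # inputs y(k-1), u(k-1)
    t = y[1:]                          # target y(k)
    return X, t
```

Since |y/(1 + y²)| ≤ 0.5 and |u³| ≤ 1, the simulated output stays within [−1.5, 1.5], so the premise bounds of [−5, 5] comfortably cover the data.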

All the coefficients of the premise and consequent parameters are limited to [−5,5]. The maximum number of rules is chosen as 5. Table 5 gives the best results of 50 experimental trials.

For all methods, the number of generations is fixed to 500 and the number of particles is fixed to 20.

As shown in Table 5, CS gives the minimum number of rules, a lower mean MSE, and a lower standard deviation of the MSE compared to PSO, HUS, and GA.

The optimal fuzzy model found by CS during 50 runs has an MSE of 0.0231 and 3 rules.

Figure 7 shows the target and the model outputs and Figure 8 gives the errors between them. The optimal fuzzy model can predict the output with small errors.

In this paper, the extraction of a T–S fuzzy model using the CS algorithm has been described. The T–S fuzzy model tuned by CS has its rule structure and both its premise and consequent parameters optimized. The optimal T–S fuzzy model has fewer rules and a smaller MSE, both in mean and in standard deviation. The T–S model obtained with the CS algorithm is validated by comparing its performance with other methods on three benchmarks: the nonlinear plant modelling problem, the Box–Jenkins problem, and the identification of a nonlinear system. The comparison shows that the CS algorithm gives much better accuracy in modelling nonlinear systems; indeed, CS produces a model with a minimum number of rules and better errors than the other metaheuristics.

The authors declare that there are no conflicts of interest regarding the publication of this paper.

1. Kim M.-S., Kim C.-H., Lee J.-J. Evolving compact and interpretable Takagi-Sugeno fuzzy models with a new encoding scheme. *IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics*. 2006;36(5):1006–1023. doi: 10.1109/TSMCB.2006.872265. [PubMed] [Cross Ref]

2. Xie X., Yue D., Ma T., Zhu X. Further studies on control synthesis of discrete-time T-S Fuzzy systems via augmented multi-indexed matrix approach. *IEEE Transactions on Cybernetics*. 2014;44(12):2784–2791. doi: 10.1109/tcyb.2014.2316491. [PubMed] [Cross Ref]

3. Xie X., Yue D., Zhang H., Xue Y. Fault Estimation Observer Design for Discrete-Time Takagi-Sugeno Fuzzy Systems Based on Homogenous Polynomially Parameter-Dependent Lyapunov Functions. *IEEE Transactions on Cybernetics*. 2017 doi: 10.1109/TCYB.2017.2693323. [PubMed] [Cross Ref]

4. Singh K. K., Nigam M. J., Pal K., Mehrotra A. A fuzzy Kohonen local information c-means clustering for remote sensing imagery. *IETE Technical Review (Institution of Electronics and Telecommunication Engineers, India)* 2014;31(1):75–81. doi: 10.1080/02564602.2014.891375. [Cross Ref]

5. Rong H.-J., Sundararajan N., Huang G.-B., Saratchandran P. Sequential adaptive fuzzy inference system (SAFIS) for nonlinear system identification and prediction. *Fuzzy Sets and Systems*. 2006;157(9):1260–1275. doi: 10.1016/j.fss.2005.12.011. [Cross Ref]

6. Wang L., Yen J. Extracting fuzzy rules for system modeling using a hybrid of genetic algorithms and Kalman filter. *Fuzzy Sets and Systems. An International Journal in Information Science and Engineering*. 1999;101(3):353–362. doi: 10.1016/S0165-0114(97)00098-5. [Cross Ref]

7. Angelov P. P., Filev D. P. An approach to online identification of Takagi-Sugeno fuzzy models. *IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics*. 2004;34(1):484–498. doi: 10.1109/TSMCB.2003.817053. [PubMed] [Cross Ref]

8. Shi Y., Eberhart R., Chen Y. Implementation of evolutionary fuzzy systems. *IEEE Transactions on Fuzzy Systems*. 1999;7(2):109–119. doi: 10.1109/91.755393. [Cross Ref]

9. Cordón O., Herrera F. A two-stage evolutionary process for designing TSK fuzzy rule-based systems. *IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics*. 1999;29(6):703–715. doi: 10.1109/3477.809026. [PubMed] [Cross Ref]

10. Cordon O., Herrera F., Gomide F., Hoffmann F., Magdalena L. Ten years of genetic fuzzy systems: current framework and new trends. Proceedings of the Joint 9th IFSA World Congress and 20th NAFIPS International Conference; 2001; Vancouver, BC, Canada. pp. 1241–1246. [Cross Ref]

11. Kang S.-J., Woo C.-H., Hwang H.-S., Woo K. B. Evolutionary design of fuzzy rule base for nonlinear system modeling and control. *IEEE Transactions on Fuzzy Systems*. 2000;8(1):37–45. doi: 10.1109/91.824766. [Cross Ref]

12. Pedrycz W., Reformat M. Evolutionary fuzzy modeling. *IEEE Transactions on Fuzzy Systems*. 2003;11(5):652–665. doi: 10.1109/TFUZZ.2003.817853. [Cross Ref]

13. Eftekhari M., Katebi S. D., Karimi M., Jahanmiri A. H. Eliciting transparent fuzzy model using differential evolution. *Applied Soft Computing Journal*. 2008;8(1):466–476. doi: 10.1016/j.asoc.2007.02.008. [Cross Ref]

14. Mitra S., Hayashi Y. Neuro-fuzzy rule generation: survey in soft computing framework. *IEEE Transactions on Neural Networks*. 2000;11(3):748–768. doi: 10.1109/72.846746. [PubMed] [Cross Ref]

15. Mangat V., Vig R. Dynamic PSO-based associative classifier for medical datasets. *IETE Technical Review*. 2014;31(4):258–265. doi: 10.1080/02564602.2014.942237. [Cross Ref]

16. Khosla A., Kumar S., Aggarwal K. K. A framework for identification of fuzzy models through particle swarm optimization algorithm. Proceedings of the INDICON 2005: An International Conference of IEEE India Council; December 2005; India. pp. 388–391. [Cross Ref]

17. Khosla A., Kumar S., Ghosh K. R. A comparison of computational efforts between particle swarm optimization and genetic algorithm for identification of fuzzy models. Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '07); June 2007; pp. 245–250. [Cross Ref]

18. Bouzaida S., Sakly A., M'Sahli F. Extracting TSK-type neuro-fuzzy model using the hunting search algorithm. *International Journal of General Systems*. 2014;43(1):32–43. doi: 10.1080/03081079.2013.848355. [Cross Ref]

19. Zhao L., Qian F., Yang Y., Zeng Y., Su H. Automatically extracting T-S fuzzy models using cooperative random learning particle swarm optimization. *Applied Soft Computing Journal*. 2010;10(3):938–944. doi: 10.1016/j.asoc.2009.10.012. [Cross Ref]

20. Juang C.-F., Chung I.-F., Hsu C.-H. Automatic construction of feedforward/recurrent fuzzy systems by clustering-aided simplex particle swarm optimization. *Fuzzy Sets and Systems*. 2007;158(18):1979–1996. doi: 10.1016/j.fss.2007.04.009. [Cross Ref]

21. Chen C.-C. A PSO-based method for extracting fuzzy rules directly from numerical data. *Cybernetics and Systems*. 2006;37(7):707–723. doi: 10.1080/01969720600886980. [Cross Ref]

22. Juang C.-F., Lo C. Zero-order TSK-type fuzzy system learning using a two-phase swarm intelligence algorithm. *Fuzzy Sets and Systems*. 2008;159(21):2910–2926. doi: 10.1016/j.fss.2008.02.003. [Cross Ref]

23. Lin C.-J. An efficient immune-based symbiotic particle swarm optimization learning algorithm for TSK-type neuro-fuzzy networks design. *Fuzzy Sets and Systems*. 2008;159(21):2890–2909. doi: 10.1016/j.fss.2008.01.020. [Cross Ref]

24. Niu B., Zhu Y., He X., Shen H. A multi-swarm optimizer based fuzzy modeling approach for dynamic systems processing. *Neurocomputing*. 2008;71(7-9):1436–1448. doi: 10.1016/j.neucom.2007.05.010. [Cross Ref]

25. Zhao L., Yang Y., Zeng Y. Eliciting compact T-S fuzzy models using subtractive clustering and coevolutionary particle swarm optimization. *Neurocomputing*. 2009;72(10-12):2569–2575. doi: 10.1016/j.neucom.2008.11.001. [Cross Ref]

26. Du H., Zhang N. Application of evolving Takagi-Sugeno fuzzy model to nonlinear system identification. *Applied Soft Computing Journal*. 2008;8(1):676–686. doi: 10.1016/j.asoc.2007.05.006. [Cross Ref]

27. Su H., Yang Y. Differential evolution and quantum-inquired differential evolution for evolving Takagi-Sugeno fuzzy models. *Expert Systems with Applications*. 2011;38(6):6447–6451. doi: 10.1016/j.eswa.2010.11.107. [Cross Ref]

28. Grosswindhager S., Haffner L., Voigt A., Kozek M. Fuzzy modelling of a moving grate biomass furnace for simulation and control purposes. *Mathematical and Computer Modelling of Dynamical Systems. Methods, Tools and Applications in Engineering and Related Sciences*. 2014;20(2):194–208. doi: 10.1080/13873954.2013.821495. [Cross Ref]

29. Valian E., Mohanna S., Tavakoli S. Improved cuckoo search algorithm for feedforward neural network training. *International Journal of Artificial Intelligence & Applications*. 2011;2(3):36–43. doi: 10.5121/ijaia.2011.2304. [Cross Ref]

30. Valian E., Tavakoli S., Mohanna S., Haghi A. Improved cuckoo search for reliability optimization problems. *Computers & Industrial Engineering*. 2013;64(1):459–468. doi: 10.1016/j.cie.2012.07.011. [Cross Ref]

31. Aghaei A., Kiani K., Bayati M. Optimizing fuzzy controller using cuckoo optimization algorithm (COA). *International Journal of Enhanced Research in Science Technology & Engineering*. December 2014;3(12):1–10.

32. Chen J. F., Do Q. H. A cooperative cuckoo search-hierarchical adaptive neuro-fuzzy inference system approach for predicting student academic performance. *Journal of Intelligent & Fuzzy Systems*. 2014;27(5):2551–2561.

33. Box G. E. P., Jenkins G. M. *Time Series Analysis, Forecasting and Control*. San Francisco, Calif, USA: Holden-Day; 1970.

34. Sikander A., Prasad R. A novel order reduction method using cuckoo search algorithm. *IETE Journal of Research*. 2015;61(2):83–90. doi: 10.1080/03772063.2015.1009396. [Cross Ref]

35. Narwal A., Prasad B. R. A novel order reduction approach for LTI systems using cuckoo search optimization and stability equation. *IETE Journal of Research*. 2016;62(2):154–163. doi: 10.1080/03772063.2015.1075915. [Cross Ref]

36. Gotmare A., Patidar R., George N. V. Nonlinear system identification using a cuckoo search optimized adaptive Hammerstein model. *Expert Systems with Applications*. 2015;42(5):2538–2546. doi: 10.1016/j.eswa.2014.10.040. [Cross Ref]

37. Xu H. J., Liu J. K., Lu Z. R. Structural damage identification based on cuckoo search algorithm. *Advances in Structural Engineering*. 2016;19(5):849–859. doi: 10.1177/1369433216630128. [Cross Ref]

38. Takagi T., Sugeno M. Fuzzy identification of systems and its applications to modeling and control. *IEEE Transactions on Systems, Man and Cybernetics*. 1985;15(1):116–132.

39. Yang X.-S., Deb S. Multiobjective cuckoo search for design optimization. *Computers & Operations Research*. 2013;40(6):1616–1624. doi: 10.1016/j.cor.2011.09.026. [Cross Ref]

40. Yang X.-S., Deb S. Engineering optimisation by Cuckoo search. *International Journal of Mathematical Modelling and Numerical Optimisation*. 2010;1(4):330–343. doi: 10.1504/IJMMNO.2010.035430. [Cross Ref]

41. Balochian S., Ebrahimi E. Parameter optimization via cuckoo optimization algorithm of fuzzy controller for liquid level control. *Journal of Engineering*. 2013;2013: Article ID 982354. doi: 10.1155/2013/982354. [Cross Ref]

42. Payne R., Sorenson M., Klitz K. *The Cuckoos*. Oxford University Press; 2005.

43. Yang X.-S., Deb S. Cuckoo search via Lévy flights. Proceedings of the World Congress on Nature and Biologically Inspired Computing (NABIC '09); December 2009; Coimbatore, India. pp. 210–214. [Cross Ref]

44. Yang X., Deb S. Cuckoo search: recent advances and applications. *Neural Computing and Applications*. 2014;24(1):169–174. doi: 10.1007/s00521-013-1367-1. [Cross Ref]

45. Walton S., Hassan O., Morgan K., Brown M. R. Modified cuckoo search: a new gradient free optimisation algorithm. *Chaos, Solitons and Fractals*. 2011;44(9):710–718. doi: 10.1016/j.chaos.2011.06.004. [Cross Ref]

46. Turki M., Bouzaida S., Sakly A., M'Sahli F. Modeling and on line control of nonlinear systems using neuro-fuzzy learning tuned by metaheuristic algorithms. *International Journal of Control and Automation*. 2014;7(5):323–342. doi: 10.14257/ijca.2014.7.5.33. [Cross Ref]

Articles from Computational Intelligence and Neuroscience are provided here courtesy of **Hindawi**