
J Healthc Eng. 2017; 2017: 5907264.

Published online 2017 July 4. doi: 10.1155/2017/5907264

PMCID: PMC5518499

MadhuSudana Rao Nalluri,^{1} Kannan K.,^{1} Manisha M.,^{1} and Diptendu Sinha Roy^{2,*}

*Diptendu Sinha Roy: Email: diptendu.sr@gmail.com

Academic Editor: Ashish Khare

Received 2016 December 10; Revised 2017 February 23; Accepted 2017 March 30.

Copyright © 2017 MadhuSudana Rao Nalluri et al.

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With the widespread adoption of e-Healthcare and telemedicine applications, accurate, intelligent disease diagnosis systems are in great demand. In recent years, numerous individual machine learning-based classifiers have been proposed and tested, and it is now widely accepted that no single classifier can effectively classify and diagnose all diseases. This has prompted a number of recent research attempts to arrive at a consensus using ensemble classification techniques. In this paper, a hybrid system is proposed that diagnoses ailments by optimizing the parameters of two individual classifiers, namely, the support vector machine (SVM) and the multilayer perceptron (MLP). We employ three recent evolutionary algorithms to optimize the parameters of these classifiers, leading to six alternative hybrid disease diagnosis systems, also referred to as hybrid intelligent systems (HISs). Multiple objectives, namely, prediction accuracy, sensitivity, and specificity, have been considered to compare the efficacy of the proposed hybrid systems with that of existing ones. The proposed model is evaluated on 11 benchmark datasets, and the obtained results demonstrate that the proposed hybrid diagnosis systems perform better in terms of disease prediction accuracy, sensitivity, and specificity. Pertinent statistical tests were carried out to substantiate the efficacy of these results.

The proliferation of computers across all aspects of life has resulted in the accumulation of large volumes of systematic, related data. Identifying useful patterns in these raw datasets has become the next logical step forward. Thus, data mining, a broad discipline encompassing classification, clustering, association, prediction, estimation, and visualization tasks [1], has emerged as a dynamic and significant field of research addressing both theoretical challenges and practical issues. Data mining and knowledge engineering techniques have been successfully applied in numerous areas, such as education, pattern recognition, fraud detection, and medicine [2, 3].

The application of data mining and knowledge engineering techniques in the medical domain plays a prime role in the diagnosis of diseases and in prognostication [4]. It assists healthcare professionals and doctors in analyzing and predicting diseases [5] and is commonly referred to as medical engineering. Numerous machine learning algorithms have been developed over the years to extract useful patterns from raw medical data [6]. These patterns have been utilized for disease prediction using classification and clustering strategies. Medical research focuses on employing data mining for the prediction of a broad range of diseases, including breast cancer [7], heart diseases [8], Parkinson's disease [9], hepatitis, and diabetes, to name only a few.

Over the years, several supervised machine learning techniques, such as classification, as well as unsupervised techniques, such as clustering, have been applied to available medical information [10, 11]. Individual classifiers, ensembles thereof, and hybrid systems have all been used to diagnose various diseases. Several techniques have been applied to medical data to improve diagnostic efficacy with respect to performance parameters such as prediction accuracy, sensitivity, and specificity [12, 13].

This paper presents a hybrid system for the diagnosis and prediction of numerous diseases using optimized classifier parameters. The classifier parameters are optimized using evolutionary algorithms to enhance classification performance. By embedding the proposed parameter optimization step within existing classifier mechanisms, our method provides improved prediction accuracy. In total, 16 classifier configurations are evaluated: the two base classifiers, each with and without resampling, six hybrid intelligent systems without resampling, and six hybrid intelligent systems with resampling. In summary, this paper presents a comparative analysis of parameter-optimized versions of two classifiers, namely, the support vector machine (SVM) and the multilayer perceptron (MLP), on medical data. The experimental results presented in this paper show that the proposed hybrid system outperforms the state of the art (single or ensemble classifiers) for classifying medical data. To carry out the parameter optimization, we employ three popular evolutionary algorithms, namely, particle swarm optimization (PSO), the gravitational search algorithm (GSA), and the firefly algorithm (FA), for optimizing the parameters of the SVM and MLP classifiers. Accordingly, we study the performance of six alternative hybrid systems for classifying medical data towards the diagnosis of such diseases. The performance of the proposed hybrid intelligent techniques is compared with recent results from the literature (both simple and ensemble classifiers [14–16]). The hybrid intelligent system shows better performance than recently published ensemble classifiers on 11 benchmark datasets.

The rest of this paper is organized as follows: Section 2 gives a brief exposition of existing research, specifically focusing on the machine learning algorithms employed for processing medical datasets. Section 3 presents the formulation of the proposed weighted multiobjective optimization for the classification problem at hand. Section 4 provides the rudimentary steps and key features of the evolutionary algorithms employed for the parameter optimization of the SVM and MLP classifiers, namely, particle swarm optimization (PSO), the gravitational search algorithm (GSA), and the firefly algorithm (FA). Section 5 gives a basic introduction to the two classifiers employed, SVM and MLP. Section 6 elaborates on the development of the proposed hybrid classification system for disease diagnosis, along with its key components and design principles. The performance of the proposed hybrid scheme is tested on 11 benchmark medical datasets; Section 7 gives a brief account of the experimental setup, describes the experiments conducted, and summarizes the results obtained, together with a statistical analysis validating their acceptability. Section 8 presents the conclusions of the research.

There have been abundant attempts to analyze and diagnose ailments using machine learning algorithms. This section summarizes the efforts in this field to put the contribution of our work in perspective. These studies, however, vary considerably in terms of the classifiers applied and the nature of the systems employed; for example, some are simple and others are hybrid, whereas yet others present ensemble systems. There is also considerable variation in terms of the objective functions chosen, single versus multiobjective formulations, the number of datasets on which the methods have been applied, the performance parameters employed for validating efficacy, and so forth.

Among the different disease datasets that have been studied in the literature, heart disease diagnosis has been very prominent within medical engineering circles, and a wide variety of machine learning techniques have been explored for diagnosing it. References [17–38] include some prominent contributions towards diagnosing heart diseases from various aspects using myriad machine learning techniques, details of which are presented hereafter. Chitra and Seenivasagam [18] proposed a cascaded neural network (CNN) classifier and a support vector machine (SVM) to diagnose heart diseases; the performance of the CNN and SVM was compared based on accuracy, sensitivity, and specificity. Pattekari and Parveen [19] suggested an intelligent system based on a naive Bayes classifier, which was further improved by developing ensemble-based classifiers. Das et al. [17] developed a neural network ensemble model for heart disease diagnosis. The technique used SAS Enterprise Guide 4.3 for data preprocessing and SAS Enterprise Miner 5.2 for recognizing heart disease by combining three neural networks into an ensemble; it was further improved by combining other neural networks and was also applied to various datasets. Das et al. [37] described an SAS Software 9.1.3-based method for diagnosing valvular heart diseases using a neural network ensemble, applying predicted values, posterior probabilities, and voting posterior probabilities.

Masethe and Masethe [21] used J48, naive Bayes, REPTREE, CART, and Bayes Net for diagnosing heart diseases; the highest accuracy was obtained using the J48 tree. Shaikh et al. [22] evaluated the performance of three classifiers, namely, k-NN, naive Bayes, and a decision tree, based on four parameters: precision, recall, accuracy, and *F*-measure. k-NN produced higher accuracy than the other methods. Bhatla and Jyoti [26] compared naive Bayes, a decision tree, and neural networks for the said diagnosis; for the decision tree, a genetic algorithm and fuzzy logic were employed, and the results presented used the TANAGRA tool.

Kavitha and Christopher [23] performed classification of heart rate using hybrid particle swarm optimization and fuzzy C-means (PSO-FCM) clustering. The proposed method performed feature selection using PSO; the fuzzy C-means clusterer and classifier were combined to enhance accuracy, and an enhanced SVM was used for classifying heart diseases. The hybrid system could be trained to shorten the implementation time. Alizadehsani et al. [24] evaluated sequential minimal optimization (SMO), naive Bayes, bagging with SMO, and neural networks. They employed the RapidMiner tool, and the highest accuracy was obtained using bagging with SMO. Abhishek [38] employed J48, naive Bayes, and neural networks with all attributes for diagnosing heart diseases with the WEKA machine learning software and concluded that J48 outperformed the others regarding accuracy.

Jabbar et al. [20] used association mining and a genetic algorithm for heart disease prediction. The proposed method used Gini index statistics for the association algorithm and crossover and mutation for the genetic algorithm. They further employed a feature selection technique for improved accuracy. Ordonez et al. [36] presented an improved algorithm to determine constrained association rules using two techniques: mapping medical data and identifying constraints. The proposed method used mining attributes; constrained association rules and parameters were used for the mapping. The technique produced interesting results when comparing association rules with classification rules. Shenfield and Rostami [25] introduced a multiobjective approach to the evolutionary design of artificial neural networks for predicting heart disease.

Parthiban and Subramanian [27] developed a coactive neurofuzzy inference system (CANFIS) for the prediction of heart diseases. The proposed model combined CANFIS, a neural network, and fuzzy logic and was then integrated with a genetic algorithm. Results showed that the GA was useful for autotuning the CANFIS parameters. Hedeshi and Abadeh [28] combined the PSO algorithm with a boosting approach; their method used fuzzy rule extraction with PSO and enhanced particle swarm optimization 2 (En-PSO2). Karaolis et al. [35] used myocardial infarction (MI), percutaneous coronary intervention (PCI), and coronary artery bypass graft surgery (CABG) models with C4.5 decision tree algorithms, and results were compared based on false positives (FP), precision, and so forth. Better results were obtained by further investigation with various datasets and by employing rule-extraction algorithms.

Kim et al. [30] proposed a fuzzy rule-based adaptive coronary heart disease prediction support model. The proposed method had three parts, namely, fuzzy membership functions, a decision-tree rule induction technique, and fuzzy inference based on Mamdani's method. Outcomes were compared with a neural network, logistic regression, a decision tree, and Bayes Net. Chaurasia and Pal [31] applied three popular data mining algorithms, CART (classification and regression tree), ID3 (iterative dichotomized 3), and decision table (DT), for diagnosing heart diseases, and the results demonstrated that CART obtained higher accuracy in less time.

Olaniyi et al. [29] used a neural network and a support vector machine for heart diseases. Their method used a multilayer perceptron and demonstrated that the SVM produced high accuracy. Yan et al. [32] proposed a multilayer perceptron whose hidden layers are determined by a cascade process; for the inductive reasoning of the methods, three assessment procedures were used, namely, cross-validation, holdout, and five bootstrapping samples for five intervals. Yan et al. [33] utilized a multilayer perceptron for the diagnosis of five different cases of heart disease. The method employed a cascade learning process to find the hidden layers and used back propagation for training on the datasets; further improvements to the accuracy were achieved by parameter adjustments. Shouman et al. [34] identified gaps in the research on heart disease diagnosis. They applied both single and hybrid data mining techniques to establish baseline accuracy and compared the results, finding that hybrid classifiers produced higher accuracy than single classifiers.

Sartakhti et al. [39] presented a method for the diagnosis of hepatitis using a novel machine learning method that hybridizes the support vector machine and simulated annealing. The method used two hyperparameters for the radial basis function (RBF) kernel, *C* and gamma, and computed the *k*-fold cross-validation score for all potential combinations of *C* and gamma over their intervals. Results demonstrated that tuning the SVM parameters by simulated annealing increased the accuracy. Çalişir et al. [40] developed a method based on principal component analysis and the least squares support vector machine (PCA-LSSVM). The method was carried out in two steps: (1) feature extraction from the hepatitis disease database and feature reduction by PCA and (2) feeding the reduced features to the LSSVM classifier. Li and Wong [41] compared C4.5 (bagging, boosting, and single tree) with the PCL classifier and concluded from their observations that PCL produced higher accuracy than C4.5.

Weng et al. [42] investigated the performance of different classifiers that predict Parkinson's disease and used an ANN classifier based on the evaluation criteria. Jane et al. [43] proposed a Q-back propagated time delay neural network (Q-BTDNN) classifier. It developed temporal classification models that performed classification and prognostication in a clinical decision-making system, using a feed-forward time-delay neural network (TDNN) trained by a Q-learning-induced back propagation (Q-BP) technique. Ten-fold cross-validation was employed for assessing the classification model; the results were used for comparative analysis and showed high accuracy. Gürüler [44] described a combination of the *k*-means clustering-based feature weighting (KMCFW) method and a complex-valued artificial neural network (CVANN). The method considered five different evaluation methods, with the cluster centers estimated using KMC, and the results showed very high accuracy.

Bashir et al. [45] presented an ensemble framework for predicting diabetes with multilayer classification using enhanced bagging and optimized weighting. The proposed HM-BagMOOV method used a KNN approach for missing data imputation and had three layers: layer 1 containing naive Bayes (NB), quadratic discriminant analysis (QDA), linear regression (LR), instance-based learning (IBL), and SVM; layer 2 containing ANN and RF; and layer 3 using multilayer weighted bagging prediction. The outcomes showed good accuracy on all datasets. Iyer et al. [46] prescribed a method to diagnose the disease using a decision tree and naive Bayes with 10-fold cross-validation; the technique was further enhanced by using other classifiers and neural network techniques. Choubey and Sanchita [47] used a genetic algorithm and multilayer perceptron techniques for the diagnosis of diabetes. The methodology was implemented in two stages: the genetic algorithm (GA) was used for feature selection, and a multilayer perceptron neural network (MLP NN) was used for classification of the selected features. The results showed excellent accuracy, which was further improved by considering the receiver operating characteristic (ROC).

Kharya [48] used various data mining techniques for the diagnosis and prognosis of cancer, including a neural network, association rule mining, naive Bayes, the C4.5 decision tree algorithm, and Bayesian networks. The results showed that the decision tree produced better accuracy than the other classifiers. Chaurasia and Pal [49] investigated the performance of different classification techniques on breast cancer data, using three techniques, namely, SMO, the *k*-nearest neighbor algorithm (IBK), and the best first (BF) tree. The results demonstrated that SMO produced higher accuracy than the other two techniques. In [50], an expert system (ES) is proposed for clinical diagnosis to support decision making in primary health care; the proposed ES used a rule-based system to identify several diseases based on clinical test reports.

Alzubaidi et al. [51] studied ovarian cancer in detail. In this work, features are selected using a hybrid global optimization technique that combines mutual information, linear discriminant analysis, and a genetic algorithm. The performance of the hybrid technique is compared with that of the support vector machine and shows significant improvements over it.

Gwak et al. [52] proposed an ensemble framework that probabilistically combines various crossover strategies. The framework was tested on 27 benchmark functions and outperformed competing approaches on eight difficult ones. It can further be used efficiently for feature selection on large datasets.

Hsieh et al. [53] developed an ensemble machine learning model for diagnosing breast cancer. In this model, information gain is adopted for feature selection. The classifiers used for building the ensemble are the neural fuzzy (NF) classifier, *k*-nearest neighbor (KNN), and the quadratic classifier (QC). The performance of the ensemble framework is compared with that of the individual classifiers, and the results demonstrate that the ensemble performs better than any single classifier.

A review of the existing literature on machine learning techniques for disease diagnosis indicates that there exists a plethora of individual classifiers as well as ensemble techniques. However, these studies also make it conclusively evident that no individual classifier gives high prediction accuracy across different disease datasets. This has led to an abundance of ensemble classifiers for disease diagnosis, compromising the simplicity that an individual classifier offers. To this end, this paper designs a hybrid system that focuses on providing generalized performance across a broad range of benchmark datasets. The most significant contribution of the proposed hybrid disease classifiers is that, unlike most of the research works mentioned above, which target a specific disease, this paper validates the efficacy of the proposed hybrid classifiers across six different diseases collected over eleven datasets. For instance, among all heart disease-related diagnosis systems, only [33] considers five different datasets for that disease. There are also very few attempts at validating diagnosis efficacy over multiple diseases; Shen et al. [54] and Bashir et al. [14] are among the few exceptions, validating their results on four and five different diseases, respectively. The proposed classifiers employ novel parameter optimization approaches using a few recent evolutionary algorithms, the detailed design of which is presented in subsequent sections.

In this paper, we deal with classifying data from different disease datasets using a hybrid technique that optimizes the parameters of the SVM and MLP classifiers for improved disease prediction. The objective functions to be targeted while solving the said classification problem are (i) prediction accuracy, (ii) specificity, and (iii) sensitivity, which are commonly considered for this problem in the existing literature [55–57]. Each of these objective functions captures some aspect of the quality of disease classification. In this sense, the problem studied in this paper is a multiobjective optimization problem.

All the aforementioned measures are computed in terms of the following values: true positive (TP), true negative (TN), false positive (FP), and false negative (FN), defined as follows: TP: total number of positives correctly identified as positive; TN: total number of negatives correctly identified as negative; FP: total number of negatives incorrectly identified as positive; and FN: total number of positives wrongly identified as negative.
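These count definitions translate directly into code. The following is a minimal sketch (not the authors' implementation), assuming binary labels with 1 as the positive class:

```python
# Sketch: compute TP, TN, FP, FN from actual and predicted binary labels,
# mirroring the definitions above (1 is taken as the positive class).
def confusion_counts(actual, predicted, positive=1):
    tp = sum(1 for a, p in zip(actual, predicted) if a == p == positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a == p != positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    return tp, tn, fp, fn
```

For example, `confusion_counts([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])` yields `(2, 1, 1, 1)`.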

The objective functions considered for optimization in this work are prediction accuracy (PAC), specificity (SPY), and sensitivity (SEY). To model these functions, two indicator variables, *X*_{i1} and *X*_{i2}, are introduced for each data object to compute TP, TN, FP, and FN. These are defined as follows:

$$X_{i1} = I\left\{\mathrm{CL}_i = \mathrm{PC}_i = C_{+}\right\}; \qquad X_{i2} = I\left\{\mathrm{CL}_i = \mathrm{PC}_i = C_{-}\right\},$$

(1)

where *C*_{+} indicates that the actual class label is positive (+), *C*_{−} indicates that the actual class label is negative (−), PC_{i} represents the predicted class label of the *i*th data object, and CL_{i} represents the actual class label of the *i*th data object.

Let the classifier being developed for classifying a given dataset be a binary classifier and the dataset has *N* instances with *m*_{1} positive and *m*_{2} negative instances. Therefore,

$$\mathrm{TP} = \sum_{i=1}^{N} X_{i1}, \qquad \mathrm{TN} = \sum_{i=1}^{N} X_{i2}, \qquad \mathrm{FN} = m_1 - \mathrm{TP} = m_1 - \sum_{i=1}^{N} X_{i1}, \qquad \mathrm{FP} = m_2 - \mathrm{TN} = m_2 - \sum_{i=1}^{N} X_{i2}.$$

(2)

The performance parameters for the classifiers can thus be obtained using the following three equations:

$$\mathrm{Prediction\ accuracy\ (PAC)} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}} = \frac{\sum_{i=1}^{N} X_{i1} + \sum_{i=1}^{N} X_{i2}}{m_1 + m_2},$$

(3)

$$\mathrm{Specificity\ (SPY)} = \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FP}} = \frac{\sum_{i=1}^{N} X_{i2}}{m_2},$$

(4)

$$\mathrm{Sensitivity\ (SEY)} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} = \frac{\sum_{i=1}^{N} X_{i1}}{m_1}.$$

(5)
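Equations (3)–(5) can be sketched in code as follows; this is an illustrative sketch assuming the confusion counts TP, TN, FP, and FN are already available:

```python
# Sketch of the three objective functions (3)-(5) from the confusion counts.
def prediction_accuracy(tp, tn, fp, fn):   # (3)
    return (tp + tn) / (tp + tn + fp + fn)

def specificity(tn, fp):                   # (4)
    return tn / (tn + fp)

def sensitivity(tp, fn):                   # (5)
    return tp / (tp + fn)
```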

The aim of this research is to arrive at optimal values of classifier parameters through evolution such that some maxima are attained for PAC, SPY, and SEY. It is worthwhile to mention that even different sets of classifier parameter values with same PAC can have different values for SPY and SEY. Thus, there exist tradeoffs among (3), (4), and (5).

Any multiobjective optimization problem can then be solved either by converting the objective functions into a single linear or nonlinear objective function or by computing Pareto fronts using the concept of nondominance [58].

In this paper, a linear combination of objective functions has been taken to form a single linear compound objective function due to the requirement of additional computational effort for finding Pareto fronts in every iteration.

$$\mathrm{Maximize}\ Z = W_1 \cdot \mathrm{PAC} + W_2 \cdot \mathrm{SPY} + W_3 \cdot \mathrm{SEY} = W_1 \cdot \frac{\sum_{i=1}^{N} X_{i1} + \sum_{i=1}^{N} X_{i2}}{m_1 + m_2} + W_2 \cdot \frac{\sum_{i=1}^{N} X_{i2}}{m_2} + W_3 \cdot \frac{\sum_{i=1}^{N} X_{i1}}{m_1}$$

(6)

subject to the constraints

$$\begin{array}{c}{W}_{1}+{W}_{2}+{W}_{3}=1,\end{array}$$

(7)

$$\begin{array}{c}1\ge {W}_{i}\ge 0\text{\hspace{0.17em}\hspace{0.17em}}\forall i,\end{array}$$

(8)

$$L_i \le \mathrm{CLASSIFIER\_PAR}_i \le U_i \quad \forall i,$$

(9)

where CLASSIFIER_PAR_{i} is the *i*th sensitive parameter of the considered classifier, (7) represents the totality condition on the weights, (8) guarantees the nonnegativity condition, and (9) ensures that the *i*th classifier parameter value is within the specified bounds.
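The compound fitness (6) with the weight constraints (7) and (8) can be sketched as follows; the weight values shown are illustrative defaults, not values taken from the paper:

```python
# Sketch, not the authors' implementation: the compound fitness (6),
# with the weight constraints (7)-(8) checked explicitly.
# The default weights are illustrative values summing to 1.
def compound_fitness(pac, spy, sey, w=(0.5, 0.25, 0.25)):
    assert abs(sum(w) - 1.0) < 1e-9            # totality condition (7)
    assert all(0.0 <= wi <= 1.0 for wi in w)   # bounds condition (8)
    w1, w2, w3 = w
    return w1 * pac + w2 * spy + w3 * sey
```

The bound constraint (9) on the classifier parameters themselves is enforced by the evolutionary algorithm's search-space limits rather than inside the fitness function.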

In this section, we present a summary of the three evolutionary algorithms employed to optimize the parameters of SVM and MLP for classifying medical datasets for disease diagnosis. The discussions are restricted only to provide a brief overview. Detailed information and possible variations of these algorithms are beyond the scope of this paper.

Gravitational search algorithm (GSA) is one of the population-based stochastic search methods initially developed by Rashedi et al. in the year 2009 [59]. GSA is inspired by Newton's gravitational law in physics, where every particle in the universe attracts every other particle with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. GSA has been successfully applied to solve several engineering optimization problems [60, 61].

In GSA, several masses are considered in a *d*-dimensional space. The position of each mass corresponds to a point in the solution space of the problem to be solved. The fitness values of the agents, worst(*t*), and best(*t*) are used to compute the force (*F*) on a mass. The equations corresponding to these parameters are provided in (10)–(13):

$$q_i(t) = \frac{\mathrm{fit}_i(t) - \mathrm{worst}(t)}{\mathrm{best}(t) - \mathrm{worst}(t)},$$

(10)

$$\begin{array}{c}{M}_{i}\left(t\right)=\frac{{q}_{i}\left(t\right)}{{\sum}_{j=1}^{s}{q}_{j}\left(t\right)},\end{array}$$

(11)

$$\mathrm{best}(t) = \min\left\{\mathrm{fit}_k(t) : \forall k\right\},$$

(12)

$$\mathrm{worst}(t) = \max\left\{\mathrm{fit}_k(t) : \forall k\right\}.$$

(13)

To update the position of a mass (*x*_{i}^{d}(*t* + 1)), its velocity (*v*_{i}^{d}(*t*)) needs to be updated first. The velocity of the mass at time (*t* + 1) depends mainly on its velocity and acceleration at time instant *t*. The acceleration *a*_{i}^{d}(*t*) of the *i*th mass at instant *t* depends on the forces exerted by the other heavy masses, as given in (14). The equation for the acceleration is given in (15), and the updating processes for mass velocity and position are provided in (16) and (17):

$$F_i^d(t) = \sum_{j \in k\mathrm{best},\ j \ne i} \mathrm{rand}_j \left( G(t)\, \frac{M_j(t)\, M_i(t)}{R_{ij}(t) + \epsilon} \left( x_j^d(t) - x_i^d(t) \right) \right),$$

(14)

$$\begin{array}{c}{a}_{i}^{d}\left(t\right)=\frac{{F}_{i}^{d}\left(t\right)}{{M}_{i}\left(t\right)},\end{array}$$

(15)

$$\begin{array}{c}{V}_{i}^{d}\left(t+1\right)={\mathrm{rand}}_{i}\times {V}_{i}^{d}\left(t\right)+{a}_{i}^{d}\left(t\right),\end{array}$$

(16)

$$\begin{array}{c}{X}_{i}^{d}\left(t+1\right)={X}_{i}^{d}\left(t\right)+{V}_{i}^{d}\left(t+1\right),\end{array}$$

(17)

where rand_{i} and rand_{j} are uniform random numbers between 0 and 1, and *ϵ* is a small constant. The distance between agents *i* and *j* is denoted by *R*_{ij}(*t*). The best *k* agents are denoted by *k*best. *G* is the gravitational constant, initialized to *G*_{0} at the beginning; its value decreases as time progresses.
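A single GSA iteration following (10)–(17) can be sketched as below. This is an illustrative simplification, not the authors' code: *k*best is taken to be all agents, and since the force (14) carries *M*_{i} in its numerator while the acceleration (15) divides by *M*_{i}, the two steps are combined so that *M*_{i} cancels.

```python
import random

# Sketch of one GSA iteration per (10)-(17) for a minimisation problem.
# Simplifications: kbest = all agents; force (14) and acceleration (15)
# are merged, cancelling M_i; eps is the small constant from (14).
def gsa_step(X, V, fitness, G, eps=1e-9):
    fit = [fitness(x) for x in X]
    best, worst = min(fit), max(fit)                        # (12), (13)
    q = [(f - worst) / (best - worst + eps) for f in fit]   # (10)
    s = sum(q) + eps
    M = [qi / s for qi in q]                                # (11)
    n, d = len(X), len(X[0])
    for i in range(n):
        a = [0.0] * d
        for j in range(n):
            if j == i:
                continue
            R = sum((X[j][k] - X[i][k]) ** 2 for k in range(d)) ** 0.5
            for k in range(d):
                # (14)/(15) combined: M_i cancels, leaving G * M_j / (R + eps)
                a[k] += random.random() * G * M[j] / (R + eps) * (X[j][k] - X[i][k])
        for k in range(d):
            V[i][k] = random.random() * V[i][k] + a[k]      # (16)
            X[i][k] += V[i][k]                              # (17)
    return X, V
```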

In 1995, Kennedy and Eberhart developed a population-based stochastic optimization procedure called particle swarm optimization, based on the social behavior of living organisms such as fish schools and bird flocks [62]. In PSO, the particles are randomly initialized, with the position and velocity of particle *i* represented as *X*_{i} and *V*_{i}, respectively. The fitness function is computed for each particle. Personal best (pBest) and global best (gBest) are the two important factors in PSO: each particle has its own personal best, the best position it has achieved up to time instant *t*, while the global best is the overall best among all particles up to time instant *t*. The algorithm is executed for a certain number of iterations. At each iteration, the velocity of every particle is updated using the velocity updating scheme [63] depicted in

$$V_{id}(t) = w \cdot V_{id}(t-1) + c_1 \cdot \mathrm{rand}() \cdot \left(\mathrm{pBest}_{id} - X_{id}(t-1)\right) + c_2 \cdot \mathrm{rand}() \cdot \left(\mathrm{gBest}_{id} - X_{id}(t-1)\right),$$

(18)

where *w* represents the inertia weight, *c*_{1} and *c*_{2} are the personal and global learning factors, and rand() is a uniform random number in [0, 1].

The following equation updates the new position of the particle:

$$\begin{array}{c}{X}_{i}^{d}\left(t\right)={X}_{i}^{d}\left(t-1\right)+{V}_{i}^{d}\left(t\right).\end{array}$$

(19)

The basic steps of PSO are given in Algorithm 1.
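Algorithm 1 is not reproduced here, but the velocity and position updates of (18)–(19) can be sketched in a few lines of Python. This is a minimal illustration: the bounds, inertia weight, and learning factors below are arbitrary example values, not the settings used in the paper.

```python
import random

def pso(f, dim, n=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO minimizing f, following eqs. (18)-(19)."""
    X = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]              # personal bests
    pval = [f(x) for x in X]
    g = pval.index(min(pval))
    gbest, gval = pbest[g][:], pval[g]     # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # eq. (18): inertia + cognitive + social components
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]         # eq. (19)
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval
```

For example, minimizing the sphere function `f(x) = sum(t*t for t in x)` with this sketch drives the best objective value close to zero within the 50 iterations and 20 agents used throughout this paper.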

The firefly algorithm is a bioinspired evolutionary metaheuristic that mimics the social behavior of fireflies. Fireflies produce short, rhythmic flashes whose pattern characterizes a particular species. The artificial firefly algorithm makes certain simplifying assumptions: all fireflies are unisexual, so every firefly can attract every other, and attractiveness is proportional to brightness, which governs the relative firefly movements. The brightness of a firefly is defined by the problem at hand; for a minimization problem, the brightness may be the reciprocal of the objective function value. The pseudocode of the basic firefly algorithm, as given by Yang in [64], is depicted in Algorithm 2. The equations used in the firefly algorithm are as follows:

$$\begin{array}{c}{X}_{i}^{t+1}={X}_{i}^{t}+{V}_{i}^{t},\end{array}$$

(20)

$$\begin{array}{c}{V}_{i}^{t}={\beta}_{0}{e}^{-\gamma {r}^{2}}\left({X}_{j}-{X}_{i}\right)+\alpha \left(\mathrm{rand}-0.5\right),\end{array}$$

(21)

$$\begin{array}{c}{r}_{ij}=\Vert {X}_{i}-{X}_{j}\Vert =\text{Euclidean distance between}\ {X}_{i}\ \text{and}\ {X}_{j},\\ \beta \left(r\right)={\beta}_{0}{e}^{-\gamma {r}^{m}},\quad \text{where}\ m\ge 1,\end{array}$$

(22)

where *β*_{0}, *γ*, *α* ∈ [0, 1].

Each firefly position is updated using (20). The velocity of the *i*th firefly depends on the attractiveness, which decays with the distance between fireflies *X*_{i} and *X*_{j} in an *m*-dimensional space, and on *α*, a small random scaling factor typically in the range 0 to 0.2; the equations governing the velocity are given in (21) and (22).
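The movement of one firefly toward a brighter one, per (20)–(22) with *m* = 2, can be sketched as follows (an illustrative helper, not the authors' code; `firefly_move` is our name):

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly xi toward brighter firefly xj per eqs. (20)-(22):
    attractiveness beta(r) = beta0 * exp(-gamma * r^2) decays with
    distance; alpha scales a small random perturbation."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # r_ij squared
    beta = beta0 * math.exp(-gamma * r2)             # eq. (22), m = 2
    # eq. (21) gives the velocity; eq. (20) adds it to the position
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

With `alpha=0` the move is deterministic: a firefly at the origin attracted to one at (1, 1) moves a fraction *β*₀*e*^{−2} of the way toward it.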

Two classification techniques are used, and the basic details of these techniques are discussed in the subsequent sections.

Multilayer perceptron (MLP) is the most commonly used supervised feed forward artificial neural network. It is a modification of the linear perceptron algorithm. It consists of many nodes that are arranged in several layers. In general, MLP contains three or more processing layers: one input layer for receiving input features, one or more hidden layers, and an output layer for producing classification results based on the classes [65].

Each node is an artificial neuron that maps its inputs to an output using a weighted sum of the inputs followed by an activation function.

The weighted input is given by

$$\begin{array}{c}{V}_{i}={\sum}_{j}{W}_{ij}{X}_{j}+{\mathrm{\Theta}}_{i},\\ {Y}_{i}={f}_{i}\left({V}_{i}\right),\end{array}$$

(23)

where *V*_{i} is the weighted sum of the input features, *W* represents the weights, *X* represents the input features, and Θ_{i} is the bias term.

The activation function is denoted by *f*(*x*). The most frequently used activation functions are sigmoids. They are as follows:

$$\begin{array}{c}f\left({v}_{i}\right)=\mathrm{tanh}\left({v}_{i}\right),\\ f\left({v}_{i}\right)={\left(1+{e}^{-{v}_{i}}\right)}^{-1}.\end{array}$$

(24)

The multilayer perceptron is trained using back propagation (BP). The weight update equation used in BP is given in

$$\begin{array}{c}{w}_{ji}\leftarrow {w}_{ji}+\eta {\delta}_{j}{x}_{ji}+\alpha \Delta {w}_{ji}\left(n-1\right),\quad \text{where}\ 0\le \eta ,\alpha \le 1.\end{array}$$

(25)

The parameters learning rate (*η*) and momentum (*α*) are evolved using the evolutionary algorithms presented in Section 4. A basic MLP algorithm is provided in Algorithm 3 [66, 67].
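As a concrete illustration of (23)–(24), a single forward pass of a small MLP with a logistic-sigmoid activation can be written as follows (a minimal sketch with our own `forward` helper, not the Weka implementation used in the experiments):

```python
import math

def forward(x, W1, b1, W2, b2):
    """Forward pass of a 3-layer MLP: each layer computes the weighted
    sum of eq. (23) and applies the logistic sigmoid of eq. (24)."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    # hidden layer: V_i = sum_j W1[i][j] * x[j] + b1[i], Y_i = f(V_i)
    hidden = [sig(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    # output layer, same rule applied to the hidden activations
    return [sig(sum(w * h for w, h in zip(row, hidden)) + b)
            for row, b in zip(W2, b2)]
```

For a zero input with unit weights and zero biases, the hidden unit outputs *f*(0) = 0.5 and the output unit therefore produces *f*(0.5).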

Back propagation learning can become trapped in a local minimum; to mitigate this, in this article the learning rate and momentum values are evolved using the three evolutionary search algorithms (PSO, GSA, and FA). A three-layer neural network has been used, with an input layer, one hidden layer, and an output layer. The size of the input layer equals the number of features of the data, the size of the output layer equals the number of classes, and the size of the hidden layer is the average of the input and output layer sizes. The performance of these three algorithms is compared in the simulations and discussed in the Results and Discussion section of this paper.

Support vector machine (SVM) is a supervised machine learning algorithm, often used for binary classification. It was originally developed by Vapnik in 1979 [68]. The training data is in the form of instance-label pairs (*x*_{i}, *y*_{i}). The SVM classifier finds an optimal separating hyperplane, *F*(*x*) = *w*^{t} · *x* + *b* = 0, to separate the negative and positive classes.

Based on the class labels, two hyperplanes are formed, which are as follows:

*F*(*x*) = *w*^{t} · *x* + *b* ≥ 0 for positive instances (*y*_{j} = +1) and *F*(*x*) = *w*^{t} · *x* + *b* ≤ 0 for negative instances (*y*_{j} = −1), where *w* is the weight vector, *x* is the input vector, and *b* is the bias. Classifications are made with respect to the hyperplanes thus formed.

The optimization problem formed during the development of soft margin classifier is as follows:

$$\begin{array}{c}\text{Minimize}\quad Z=\frac{1}{2}\langle w,w\rangle +C{\sum}_{i}{\xi}_{i},\\ \text{subject to}\quad {y}_{i}\left(\langle w,{x}_{i}\rangle +b\right)\ge 1-{\xi}_{i},\quad {\xi}_{i}\ge 0.\end{array}$$

(26)

The parameter cost (*C*) mentioned in (26) will be evolved using the evolutionary algorithms mentioned in Section 4.
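The role of the cost parameter *C* in (26) can be illustrated by evaluating the soft-margin objective directly. The sketch below (our own `soft_margin_objective` helper, under the assumption that slacks are the hinge losses *ξ*_{i} = max(0, 1 − *y*_{i}(⟨*w*, *x*_{i}⟩ + *b*))) computes the value of (26) for a candidate (*w*, *b*):

```python
def soft_margin_objective(w, b, C, data):
    """Value of eq. (26): (1/2)<w, w> + C * sum of slack variables,
    where each slack is the hinge loss of one training pair (x, y)."""
    margin_term = 0.5 * sum(wi * wi for wi in w)
    slack_sum = sum(max(0.0, 1 - y * (sum(wi * xi for wi, xi in zip(w, x)) + b))
                    for x, y in data)
    return margin_term + C * slack_sum
```

A larger *C* penalizes misclassified or margin-violating points more heavily, which is why evolving *C* trades off margin width against training error.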

Diagnosing diseases from data collected from many patients with varying degrees of a specific disease is a classification problem. In medical information systems, both single classifiers and ensemble classifiers have been studied for the disease diagnosis problem. In this section, we present the design of hybrid systems that combine evolutionary algorithms with classification techniques to diagnose diseases from data. A few hybrid systems have been developed to optimize the parameters of classifiers [69, 70]; however, those systems target different application domains.

The performance of any classifier broadly depends on three factors: the technique used for classification; the data statistics (regression, entropy, kurtosis, standard deviation, number of features considered for training, size of the training data, etc.); and the parameters of the classifier (learning rate, depth of the tree, maximum number of child nodes allowed for a parent node in a decision tree, pruning, fuzzy membership functions, activation functions, etc.). In this paper, we focus on optimizing the classifier parameters using evolutionary algorithms, and thus our designed system qualifies as a hybrid system. Figure 1 illustrates a schematic block diagram of the proposed hybrid system, depicting the major steps carried out to arrive at a disease diagnosis. The rectangle with the dotted border marks the main emphasis of this paper: we study how two classifiers, namely, SVM and MLP, perform on disease diagnosis when their parameters are optimized using three evolutionary algorithms, namely, PSO, GSA, and FA, with the goal of maximizing the quality of diagnosis in terms of PAC, SPY, and SEY; that is, the goal is to optimize the three objectives explained in Section 3. This is depicted in the left half of the dotted rectangle in Figure 1. The basic steps involved in the hybrid system are summarized in Algorithm 4.

The preprocessing stage handles missing values of a feature by inserting the most frequent value or an interval-estimated value for that feature. As part of preprocessing, features have also been normalized using the min–max norm to reduce the training time of the classifiers, which would otherwise suffer from the varied ranges of the feature values. In step 3, we employ two classifiers (SVM and MLP). In step 4, the parameter selected for evolution in SVM is cost, whereas for MLP two parameters, namely, learning rate and momentum, are evolved. The range of all three parameters (cost, learning rate, and momentum) is set to [0, 1]. In step 5, the objective function selected is either single objective or multiobjective; if multiobjective optimization is chosen, then for the sake of uniformity, all objective functions are converted into either maximizations or minimizations. The multiple objectives considered for multiobjective optimization are given in (3)–(9). In step 6, the three evolutionary algorithms (particle swarm optimization, gravitational search algorithm, and firefly algorithm) are selected as the optimization techniques to find the optimal parameter values for the considered classifiers with respect to the multiple objectives: prediction accuracy, sensitivity, and specificity. In step 8, the results found in step 7 are postprocessed according to the optimization model selected in step 5. If the optimization model is single objective, then to assess the performance of the evolutionary algorithm, statistics such as maximum, minimum, mean, and median have to be computed. If the selected model is multiobjective (or weighted multiobjective) optimization, the quality of the nondominated solutions must be found using metrics such as spacing and generational distance.
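The min–max normalization used in preprocessing can be sketched as follows (an illustrative helper of our own; the actual experiments used Weka's facilities). Each feature column is rescaled to [0, 1] so that features with large raw ranges do not dominate training:

```python
def min_max_normalize(rows):
    """Column-wise min-max scaling of a list of feature rows to [0, 1].
    Constant columns are mapped to 0 to avoid division by zero."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in rows]
```

For example, the rows `[[0, 10], [5, 20], [10, 30]]` normalize to `[[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]`.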

The hybridization process ensures that the population of each evolutionary algorithm encodes the classifier parameters while satisfying the parameter bounds. During the execution of the evolutionary algorithms, the fitness of the population is computed from the performance values obtained by running the classifier of step 4 on the dataset.

Once all three EAs have been executed individually, the optimal parameter values for each of the two classifiers (SVM and MLP) are found, and subsequently the six HISs are compared based on their fitness values. The HIS with the best fitness value for a particular dataset is taken as the proposed HIS for that dataset, and its objective function and parameter values are treated as the final optimal values.
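The overall hybridization loop can be sketched generically: candidate parameter vectors within the stated bounds are scored by a fitness function that runs the classifier with those parameters. In the sketch below (our own names; a simple random search stands in for the actual PSO/GSA/FA search, purely for illustration), any black-box `fitness` can be plugged in:

```python
import random

def evolve_classifier_params(fitness, bounds, n=20, iters=50):
    """Illustrative stand-in for the hybridization loop: candidate
    parameter vectors are sampled within bounds and scored by a
    fitness callback (e.g. weighted accuracy/sensitivity/specificity
    of the classifier trained with those parameters). A real run
    would replace this random search with PSO, GSA, or FA."""
    best_p, best_f = None, float("-inf")
    for _ in range(iters * n):          # iters generations of n agents
        p = [random.uniform(lo, hi) for lo, hi in bounds]
        f = fitness(p)
        if f > best_f:
            best_p, best_f = p, f
    return best_p, best_f
```

Running each EA/classifier pair through such a loop yields one (parameters, fitness) result per HIS, and the pair with the best fitness on a dataset is reported as the proposed HIS for that dataset.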

By combining the two classifiers and the three evolutionary optimization techniques for optimizing the chosen classifier parameters, a number of hybrid intelligent systems have been obtained as possible alternatives. These alternative hybrid intelligent systems (HISs) are termed GSA-based SVM (GSVM), FA-based SVM (FSVM), PSO-based SVM (PSVM), GSA-based MLP (GMLP), FA-based MLP (FMLP), and PSO-based MLP (PMLP). The six HISs are tested on all eleven benchmark datasets considered in this work, once without resampling and once with resampling. Hence, these HISs produce a set of sixteen results for each disease dataset: eight for SVM and eight for MLP. Of each set of eight, one is for the basic classifier (SVM only or MLP only) without resampling, another is for the same classifier with resampled data, and the remaining six are for the three evolutionary algorithms, each run once on the original data and once on resampled data. The benchmark datasets were also tested with AdaBoost versions of SVM and MLP; however, on average, the AdaBoost results are not competitive with the instance-based supervised resampling technique in Weka, and the corresponding performances are given in Table 1. Hence, we continued our experiments using the instance-based supervised resampling technique.

To assess the performance of the proposed hybrid system, 11 medical datasets of various diseases are considered. These data have been collected from the UCI repository [71] and form the basis of almost all performance evaluations in disease diagnosis. A detailed account of the datasets employed in this paper is summarized in Table 2. All six hybrid system alternatives and the basic classifier techniques have been executed on each of the 11 datasets, once without resampling and once with the resampled dataset.

All three evolutionary algorithms are executed for 50 iterations with 20 agents per iteration. The algorithms have been implemented in Java, and the Weka 3.7.4 class libraries have been used for the implementation of SVM and MLP. Instance-based resampling, available in Weka, has been used for resampling. For experimentation, the datasets are divided into training and testing sets using 10-fold cross-validation. To compare the performance of our proposed hybrid system on the datasets employed, we have compared our results with those presented in three very recent papers that use the same datasets (not all 11 datasets; only a subset is utilized by each of these papers). References [14] through [16] are recent literature, referred to in our work as the *base papers* for every dataset (as earmarked in the legends of Figure 2). The results for the breast cancer, hepatitis, BUPA liver, Pima, Cleveland, and Parkinson datasets have been compared with those of [14], whereas the results for Statlog, Spect, Spectf, and Eric have been compared with those of BagMOOV [15]. The thyroid disease results alone are compared with those of [16]. In this work, the highest priority is given to prediction accuracy; hence, *w*_{1}, *w*_{2}, and *w*_{3} in (6) are set to 0.95, 0.05/2, and 0.05/2, respectively.
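With those weights, the scalarized fitness of (6) can be sketched as a one-line combination of the three objectives (our own `weighted_fitness` helper, assuming the objectives are expressed as fractions in [0, 1]):

```python
def weighted_fitness(pac, sey, spy, w=(0.95, 0.025, 0.025)):
    """Weighted scalarization per eq. (6) with w1 = 0.95 for prediction
    accuracy and w2 = w3 = 0.05/2 for sensitivity and specificity."""
    return w[0] * pac + w[1] * sey + w[2] * spy
```

For instance, accuracy 0.9, sensitivity 0.8, and specificity 0.7 combine to a fitness of 0.8925, showing how heavily accuracy dominates the objective.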

In this section, we present a number of statistical analyses for the results obtained from our proposed hybrid system. The following subsections provide details about how these analyses are done.

The statistical analysis was done using the Wilcoxon signed-rank test [72], which compares the performance of all the techniques. The null and alternative hypotheses are set as follows:

*H*_{0}: median(*X*) is equal to median(*Y*).

*H*_{1}: median(*X*) is not equal to median(*Y*).

The objective values of FMLP over all disease datasets are tested against those of the remaining five techniques, namely, FSVM, GSVM, PSVM, GMLP, and PMLP, on all disease datasets, once with and once without resampling.

The Wilcoxon signed-rank test was executed at significance levels 0.01 and 0.05. The MATLAB function signrank() was used to perform the statistical analysis, and the conclusions arrived at are presented in Tables 3, 4, 5, and 6.
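To illustrate what signrank() computes, the signed-rank statistic itself can be written in a few lines of plain Python (a simplified, illustrative stand-in of our own; MATLAB's signrank() additionally converts this statistic into a *p* value):

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic: rank the nonzero paired
    differences by absolute value (averaging tied ranks), then return
    the smaller of the positive and negative rank sums."""
    d = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(d):                              # average ranks over ties
        j = i
        while j + 1 < len(d) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    w_minus = sum(r for r, v in zip(ranks, d) if v < 0)
    return min(w_plus, w_minus)
```

A small statistic relative to the number of pairs indicates that one technique's results are systematically shifted relative to the other's, which is what drives the *h* = 1 outcomes reported in Tables 3–6.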

Student's *t*-test is used to test whether a sample *X* drawn from a normal distribution can have mean *m* without knowing the standard deviation [73]. We executed FMLP 20 times and noted the best performance in each run. Student's *t*-test is executed on the three objectives: prediction accuracy (PAC), sensitivity (SEN), and specificity (SPE). The null and alternative hypotheses are set as follows:

*H*_{0}: *μ*_{X} = *m*.

*H*_{1}: *μ*_{X} ≠ *m*.

Student's *t*-test is performed using the ttest() function available in MATLAB. The outcomes of this test for various parameter values are summarized in Tables 7 and 8.
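The statistic underlying ttest() is the one-sample *t* statistic, which is simple enough to sketch directly (our own `t_statistic` helper; MATLAB's ttest() additionally compares the statistic against the *t* distribution with *n* − 1 degrees of freedom to produce the *h* and *p* values):

```python
import math

def t_statistic(sample, m):
    """One-sample t statistic under H0: mu = m,
    t = (mean - m) / (s / sqrt(n)) with the sample std-dev s."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / (n - 1)  # unbiased
    return (mean - m) / math.sqrt(var / n)
```

An *h* value of zero in Tables 7 and 8 corresponds to a statistic small enough that the null hypothesis *μ*_{X} = *m* cannot be rejected at the chosen LOS.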

The two important and distinct goals of multiobjective optimization are (1) finding solutions as close to the Pareto-optimal solutions as possible and (2) finding solutions as diverse as possible in the obtained nondominated front. In this work, the first goal is tested using the generational distance (GD) and the second by computing the spacing [58]. Two sets are used in the metric computations, namely, *Q* and *P*^{∗}, where *Q* is the Pareto front found by the tested algorithm and *P*^{∗} is a subset of the true Pareto-optimal members. Before computing these metrics, the data in *Q* are normalized, since the various objective functions have different ranges.

Generational distance (GD): Veldhuizen introduced this metric [74]. It finds an average distance between the members of *Q* and *P*^{∗} as follows: GD = (∑_{i=1}^{|Q|}*d*_{i}^{p})^{1/p}/|*Q*|. For *p* = 2, the parameter *d*_{i} is the Euclidean distance between the *i*th member of *Q* and the nearest member of *P*^{∗}.

Spacing (SP): Schott introduced this metric in 1995 [75]. It finds the standard deviation of the different *d*_{i} values and is calculated as $S=\sqrt{\left(1/\left|Q\right|\right){\sum}_{i=1}^{\left|Q\right|}{\left({d}_{i}-\overline{d}\right)}^{2}},$ where *d*_{i} = min_{k∈Q, k≠i}∑_{m=1}^{M}|*f*_{m}^{i} − *f*_{m}^{k}| and $\overline{d}$ is the mean of the *d*_{i} values. A good algorithm has a small SP value. The set *Q* is obtained by executing FMLP for 50 iterations with 20 agents per iteration; in every iteration, the Pareto fronts are stored in external memory. The GD and SP metrics for all three objectives, with and without resampling, are given in Table 9.
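Both metrics are straightforward to compute from lists of objective vectors; the following sketch (our own `gd_and_sp` helper, assuming the objectives are already normalized as the text requires) follows the definitions above with *p* = 2 for GD:

```python
import math

def gd_and_sp(Q, P, p=2):
    """Generational distance and spacing for a found front Q against a
    reference front P, both given as lists of objective vectors."""
    # GD: p-norm of nearest-member distances from Q to P, divided by |Q|
    dist = [min(math.dist(q, r) for r in P) for q in Q]
    gd = (sum(d ** p for d in dist) ** (1.0 / p)) / len(Q)
    # SP: std-dev of each member's nearest-neighbour L1 distance within Q
    d1 = [min(sum(abs(a - b) for a, b in zip(q, k))
              for j, k in enumerate(Q) if j != i)
          for i, q in enumerate(Q)]
    dbar = sum(d1) / len(d1)
    sp = math.sqrt(sum((d - dbar) ** 2 for d in d1) / len(d1))
    return gd, sp
```

A front that coincides with the reference set gives GD = 0, and a front whose members are evenly spread gives SP = 0.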

The best values found in all the hybrid systems are discussed as follows.

The performance of all the 8 techniques (2 basic machine learning and six hybrid systems) over the Cleveland dataset is depicted in Table 10. Without resampling, PMLP shows the best sensitivity (84.79%), whereas FMLP shows the best results for all the other performance parameters: accuracy (85.8%), specificity (87.5%), *F*-measure (85.74%), recall (85.8%), and precision (85.91%). With resampling, PMLP shows the best accuracy (94.1%) and, jointly with GMLP and FMLP, the best results for all the other parameters: sensitivity (93.49%), specificity (94.77%), *F*-measure (94.05%), recall (94.05%), and precision (94.07%). A comparison of the Cleveland results with the state of the art is given in Table 11. Table 10 summarizes the performance of the proposed hybrid alternatives for the Cleveland dataset, and Table 11 compares this performance with the best results obtained in recent literature.

The performance of all the 8 techniques (2 machine learning and six hybrid systems) over the Statlog dataset with and without resampling is given in Table 12. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision without resampling are achieved by FMLP, PMLP, FMLP, FMLP, FMLP, and FMLP, respectively; the best values found have been bolded for easy identification in Table 12. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision with resampling are achieved by GMLP, GMLP, FMLP, GMLP, GMLP, and GMLP, respectively; the best values found have been bolded for easy identification in Table 12. A comparison of the Statlog results with the state of the art is given in Table 13. The highest prediction accuracy for Statlog is 85.9% (without resampling) and 90.7% (with resampling). The performance of all the considered techniques over the Statlog dataset is better with resampling than without. Table 12 summarizes the performance of the proposed hybrid alternatives for this dataset, and Table 13 compares this performance with the best results obtained in recent literature.

The performance of all the 8 techniques (2 machine learning and six hybrid systems) over the Spect dataset with and without resampling is given in Table 14. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision without resampling are achieved by FMLP, FSVM, FMLP, FMLP, FMLP, and FMLP, respectively, with the values 85%, 88.4%, 74.2%, 83.3%, 85%, and 83.9%. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision with resampling are achieved by GMLP (PMLP), GMLP (FMLP, PMLP), GMLP (PMLP), PMLP (GMLP), GMLP (PMLP), and GMLP (PMLP), respectively, with the values 89.5%, 91.9%, 77.3%, 89.2%, 89.5%, and 89.1%. A comparison of the Spect results with the state of the art is given in Table 15. The highest prediction accuracy for Spect is 85% (without resampling) and 89.5% (with resampling). The performance of all the considered techniques over the Spect dataset is better with resampling than without. Table 14 summarizes the performance of the proposed hybrid alternatives for this dataset, and Table 15 compares this performance with the best results obtained in recent literature.

The performance of all the 8 techniques (2 machine learning and six hybrid systems) over Spectf dataset with and without resampling is given in Table 16. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision without resampling are achieved by FMLP, FSVM, FMLP, PSVM, FMLP, and FMLP, respectively, with the values 82.4%, 88%, 83.3%, 80.6%, 82.4%, and 82.6%. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision with resampling are achieved by PMLP, GSVM, PMLP, PMLP, PMLP, and PMLP, respectively; best values found are bolded for easy identification in Table 16. A comparison of Spectf result with the state-of-the-art result is given in Table 17. The highest prediction accuracy for Spectf is 82.4% (without resampling) and 90.6% (with resampling). The performance of all the considered techniques over Spectf dataset with resampling is better than without resampling except in specificity. Table 16 summarizes the performance of proposed hybrid alternatives for this dataset, and Table 17 compares this performance with best results obtained in recent literature.

The performance of all the 8 techniques (2 basic machine learning and six hybrid systems) over the ERIC dataset is depicted in Table 18. Without resampling, FMLP shows the best results for accuracy (81.34%), specificity (79.1%), *F*-measure (81.02%), and recall (81.34%), whereas GMLP shows better results for sensitivity (88.41%) and precision (82.5%). With resampling, GMLP shows the best results for accuracy (91.39%), sensitivity (88.78%), specificity (93.69%), *F*-measure (91.40%), recall (91.39%), and precision (91.48%). A comparison of the ERIC results with the state of the art is given in Table 19. Table 18 summarizes the performance of the proposed hybrid alternatives for the ERIC dataset, and Table 19 compares this performance with the best performance in recent literature.

The performance of all the 8 techniques (2 basic machine learning and six hybrid systems) over the breast cancer dataset is depicted in Table 20. Without resampling, GMLP shows the best accuracy (97%) and precision (97.04%), whereas PSVM shows better results for sensitivity (95.08%), specificity (98.02%), and *F*-measure (97%), and GMLP and PSVM jointly show the best recall (97%). With resampling, FMLP shows the best accuracy (98%), while for all the other parameters, namely, sensitivity (96.61%), *F*-measure (98%), recall (98%), and precision (98%), PMLP (jointly with FMLP) shows the best results, and PSVM (jointly with GSVM and FSVM) shows the best specificity (99.55%). A comparison of the breast cancer results with the state of the art is given in Table 21. Table 20 summarizes the performance of the proposed hybrid alternatives for the breast cancer dataset, and Table 21 compares this performance with the best results obtained in recent literature.

The performance of all the 8 techniques (2 basic machine learning and six hybrid systems) over the Hepatitis dataset is depicted in Table 22. Without resampling, PMLP shows the best results for specificity (90.55%), *F*-measure (86.77%), recall (87.1%), and precision (86.6%), whereas GSVM (jointly with PSVM and FSVM) shows better results for accuracy (87.1%) and sensitivity (73.08%). With resampling, FMLP (jointly with PMLP and GMLP) shows the best results for accuracy (92.26%), sensitivity (80.77%), specificity (94.57%), *F*-measure (92.14%), recall (92.26%), and precision (92.08%). A comparison of the Hepatitis results with the state of the art is given in Table 23. Table 22 summarizes the performance of the proposed hybrid alternatives for the Hepatitis dataset, and Table 23 compares this performance with the best results obtained in recent literature.

The performance of all the 8 techniques (2 machine learning and 6 hybrid systems) over the thyroid dataset with and without resampling is given in Table 24. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision without resampling are achieved by FMLP, FMLP (PMLP), FMLP, FMLP, FMLP (PMLP), and FMLP (PMLP), respectively; the best values found have been bolded for easy identification in Table 24. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision with resampling are achieved by PMLP (FMLP), FMLP (PMLP), PMLP (FMLP), PMLP (FMLP), PMLP (FMLP), and PMLP (FMLP), respectively, with the values 98.6%, 98.2%, 98.74%, 98.6%, 98.6%, and 98.6%. A comparison of the thyroid results with the state of the art is given in Table 25. The highest prediction accuracy for thyroid is 97.7% (without resampling) and 98.6% (with resampling). The performance of all the considered techniques over the thyroid dataset is better with resampling than without. Table 24 summarizes the performance of the proposed hybrid alternatives for this dataset, and Table 25 compares this performance with the best results obtained in recent literature.

The performance of all the 8 techniques (2 machine learning and six hybrid systems) over the Parkinson dataset with and without resampling is given in Table 26. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision without resampling are achieved by FMLP, FMLP, PSVM (FSVM, GSVM), FMLP, FMLP, and FMLP, respectively, with the values 93.8%, 96.6%, 96.2%, 93.9%, 93.8%, and 94%. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision with resampling are achieved by GMLP, GMLP (FMLP, PMLP), PSVM (GSVM, FSVM), PMLP (GMLP), PMLP (GMLP), and PMLP (GMLP), respectively, with the values 96.9%, 97.4%, 100%, 96.9%, 96.9%, and 96.9%. A comparison of the Parkinson results with the state of the art is given in Table 27. The highest prediction accuracy for Parkinson is 93.8% (without resampling) and 96.9% (with resampling). The performance of all the considered techniques over the Parkinson dataset is better with resampling than without. Table 26 summarizes the performance of the proposed hybrid alternatives for this dataset, and Table 27 compares this performance with the best results obtained in recent literature.

The performance of all the 8 techniques (2 machine learning and six hybrid systems) over Pima dataset with and without resampling is given in Table 28. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision without resampling are achieved by FMLP, FMLP, PSVM (FSVM, GSVM), FMLP, FMLP, and FMLP, respectively; best values found have been bolded for easy identification in Table 28. The highest accuracy, sensitivity, specificity, *F*-measure, recall and precision with resampling are achieved by FMLP, GMLP, FMLP, FMLP, FMLP, and FMLP, respectively; best values found have been bolded for easy identification in Table 28. A comparison of Pima result with the state-of-the-art result is given in Table 29. The highest prediction accuracy for Pima is 78.3% (without resampling) and 81% (with resampling). The performance of all the considered techniques over Pima dataset with resampling is better than without resampling. Table 28 summarizes the performance of proposed hybrid alternatives for this dataset, and Table 29 compares this performance with best results obtained in recent literature.

The performance of all the 8 techniques (2 machine learning and six hybrid systems) over BUPA dataset with and without resampling is given in Table 30. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision without resampling are achieved by GMLP, PSVM, FMLP, FMLP, FMLP, and FMLP, respectively, with the values 73%, 72%, 74.9%, 72.8%, 73%, and 72.8%. The highest accuracy, sensitivity, specificity, *F*-measure, recall, and precision with resampling are achieved by GMLP, FMLP, GMLP, GMLP, GMLP, and GMLP, respectively, with the values 73.3%, 67.5%, 78.4%, 73.3%, 73.3%, and 73.9%. A comparison of BUPA result with the state-of-the-art result is given in Table 31. The highest prediction accuracy for BUPA is 73% (without resampling) and 73.3% (with resampling). The performance of all the considered techniques over BUPA dataset with resampling is better than without resampling except in sensitivity. Table 30 summarizes the performance of proposed hybrid alternatives for this dataset, and Table 31 compares this performance with best results obtained in recent literature.

In GSA, an agent's position update learns from all other agents, whereas in PSO an agent's position update is based on two values, gBest and pBest. In each iteration, these two algorithms therefore generate at most *n* new solutions. In FA, by contrast, each agent may generate *O*(*n*) new solutions in the worst case by moving towards every brighter agent. Consequently, in the worst case, FA explores the search space more thoroughly per iteration than the other two algorithms (GSA and PSO). The same is demonstrated over the 11 medical datasets.

From the previous observations, it is concluded that MLP without resampling shows improvement on all datasets when compared with the latest literature results, as depicted in Figure 2. As mentioned earlier, in Figure 2 the blue bar represents the best performance in the literature. The best results obtained by any of the six HISs proposed in this paper are depicted alongside in Figure 2, once without resampling (orange bar) and once with resampled data (gray bar). Sensitivity and specificity values for all systems are presented in Tables 32 and 33, and it can be observed that our proposed hybrid system performs very well across all the datasets, in particular the parameter-optimized MLP. Table 34 summarizes the optimal parameter values of MLP. Hence, in comparison with the ensemble techniques, the parameter-optimized MLP gives better results.

Table 3 gives the outcomes of the rank test for the results with resampling at the level of significance (LOS) 0.01. If the *h* value is zero, then *H*_{0} holds; otherwise *H*_{1} holds. In Table 3, 161 entries out of a total of 165 are ones, meaning that the null hypothesis is rejected 161 times and accepted four times.

Table 4 gives the outcomes of the rank test for the results with resampling at the level of significance (LOS) 0.05. In Table 4, 163 entries out of 165 are ones, meaning that the null hypothesis is rejected 163 times and accepted two times.

Table 5 gives the outcomes of the rank test for the results without resampling at the level of significance (LOS) 0.01. In Table 5, 164 entries out of 165 are ones, meaning that the null hypothesis is rejected 164 times and accepted once.

Table 6 gives the outcomes of the rank test on the results without resampling at the level of significance (LOS) 0.05. Table 6 contains 164 ones out of 165 entries, meaning the null hypothesis is rejected 164 times and accepted once.
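The rank test behind Tables 3–6 is the Wilcoxon signed-rank test [72]. The sketch below uses the large-sample normal approximation without tie correction (the paper does not state which variant it used), returning h = 1 when the null hypothesis of equal medians is rejected at the given LOS:

```python
import math

def rank_test(sample_a, sample_b, los):
    """Wilcoxon signed-rank test (normal approximation, no tie correction),
    as used for Tables 3-6. Returns h = 1 when the null hypothesis of equal
    medians is rejected at the given level of significance (LOS)."""
    diffs = [a - b for a, b in zip(sample_a, sample_b) if a != b]
    n = len(diffs)
    # Rank the absolute differences (assumes distinct magnitudes)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    t_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (t_plus - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return 1 if p < los else 0
```

Two paired samples whose differences all share one sign yield h = 1, while samples with balanced positive and negative differences yield h = 0, matching the 0/1 entries tallied in Tables 3–6.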

The outcomes of FMLP are taken as the "*m*" value. Tables 7 and 8 give the results of the *t*-test for both the resampling and without-resampling techniques at LOS 0.01 and 0.05. In these tables, the *h* value is zero for all datasets at both 0.01 and 0.05 LOS; hence, the null hypothesis is accepted for all datasets.
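The comparison in Tables 7 and 8 is a one-sample *t*-test of each system's outcomes against the FMLP value *m*. The function name and the critical-value lookup below are illustrative (the caller supplies the two-sided critical value for the chosen LOS and n − 1 degrees of freedom from standard *t* tables):

```python
import math

def t_test_against_m(outcomes, m, t_critical):
    """One-sample t-test of a system's outcomes against the FMLP value m
    (Tables 7 and 8). Returns h = 0 (null hypothesis accepted) when |t|
    does not exceed the supplied two-sided critical value."""
    n = len(outcomes)
    mean = sum(outcomes) / n
    var = sum((x - mean) ** 2 for x in outcomes) / (n - 1)  # sample variance
    t = (mean - m) / math.sqrt(var / n)
    return 0 if abs(t) <= t_critical else 1
```

With five accuracy outcomes whose mean equals *m*, the statistic is t = 0 and h = 0 (null hypothesis accepted), which is the pattern reported across all datasets in Tables 7 and 8.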

Given the complex framework of ensemble approaches and the moderate performance of individual classifiers, hybrid systems hold considerable promise for the diagnosis and prognosis of diseases. To address these limitations, we proposed a disease diagnosis system combining three evolutionary algorithms with SVM and MLP classifiers. The three evolutionary algorithms optimize the parameters of the two classifiers, and the enhanced classifiers are then used to train on and diagnose diseases. Accordingly, six hybrid diagnosis alternatives were obtained by working out the combinations of classifiers and evolutionary algorithms. Based on the results presented in this paper, it can be concluded that our hybridization approach provides higher prediction accuracy than other methods in the literature across a wide variety of disease datasets. Even among the six parameter-optimized classifier systems proposed, FMLP was found to be the best across the majority of the 11 datasets considered. On average, MLP shows 2.2% and 6.814% improvement in prediction accuracy on the 11 datasets with and without resampling, respectively. The ranges of improvement shown by MLP in sensitivity are −2.9 to 75.13 and −9.68 to 86.33 without and with resampling, respectively; in specificity, they are −9.68 to 86.33 and −18.93 to 36.33 without and with resampling, respectively. From the experimental results, it is concluded that FMLP outperforms recently developed ensemble classifiers ([14, 15]). As a continuation of this research, we intend to process much higher-dimensional datasets in two major phases: feature selection and parameter evolution of the classifier. For feature selection, a similarity-metric-based hypergraph will be constructed, and its special properties will then be used to identify important topological and geometrical features.
In phase 2, competitive and cooperative parallel hybrid intelligent systems will be employed to incorporate direct and indirect communication among the different systems at guaranteed run times, allowing the entire set of HISs to converge to a single value. This work is presently ongoing.
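The parameter-optimization loop at the core of each HIS can be sketched independently of any particular dataset. The following minimal (1+λ) evolution strategy over SVM's (log₁₀ C, log₁₀ γ) is illustrative only: the paper's actual systems use GSA, PSO, and FA, and the `fitness` function here is a hypothetical stand-in for "train the classifier with these parameters and return validation accuracy".

```python
import random

random.seed(42)

def fitness(log_c, log_gamma):
    """Hypothetical stand-in for classifier validation accuracy; here a smooth
    surface peaking at C = 10^1, gamma = 10^-2 (higher is better)."""
    return -((log_c - 1.0) ** 2 + (log_gamma + 2.0) ** 2)

def evolve_parameters(generations=100, lam=10, sigma=0.5):
    """Minimal (1+lambda) evolution strategy: keep one parent, spawn lam
    Gaussian-perturbed children per generation, and adopt any child that
    improves the fitness."""
    parent = (random.uniform(-3, 3), random.uniform(-3, 3))
    best = fitness(*parent)
    for _ in range(generations):
        for _ in range(lam):
            child = (parent[0] + random.gauss(0, sigma),
                     parent[1] + random.gauss(0, sigma))
            f = fitness(*child)
            if f > best:
                parent, best = child, f
    return parent, best
```

Replacing `fitness` with cross-validated accuracy (or a weighted combination of accuracy, sensitivity, and specificity) and the (1+λ) loop with GSA, PSO, or FA yields the six HIS variants evaluated in this paper.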

Two of the authors (MadhuSudana Rao Nalluri and Kannan K.) of this paper wish to thank the Department of Science and Technology, Government of India, for the financial sanction towards this work under FIST programme: SR/FST/MSI-107/2015.

The authors declare that there is no conflict of interest regarding the publication of this paper.

1. Liao S.-H., Chu P.-H., Hsiao P.-Y. Data mining techniques and applications–a decade review from 2000 to 2011. *Expert Systems with Applications*. 2012;39(12):11303–11311. doi: 10.1016/j.eswa.2012.02.063. [Cross Ref]

2. Ngai E. W. T., Xiu L., Chau D. C. K. Application of data mining techniques in customer relationship management: a literature review and classification. *Expert Systems with Applications*. 2009;36(2):2592–2602. doi: 10.1016/j.eswa.2008.02.021. [Cross Ref]

3. Ngai E. W., Hu Y., Wong Y. H., Chen Y., Sun X. The application of data mining techniques in financial fraud detection: a classification framework and an academic review of literature. *Decision Support Systems*. 2011;50(3):559–569. doi: 10.1016/j.dss.2010.08.006. [Cross Ref]

4. Esfandiari N., Babavalian M. R., Moghadam A. M., Tabar V. K. Knowledge discovery in medicine: current issue and future trend. *Expert Systems with Applications*. 2014;41(9):4434–4463. doi: 10.1016/j.eswa.2014.01.011. [Cross Ref]

5. Li Y., Bai C., Reddy C. K. A distributed ensemble approach for mining healthcare data under privacy constraints. *Information Sciences*. 2016;330:245–259. doi: 10.1016/j.ins.2015.10.011. [PMC free article] [PubMed] [Cross Ref]

6. Ramos-Pollán R., Guevara-López M. Á., Oliveira E. A software framework for building biomedical machine learning classifiers through grid computing resources. *Journal of Medical Systems*. 2012;36(4):2245–2257. doi: 10.1007/s10916-011-9692-3. [PubMed] [Cross Ref]

7. Malik A., Iqbal J. Extreme learning machine based approach for diagnosis and analysis of breast cancer. *Journal of the Chinese Institute of Engineers*. 2016;39(1):74–78.

8. Palaniappan S., Awang R. Intelligent heart disease prediction system using data mining techniques. 2008 IEEE/ACS International Conference on Computer Systems and Applications; 2008; Doha. pp. 108–115. [Cross Ref]

9. Hariharan M., Polat K., Sindhu R. A new hybrid intelligent system for accurate detection of Parkinson’s disease. *Computer Methods and Programs in Biomedicine*. 2014;113(3):904–913. doi: 10.1016/j.cmpb.2014.01.004. [PubMed] [Cross Ref]

10. Castelli I., Trentin E. Combination of supervised and unsupervised learning for training the activation functions of neural networks. *Pattern Recognition Letters*. 2014;37:178–191. doi: 10.1016/j.patrec.2013.06.013. [Cross Ref]

11. Xie B., Liu Y., Zhang H., Yu J. A novel supervised approach to learning efficient kernel descriptors for high accuracy object recognition. *Neurocomputing*. 2016;182:94–101.

12. Morris K., McNicholas P. D. Clustering, classification, discriminant analysis, and dimension reduction via generalized hyperbolic mixtures. *Computational Statistics & Data Analysis*. 2016;97:133–150. doi: 10.1016/j.csda.2015.10.008. [Cross Ref]

13. Elyasigomari V., Mirjafari M. S., Screen H. R., Shaheed M. H. Cancer classification using a novel gene selection approach by means of shuffling based on data clustering with optimization. *Applied Soft Computing*. 2015;35:43–51. doi: 10.1016/j.asoc.2015.06.015. [Cross Ref]

14. Bashir S., Qamar U., Khan F. H., Naseem L. HMV: a medical decision support framework using multi-layer classifiers for disease prediction. *Journal of Computational Science*. 2016;13:10–25. doi: 10.1016/j.jocs.2016.01.001. [Cross Ref]

15. Bashir S., Qamar U., Khan F. H. BagMOOV: a novel ensemble for heart disease prediction bootstrap aggregation with multi-objective optimized voting. *Australasian Physical & Engineering Sciences in Medicine*. 2015;38(2):305–323. doi: 10.1007/s13246-015-0337-6. [PubMed] [Cross Ref]

16. Temurtas F. A comparative study on thyroid disease diagnosis using neural networks. *Expert Systems with Applications*. 2009;36(1):944–949. doi: 10.1016/j.eswa.2007.10.010. [Cross Ref]

17. Das R., Turkoglu I., Sengur A. Effective diagnosis of heart disease through neural networks ensembles. *Expert Systems with Applications*. 2009;36(4):7675–7680. doi: 10.1016/j.eswa.2008.09.013. [Cross Ref]

18. Chitra R., Seenivasagam V. Heart disease prediction system using supervised learning classifier. *Bonfring International Journal of Software Engineering and Soft Computing*. 2013;3(1):p. 1.

19. Pattekari S. A., Parveen A. Prediction system for heart disease using Naïve Bayes. *International Journal of Advanced Computer and Mathematical Sciences*. 2012;3(3):290–294.

20. Jabbar M. A., Deekshatulu B. L., Chandra P. Heart disease prediction system using associative classification and genetic algorithm. 2012. http://arxiv.org/abs/1303.5919. [PubMed] [Cross Ref]

21. Masethe H. D., Masethe M. A. Prediction of heart disease using classification algorithms. Proceedings of the World Congress on Engineering and Computer Science; October 2014; San Francisco, USA. pp. 22–24.

22. Shaikh A., Mahoto N., Khuhawar F., Memon M. Performance evaluation of classification methods for heart disease dataset. *Sindh University Research Journal-SURJ (Science Series)*. 2015;47(3).

23. Kavitha R., Christopher T. An effective classification of heart rate data using PSO-FCM clustering and enhanced support vector machine. *Indian Journal of Science and Technology*. 2015;8(30).

24. Alizadehsani R., Habibi J., Hosseini M. J., et al. A data mining approach for diagnosis of coronary artery disease. *Computer Methods and Programs in Biomedicine*. 2013;111(1):52–61. doi: 10.1016/j.cmpb.2013.03.004. [PubMed] [Cross Ref]

25. Shenfield A., Rostami S. A multi objective approach to evolving artificial neural networks for coronary heart disease classification. 2015 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB); 2015; Niagara Falls, ON. pp. 1–8. [Cross Ref]

26. Bhatla N., Jyoti K. An analysis of heart disease prediction using different data mining techniques. *International Journal of Engineering Research and Technology*. 2012;1(8):1–4.

27. Parthiban L., Subramanian R. Intelligent heart disease prediction system using CANFIS and genetic algorithm. *International Journal of Biological, Biomedical and Medical Sciences*. 2008;3(3)

28. Hedeshi N. G., Abadeh M. S. Coronary artery disease detection using a fuzzy-boosting PSO approach. *Computational Intelligence and Neuroscience*. 2014;2014:12. doi: 10.1155/2014/783734.783734 [PMC free article] [PubMed] [Cross Ref]

29. Olaniyi E. O., Oyedotun O. K., Adnan K. Heart diseases diagnosis using neural networks arbitration. *International Journal of Intelligent Systems and Applications (IJISA)* 2015;7(12):p. 75.

30. Kim J. K., Lee J. S., Park D. K., Lim Y. S., Lee Y. H., Jung E. Y. Adaptive mining prediction model for content recommendation to coronary heart disease patients. *Cluster Computing*. 2014;17(3):881–891. doi: 10.1007/s10586-013-0308-1. [Cross Ref]

31. Chauraisa V., Pal S. Early prediction of heart diseases using data mining techniques. *Caribbean Journal of Science and Technology*. 2013;1:208–217.

32. Yan H., Jiang Y., Zheng J., Peng C., Li Q. A multilayer perceptron-based medical decision support system for heart disease diagnosis. *Expert Systems with Applications*. 2006;30(2):272–281. doi: 10.1016/j.eswa.2005.07.022. [Cross Ref]

33. Yan H., Zheng J., Jiang Y., Peng C., Li Q. Development of a decision support system for heart disease diagnosis using multilayer perceptron. Proceedings of the 2003 International Symposium on Circuits and Systems (ISCAS '03); 2003; pp. V-709–V-712. [Cross Ref]

34. Shouman M., Turner T., Stocker R. Using data mining techniques in heart disease diagnosis and treatment. 2012 Japan-Egypt Conference on Electronics, Communications and Computers; 2012; Alexandria. pp. 173–177. [Cross Ref]

35. Karaolis M., Moutiris J. A., Pattichis C. S. Assessment of the risk of coronary heart event based on data mining. 2008 8th IEEE International Conference on BioInformatics and BioEngineering; 2008; Athens. pp. 1–5. [Cross Ref]

36. Ordonez C., Omiecinski E., De Braal L., et al. Mining constrained association rules to predict heart disease. Proceedings 2001 IEEE International Conference on Data Mining; 2001; San Jose, CA, USA. pp. 433–440. [Cross Ref]

37. Das R., Turkoglu I., Sengur A. Diagnosis of valvular heart disease through neural networks ensembles. *Computer Methods and Programs in Biomedicine*. 2009;93(2):185–191. doi: 10.1016/j.cmpb.2008.09.005. [PubMed] [Cross Ref]

38. Taneja A. Heart disease prediction system using data mining techniques. *Oriental Journal of Computer Science and Technology*. 2013;6(4):457–466.

39. Sartakhti J. S., Zangooei M. H., Mozafari K. Hepatitis disease diagnosis using a novel hybrid method based on support vector machine and simulated annealing (SVM-SA) *Computer Methods and Programs in Biomedicine*. 2012;108(2):570–579. doi: 10.1016/j.cmpb.2011.08.003. ISSN 0169-2607. [PubMed] [Cross Ref]

40. Çalişir D., Dogantekin E. A new intelligent hepatitis diagnosis system: PCA–LSSVM. *Expert Systems with Applications*. 2011;38(8):10705–10708. doi: 10.1016/j.eswa.2011.01.014. [Cross Ref]

41. Li J., Wong L. *Advances in Web-Age Information Management*. Berlin Heidelberg: Springer; 2003. Using rules to analyse bio-medical data: a comparison between C4.5 and PCL; pp. 254–265.

42. Weng C.-H., Huang T. C.-K., Han R.-P. Disease prediction with different types of neural network classifiers. *Telematics and Informatics*. 2016;33(2):277–292. doi: 10.1016/j.tele.2015.08.006. [Cross Ref]

43. Jane Y. N., Nehemiah H. K., Arputharaj K. A Q-backpropagated time delay neural network for diagnosing severity of gait disturbances in Parkinson’s disease. *Journal of Biomedical Informatics*. 2016;60:169–176. doi: 10.1016/j.jbi.2016.01.014. [PubMed] [Cross Ref]

44. Gürüler H. *Neural Computing and Applications*. London: Springer; 2016. A novel diagnosis system for Parkinson’s disease using complex-valued artificial neural network with k-means clustering feature weighting method; pp. 1–10. [Cross Ref]

45. Bashir S., Qamar U., Khan F. H. IntelliHealth: a medical decision support application using a novel weighted multi-layer classifier ensemble framework. *Journal of Biomedical Informatics*. 2016;59:185–200. doi: 10.1016/j.jbi.2015.12.001. [PubMed] [Cross Ref]

46. Iyer A., Jeyalatha S., Sumbaly R. Diagnosis of diabetes using classification mining techniques. 2015. http://arxiv.org/abs/1502.03774.

47. Choubey D. K., Sanchita P. GA_MLP NN: a hybrid intelligent system for diabetes disease diagnosis. *International Journal of Intelligent Systems and Applications*. 2016;8(1):p. 49.

48. Kharya S. Using data mining techniques for diagnosis and prognosis of cancer disease. 2012. http://arxiv.org/abs/1205.1923.

49. Chaurasia V., Pal S. A novel approach for breast cancer detection using data mining techniques. *International Journal of Innovative Research in Computer and Communication Engineering*. 2014;2(1):2456–2465.

50. Fernandez-Millan R., Medina-Merodio J. A., Plata R. B., Martinez-Herraiz J. J., Gutierrez-Martinez J. M. A laboratory test expert system for clinical diagnosis support in primary health care. *Applied Sciences*. 2015;5(3):222–240. doi: 10.3390/app5030222. [Cross Ref]

51. Alzubaidi A., Cosma G., Brown D., Pockley A. G. A new hybrid global optimization approach for selecting clinical and biological features that are relevant to the effective diagnosis of ovarian cancer. 2016 IEEE Symposium Series on Computational Intelligence (SSCI); December 2016; Athens. pp. 1–8. [Cross Ref]

52. Gwak J., Jeon M., Pedrycz W. Bolstering efficient SSGAs based on an ensemble of probabilistic variable-wise crossover strategies. *Soft Computing*. 2016;20(6):2149–2176. doi: 10.1007/s00500-015-1630-8. [Cross Ref]

53. Hsieh S. L., Hsieh S. H., Cheng P. H., et al. Design ensemble machine learning model for breast cancer diagnosis. *Journal of Medical Systems*. 2012;36(5):2841–2847. doi: 10.1007/s10916-011-9762-6. [PubMed] [Cross Ref]

54. Shen L., Chen H., Yu Z., et al. Evolving support vector machines using fruit fly optimization for medical data classification. *Knowledge-Based Systems*. 2016;96:61–75. doi: 10.1016/j.knosys.2016.01.002. [Cross Ref]

55. Fawcett T. An introduction to ROC analysis. *Pattern Recognition Letters*. 2006;27(8):861–874. doi: 10.1016/j.patrec.2005.10.010. [Cross Ref]

56. Phua C., Lee V., Smith K., Gayler R. A comprehensive survey of data mining-based fraud detection research. 2010. http://arxiv.org/abs/1009.6119.

57. Freitas A. A. A critical review of multi-objective optimization in data mining: a position paper. *ACM SIGKDD Explorations Newsletter*. 2004;6(2):77–86. doi: 10.1145/1046456.1046467. [Cross Ref]

58. Deb K. *Multi-Objective Optimization Using Evolutionary Algorithms*. John Wiley & Sons; 2001.

59. Rashedi E., Nezamabadi-Pour H., Saryazdi S. GSA: a gravitational search algorithm. *Information Sciences*. 2009;179(13):2232–2248. doi: 10.1016/j.ins.2009.03.004. [Cross Ref]

60. Rashedi E., Nezamabadi-Pour H., Saryazdi S. Filter modeling using gravitational search algorithm. *Engineering Applications of Artificial Intelligence*. 2011;24(1):117–122. doi: 10.1016/j.engappai.2010.05.007. [Cross Ref]

61. Li C., Zhou J. Parameters identification of hydraulic turbine governing system using improved gravitational search algorithm. *Energy Conversion and Management*. 2011;52(1):374–381.

62. Kennedy J., Eberhart R. Particle swarm optimization. Neural Networks, 1995. Proceedings., IEEE International Conference on; 1995; Perth, WA. pp. 1942–1948. [Cross Ref]

63. Huang C.-L., Dun J.-F. A distributed PSO–SVM hybrid system with feature selection and parameter optimization. *Applied Soft Computing*. 2008;8(4):1381–1391. doi: 10.1016/j.asoc.2007.10.007. [Cross Ref]

64. Yang X.-S. Firefly algorithm, stochastic test functions and design optimisation. *International Journal of bio-Inspired Computation*. 2010;2(2):78–84. doi: 10.1504/IJBIC.2010.032124. [Cross Ref]

65. Rosenblatt F. Principles of neurodynamics. Perceptrons and the theory of brain mechanisms. Cornell Aeronautical Lab Inc., Buffalo, NY, 1961.

66. Pal S. K., Mitra S. Multilayer perceptron, fuzzy sets, and classification. *IEEE Transactions on Neural Networks*. 1992;3(5):683–697. [PubMed]

67. Collobert R., Bengio S. Links between perceptrons, MLPs and SVMs. Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04); 2004; Banff, Alberta, Canada. [Cross Ref]

68. Cortes C., Vapnik V. Support-vector networks. *Machine Learning*. 1995;20(3):273–297. doi: 10.1007/BF00994018. [Cross Ref]

69. Lin S. W., Ying K. C., Chen S. C., Lee Z. J. Particle swarm optimization for parameter determination and feature selection of support vector machines. *Expert Systems with Applications*. 2008;35(4):1817–1824. doi: 10.1016/j.eswa.2007.08.088. [Cross Ref]

70. Chatterjee S., Sarkar S., Hore S., Dey N., Ashour A. S., Balas V. E. *Neural Computing and Applications*. London: Springer; 2016. Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings; pp. 1–12.

72. Wilcoxon F. Individual comparisons by ranking methods. *Biometrics Bulletin*. 1945;1(6):80–83. doi: 10.2307/3001968. [Cross Ref]

73. Martinez W. L., Martinez A. R. *Computational Statistics Handbook with MATLAB*. Vol. 22. CRC press; 2007.

74. Van Veldhuizen D. A. Multiobjective evolutionary algorithms: classifications, analyses, and new innovations. Air Force Institute of Technology Wright-Patterson AFB OH School of Engineering, 1999.

75. Schott J. R. Fault tolerant design using single and multicriteria genetic algorithm optimization. Air Force Institute of Technology Wright-Patterson AFB OH, 1995.

Articles from Journal of Healthcare Engineering are provided here courtesy of **Hindawi**