
BMC Bioinformatics. 2005; 6: 260.

Published online 2005 October 19. doi: 10.1186/1471-2105-6-260

PMCID: PMC1276787

Joshua J Forman,^{1} Paul A Clemons,^{1} Stuart L Schreiber,^{1,2} and Stephen J Haggarty^{1}

Joshua J Forman: jforman@broad.harvard.edu; Paul A Clemons: pclemons@broad.harvard.edu; Stuart L Schreiber: stuart_schreiber@harvard.edu; Stephen J Haggarty: haggarty@broad.harvard.edu

Received 2005 June 1; Accepted 2005 October 19.

Copyright © 2005 Forman et al; licensee BioMed Central Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks.

Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors).

SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.

The field of graph theory concerns itself with the formal study of graphs – structures containing vertices and edges linking these vertices. Scientifically, graphs can be used to represent networks embodying many different relationships among data, including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as nodes (vertices) and interactions (edges) that can carry different weights.

Graph-theoretic metrics, including eigenspectra, have been used to analyze diverse sets of data in the fields of computational chemistry and bioinformatics. Protein-protein interaction networks in *Saccharomyces cerevisiae*, for example, have been shown to exhibit scale-free properties [1], and databases of mRNAs can be mined using spectral properties of graphs created by their secondary structure [2]. Graph theory has also been used in conjunction with combinations of small-molecule probes to derive signatures of biological states using chemical-genomic profiling [3].

Despite the widespread use of graph theory in these fields, however, there are few user-friendly tools for analyzing network properties. SpectralNET is a graphical application that calculates a wide variety of graph-theoretic metrics, including eigenvalues and eigenvectors of the adjacency matrix (a simple matrix representation of the nodes and edges of a graph) [4], Laplacian matrix [5], and normalized Laplacian matrix, for networks that are either randomly generated or uploaded by the user. SpectralNET is available both as an ASP.NET web application and as a standalone .NET executable. While SpectralNET was originally written to analyze chemical genetic assay data, it should be of use to any researcher interested in graph-theoretic metrics and eigenspectra.

SpectralNET was originally written as an ASP.NET application in C#, and has subsequently been ported to a standalone .NET executable version (also written in C#). ASP.NET was originally chosen because it provided a fast, easy way to offer users a thin client, obviating the need for large amounts of computational power on the client machine, as is often required for large matrix calculations. A standalone version was created for three primary reasons: it avoids the time-outs inherent in a web interface (a potential issue when performing long-duration calculations), it is more easily distributable, and porting from ASP.NET to a .NET executable is relatively simple.

Many computations, such as graph instantiation and metric calculation, are performed directly in C#. Matrix computations (including eigendecomposition) are performed using the NMath Suite (CenterSpace Software, Corvallis, Oregon). Because the NMath Suite is a commercially licensed library, those receiving source code from the authors must supply their own means of performing matrix eigendecomposition in order to modify and redeploy the application. The implementation of the Fermi-Dirac integral, used in the calculation of spectral density, is ported from Michele Goano's FORTRAN implementation [6]. Because SpectralNET uses a third-party library for matrix calculations that is partially implemented using Managed Extensions for C++, SpectralNET will not be portable to Linux until the Mono implementation of this C++ language feature is complete.

Idealized random networks can be automatically generated by the application, or networks can be uploaded by the user for analysis. SpectralNET can automatically generate random Erdős-Rényi graphs [7], Barabási-Albert (scale-free) graphs [8], re-wiring Barabási-Albert graphs [9], Watts-Strogatz (small-world) graphs [10], or hierarchical graphs [11]. Each automatically generated graph type is customizable with algorithmic parameters. SpectralNET was designed with extensibility in mind, so users may add further random graph types, either by implementing them directly in the source code or by submitting a succinct algorithm to the authors.

Networks can be uploaded by the user in the form of a Pajek file [12] or a tab-delimited text file with one edge per line (see additional file 1: HumanPPI_nodenodeweight.txt for an example network definition file defining a network of human protein-protein interactions). Raw data files can also be uploaded to the application, where each line of data is represented as a labeled vertex. Vertices can be connected with edge weights equal to the square of the correlation of their associated input data, or according to their Euclidean distance as defined by the Eigenmap algorithm [13]. If raw data is uploaded by the user, principal component analysis (PCA) [14,15] can optionally be performed on the data before calculating edge weights.
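To make the tab-delimited edge-list format concrete, a minimal parser might look like the following. This is a Python sketch for exposition only (SpectralNET itself is written in C#, and its exact parsing logic is not shown in the paper); the function name and the adjacency-dictionary representation are illustrative choices.

```python
def parse_edge_list(lines):
    """Parse tab-delimited 'node<TAB>node<TAB>weight' lines into an
    undirected weighted graph stored as an adjacency dictionary.
    The weight column is treated as optional, defaulting to 1.0."""
    graph = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        parts = line.split("\t")
        u, v = parts[0], parts[1]
        w = float(parts[2]) if len(parts) > 2 else 1.0
        graph.setdefault(u, {})[v] = w
        graph.setdefault(v, {})[u] = w  # undirected: store both directions
    return graph

# hypothetical two-edge network in the node-node-weight format
edges = ["A\tB\t1.0", "B\tC\t0.5"]
g = parse_edge_list(edges)
```

A file such as HumanPPI_nodenodeweight.txt, with one edge per line, could be fed to such a parser line by line.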

After processing the input network, SpectralNET displays for the user a wide variety of graph-analytic metrics. For example, the degree and clustering coefficient are displayed for each vertex. The degree of a vertex is the number of edges incident upon that vertex; for weighted graphs, SpectralNET calculates this as the sum of these edges' weights. The clustering coefficient of a node represents the proportion of its neighbors that are connected to each other, and is calculated for a node *i* as:

${C}_{i}\left({k}_{i}\right)=\frac{2{n}_{i}}{{k}_{i}\left({k}_{i}-1\right)},\text{}\left(1\right)$

where *n*_{i }denotes the number of edges connecting neighbors of node *i *to each other, and *k*_{i }denotes the number of neighbors of node *i *[16]. In addition, the minimum, average, and maximum distances of each vertex are displayed, which are defined as the shortest, average, and maximum distances, respectively, from the node to any other node in the graph. The components of the adjacency, Laplacian, and normalized Laplacian eigenvectors corresponding to the vertex are also shown, where the adjacency matrix is defined as the matrix *A *with the following elements:

${A}_{ij}\left(G\right)=\begin{cases}1 & \text{if }i\ne j\text{ and edge }\left(i,j\right)\text{ exists}\\ 0 & \text{otherwise}\end{cases}\text{}\left(2\right)$

the Laplacian matrix is defined as the matrix L with the following elements:

${L}_{ij}\left(G\right)=\begin{cases}{d}_{i} & \text{if }i=j\\ -w\left(e\right) & \text{if }i\ne j\text{ and edge }e\left(i,j\right)\text{ exists}\\ 0 & \text{otherwise}\end{cases}\text{}\left(3\right)$

where *w*(*e*) denotes the weight of edge *e*; and the normalized Laplacian matrix is defined as the matrix $\mathcal{L}$ with the following elements:

$\mathcal{L}_{ij}\left(G\right)=\begin{cases}1 & \text{if }i=j\\ -\frac{1}{\sqrt{{d}_{i}{d}_{j}}} & \text{if }i\text{ and }j\text{ are adjacent}\\ 0 & \text{otherwise}\end{cases}\text{}\left(4\right)$

where *d*_{i }denotes the degree of node *i *[5]. It should be noted that Chung defines the Laplacian matrix as the normalized form above, but we use the more commonly found definition (for an example, see Mohar [17]).
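For concreteness, equations (1)–(4) for an unweighted, undirected graph can be sketched in a few lines of code. The following is an illustrative pure-Python version, not SpectralNET's C#/NMath implementation; the function names and the nested-list matrix representation are assumptions made for this sketch.

```python
import math

def graph_matrices(n, edges):
    """Adjacency (eq. 2), Laplacian (eq. 3), and normalized Laplacian
    (eq. 4) for an unweighted, undirected graph on nodes 0..n-1."""
    A = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1.0
    deg = [sum(row) for row in A]
    # Laplacian: d_i on the diagonal, -w(e) = -1 for each edge
    L = [[deg[i] if i == j else -A[i][j] for j in range(n)] for i in range(n)]
    # normalized Laplacian: 1 on the diagonal, -1/sqrt(d_i*d_j) on edges
    NL = [[1.0 if (i == j and deg[i] > 0)
           else (-1.0 / math.sqrt(deg[i] * deg[j]) if A[i][j] else 0.0)
           for j in range(n)] for i in range(n)]
    return A, L, NL

def clustering_coefficient(A, i):
    """Eq. (1): C_i = 2*n_i / (k_i*(k_i - 1)), where n_i counts edges
    among the neighbors of node i and k_i is its number of neighbors."""
    nbrs = [j for j in range(len(A)) if A[i][j]]
    k = len(nbrs)
    if k < 2:
        return 0.0  # coefficient is undefined for k < 2; report 0
    n_i = sum(1 for a in nbrs for b in nbrs if a < b and A[a][b])
    return 2.0 * n_i / (k * (k - 1))

# a triangle (nodes 0, 1, 2) plus a pendant node 3 attached to node 0
A, L, NL = graph_matrices(4, [(0, 1), (0, 2), (1, 2), (0, 3)])
```

On this example, node 1's neighbors (0 and 2) are themselves connected, so its clustering coefficient is 1, while node 0's is 1/3; every row of the Laplacian sums to zero, as equation (3) requires.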

Many large networks derived from biological data are composed of multiple subgraphs that are not always connected together. SpectralNET computes many properties based on the selected or "active" connected component. For the active connected component, its size and average diameter are displayed in addition to graphs of degree distribution [18], clustering coefficient by degree, and average distance by degree [19]. Graphs of eigenvalues, eigenvectors, inverse participation ratios, and spectral densities of the three matrix types are also displayed. The inverse participation ratio is defined for each eigenvector as:

${I}_{j}={\displaystyle \sum _{k=1}^{N}{\left[{\left({e}_{j}\right)}_{k}\right]}^{4}}\text{}\left(5\right)$

where *e*_{j }represents the eigenvector. Spectral density, or the density of the eigenvalues, is plotted for each eigenvalue as $\lambda /\sqrt{Np\left(1-p\right)}$ on the horizontal axis and $p\sqrt{Np(1-p)}$ on the vertical axis, with the function p defined on any eigenvalue as:

$p\left(\lambda \right)=\frac{1}{N}{\displaystyle \sum _{j=1}^{N}\delta \left(\lambda -{\lambda}_{j}\right)}\text{}\left(6\right)$

where *λ *is the eigenvalue and *δ *represents the delta function, implemented as described above [20]. Most graphs can be mouse-clicked to select the vertex corresponding to a desired data point, and eigenvalue graphs can be sorted by value or by vertex degree. All calculated graph metrics can be exported as a tab-delimited text file for further analysis.
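Equations (5) and (6) can likewise be sketched directly. The inverse participation ratio below assumes a unit-normalized eigenvector; for the spectral density, the delta function is approximated here by a narrow Gaussian purely for illustration (SpectralNET's own implementation is based on the Fermi-Dirac integral [6,20]).

```python
import math

def inverse_participation_ratio(e):
    """Eq. (5): I_j = sum_k [(e_j)_k]^4 for a unit eigenvector e_j.
    Ranges from 1/N (fully delocalized) to 1 (localized on one node)."""
    return sum(x ** 4 for x in e)

def spectral_density(lam, eigenvalues, width=0.05):
    """Eq. (6): p(lam) = (1/N) * sum_j delta(lam - lam_j), with the
    delta function approximated by a narrow Gaussian (an illustrative
    choice, not SpectralNET's Fermi-Dirac-based smoothing)."""
    N = len(eigenvalues)
    norm = 1.0 / (width * math.sqrt(2 * math.pi))
    return sum(norm * math.exp(-((lam - lj) ** 2) / (2 * width ** 2))
               for lj in eigenvalues) / N

N = 4
uniform = [1 / math.sqrt(N)] * N   # delocalized eigenvector: IPR = 1/N
localized = [1.0, 0.0, 0.0, 0.0]   # fully localized: IPR = 1
```

The two example vectors show the extremes of the participation ratio: 1/N for a vector spread evenly across all nodes, and 1 for a vector concentrated on a single node.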

The main graph display window of SpectralNET offers two interactive graphical network displays that support zooming and allow vertex selection by mouse-click. The default display view is the resulting graph processed by the Fruchterman-Reingold algorithm [21], which positions vertices by force-directed placement. The other available display is the network's Laplacian embedding, which locates vertices in two-dimensional Euclidean space using the corresponding second and third Laplacian eigenvector components (the first eigenvector component of the Laplacian matrix is degenerate). Exporting the other Laplacian eigenvector components allows for visualization in higher dimensions.
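The Laplacian embedding itself is straightforward once an eigendecomposition is available. The sketch below uses a small pure-Python Jacobi eigensolver as an illustrative stand-in for SpectralNET's NMath-based eigendecomposition, and embeds a ring graph, whose circular structure the second and third eigenvectors recover.

```python
import math

def jacobi_eigh(M, max_rot=2000, tol=1e-12):
    """Eigen-decompose a symmetric matrix by classical Jacobi rotations.
    Returns (eigenvalues, eigenvectors-as-columns), unsorted."""
    n = len(M)
    A = [row[:] for row in M]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_rot):
        # locate the largest off-diagonal entry
        val, p, q = max((abs(A[i][j]), i, j)
                        for i in range(n) for j in range(i + 1, n))
        if val < tol:
            break
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):  # rows: G^T A
            apk, aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
        for k in range(n):  # columns: (G^T A) G
            akp, akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
        for k in range(n):  # accumulate eigenvectors: V <- V G
            vkp, vkq = V[k][p], V[k][q]
            V[k][p], V[k][q] = c * vkp - s * vkq, s * vkp + c * vkq
    return [A[i][i] for i in range(n)], V

def laplacian_embedding(L):
    """2-D coordinates from the eigenvectors of the 2nd and 3rd smallest
    Laplacian eigenvalues (the 1st, the constant vector, is skipped)."""
    vals, V = jacobi_eigh(L)
    order = sorted(range(len(vals)), key=lambda i: vals[i])
    i2, i3 = order[1], order[2]
    return [(V[k][i2], V[k][i3]) for k in range(len(vals))]

# Laplacian of a ring of 8 nodes
n = 8
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    j = (i + 1) % n
    L[i][i] += 1; L[j][j] += 1
    L[i][j] -= 1; L[j][i] -= 1
coords = laplacian_embedding(L)
```

As in the small-world example discussed below, the embedded ring nodes land on a circle, making the graph's global topology visible at a glance.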

In conjunction with uploaded raw data, Laplacian embedding allows the user to see a reduced-dimensionality view of high-dimensionality input, once this input is converted into a network. If the user chooses to process input data using the Eigenmap algorithm, Laplacian embedding shows the reduced-dimensionality result [13]. Dimensionality reduction has proven to be a useful tool in computational chemistry and bioinformatics; for example, Agrafiotis [22] used multidimensional scaling (MDS) to reduce the dimensionality of combinatorial library descriptors, and Lin [15] used PCA to analyze single nucleotide polymorphisms from genomic data. We chose to implement Laplacian embedding rather than MDS or other algorithms in SpectralNET because of promising results in the field of machine learning [23]. Although dimensionality reduction is especially useful for analyzing high-dimensional data, Laplacian embedding is an elegant display choice for any input network (see the next section for an example using a scale-free biological network). For a simpler (linear) dimensionality-reduced view of the input data, SpectralNET also has the option of viewing the results of PCA (though this view is not available when a network definition file, such as a Pajek file, is used). Both Laplacian embedding and PCA can be viewed in three dimensions with a Virtual Reality Modeling Language (VRML) viewer.

SpectralNET provides an easy-to-use interface for creating a randomly generated small-world network. All that is required is to supply the desired number of nodes, the desired number of neighbors to which to connect each node, and the desired random probability that an edge is re-wired. For this example we create a network with 300 nodes in which each node is connected to four neighbors, and edges are rewired with 4% probability.
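The generation procedure just described can be sketched as follows. This is a Python sketch of the standard Watts-Strogatz construction [10] with the parameters from the text; SpectralNET's internal implementation may differ in detail.

```python
import random

def watts_strogatz(n, k, p, seed=1):
    """Watts-Strogatz small-world graph: start from a ring lattice in
    which each node connects to its k nearest neighbors (k/2 per side),
    then rewire each edge with probability p to a random endpoint."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for offset in range(1, k // 2 + 1):
            edges.add(tuple(sorted((i, (i + offset) % n))))
    rewired = set()
    for (i, j) in edges:
        if rng.random() < p:
            # rewire: pick a new endpoint avoiding self-loops and duplicates
            choices = [w for w in range(n)
                       if w != i and tuple(sorted((i, w))) not in edges
                       and tuple(sorted((i, w))) not in rewired]
            j = rng.choice(choices)
        rewired.add(tuple(sorted((i, j))))
    return rewired

# the example from the text: 300 nodes, 4 neighbors each, 4% rewiring
g = watts_strogatz(300, 4, 0.04)
```

Because rewiring replaces edges one-for-one, the graph keeps exactly n·k/2 = 600 edges regardless of the rewiring probability.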

The default view of the graph is its Fruchterman-Reingold display, which, as noted above, uses force-directed placement to draw graph nodes (Figure 1). While the Fruchterman-Reingold display offers a quickly generated view of large networks, relatively little information about the global organization of the network is observable in the display of this small-world network (one cannot tell, for example, that the graph is a small-world network by its Fruchterman-Reingold display alone). In order to see the graph as drawn by the Laplacian eigenvector components of each node, the "Laplacian Embedding" radio button underneath the graph display is selected. In contrast to the Fruchterman-Reingold display, the Laplacian embedding of this small-world network (Figure 2) conveys significantly more information about its topology. In this display, it is clear that the small-world network was generated by placing neighboring nodes next to each other in a ring-like fashion – the theoretical ring structure is represented literally in the Laplacian embedding.

Real-world biological networks are also amenable to topological analysis using Laplacian embeddings. In order to generate a suitable biological network to analyze, the MIPS Mammalian Protein-Protein Interaction Database [26] was downloaded and parsed into a node-node-weight file for import into SpectralNET (see additional file 1: HumanPPI_nodenodeweight.txt). The Laplacian embedding of the largest connected component of the resulting graph (Figure 3) shows a central hub of highly connected proteins attached to four branches. Spectral analysis similar to that performed below shows that the network is scale-free in nature, as is further evidenced by the fact that there are many more low-degree proteins than high-degree proteins, with the relationship between number of proteins and protein degree following a power-law distribution (data not shown). The scale-free nature of this network suggests that highly connected proteins in the central hub may perform a coordinating role for the proteins in this interaction network. Examining the most highly connected protein in the central hub of the network (indicated in Figure 3) shows that, indeed, it is the transcriptional co-activator SRC-1, which receives and augments signals from multiple pathways [27]. Readers with further interest in topological analysis of biological networks are encouraged to read Farkas et al. [28] for a global analysis of the transcriptional regulatory network of *S. cerevisiae* or Jeong et al. [29] for an analysis of the protein interaction network of yeast.

In addition to the graphical display of networks, SpectralNET enables analysis of the spectral properties of input networks, which can shed light on graph topology. One way to achieve this is to compare a network's spectrum to those of idealized graphs. Consider a small-world network similar, but not identical, to the randomly generated small-world network described above: a small-world network created by attaching complete subgraphs, varying in size from three to six nodes, to nodes arrayed in a ring (see additional file 2: Small-world_nodenodeweight for the network definition file, originally described by Comellas [24]) (Figure 4). The spectral properties of this graph can be used to help identify the topology of the original graph, in this case by comparing the adjacency and Laplacian spectral densities (Figure 5) [5,20]. Spectral density measures the density of surrounding eigenvalues at each eigenvalue and serves as an especially useful metric of global graph topology. The plot of these values for the example network is most similar to the corresponding plots for a Watts-Strogatz network (in this network, there are 500 nodes connected to 6 neighbors, with a re-wiring probability of 1%), despite the fact that there are only 33 nodes in the example network. Thus, even when an example network has relatively few nodes, comparison of the spectral properties of the graph to those of idealized graphs can yield clues about the network's topology.

In addition to performing spectral analysis of networks, SpectralNET can also perform dimensionality reduction on chemical datasets to analyze quantitative structure-activity relationships (QSAR). In this example, we upload a set of chemical descriptor data into SpectralNET and analyze it using the Laplacian Eigenmap algorithm originally developed by Belkin and Niyogi [13]. Each row of the input file contains one small molecule, all created by the same diversity-oriented synthesis pathway [25]. Each column of the data represents a different *molecular descriptor* – a metric used to capture an aspect of the compound, such as volume, surface area, or number of rings.

The Laplacian Eigenmap algorithm in SpectralNET connects these small molecules to their *K*-nearest neighbors (measured by Euclidean distance), where *K *is an algorithmic parameter supplied by the user. In this example, we choose *K *= 7 to yield a reasonable number of edges in the resulting graph. Weights are assigned to each edge in one of two ways – every edge can have a weight of one, or weights can be assigned to edges by the following formula:

${W}_{ij}={e}^{-\frac{{\Vert {x}_{i}-{x}_{j}\Vert }^{2}}{t}}\text{}\left(7\right)$

where *W*_{ij} represents the weight of an edge connecting vertices *i* and *j*, and *t* is an algorithmic parameter [13]. For the molecular descriptor dataset, edge weights of one were chosen (it should be noted that when applying the second method to this dataset, increasing values of *t* eventually resulted in convergence to the same result as this method around *t* = 20,000). SpectralNET also offers the choice of performing PCA on input data before running the Laplacian Eigenmap algorithm; this step is performed by default and remains enabled for this example.
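The K-nearest-neighbor construction and the heat-kernel weighting of equation (7) can be sketched as follows. This is an illustrative pure-Python version of the standard Eigenmap graph construction [13], not SpectralNET's implementation; the function names and the small example points are assumptions made for this sketch.

```python
import math

def heat_kernel_weight(x_i, x_j, t):
    """Eq. (7): W_ij = exp(-||x_i - x_j||^2 / t)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    return math.exp(-d2 / t)

def knn_edges(points, k):
    """Connect each point to its K nearest neighbors by Euclidean
    distance, as in the Laplacian Eigenmap graph construction."""
    edges = set()
    for i, p in enumerate(points):
        # squared distances to every other point, nearest first
        dists = sorted((sum((a - b) ** 2 for a, b in zip(p, q)), j)
                       for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:
            edges.add(tuple(sorted((i, j))))  # symmetrize the kNN relation
    return edges

# four hypothetical descriptor vectors in 2-D; K = 1 for brevity
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
e = knn_edges(pts, 1)
```

As *t* grows, the heat-kernel weight of every edge approaches 1, which is consistent with the convergence toward the constant-weight result noted in the text.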

The resultant Laplacian embedding of the graph, which can be viewed by selecting the "Laplacian Embedding" radio button underneath the graph view pane, is the reduced-dimensionality result of the Laplacian Eigenmap algorithm (Figure 6). Like PCA, the Laplacian Eigenmap algorithm performs dimensionality reduction on an input dataset such that relationships among the data are captured by fewer dimensions. Unlike PCA, however, it is not a linear transformation of the data, and the resulting non-linear dimensionality reduction can offer a more powerful view of the data than does PCA.

Because Laplacian Eigenmaps is a local, rather than global, algorithm, it seeks to preserve local topological features of the data in its reduced-dimensionality space [13]. Thus, it is difficult to compare its performance relative to a linear, global algorithm like PCA without labeled features on which to classify the data and a rigorous comparison across multiple datasets and datatypes. However, by visual inspection of points clustered together in the Laplacian Eigenmap result (from the highlighted areas in Figure 6), one can see that they are structurally similar relative to a set of random compounds selected from the space as a whole (Figure 7), and the two outlier groups visible in the original image are also chemically similar (data not shown). The same dataset plotted on its first two principal components (via PCA) yields no significant clustering comparable to that of Laplacian Eigenmaps; instead, one large diffuse cluster and a second smaller one are visible (Figure 8). Additional support for nonlinear QSAR methods comes from Douali et al. [30], who found that a nonlinear QSAR approach using neural networks predicted activities very well, outperforming other methods found in the literature. A more rigorous comparison of these algorithms in the context of molecular descriptor data is ongoing.

SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. The software allows users to analyze idealized random networks or uploaded real-world datasets, and exposes metrics like the clustering coefficient, average distance, and degree distribution in an easy-to-use graphical manner. In addition, SpectralNET calculates and plots eigenspectra for three important matrices related to the network and provides several powerful graph visualizations.

SpectralNET is available as both a standalone .NET executable and an ASP.NET web application. Source code is available by request from the author.

**Project name: **SpectralNET

**Project home page: **http://chembank.broad.harvard.edu/resources/

**Operating system(s): **Windows

**Programming language: **C#

**Other requirements: **The .NET framework v1.1 or higher

**License: **The SpectralNET software is provided "as is" with no guarantee or warranty of any kind. SpectralNET is freely redistributable in binary format for all non-commercial use. Source code is available to non-commercial users by request of the primary author. Any other use of the software requires special permission from the primary author.

**Any restriction to use by non-academics: **Contact authors

JF developed and tested the software, wrote the initial version of the manuscript, and co-designed the software; PC provided feedback and data for molecular descriptor analysis, assisted with design of the software, and edited the manuscript; SS provided project guidance and edited the manuscript; SH initially conceived of and co-designed the software and edited the manuscript. All authors read and approved the final manuscript.

Human PPI network definition file. Network definition file representing a network of human protein-protein interactions. Data for this network was parsed from the MIPS Mammalian Protein-Protein Interaction Database. The numbers contained in this file correspond to the "shortLabel" annotation of proteins in the XML representation of the MIPS database.


Small-world network definition file. Network definition file for a 33-node small-world network with attached complete subgraphs.


We gratefully acknowledge the Broad Institute of Harvard University and MIT, the National Cancer Institute (Initiative for Chemical Genetics), and the National Institute of General Medical Sciences (Center of Excellence for Chemical Methodology and Library Development) for support of this research. S.L.S. is an Investigator at the Howard Hughes Medical Institute.

- Eisenberg E, Levanon E. Preferential attachment in the protein network evolution. Phys Rev Lett. 2003;91:138701. doi: 10.1103/PhysRevLett.91.138701. [PubMed] [Cross Ref]
- Fera D, Kim N, Shiddelfrim N, Zorn J, Laserson U, Gan HH, Schlick T. RAG: RNA-As-Graphs web resource. BMC Bioinformatics. 2004;5:88. doi: 10.1186/1471-2105-5-88. [PMC free article] [PubMed] [Cross Ref]
- Haggarty S, Clemons P, Schreiber S. Chemical genomic profiling of biological networks using graph theory and combinations of small molecule perturbations. J Am Chem Soc. 2003;125:10543–10545. doi: 10.1021/ja035413p. [PubMed] [Cross Ref]
- Chartrand G. Introductory Graph Theory. New York: Dover; 1985.
- Chung F. Spectral Graph Theory. Providence: American Mathematical Society; 1997.
- Goano M. Algorithm 745: Computation of the complete and incomplete Fermi-Dirac integral. ACM Trans Math Software. 1995;21:221–232. doi: 10.1145/210089.210090. [Cross Ref]
- Erdős P, Rényi A. On random graphs I. Publ Math Debrecen. 1959;6:290–297.
- Barabási AL, Albert R. Emergence of scaling in random networks. Science. 1999;286:509–512. [PubMed]
- Albert R, Barabási AL. Topology of evolving networks: Local events and universality. Phys Rev Lett. 2000;85:5234–5237. doi: 10.1103/PhysRevLett.85.5234. [PubMed] [Cross Ref]
- Watts D, Strogatz S. Collective dynamics of 'small-world' networks. Nature. 1998;393:440–442. doi: 10.1038/30918. [PubMed] [Cross Ref]
- Barabási AL, Dezso Z, Ravasz E, Yook SH, Oltvai Z. Scale-free and hierarchical structures in complex networks. Modeling of Complex Systems. 2003;661:1–16.
- Batagelj V, Mrvar A. PAJEK – program for large network analysis. Connections. 1998;21:47–57.
- Belkin M, Niyogi P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation. 2003;15:1373–1396. doi: 10.1162/089976603321780317. [Cross Ref]
- Hotelling H. Analysis of a complex of statistical variables into principal components. J Educ Psychol. 1931;24:417–441.
- Lin Z, Altman R. Finding haplotype tagging SNPs by use of principal components analysis. Am J hum Genet. 2004;75:850–861. doi: 10.1086/425587. [PubMed] [Cross Ref]
- Nacher JC, Ueda N, Yamada T, Kanehisa M, Akutsu T. Clustering under the line graph transformation: application to reaction network. BMC Bioinformatics. 2004;5:207. doi: 10.1186/1471-2105-5-207. [PMC free article] [PubMed] [Cross Ref]
- Mohar B. Some applications of Laplace eigenvalues of graphs. In: Hahn G, Sabidussi G, editor. Graph Symmetry: Algebraic Methods and Applications. Dordrecht: Kluwer Academic Publishers; 1997. pp. 225–275.
- Barabási AL, Ravasz E, Vicsek T. Deterministic scale-free networks. Physica A. 2001;299:559–564. doi: 10.1016/S0378-4371(01)00369-7. [Cross Ref]
- Chung F. The average distance and the independence number. J Graph Theory. 1988;12:229–235.
- Farkas I, Derenyi I, Barabási AL, Vicsek T. Spectra of "real-world" graphs: Beyond the semicircle law. Phys Rev E. 2001;64:026704. doi: 10.1103/PhysRevE.64.026704. [PubMed] [Cross Ref]
- Fruchterman T, Reingold E. Graph drawing by force-directed placement. Software: Practice and Experience. 1991;21:1129–1164.
- Agrafiotis D, Lobanov V. Multidimensional scaling of combinatorial libraries without explicit enumeration. J Comput Chem. 2001;22:1712–1722. doi: 10.1002/jcc.1126. [Cross Ref]
- Belkin M, Niyogi P. Semi-supervised learning on Riemannian manifolds. Machine Learning. 2004;56:209–239. doi: 10.1023/B:MACH.0000033120.25363.1e. [Cross Ref]
- Comellas F, Sampels M. Deterministic small-world networks. Physica A. 2002;309:231–235. doi: 10.1016/S0378-4371(02)00741-0. [Cross Ref]
- Stavenger R, Schreiber S. Asymmetric Catalysis in Diversity-Oriented Organic Synthesis: Enantioselective Synthesis of 4320 Encoded and Spatially Segregated Dihydropyrancarboxamides. Angew Chem Intl Ed. 2001;40:3417–3421. doi: 10.1002/1521-3773(20010917)40:18<3417::AID-ANIE3417>3.0.CO;2-E. [PubMed] [Cross Ref]
- Pagel P, Kovac S, Oesterheld M, Brauner B, Dunger-Kaltenback I, Frishman G, Montrone C, Mark P, Stümpflen V, Mewes H, Ruepp A, Frishman D. The MIPS mammalian protein-protein interaction database. Bioinformatics. 2005;21:832–834. doi: 10.1093/bioinformatics/bti115. [PubMed] [Cross Ref]
- Kalkhoven E, Valentine J, Heery D, Parker M. Isoforms of steroid receptor co-activator 1 differ in their ability to potentiate transcription by the oestrogen receptor. The EMBO Journal. 1998;17:232–243. doi: 10.1093/emboj/17.1.232. [PubMed] [Cross Ref]
- Farkas I, Jeong H, Vicsek T, Barabási AL, Oltvai Z. The topology of the transcription regulatory network in the yeast, *S. cerevisiae*. Physica A. 2003;318:601–612. doi: 10.1016/S0378-4371(02)01731-4. [Cross Ref]
- Jeong H, Mason S, Barabási AL, Oltvai Z. Lethality and centrality in protein networks. Nature. 2001;411:41–42. doi: 10.1038/35075138. [PubMed] [Cross Ref]
- Douali L, Villemin D, Cherqaoui D. Neural networks: Accurate nonlinear QSAR model for HEPT derivatives. J Chem Inf Comput Sci. 2003;43:1200–1207. doi: 10.1021/ci034047q. [PubMed] [Cross Ref]
