The noninvasive assessment of cardiac function is of primary importance for the diagnosis of cardiovascular diseases. Among medical scanners, only a few enable radiologists to evaluate local cardiac motion, and tagged cardiac MRI is one of them. This protocol superimposes on Short-Axis (SA) sequences a dark grid that deforms with the cardiac motion. Tracking the grid allows specialists to estimate cardiac geometrical parameters locally within the myocardium. The work described in this paper aims to automate myocardial contour detection in order to optimize the detection and tracking of the grid of tags within the myocardium. The method we have developed for endocardial and epicardial contour detection is based on texture analysis and active contour models. Texture analysis allows us to define energy maps that are more effective than the gradient-based attractors usually used in active contour methods, which proved useless in our case study because the quality of tagged cardiac MRI is very poor.
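The role of a texture-based external energy can be sketched as follows: a minimal local-variance texture map (an assumption for illustration; the actual texture descriptors behind the paper's energy maps are more elaborate) that an active contour could descend instead of a gradient magnitude:

```python
import numpy as np

def texture_energy_map(img, win=5):
    """Local-variance texture map, normalised to [0, 1]. A snake
    descending this map is attracted to low-texture regions; this is a
    crude stand-in for the texture features used to build the external
    energy in tagged MRI, where gradients are unreliable."""
    img = np.asarray(img, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    span = out.max() - out.min()
    return (out - out.min()) / span if span > 0 else out
```

On a synthetic image with a flat half and a noisy half, the map separates the two textures cleanly.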
We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position.
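The donor-cell search and interpolation step can be illustrated with a minimal sketch on a uniform Cartesian background grid (a simplification: the method above operates on general curvilinear overset grids and uses parallel search structures):

```python
import numpy as np

def donor_and_weights(x, y, x0, y0, dx, dy, nx, ny):
    """Locate the donor cell of point (x, y) on a uniform nx-by-ny node
    grid with origin (x0, y0) and spacings (dx, dy), and return bilinear
    interpolation weights for its four corner nodes, ordered
    (i,j), (i+1,j), (i,j+1), (i+1,j+1)."""
    i = int(np.clip((x - x0) // dx, 0, nx - 2))
    j = int(np.clip((y - y0) // dy, 0, ny - 2))
    s = (x - (x0 + i * dx)) / dx   # local cell coordinate in [0, 1]
    t = (y - (y0 + j * dy)) / dy
    w = np.array([(1 - s) * (1 - t), s * (1 - t), (1 - s) * t, s * t])
    return (i, j), w
```

The weights sum to one and reproduce any bilinear field exactly, the basic consistency property a conservative interface interpolation must build on.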
We present a sixth order explicit compact finite difference scheme to solve the three-dimensional (3D) convection-diffusion equation. We first use a multiscale multigrid method to solve the linear systems arising from a 19-point fourth order discretization scheme to compute the fourth order solutions on both the coarse grid and the fine grid. Then an operator-based interpolation scheme combined with an extrapolation technique is used to approximate the sixth order accurate solution on the fine grid. Since a multigrid method using a standard point relaxation smoother may fail to achieve the optimal grid-independent convergence rate when solving the convection-diffusion equation with a high Reynolds number, we implement a plane relaxation smoother in the multigrid solver to achieve better grid independence. Supporting numerical results are presented to demonstrate the efficiency and accuracy of the sixth order compact scheme (SOC), compared with the previously published fourth order compact scheme (FOC).
convection diffusion equation; Reynolds number; multigrid method; Richardson extrapolation; sixth order compact scheme
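The extrapolation step above can be illustrated in one dimension: two fourth-order approximations computed with spacings h and h/2 are combined so that the leading O(h^4) error term cancels, leaving a sixth-order result (a sketch of the principle only; the paper applies operator-based interpolation and extrapolation to full 3D multigrid solutions):

```python
def d2_fourth_order(f, x, h):
    """Fourth-order central approximation of f''(x)."""
    return (-f(x + 2 * h) + 16 * f(x + h) - 30 * f(x)
            + 16 * f(x - h) - f(x - 2 * h)) / (12 * h ** 2)

def d2_richardson(f, x, h):
    """Richardson extrapolation of two fourth-order values (h and h/2):
    (16*fine - coarse) / 15 cancels the O(h^4) term, giving O(h^6)."""
    return (16 * d2_fourth_order(f, x, h / 2) - d2_fourth_order(f, x, h)) / 15
```

For f(x) = x^6 the extrapolated value is exact up to roundoff, while the plain fourth-order value retains its O(h^4) error.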
For large-scale wireless sensor networks (WSNs) with a minority of anchor nodes, multi-hop localization is a popular scheme for determining the geographical positions of the normal nodes. In practice, however, existing multi-hop localization methods suffer from various problems, such as poor adaptability to irregular topologies, high computational complexity, and low positioning accuracy. To address these issues, in this paper we propose a novel Multi-hop Localization algorithm based on Grid-Scanning (MLGS). First, the factors that influence multi-hop distance estimation are studied and a more realistic multi-hop localization model is constructed. Then, the feasible regions of the normal nodes are determined from the intersection of bounding square rings. Finally, a verifiably good approximation scheme based on grid-scanning is developed to estimate the coordinates of the normal nodes. Additionally, the positioning accuracy of the normal nodes can be improved through collaboration with their neighbors. Extensive simulations are performed in isotropic and anisotropic networks. Comparisons with some typical node localization algorithms confirm the effectiveness and efficiency of our algorithm.
wireless sensor networks; multi-hop localization; feasible region; grid-scanning
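The grid-scanning step can be sketched as follows, under simplifying assumptions (Chebyshev-distance "square rings", a fixed scan step, and per-anchor ring bounds given directly; MLGS itself derives these bounds from hop counts and further refines estimates through neighbor collaboration):

```python
import numpy as np

def grid_scan_locate(anchors, d_min, d_max, step=0.5):
    """Estimate a node position as the centroid of the scan-grid points
    lying inside every anchor's square ring (Chebyshev annulus with
    radii [d_min[k], d_max[k]]). Returns None if the rings do not
    intersect. anchors: sequence of (x, y) pairs."""
    anchors = np.asarray(anchors, dtype=float)
    lo = anchors.min(axis=0) - max(d_max)
    hi = anchors.max(axis=0) + max(d_max)
    xs = np.arange(lo[0], hi[0] + step, step)
    ys = np.arange(lo[1], hi[1] + step, step)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    feasible = np.ones(len(pts), dtype=bool)
    for a, dmin, dmax in zip(anchors, d_min, d_max):
        cheb = np.abs(pts - a).max(axis=1)      # square ring: Chebyshev distance
        feasible &= (cheb >= dmin) & (cheb <= dmax)
    if not feasible.any():
        return None
    return pts[feasible].mean(axis=0)           # centroid of the feasible region
```

With three anchors and ring bounds bracketing a node at (3, 4), the feasible region is a small square centred on the node and the centroid recovers it.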
This study concerns the development of a high performance workflow that, using grid technology, correlates different kinds of Bioinformatics data, starting from the base pairs of the nucleotide sequence up to the exposed residues of the protein surface. The implementation of this workflow is based on the Italian Grid.it project infrastructure, which is a network of computational resources and storage facilities distributed across different grid sites.
Workflows are very common in Bioinformatics because they allow large quantities of data to be processed by delegating resource management to the information stream. Grid technology optimizes the computational load across the different workflow steps by dividing the more expensive tasks into sets of small jobs.
Grid technology also enables efficient database management, a crucial problem for obtaining good results in Bioinformatics applications. The proposed workflow integrates huge amounts of data, and the results themselves must be stored in a relational database, which constitutes the added value to the global knowledge.
A web interface has been developed to make this technology accessible to grid users. Once the workflow has been started through the simplified interface, it is possible to follow all the different steps of the data processing. Finally, when the workflow has finished, the different features of the protein, such as the amino acids exposed on the protein surface, can be compared with the data in the output database.
caGrid is a middleware system that combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high performance computing communities.
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived and inherently uncertain. While earlier methods have used a brute-force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters.
We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters.
Stochastic collocation; Growth and remodeling; Derivative-free methods; Parameter sensitivity
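The contrast between collocation and sampling can be sketched in one stochastic dimension (a toy: Gauss-Hermite collocation for a single normally distributed parameter; the adaptive sparse-grid scheme above combines such rules across many parameters):

```python
import numpy as np

def collocation_mean(model, n_nodes):
    """Mean of model(X) for X ~ N(0, 1) via Gauss-Hermite collocation:
    the model is evaluated only at a few deterministic nodes and the
    results are combined with quadrature weights."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    return np.dot(weights, model(nodes)) / weights.sum()

def monte_carlo_mean(model, n_samples, seed=0):
    """Plain Monte Carlo estimate of the same quantity, for comparison."""
    rng = np.random.default_rng(seed)
    return model(rng.standard_normal(n_samples)).mean()
```

For a smooth model, five collocation nodes already beat thousands of Monte Carlo samples, which is the efficiency argument behind collocation-based sensitivity analysis.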
Phylogenetic Oligonucleotide Arrays (POAs) were recently adapted for studying huge microbial communities in a flexible and easy-to-use way. POAs coupled with explorative probes to detect the unknown fraction of these communities are now one of the most powerful approaches for a better understanding of microbial community functioning. However, the selection of probes remains a very difficult task. The rapid growth of environmental databases has led to an exponential increase in the data to be managed for an efficient design. Consequently, the use of high performance computing facilities is mandatory. In this paper, we present an efficient parallelization method to select known and explorative oligonucleotide probes at large scale using computing grids. We implemented software that generates and monitors thousands of jobs over the European Grid Infrastructure (EGI). We also developed a new algorithm for the construction of a high-quality curated phylogenetic database, to avoid erroneous design due to bad sequence affiliation. We present the performance and statistics of our method on real biological datasets, based on a phylogenetic prokaryotic database at the genus level and a complete design of about 20,000 probes for 2,069 genera of prokaryotes.
Reconstruction of the cerebral cortex from magnetic resonance (MR) images is an important step in the quantitative analysis of human brain structure, for example, in sulcal morphometry and in studies of cortical thickness. Existing cortical reconstruction approaches are typically optimized for standard-resolution (~1 mm) data and are not directly applicable to higher resolution images. A new PDE-based method is presented for automated cortical reconstruction that is computationally efficient and scales well with grid resolution, and is thus particularly suitable for high-resolution MR images with submillimeter voxel size. The method uses a mathematical model of a field in an inhomogeneous dielectric. This field mapping, similar to a Laplacian mapping, has desirable laminar properties in the cortical layer and helps to identify unresolved boundaries between cortical banks in narrow sulci. The pial cortical surface is reconstructed by advection along the field gradient as a geometric deformable model constrained by a topology-preserving level set approach. The method's performance is illustrated on ex vivo images with 0.25–0.35 mm isotropic voxels. The method is further evaluated by cross-comparison with results of the FreeSurfer software on standard resolution data sets from the OASIS database, featuring pairs of repeated scans for 20 healthy young subjects.
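The inhomogeneous-dielectric field model can be illustrated with a 2D toy solver (illustration only; the actual method is a computationally efficient 3D PDE scheme). It performs Jacobi iteration for div(eps grad phi) = 0 with Dirichlet values on fixed nodes:

```python
import numpy as np

def dielectric_field(eps, fixed, values, n_iter=800):
    """Jacobi solve of div(eps * grad(phi)) = 0 on a 2D grid.
    eps: dielectric map; fixed: boolean mask of Dirichlet nodes;
    values: phi on fixed nodes (ignored elsewhere)."""
    phi = np.where(fixed, values, 0.5)
    for _ in range(n_iter):
        # face coefficients: arithmetic mean of neighbouring eps
        up    = 0.5 * (eps[1:-1, 1:-1] + eps[:-2, 1:-1])
        down  = 0.5 * (eps[1:-1, 1:-1] + eps[2:, 1:-1])
        left  = 0.5 * (eps[1:-1, 1:-1] + eps[1:-1, :-2])
        right = 0.5 * (eps[1:-1, 1:-1] + eps[1:-1, 2:])
        new = (up * phi[:-2, 1:-1] + down * phi[2:, 1:-1]
               + left * phi[1:-1, :-2] + right * phi[1:-1, 2:]) / (up + down + left + right)
        phi[1:-1, 1:-1] = np.where(fixed[1:-1, 1:-1], values[1:-1, 1:-1], new)
    return phi
```

With uniform eps and boundary values forming a linear ramp, the iteration converges to that ramp, the analogue of the smooth laminar field used for advecting the deformable model.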
Tracking deforming objects involves estimating the global motion of the object and its local deformations as a function of time. Tracking algorithms using Kalman filters or particle filters have been proposed for finite dimensional representations of shape, but these are dependent on the chosen parametrization and cannot handle changes in curve topology. Geometric active contours provide a framework which is parametrization independent and allow for changes in topology. In the present work, we formulate a particle filtering algorithm in the geometric active contour framework that can be used for tracking moving and deforming objects. To the best of our knowledge, this is the first attempt to implement an approximate particle filtering algorithm for tracking on a (theoretically) infinite dimensional state space.
Tracking; particle filters; geometric active contours
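The predict/weight/resample cycle can be sketched on a finite-dimensional toy (a scalar random-walk state; the filter described above instead evolves particles that are whole curves moved by geometric active contours):

```python
import numpy as np

def particle_filter(observations, n=500, q=0.5, r=0.5, seed=0):
    """Bootstrap particle filter for a scalar random-walk state with
    process noise q and Gaussian measurement noise r. Returns the
    posterior-mean estimate after each observation."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n)      # predict
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)      # likelihood weights
        w /= w.sum()
        estimates.append(float(np.dot(w, particles)))      # posterior mean
        particles = rng.choice(particles, n, p=w)          # resample
    return np.array(estimates)
```

On a slowly drifting trajectory observed in noise, the filtered estimates track the truth.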
Many statistical methods in biology utilize numerical integration in order to deal with moderately high-dimensional parameter spaces without closed-form integrals. One such method is the posterior probability of linkage (PPL), a class of models for mapping and modeling genes for complex human disorders. While the most common approach to numerical integration in statistics is Markov chain Monte Carlo (MCMC), this is not a good option for the PPL for a variety of reasons, leading us to develop an alternative integration method for this application. We utilize an established sub-region adaptive integration method, but adapt it to specific features of our application. These include division of the multi-dimensional integrals into three separate layers, implementation of internal constraints on the parameter space, and calibration of the approximation to ensure adequate precision of results for our application. The proposed approach is compared with an empirically driven fixed grid scheme as well as other numerical integration methods. The new method is shown to require far fewer function evaluations than the alternatives while matching or exceeding the best of them in terms of accuracy. The savings in evaluations is sufficiently large that previously intractable problems are now feasible in real time.
complex traits; linkage and LD analyses; posterior probability of linkage; numerical integration; sub-region adaptive method
We present the first algorithm for solving the equation of radiative transfer (ERT) in the frequency domain (FD) on three-dimensional block-structured Cartesian grids (BSG). This algorithm allows for accurate modeling of light propagation in media of arbitrary shape with air-tissue refractive index mismatch at the boundary at increased speed compared to currently available structured grid algorithms. To accurately model arbitrarily shaped geometries the algorithm generates BSGs that are finely discretized only near physical boundaries and therefore less dense than fine grids. We discretize the FD-ERT using a combination of the upwind-step method and the discrete ordinates (SN) approximation. The source iteration technique is used to obtain the solution. We implement a first order interpolation scheme when traversing between coarse and fine grid regions. Effects of geometry and optical parameters on algorithm performance are evaluated using numerical phantoms (circular, cylindrical, and arbitrary shape) and varying the absorption and scattering coefficients, modulation frequency, and refractive index. The solution on a 3-level BSG is obtained up to 4.2 times faster than the solution on a single fine grid, with minimal increase in numerical error (less than 5%).
(170.3660) Light propagation in tissues; (000.4430) Numerical approximation and analysis
A novel numerical method is developed that integrates boundary-conforming grids with a sharp-interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh, while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which requires neither the explicit evaluation of the Christoffel symbols nor the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries, we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
An efficient and accurate method has been developed for the preparation of the customized crystallization grid screens employed in protein crystallization.
Crystallization trials can be designed as a systematic gradient of the concentration of key reagents and/or pH centered on the original conditions. While the concept of the grid screen is simple, its implementation by hand is tedious and difficult. A procedure has been developed for preparing crystallization grid screens that is both efficient and highly accurate because it relies on a limited number of solutions that are carefully prepared by hand. The ‘four-corners’ approach to designing grid screens uses the minimum and maximum concentrations of the components being varied in the grid screen as the sole stock solutions. For an N-dimensional grid, only 2^N corner solutions require detailed preparation, making the screens efficient. Furthermore, by keeping the corner concentrations as tight to the grid as possible, the potential impact of pipetting errors is minimized, creating a highly precise screen.
crystallization; grid screens; pH; robotics
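The mixing arithmetic behind the four-corners design can be sketched for a 2D grid (a hypothetical well layout for illustration; corners are ordered (row-min, col-min), (row-min, col-max), (row-max, col-min), (row-max, col-max), and real screens must also respect solubility limits and minimum pipetting volumes):

```python
import numpy as np

def four_corner_volumes(rows, cols):
    """Volume fractions of the four corner stock solutions for each well
    of a rows x cols grid screen; bilinear mixing reproduces a linear
    concentration gradient along each axis."""
    vols = np.empty((rows, cols, 4))
    for i in range(rows):
        t = i / (rows - 1)              # gradient along rows
        for j in range(cols):
            s = j / (cols - 1)          # gradient along columns
            vols[i, j] = [(1 - t) * (1 - s), (1 - t) * s, t * (1 - s), t * s]
    return vols
```

Every well totals one volume, corner wells are pure corner stocks, and the centre well mixes all four in equal parts.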
Introduction: Surgically implanted chambers with removable grids are routinely used for studying patterns of neuronal activity in primate brains; however, access to target tissues is significantly constrained by standard grid designs. Typically, grids are configured with a series of guide holes drilled vertically, parallel to the walls of the chamber, so targeted sites are limited to those in vertical line with one of the guide holes. Methods: Using three-dimensional modeling software, a novel grid was designed to reach targeted sites far beyond the standard reach of the chamber. The grid was fabricated using conventional machining techniques and three-dimensional printing. Results: A pilot study involving microinjection of the magnetic resonance (MR) contrast agent gadolinium into discrete regions of interest (ROIs) in the temporal cortex of an awake, behaving monkey demonstrated the effectiveness of this new guide-grid design. Using multiple angles of approach, we were readily able to access 10 injection sites up to 5 mm outside the traditional, orthogonal reach of the chamber.
neuronal activity; recording chamber; guide grid; solid modeling; three-dimensional printing
In magnetic resonance imaging (MRI), methods that use a non-Cartesian grid in k-space are becoming increasingly important. In this paper, we use a recently proposed implicit discretisation scheme which generalises the standard approach based on gridding. While the latter succeeds for sufficiently uniform sampling sets and accurately estimated density compensation weights, the implicit method further improves the reconstruction quality when the sampling scheme or the weights are less regular. Both approaches can be solved efficiently with the nonequispaced FFT. Owing to several new techniques for the storage of the involved sparse matrix, our examples also include the reconstruction of a large 3D data set. We present four case studies and report on efficient implementation of the related algorithms.
Sudoku is a famous logic-placement game, originally popularized in Japan and today widely employed as a pastime and as a testbed for search algorithms. The classic Sudoku consists of filling a 9 × 9 grid, divided into nine 3 × 3 regions, so that each column, row, and region contains the digits 1 to 9 exactly once. This game is known to be NP-complete, and various complete and incomplete search algorithms exist that can solve different instances of it. In this paper, we present a new cuckoo search algorithm for solving Sudoku puzzles that combines prefiltering phases and geometric operations. The geometric operators allow the search to move correctly toward promising regions of the combinatorial space, while the prefiltering phases delete in advance from the domains those values that cannot lead to any feasible solution. This integration leads to more efficient domain filtering and, as a consequence, to a faster solving process. We illustrate encouraging experimental results in which our approach competes noticeably with the best approximate methods reported in the literature.
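A single prefiltering pass of the kind described can be sketched as follows (basic constraint filtering with "naked singles" only; the paper's prefiltering and its coupling with the cuckoo search and geometric operators are richer than this):

```python
def prefilter(grid):
    """One filtering pass over a 9x9 Sudoku grid (0 = empty): remove
    values already used in each cell's row, column, and 3x3 region;
    cells left with a single candidate are filled. Returns the
    candidate domains keyed by (row, col)."""
    def peers(r, c):
        br, bc = 3 * (r // 3), 3 * (c // 3)
        cells = {(r, j) for j in range(9)} | {(i, c) for i in range(9)}
        cells |= {(br + i, bc + j) for i in range(3) for j in range(3)}
        cells.discard((r, c))
        return cells
    domains = {}
    for r in range(9):
        for c in range(9):
            if grid[r][c]:
                domains[(r, c)] = {grid[r][c]}
            else:
                used = {grid[i][j] for i, j in peers(r, c) if grid[i][j]}
                domains[(r, c)] = set(range(1, 10)) - used
    for (r, c), dom in domains.items():
        if grid[r][c] == 0 and len(dom) == 1:
            grid[r][c] = next(iter(dom))   # forced value ("naked single")
    return domains
```

Repeating such passes to a fixed point shrinks the space the metaheuristic must explore.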
To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research.
An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated studies.
caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications.
caGrid 1.0 was released to the caBIG community in December 2006. It is built on open-source components, and the caGrid source code is publicly and freely available under a liberal open-source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid.
While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large-scale data, and coordinated studies are common to other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.
An immune electron microscopy agglutination technique in which emphasis is placed upon the importance of antigen-antibody equivalence has been developed as a possible method for the serotyping of avian infectious bronchitis viruses. The Connecticut and Massachusetts 41 serotypes were used as a model system. Stock virus concentrations were standardized by physical particle counts of virions sedimented directly onto electron microscope specimen grids. Suspensions containing approximately 150 virions per grid square were allowed to react with dilutions of homologous and heterologous antisera. Virions in these constant virus-variable serum mixtures were sedimented directly onto electron microscope specimen grids, and the relative degree of aggregation per grid was determined from the mean percent aggregation of five randomly selected grid squares. In homologous assays, regions of relative antibody excess, of equivalence, and of relative antigen excess were clearly evident. At equivalence, the mean percent aggregation was significantly higher than in the regions of relative antibody or antigen excess. In the heterologous systems, the degree of aggregation differed little from that of the virus controls containing no antiserum.
A new numerical approach for modeling a class of flow–structure interaction problems typically encountered in biological systems is presented. In this approach, a previously developed, sharp-interface, immersed-boundary method for incompressible flows is used to model the fluid flow and a new, sharp-interface Cartesian grid, immersed boundary method is devised to solve the equations of linear viscoelasticity that governs the solid. The two solvers are coupled to model flow–structure interaction. This coupled solver has the advantage of simple grid generation and efficient computation on simple, single-block structured grids. The accuracy of the solid-mechanics solver is examined by applying it to a canonical problem. The solution methodology is then applied to the problem of laryngeal aerodynamics and vocal fold vibration during human phonation. This includes a three-dimensional eigen analysis for a multi-layered vocal fold prototype as well as two-dimensional, flow-induced vocal fold vibration in a modeled larynx. Several salient features of the aerodynamics as well as vocal-fold dynamics are presented.
immersed-boundary method; elasticity; flow–structure interaction; bio-flow mechanics; phonation; laryngeal flow; flow-induced vibration
An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. 
Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent – analytical solutions are available for this case, thus allowing rigorous assessment of the solution accuracy; (ii) a pair of low dielectric charged spheres embedded in an ionic solvent to compute electrostatic interaction free energies as a function of the distance between sphere centers; (iii) surface potentials of proteins, nucleic acids and their larger-scale assemblies such as ribosomes; and (iv) electrostatic solvation free energies and their salt sensitivities – obtained with both the linear and nonlinear Poisson-Boltzmann equations – for a large set of proteins. These latter results along with timings can serve as benchmarks for comparing the performance of different PBE solvers.
Poisson-Boltzmann equation; biomolecular electrostatics; implicit solvent model; algorithm; finite difference methods; Cartesian grid; adaptive; electrostatic potential
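The local-refinement idea can be sketched in two dimensions (a quadtree toy under simplifying assumptions: cells split only where they contain a point charge, whereas the ACG is a 3D octree refined near dielectric interfaces and charge sites):

```python
def build_quadtree(charges, x0, y0, size, min_size):
    """Recursively split any square cell containing a charge until cells
    reach min_size, leaving the rest of the domain coarse. Returns the
    leaf cells as (x, y, size) tuples."""
    leaves = []
    def contains_charge(x, y, s):
        return any(x <= cx < x + s and y <= cy < y + s for cx, cy in charges)
    def split(x, y, s):
        if s <= min_size or not contains_charge(x, y, s):
            leaves.append((x, y, s))
            return
        h = s / 2
        for dx in (0, h):
            for dy in (0, h):
                split(x + dx, y + dy, h)
    split(x0, y0, size)
    return leaves
```

A single charge refined to 1/8 of the domain width produces 10 leaves instead of the 64 cells of the equivalent uniform grid, while still tiling the full domain.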
A nonparametric population method with support points taken from the empirical Bayes estimates (EBEs) has recently been introduced (the default method). However, with sparse or small datasets the EBE distribution may not provide a suitable range of support points. This study aims to develop a method, based on a prior parametric analysis, capable of providing a nonparametric grid with an adequate range of support points. The new method extends the nonparametric grid with additional support points generated by simulation from the parametric distribution, hence the name extended-grid method. The joint probability density function is estimated at the extended grid. The performance of the new method was evaluated and compared to the default method via Monte Carlo simulations using a simple IV bolus model with sparse (200 subjects, two samples per subject) or small (30 subjects, three samples per subject) datasets and two scenarios based on real case studies. Parameter distributions estimated by the default and the extended-grid methods were compared to the true distributions; bias and precision were assessed at different percentiles. With small datasets, the bias was similar between methods (<10%); however, precision was markedly improved with the new method (by 43%). With sparse datasets, both bias (from 5.9% to 3%) and precision (by 60%) were improved. For simulated scenarios based on real study designs, extended-grid predictions were in good agreement with true values. A new approach to obtaining support points for the nonparametric method has been developed and displays good estimation properties. The extended-grid method is automated, using the program PsN, for implementation in NONMEM.
empirical Bayes estimates; extended grid method; NONMEM; nonparametric method; parameter distribution
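The construction of the extended grid can be sketched as follows (assuming, for simplicity, a multivariate normal parametric distribution; population PK parameters are typically log-normal, and the actual PsN/NONMEM implementation also performs the subsequent density estimation on the grid):

```python
import numpy as np

def extended_grid(ebe_points, mean, cov, n_extra, seed=0):
    """Augment EBE-based nonparametric support points with draws
    simulated from the prior parametric distribution, widening the
    range covered by the grid when EBEs are shrunken."""
    rng = np.random.default_rng(seed)
    extra = rng.multivariate_normal(mean, cov, n_extra)
    return np.vstack([np.asarray(ebe_points, dtype=float), extra])
```

With heavily shrunken EBEs, the extended grid spans a much wider parameter range than the EBEs alone.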
Robust security is highly coveted in real wireless sensor network (WSN) applications, since wireless sensors sense critical data from the application environment. This article presents an efficient and adaptive mutual authentication framework that suits real heterogeneous WSN-based applications (such as smart homes, industrial environments, smart grids, and healthcare monitoring). The proposed framework offers: (i) key initialization; (ii) secure network (cluster) formation (i.e., mutual authentication and dynamic key establishment); (iii) key revocation; and (iv) new node addition into the network. The correctness of the proposed scheme is formally verified. An extensive analysis shows that the proposed scheme provides message confidentiality, mutual authentication and dynamic session key establishment, node privacy, and message freshness. Moreover, a preliminary study reveals that the proposed framework is secure against popular types of attacks, such as impersonation attacks, man-in-the-middle attacks, replay attacks, and information-leakage attacks. We believe the proposed framework achieves efficiency at reasonable computation and communication costs and can serve as a safeguard for real heterogeneous WSN applications.
security; mutual authentication; key establishment; node privacy; wireless sensor networks
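For illustration, a toy challenge-response handshake over a pre-shared key with session-key derivation (this is not the paper's protocol: the framework's key initialization, cluster formation, revocation, and node-addition phases are not modeled here):

```python
import hashlib
import hmac
import os

def mutual_auth(shared_key):
    """Both parties exchange fresh nonces, prove knowledge of the
    pre-shared key with HMAC tags, and derive a per-session key; fresh
    nonces give message freshness and replay resistance."""
    def mac(*parts):
        return hmac.new(shared_key, b"|".join(parts), hashlib.sha256).digest()
    n_a, n_b = os.urandom(16), os.urandom(16)               # fresh nonces
    tag_b = mac(b"B->A", n_a, n_b)                          # B proves key knowledge
    assert hmac.compare_digest(tag_b, mac(b"B->A", n_a, n_b))   # A verifies
    tag_a = mac(b"A->B", n_b)                               # A proves key knowledge
    assert hmac.compare_digest(tag_a, mac(b"A->B", n_b))    # B verifies
    return mac(b"session", n_a, n_b)                        # dynamic session key
```

Because nonces are fresh per handshake, every run yields a distinct session key.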
The theory of digital topology is used in many different image processing and computer graphics algorithms. Most of the existing theory applies to uniform Cartesian grids and is not readily extensible to new algorithms targeting adaptive Cartesian grids. This article provides a rigorous extension of the classical digital topology framework to adaptive octree grids, including the characterization of adjacency, connected components, and simple points. Motivating examples, proofs of the major propositions, and algorithm pseudocode are provided.
Digital topology; adaptive octree grid; simple point characterization
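A two-dimensional (quadtree) sketch of the adjacency notion the framework generalises: leaves of different sizes are adjacent when they share a boundary segment of positive length, and connected components follow from that relation (the article's 3D octree treatment and its simple-point characterization are substantially richer):

```python
def leaves_adjacent(a, b):
    """Edge adjacency for axis-aligned square leaves (x, y, size):
    true when the two squares share a boundary segment of positive
    length, even across refinement levels."""
    ax, ay, asz = a
    bx, by, bsz = b
    def overlap(lo1, hi1, lo2, hi2):
        return min(hi1, hi2) - max(lo1, lo2) > 0
    touch_x = (ax + asz == bx or bx + bsz == ax) and overlap(ay, ay + asz, by, by + bsz)
    touch_y = (ay + asz == by or by + bsz == ay) and overlap(ax, ax + asz, bx, bx + bsz)
    return touch_x or touch_y

def components(leaves):
    """Number of connected components of a leaf set under edge
    adjacency, via union-find with path halving."""
    parent = list(range(len(leaves)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(leaves)):
        for j in range(i + 1, len(leaves)):
            if leaves_adjacent(leaves[i], leaves[j]):
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(leaves))})
```

Note that a large leaf and a small neighbour are adjacent as long as their shared edge segment has positive length, which is exactly where the uniform-grid theory stops applying directly.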
The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782–1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken’s acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. 
It is shown that the ratio of the added mass to the mass of the structure, as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid, determines the stability and convergence of the FSI algorithm. The stabilizing role of under-relaxation is also clarified, and an upper bound on the under-relaxation coefficient required for stability is derived.
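The strong-coupling iteration with Aitken's dynamic under-relaxation can be sketched on a scalar fixed-point problem, where g stands in for one fluid-plus-structure sub-iteration (in the actual solver the update acts on the vector of structural displacements):

```python
def aitken_fsi(g, x0, tol=1e-10, omega0=0.5, max_iter=200):
    """Fixed-point iteration x = g(x) with Aitken's dynamic
    under-relaxation: the relaxation factor omega is re-estimated each
    step from successive residuals, stabilising iterations that would
    otherwise diverge."""
    x, r_old, omega = x0, None, omega0
    for _ in range(max_iter):
        r = g(x) - x                               # residual of the coupled iterate
        if abs(r) < tol:
            break
        if r_old is not None and r != r_old:
            omega = -omega * r_old / (r - r_old)   # Aitken update
        x, r_old = x + omega * r, r
    return x
```

For g(x) = -2x + 3 the plain iteration diverges (|g'| = 2 > 1), mimicking a large added-mass ratio, yet the Aitken-accelerated iteration converges to the fixed point x = 1.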