1.  Macromolecular ab initio phasing enforcing secondary and tertiary structure 
IUCrJ  2015;2(Pt 1):95-105.
ARCIMBOLDO replaces the atomicity constraints required for ab initio phasing by enforcement of model stereochemistry. Small model fragments and local folds are exploited at resolutions up to 2 Å in different contexts, from supercomputers to the standalone ARCIMBOLDO_LITE, which solves straightforward cases on a single multicore machine.
Ab initio phasing of macromolecular structures, from the native intensities alone with no experimental phase information or prior structural knowledge, has been the object of a long quest, limited by two main barriers: structure size and the resolution of the data. Current approaches to extend the scope of ab initio phasing include use of the Patterson function, density modification and data extrapolation. The authors’ approach relies on the combination of locating model fragments such as polyalanine α-helices with the program PHASER and density modification with the program SHELXE. Given the difficulty of discriminating correct small substructures, many putative groups of fragments have to be tested in parallel, so the calculations are performed on a grid or supercomputer. The method has been named after the Italian painter Arcimboldo, who used to compose portraits out of fruit and vegetables. With ARCIMBOLDO, most collections of fragments remain a ‘still-life’, but some are correct enough for density modification and main-chain tracing to reveal the protein’s true portrait. Beyond α-helices, other fragments can be exploited in an analogous way: libraries of helices with modelled side chains, β-strands, predictable fragments such as DNA-binding folds, fragments selected from distant homologues, up to libraries of small local folds that are used to enforce nonspecific tertiary structure, thus restoring the ab initio nature of the method. Using these methods, a number of unknown macromolecules with a few thousand atoms and resolutions around 2 Å have been solved. In the 2014 release, use of the program has been simplified. The software mediates the use of massive computing to automate the grid access required in difficult cases, but may also run on a single multicore workstation (http://chango.ibmb.csic.es/ARCIMBOLDO_LITE) to solve straightforward cases.
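The multi-solution strategy described above can be illustrated with a generic sketch: many independent fragment hypotheses are scored in parallel and only the few that pass a figure-of-merit threshold survive. This is not ARCIMBOLDO's code or API; the scoring function and the 25% cut-off below merely stand in for the PHASER/SHELXE steps.

# Generic illustration of parallel multi-solution screening (not ARCIMBOLDO).
from concurrent.futures import ProcessPoolExecutor
import random

def score_hypothesis(seed: int) -> float:
    """Toy stand-in for placing a fragment and running density modification."""
    rng = random.Random(seed)
    return rng.gauss(mu=15.0, sigma=5.0)  # pretend this is a tracing CC (%)

def main() -> None:
    hypotheses = range(200)   # putative fragment placements
    threshold = 25.0          # CC above which a trace is deemed correct
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score_hypothesis, hypotheses))
    survivors = [h for h, s in zip(hypotheses, scores) if s >= threshold]
    print(f"{len(survivors)} of {len(scores)} hypotheses pass CC >= {threshold}")

if __name__ == "__main__":
    main()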
doi:10.1107/S2052252514024117
PMCID: PMC4285884  PMID: 25610631
ab initio phasing; α-helices; macromolecular structure; ARCIMBOLDO
2.  Structure solution of DNA-binding proteins and complexes with ARCIMBOLDO libraries 
The solution of DNA-binding protein structures and complexes, based on combining the location of DNA-binding protein motif fragments with density modification in a multi-solution framework, is described.
Protein–DNA interactions play a major role in all aspects of genetic activity within an organism, such as transcription, packaging, rearrangement, replication and repair. The molecular detail of protein–DNA interactions is best visualized through crystallography, and structures emphasizing insight into the principles of binding and base-sequence recognition are essential to understanding the subtleties of the underlying mechanisms. A growing number of high-quality DNA-binding protein structures have been determined despite the fact that the crystallographic particularities of nucleic acids tend to pose specific challenges to methods primarily developed for proteins. Crystallographic structure solution of protein–DNA complexes therefore remains a challenging area that is in need of optimized experimental and computational methods. The potential of the structure-solution program ARCIMBOLDO for the solution of protein–DNA complexes has therefore been assessed. The method is based on the combination of locating small, very accurate fragments using the program Phaser and density modification with the program SHELXE. Whereas for typical proteins main-chain α-helices provide the ideal, almost ubiquitous, small fragments to start searches, in the case of DNA complexes the binding motifs and the DNA double helix constitute suitable search fragments. The aim of this work is to provide an effective library of search fragments as well as to determine the optimal ARCIMBOLDO strategy for the solution of this class of structures.
doi:10.1107/S1399004714007603
PMCID: PMC4051508  PMID: 24914984
protein–DNA complexes and macromolecule structure solutions; structure-solution pipelines; molecular replacement; density modification
3.  Experimental phasing with SHELXC/D/E: combining chain tracing with density modification 
Experimental phasing with SHELXC/D/E has been enhanced by the incorporation of main-chain tracing into the iterative density modification; this also provides a simple and effective way of exploiting noncrystallographic symmetry.
The programs SHELXC, SHELXD and SHELXE are designed to provide simple, robust and efficient experimental phasing of macromolecules by the SAD, MAD, SIR, SIRAS and RIP methods and are particularly suitable for use in automated structure-solution pipelines. This paper gives a general account of experimental phasing using these programs and describes the extension of iterative density modification in SHELXE by the inclusion of automated protein main-chain tracing. This gives a good indication as to whether the structure has been solved and enables interpretable maps to be obtained from poorer starting phases. The autotracing algorithm starts with the location of possible seven-residue α-helices and common tripeptides. After extension of these fragments in both directions, various criteria are used to decide whether to accept or reject the resulting poly-Ala traces. Noncrystallographic symmetry (NCS) is applied to the traced fragments, not to the density. Further features are the use of a ‘no-go’ map to prevent the traces from passing through heavy atoms or symmetry elements and a splicing technique to combine the best parts of traces (including those generated by NCS) that partly overlap.
doi:10.1107/S0907444909038360
PMCID: PMC2852312  PMID: 20383001
experimental phasing of macromolecules; density modification; main-chain tracing; noncrystallographic symmetry; SHELX
4.  Extending molecular-replacement solutions with SHELXE  
Under favourable circumstances, density modification and polyalanine tracing with SHELXE can be used to improve and validate potential solutions from molecular replacement.
Although the program SHELXE was originally intended for the experimental phasing of macromolecules, it can also prove useful for expanding a small protein fragment to an almost complete polyalanine trace of the structure, given a favourable combination of native data resolution (better than about 2.1 Å) and solvent content. A correlation coefficient (CC) of more than 25% between the native structure factors and those calculated from the polyalanine trace appears to be a reliable indicator of success and has already been exploited in a number of pipelines. Here, a more detailed account of this usage of SHELXE for molecular-replacement solutions is given.
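As a rough illustration of the CC criterion above, the sketch below computes a plain Pearson correlation between mock observed and trace-calculated structure-factor amplitudes. SHELXE's actual CC is a weighted variant, so treat this as a simplification with made-up data.

# Simplified CC-versus-threshold check (illustrative, not SHELXE's formula).
import numpy as np

def structure_factor_cc(f_obs: np.ndarray, f_calc: np.ndarray) -> float:
    """Pearson correlation (in %) between two sets of amplitudes."""
    return 100.0 * float(np.corrcoef(f_obs, f_calc)[0, 1])

rng = np.random.default_rng(0)
f_obs = rng.gamma(shape=2.0, scale=100.0, size=5000)    # mock native |F|
f_calc = 0.6 * f_obs + rng.normal(0, 60.0, size=5000)   # mock trace |F|
cc = structure_factor_cc(f_obs, f_calc)
print(f"CC = {cc:.1f}%  ->  {'likely solved' if cc > 25.0 else 'not solved'}")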
doi:10.1107/S0907444913027534
PMCID: PMC3817699  PMID: 24189237
molecular replacement; density modification; autotracing; SHELX
5.  Using SAD data in Phaser  
SAD data can be used in Phaser to solve novel structures, supplement molecular-replacement phase information or identify anomalous scatterers from a final refined model.
Phaser is a program that implements likelihood-based methods to solve macromolecular crystal structures, currently by molecular replacement or single-wavelength anomalous diffraction (SAD). SAD phasing is based on a likelihood target derived from the joint probability distribution of observed and calculated pairs of Friedel-related structure factors. This target combines information from the total structure factor (primarily non-anomalous scattering) and the difference between the Friedel mates (anomalous scattering). Phasing starts from a substructure, which is usually but not necessarily a set of anomalous scatterers. The substructure can also be a protein model, such as one obtained by molecular replacement. Additional atoms are found using a log-likelihood gradient map, which shows the sites where the addition of scattering from a particular atom type would improve the likelihood score. An automated completion algorithm adds new sites, choosing optionally among different atom types, adds anisotropic B-factor parameters if appropriate and deletes atoms that refine to low occupancy. Log-likelihood gradient maps can also identify which atoms in a refined protein structure are anomalous scatterers, such as metal or halide ions. These maps are more sensitive than conventional model-phased anomalous difference Fouriers and the iterative completion algorithm is able to find a significantly larger number of convincing sites.
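These standard relations (not Phaser's internal likelihood expressions) fix the notation used above:

% With anomalous scatterers present, Friedel's law breaks down and the
% Friedel mates carry the substructure signal:
\[
  F^{+} = F(hkl), \qquad F^{-} = F(\bar{h}\bar{k}\bar{l}), \qquad
  \Delta F_{\mathrm{ano}} = \lvert F^{+} \rvert - \lvert F^{-} \rvert .
\]
% The SAD likelihood target described above works with the joint
% probability P(F^{+}, F^{-} \mid \text{model}) of each Friedel pair
% rather than with mean F and \Delta F_{\mathrm{ano}} separately.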
doi:10.1107/S0907444910051371
PMCID: PMC3069749  PMID: 21460452
SAD phasing; likelihood; molecular replacement
6.  Crystallization and calcium/sulfur SAD phasing of the human EF-hand protein S100A2 
The structure of Ca2+-bound EF-hand protein S100A2 was determined by calcium and sulfur SAD at a wavelength of 0.90 Å.
Human S100A2 is an EF-hand protein and acts as a major tumour suppressor, binding and activating p53 in a Ca2+-dependent manner. Ca2+-bound S100A2 was crystallized and its structure was determined based on the anomalous scattering provided by six S atoms from methionine residues and four calcium ions present in the asymmetric unit. Although the diffraction data were recorded at a wavelength of 0.90 Å, which is usually not assumed to be suitable for calcium/sulfur SAD, the anomalous signal was satisfactory. A nine-atom substructure was determined at 1.8 Å resolution using SHELXD, and SHELXE was used for density modification and phase extension to 1.3 Å resolution. The electron-density map obtained was well interpretable and could be used for automated model building by ARP/wARP.
doi:10.1107/S1744309110030691
PMCID: PMC2935220  PMID: 20823519
S100A2; EF-hands; calcium; sulfur SAD
7.  Phaser crystallographic software 
Journal of Applied Crystallography  2007;40(Pt 4):658-674.
A description is given of Phaser-2.1: software for phasing macromolecular crystal structures by molecular replacement and single-wavelength anomalous dispersion phasing.
Phaser is a program for phasing macromolecular crystal structures by both molecular replacement and experimental phasing methods. The novel phasing algorithms implemented in Phaser have been developed using maximum likelihood and multivariate statistics. For molecular replacement, the new algorithms have proved to be significantly better than traditional methods in discriminating correct solutions from noise, and for single-wavelength anomalous dispersion experimental phasing, the new algorithms, which account for correlations between F+ and F−, give better phases (lower mean phase error with respect to the phases given by the refined structure) than those that use mean F and anomalous differences ΔF. One of the design concepts of Phaser was that it be capable of a high degree of automation. To this end, Phaser (written in C++) can be called directly from Python, although it can also be called using traditional CCP4 keyword-style input. Phaser is a platform for future development of improved phasing methods and their release, including source code, to the crystallographic community.
doi:10.1107/S0021889807021206
PMCID: PMC2483472  PMID: 19461840
computer programs; molecular replacement; SAD phasing; likelihood; structural genomics
8.  SCEDS: protein fragments for molecular replacement in Phaser  
Protein fragments suitable for use in molecular replacement can be generated by normal-mode perturbation, analysis of the difference distance matrix of the original versus normal-mode perturbed structures, and SCEDS, a score that measures the sphericity, continuity, equality and density of the resulting fragments.
A method is described for generating protein fragments suitable for use as molecular-replacement (MR) template models. The template model for a protein suspected to undergo a conformational change is perturbed along combinations of low-frequency normal modes of the elastic network model. The unperturbed structure is then compared with each perturbed structure in turn and the structurally invariant regions are identified by analysing the difference distance matrix. These fragments are scored with SCEDS, which is a combined measure of the sphericity of the fragments, the continuity of the fragments with respect to the polypeptide chain, the equality in number of atoms in the fragments and the density of Cα atoms in the triaxial ellipsoid of the fragment extents. The fragment divisions with the highest SCEDS are then used as separate template models for MR. Test cases show that where the protein contains fragments that undergo a change in juxtaposition between template model and target, SCEDS can identify fragments that lead to a lower R factor after ten cycles of all-atom refinement with REFMAC5 than the original template structure. The method has been implemented in the software Phaser.
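The difference-distance-matrix step described above can be sketched in a few lines of numpy (the SCEDS scoring itself — sphericity, continuity, equality, density — is not reproduced). With mock Cα coordinates and a rigid shift of one 'domain', intra-domain distances are unchanged while cross-domain distances are not, which is what marks the structurally invariant fragments:

# Illustrative difference distance matrix for a two-domain toy model.
import numpy as np

def distance_matrix(xyz: np.ndarray) -> np.ndarray:
    """All-pairs Euclidean distances between coordinate rows."""
    diff = xyz[:, None, :] - xyz[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(1)
ca = rng.uniform(0.0, 50.0, size=(120, 3))       # mock Calpha coordinates
moved = ca.copy()
moved[60:] += np.array([8.0, 0.0, 0.0])          # rigid shift of one 'domain'

dd = np.abs(distance_matrix(ca) - distance_matrix(moved))
print(f"intra-domain median |change|: {np.median(dd[:60, :60]):.2f} A")  # ~0
print(f"cross-domain median |change|: {np.median(dd[:60, 60:]):.2f} A")  # > 0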
doi:10.1107/S0907444913021811
PMCID: PMC3817695  PMID: 24189233
difference distance matrix; normal-mode analysis
9.  Highly Sensitive and Specific Detection of Rare Variants in Mixed Viral Populations from Massively Parallel Sequence Data 
PLoS Computational Biology  2012;8(3):e1002417.
Viruses diversify over time within hosts, often undercutting the effectiveness of host defenses and therapeutic interventions. To design successful vaccines and therapeutics, it is critical to better understand viral diversification, including comprehensively characterizing the genetic variants in viral intra-host populations and modeling changes from transmission through the course of infection. Massively parallel sequencing technologies can overcome the cost constraints of older sequencing methods and obtain the high sequence coverage needed to detect rare genetic variants (<1%) within an infected host, and to assay variants without prior knowledge. Critical to interpreting deep sequence data sets is the ability to distinguish biological variants from process errors with high sensitivity and specificity. To address this challenge, we describe V-Phaser, an algorithm able to recognize rare biological variants in mixed populations. V-Phaser uses covariation (i.e. phasing) between observed variants to increase sensitivity and an expectation maximization algorithm that iteratively recalibrates base quality scores to increase specificity. Overall, V-Phaser achieved >97% sensitivity and >97% specificity on control read sets. On data derived from a patient after four years of HIV-1 infection, V-Phaser detected 2,015 variants across the ∼10 kb genome, including 603 rare variants (<1% frequency) detected only using phase information. V-Phaser identified variants at frequencies down to 0.2%, comparable to the detection threshold of allele-specific PCR, a method that requires prior knowledge of the variants. The high sensitivity and specificity of V-Phaser enable the identification and tracking of changes in low-frequency variants in mixed populations such as RNA viruses.
Author Summary
New sequencing technologies provide unprecedented resolution for studying pathogen populations, such as the single-stranded RNA viruses HIV, dengue (DENV) and West Nile (WNV), and how they evolve within infected individuals in response to immune, therapeutic and vaccine pressures. While these new technologies provide high volumes of data, the data contain process errors. To detect biological variants, especially those occurring at low frequencies in the population, a method is required to differentiate biological variants from process errors with high sensitivity and specificity. To address this challenge, we introduce the V-Phaser algorithm, which distinguishes the covariation of biological variants from that of process errors. We validate the method by measuring how frequently it correctly identifies variants and errors on actual read sets with known variation. Further, using data derived from a patient following four years of HIV-1 infection, we show that V-Phaser can detect biological variants at frequencies comparable to approaches that require prior knowledge. V-Phaser is available for download at: http://www.broadinstitute.org/scientific-community/software.
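As a simplified illustration of the variant-versus-error decision described above (not V-Phaser's actual model, which additionally exploits phasing between sites and iteratively recalibrates quality scores), the sketch below asks whether an observed alternate-base count is explainable by the phred-implied error rate alone. It assumes scipy is available.

# Toy binomial test: is this allele count beyond what sequencing error predicts?
from scipy.stats import binom

def looks_like_variant(alt_count: int, depth: int,
                       mean_phred: float, alpha: float = 1e-6) -> bool:
    # Phred Q -> error probability; /3 for the chance of this particular base.
    p_err = 10 ** (-mean_phred / 10) / 3
    # P(X >= alt_count) under errors alone; small p-value suggests a variant.
    return binom.sf(alt_count - 1, depth, p_err) < alpha

# 25 alternate reads out of 10,000 (0.25%) at Q30 far exceeds the error rate:
print(looks_like_variant(alt_count=25, depth=10_000, mean_phred=30.0))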
doi:10.1371/journal.pcbi.1002417
PMCID: PMC3305335  PMID: 22438797
10.  V-Phaser 2: variant inference for viral populations 
BMC Genomics  2013;14:674.
Background
Massively parallel sequencing offers the possibility of revolutionizing the study of viral populations by providing ultra-deep sequencing (tens of thousands to hundreds of thousands of fold coverage) of complete viral genomes. However, differentiating true low-frequency variants from sequencing errors remains challenging.
Results
We developed a software package, V-Phaser 2, for inferring intrahost diversity within viral populations. This program adds three major new methodologies to the state of the art: a technique to efficiently utilize paired-end read data for calling phased variants, a new strategy to represent and infer length polymorphisms, and an in-line filter for erroneous calls arising from systematic sequencing artifacts. We have also heavily optimized memory and run-time performance. This combination of algorithmic and technical advances allows V-Phaser 2 to fully utilize extremely deep paired-end sequencing data (such as that generated by Illumina sequencers) to accurately infer low-frequency intrahost variants in viral populations in reasonable time on a standard desktop computer. V-Phaser 2 was validated and compared to both QuRe and the original V-Phaser on three datasets obtained from two viral populations: a mixture of eight known strains of West Nile Virus (WNV) sequenced on both 454 Titanium and Illumina MiSeq, and a mixture of twenty-four known strains of WNV sequenced only on 454 Titanium. V-Phaser 2 outperformed the other two programs in both sensitivity and specificity while using more than fivefold less time and memory.
Conclusions
We developed V-Phaser 2, a publicly available software tool (V-Phaser 2 can be accessed via: http://www.broadinstitute.org/scientific-community/science/projects/viral-genomics/v-phaser-2 and is freely available for academic use) that enables the efficient analysis of ultra-deep sequencing data produced by common next generation sequencing platforms for viral populations.
doi:10.1186/1471-2164-14-674
PMCID: PMC3907024  PMID: 24088188
Viral population; Variant calling; Length polymorphisms; Phasing; Next generation sequencing
11.  Holistic Face Categorization in Higher Order Visual Areas of the Normal and Prosopagnosic Brain: Toward a Non-Hierarchical View of Face Perception 
How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex.
doi:10.3389/fnhum.2010.00225
PMCID: PMC3025660  PMID: 21267432
face perception; visual cortex; Mooney; fusiform gyrus; prosopagnosia; FFA
12.  The restriction enzyme SgrAI: structure solution via combination of poor MIRAS and MR phases 
Phase information from both MIRAS and MR was used to produce an interpretable electron-density map of the novel type II restriction endonuclease SgrAI bound to DNA. The MR solution corrected an instructive error in the initially chosen averaging transformation.
Uninterpretable electron-density maps were obtained using either MIRAS phases or MR phases in attempts to determine the structure of the type II restriction endonuclease SgrAI bound to DNA. While neither solution strategy was particularly promising (map correlation coefficients of 0.29 and 0.22 with the final model, respectively, for the MIRAS and MR phases and Phaser Z scores of 4.0 and 4.3 for the rotation and translation searches), phase combination followed by density modification gave a readily interpretable map. MR with a distantly related model located a dimer in the asymmetric unit and provided the correct transformation to use in averaging electron density between SgrAI subunits. MIRAS data sets with low substitution and MR solutions from only distantly related models should not be ignored, as poor-quality starting phases can be significantly improved. The bootstrapping strategy employed to improve the initial MIRAS phases is described.
doi:10.1107/S0907444909003266
PMCID: PMC2659886  PMID: 19307723
SgrAI; MIRAS; phase combination; molecular replacement; density averaging; restriction enzymes
13.  Application of DEN refinement and automated model building to a difficult case of molecular-replacement phasing: the structure of a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum  
DEN refinement and automated model building with AutoBuild were used to determine the structure of a putative succinyl-diaminopimelate desuccinylase from C. glutamicum. This difficult case of molecular-replacement phasing shows that the synergism between DEN refinement and AutoBuild outperforms standard refinement protocols.
Phasing by molecular replacement remains difficult for targets that are far from the search model or in situations where the crystal diffracts only weakly or to low resolution. Here, the process of determining and refining the structure of Cgl1109, a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum, at ∼3 Å resolution is described using a combination of homology modeling with MODELLER, molecular-replacement phasing with Phaser, deformable elastic network (DEN) refinement and automated model building using AutoBuild in a semi-automated fashion, followed by final refinement cycles with phenix.refine and Coot. This difficult molecular-replacement case illustrates the power of including DEN restraints derived from a starting model to guide the movements of the model during refinement. The resulting improved model phases provide better starting points for automated model building and produce more significant difference peaks in anomalous difference Fourier maps to locate anomalous scatterers than does standard refinement. This example also illustrates a current limitation of automated procedures that require manual adjustment of local sequence misalignments between the homology model and the target sequence.
doi:10.1107/S090744491104978X
PMCID: PMC3322598  PMID: 22505259
reciprocal-space refinement; DEN refinement; real-space refinement; automated model building; succinyl-diaminopimelate desuccinylase
14.  Automated identification of elemental ions in macromolecular crystal structures 
The solvent-picking procedure in phenix.refine has been extended and combined with Phaser anomalous substructure completion and analysis of coordination geometry to identify and place elemental ions.
Many macromolecular model-building and refinement programs can automatically place solvent atoms in electron density at moderate-to-high resolution. This process frequently builds water molecules in place of elemental ions, the identification of which must be performed manually. The solvent-picking algorithms in phenix.refine have been extended to build common ions based on an analysis of the chemical environment as well as physical properties such as occupancy, B factor and anomalous scattering. The method is most effective for heavier elements such as calcium and zinc, for which a majority of sites can be placed with few false positives in a diverse test set of structures. At atomic resolution, tightly bound sodium and magnesium ions can also be identified. A number of challenges that contribute to the difficulty of completely automating the process of structure completion are discussed.
doi:10.1107/S1399004714001308
PMCID: PMC3975891  PMID: 24699654
refinement; ions; PHENIX
15.  Phaser.MRage: automated molecular replacement 
The functionality of the molecular-replacement pipeline phaser.MRage is introduced and illustrated with examples.
Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement.
doi:10.1107/S0907444913022750
PMCID: PMC3817702  PMID: 24189240
molecular replacement; pipeline; automation; phaser.MRage
16.  Exploiting the anisotropy of anomalous scattering boosts the phasing power of SAD and MAD experiments 
It is shown that the anisotropy of anomalous scattering (AAS) is a significant and ubiquitous effect in data sets collected at an absorption edge and that its exploitation can substantially enhance the phasing power of single- or multi-wavelength anomalous diffraction. The improvements in the phases are typically of the same order of magnitude as those obtained in a conventional approach by adding a second-wavelength data set to a SAD experiment.
The X-ray polarization anisotropy of anomalous scattering in crystals of brominated nucleic acids and selenated proteins is shown to have significant effects on the diffraction data collected at an absorption edge. For conventionally collected single- or multi-wavelength anomalous diffraction data, the main manifestation of the anisotropy of anomalous scattering is the breakage of the equivalence between symmetry-related reflections, inducing intensity differences between them that can be exploited to yield extra phase information in the structure-solution process. A new formalism for describing the anisotropy of anomalous scattering which allows these effects to be incorporated into the general scheme of experimental phasing methods using an extended Harker construction is introduced. This requires a paradigm shift in the data-processing strategy, since the usual separation of the data-merging and phasing steps is abandoned. The data are kept unmerged down to the Harker construction, where the symmetry-breaking is explicitly modelled and refined and becomes a source of supplementary phase information. These ideas have been implemented in the phasing program SHARP. Refinements using actual data show that exploitation of the anisotropy of anomalous scattering can deliver substantial extra phasing power compared with conventional approaches using the same raw data. Examples are given that show improvements in the phases which are typically of the same order of magnitude as those obtained in a conventional approach by adding a second-wavelength data set to a SAD experiment. It is argued that such gains, which come essentially for free, i.e. without the collection of new data, are highly significant, since radiation damage can frequently preclude the collection of a second-wavelength data set. Finally, further developments in synchrotron instrumentation and in the design of data-collection strategies that could help to maximize these gains are outlined.
doi:10.1107/S0907444908010202
PMCID: PMC2467528  PMID: 18566507
anisotropy of anomalous scattering; phasing; SAD; MAD; polarized resonant diffraction
17.  Solving structures of protein complexes by molecular replacement with Phaser  
Four case studies in using maximum-likelihood molecular replacement, as implemented in the program Phaser, to solve structures of protein complexes are described.
Molecular replacement (MR) generally becomes more difficult as the number of components in the asymmetric unit requiring separate MR models (i.e. the dimensionality of the search) increases. When the proportion of the total scattering contributed by each search component is small, the signal in the search for each component in isolation is weak or non-existent. Maximum-likelihood MR functions enable complex asymmetric units to be built up from individual components with a ‘tree search with pruning’ approach. This method, as implemented in the automated search procedure of the program Phaser, has been very successful in solving many previously intractable MR problems. However, there are a number of cases in which the automated search procedure of Phaser is suboptimal or encounters difficulties. These include cases where there are a large number of copies of the same component in the asymmetric unit or where the components of the asymmetric unit have greatly varying B factors. Two case studies are presented to illustrate how Phaser can be used to best advantage in the standard ‘automated MR’ mode and two case studies are used to show how to modify the automated search strategy for problematic cases.
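The 'tree search with pruning' idea described above can be sketched generically as a beam search: partial solutions are extended one component at a time, and only the best-scoring branches are kept. The scoring below is a random stand-in, not Phaser's likelihood, and all names are illustrative.

# Generic beam search over component placements (illustrative only).
import random

def beam_search(components, beam_width=5, candidates=20, seed=0):
    """Extend partial solutions one component at a time, pruning to the best."""
    rng = random.Random(seed)
    beam = [((), 0.0)]  # (placements so far, cumulative score)
    for comp in components:
        extended = []
        for placements, score in beam:
            for trial in range(candidates):      # toy candidate placements
                gain = rng.gauss(1.0, 2.0)       # stand-in for a placement score
                extended.append((placements + ((comp, trial),), score + gain))
        extended.sort(key=lambda entry: entry[1], reverse=True)
        beam = extended[:beam_width]             # prune weak branches
    return beam[0]

placements, score = beam_search(["chain A", "chain B", "chain C"])
print(f"kept {len(placements)} placements, cumulative score {score:.1f}")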
doi:10.1107/S0907444906045975
PMCID: PMC2483468  PMID: 17164524
macromolecular crystallography; molecular replacement; maximum likelihood
18.  ANODE: anomalous and heavy-atom density calculation 
Journal of Applied Crystallography  2011;44(Pt 6):1285-1287.
The program ANODE determines anomalous (or heavy-atom) densities by reversing the usual procedure for experimental phase determination. Instead of adding a phase shift to the heavy-atom phases to obtain a starting value for the native protein phase, this phase shift is subtracted from the native phase to obtain the heavy-atom substructure phase.
The new program ANODE estimates anomalous or heavy-atom density by reversing the usual procedure for experimental phase determination by methods such as single- and multiple-wavelength anomalous diffraction and single isomorphous replacement anomalous scattering. Instead of adding a phase shift to the heavy-atom phases to obtain a starting value for the native protein phase, this phase shift is subtracted from the native phase to obtain the heavy-atom substructure phase. The required native phase is calculated from the information in a Protein Data Bank file of the structure. The resulting density enables even very weak anomalous scatterers such as sulfur to be located. Potential applications include the identification of unknown atoms and the validation of molecular replacement solutions.
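In standard notation, the phase bookkeeping described above can be written as follows (a sketch of the idea; the exact map coefficients used by ANODE are not reproduced here):

% Conventional experimental phasing estimates the native (total) phase
% \varphi_T from the substructure phase \varphi_A plus a shift \alpha
% derived from the experimental differences; ANODE inverts this, taking
% \varphi_T from a deposited model instead:
\[
  \varphi_T \approx \varphi_A + \alpha
  \qquad\Longrightarrow\qquad
  \varphi_A \approx \varphi_T - \alpha .
\]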
doi:10.1107/S0021889811041768
PMCID: PMC3246834  PMID: 22477786
anomalous density; heavy-atom density; experimental phasing; computer programs
19.  Developing an efficient scheduling template of a chemotherapy treatment unit 
The Australasian Medical Journal  2011;4(10):575-588.
This study was undertaken to improve the performance of a chemotherapy treatment unit by increasing its throughput and reducing the average patient waiting time. To achieve this objective, a scheduling template was built. The scheduling template is a simple tool that can be used to schedule patients' arrivals at the clinic. A simulation model of the system was built, and several scenarios that match the arrival pattern of the patients to resource availability were designed and evaluated. After detailed analysis, one scenario provided the best system performance, and a scheduling template was developed based on it. After implementing the new scheduling template, 22.5% more patients can be served.
Introduction
CancerCare Manitoba is a provincially mandated cancer care agency dedicated to providing quality care to those who have been diagnosed and are living with cancer. The MacCharles Chemotherapy unit was built to provide chemotherapy treatment to the cancer patients of Winnipeg. To maintain an excellent service, it tries to ensure that patients receive their treatment in a timely manner. This goal is challenging to maintain because of the lack of a proper roster, uneven workload distribution and inefficient resource allotment. To keep both patients and healthcare providers satisfied by serving the maximum number of patients in a timely manner, it is necessary to develop an efficient scheduling template that matches the demand with the availability of resources. This goal can be reached using simulation modelling. Simulation has proven to be an excellent modelling tool: it can be defined as building computer models that represent real-world or hypothetical systems, and experimenting with these models to study system behaviour under different scenarios.1,2
A study was undertaken at the Children's Hospital of Eastern Ontario to identify the issues behind the long waiting times of an emergency room.3 A 20-day field observation revealed that the availability of the staff physician, and interactions with them, affect patient wait times. Jyväskylä et al.4 used simulation to test different process scenarios, allocate resources and perform activity-based cost analysis in the Emergency Department (ED) at the Central Hospital. The simulation also supported the study of a new operational method, the "triage-team" method, without interrupting the main system. The proposed method categorises each patient according to the urgency of seeing the doctor and allows the patient to complete the necessary tests before being seen by the doctor for the first time. The simulation study showed that the method decreases patient throughput time, reduces the utilisation of the specialist and enables ordering all the tests a patient needs right after arrival, thus speeding the referral to treatment.
Santibáñez et al.5 developed a discrete event simulation model of the British Columbia Cancer Agency's ambulatory care unit, which was used to study the impact of scenarios considering different operational factors (delay in starting the clinic), appointment schedules (appointment order, appointment adjustment, add-ons to the schedule) and resource allocation. It was found that the best outcomes were obtained when not one but multiple changes were implemented simultaneously. Sepúlveda et al.6 studied the M. D. Anderson Cancer Centre Orlando, a cancer treatment facility, and built a simulation model to analyse and improve the flow process and increase capacity in the main facility. Different scenarios were considered, such as transferring the laboratory and pharmacy areas, adding an extra blood-draw room and applying different patient-scheduling techniques. The study showed that increasing the number of short-term (four hours or less) patients in the morning could increase chair utilisation.
Discrete event simulation also helps improve a service where staff lack insight into the behaviour of the system as a whole. Niranjon et al.7 used simulation successfully where they faced such constraints and a lack of accessible data. Carlos et al.8 used total quality management and simulation-animation to improve the quality of an emergency room: simulation was used to cover the key points of the emergency room, and animation was used to indicate the areas of opportunity. This study revealed that long waiting times, overloaded personnel and an increasing patient withdrawal rate were caused by the lack of capacity in the emergency room.
Baesler et al.9 developed a methodology for a cancer treatment facility to stochastically find a global optimum for the control variables. A simulation model generated the output using a goal-programming framework for all the objectives involved in the analysis; a genetic algorithm then searched for an improved solution. The control variables considered in this research were the number of treatment chairs, blood-drawing nurses, laboratory personnel and pharmacy personnel. Guo et al.10 presented a simulation framework considering the demand for appointments, patient flow logic, the distribution of resources and the scheduling rules followed by the scheduler. The objective of the study was to develop a scheduling rule that ensures that 95% of all appointment requests are seen within one week of the request, to increase patient satisfaction, and that balances the schedule of each doctor to maintain a fine harmony between "busy clinic" and "quiet clinic".
Huschka et al.11 studied a healthcare system that was about to change its facility layout. In this case, a simulation study helped design the new healthcare practice by evaluating the change in layout before implementation. Historical data, such as patient arrival rates, the number of patients visiting each day and patient flow logic, were used to build a model of the current system. Different scenarios were then designed to measure the effect of changes to the current layout on performance.
Wijewickrama et al.12 developed a simulation model to evaluate appointment schedules (AS) for second-time consultations and patient appointment sequences (PSEQ) in a multi-facility system. Five different appointment rules (ARULE) were considered: i) Baily; ii) 3Baily; iii) Individual (Ind); iv) two patients at a time (2AtaTime); v) the Variable Interval (V-I) rule. PSEQ is based on the type of patient: appointment patients (APs) and new patients (NPs). The different PSEQ studied were: i) first-come first-served; ii) appointment patients at the beginning of the clinic (APBEG); iii) new patients at the beginning of the clinic (NPBEG); iv) assigning appointed and new patients in an alternating manner (ALTER); v) assigning a new patient after every five appointment patients. Patient no-shows (0% and 5%) and patient punctuality (PUNCT) (on time and 10 minutes early) were also considered. The study found that ALTER-Ind. and ALTER5-Ind. performed best in the 0% NOSHOW, on-time PUNCT and 5% NOSHOW, on-time PUNCT situations in reducing WT and IT per patient. As NOSHOW created slack time for waiting patients, their WT tends to fall while IT increases due to unexpected cancellations. Earliness increases congestion, which in turn increases waiting time.
Ramis et al.13 conducted a study of a Medical Imaging Center (MIC) to build a simulation model which was used to improve the patient journey through an imaging centre by reducing the wait time and making better use of the resources. The simulation model also used a Graphic User Interface (GUI) to provide the parameters of the centre, such as arrival rates, distances, processing times, resources and schedule. The simulation was used to measure the waiting time of the patients in different case scenarios. The study found that assigning a common function to the resource personnel could improve the waiting time of the patients.
The objective of this study is to develop an efficient scheduling template that maximises the number of served patients and minimises the average patient waiting time given the available resources. To accomplish this objective, we build a simulation model that mimics the working conditions of the clinic. We then suggest different scenarios for matching the arrival pattern of the patients with the availability of the resources, and perform full experiments to evaluate these scenarios. A simple and practical scheduling template is then built based on the identified best scenario. The developed simulation model is described in section 2, which includes a description of the treatment room, the types of patients and the treatment durations. In section 3, different improvement scenarios are described, and their analysis is presented in section 4. Section 5 illustrates a scheduling template based on one of the improvement scenarios. Finally, the conclusion and future directions of our work are given in section 6.
Simulation Model
A simulation model represents the actual system and assists in visualising and evaluating the performance of the system under different scenarios without interrupting the actual system. Building a proper simulation model of a system consists of the following steps.
Observing the system to understand the flow of the entities, key players, availability of resources and overall generic framework.
Collecting the data on the number and type of entities, time consumed by the entities at each step of their journey, and availability of resources.
After building the simulation model, confirming that the model is valid. This can be done by checking that each entity flows as intended and that the statistical data generated by the simulation model are similar to the collected data.
Figure 1 shows the patient flow process in the treatment room. At the patient's first appointment, the oncologist draws up the treatment plan. The treatment time varies according to the patient's condition and may range from 1 hour to 10 hours. Based on the type of treatment, the physician or the clinical clerk books an available treatment chair for that time period.
On the day of the appointment, the patient waits until the booked chair is free. When the chair is free, a nurse from that station comes to the patient, verifies the name and date of birth and takes the patient to a treatment chair. The nurse then flushes the chemotherapy drug line into the patient's body, which takes about five minutes, sets up the treatment and leaves to serve another patient. Chemotherapy treatment lengths vary from less than an hour to 10-hour infusions. At the end of the treatment, the nurse returns, removes the line and notifies the patient of the next appointment date and time, which also takes about five minutes. Most of the patients visit the clinic to have their PICC line (a peripherally inserted central catheter) cared for. A PICC line is used to administer the chemotherapy drugs; it should be regularly cleaned and flushed to maintain patency, and the insertion site checked for signs of infection. Caring for a PICC line takes a nurse approximately 10–15 minutes.
CancerCare Manitoba provided access to the electronic scheduling system known as "ARIA", a comprehensive information and image management system provided by Varian Medical Systems that aggregates patient data into a fully electronic medical chart. This system was used to find out how many patients are booked on each clinic day and which chair is used for how many hours. It was necessary to search each patient's history to find out how long the patient spent in which chair; collecting a snapshot of each patient gives the complete picture of a one-day clinic schedule.
The treatment room consists of the following two main limited resources:
Treatment Chairs: Chairs that are used to seat the patients during the treatment.
Nurses: Nurses are required to inject the treatment line into the patient and remove it at the end of the treatment. They also take care of the patients when they feel uncomfortable.
The MacCharles Chemotherapy unit consists of 11 nurses and 5 stations with the following description:
Station 1: Station 1 has six chairs (numbered 1 to 6) and two nurses. The two nurses work from 8:00 to 16:00.
Station 2: Station 2 has six chairs (7 to 12) and three nurses. Two nurses work from 8:00 to 16:00 and one nurse works from 12:00 to 20:00.
Station 3: Station 3 has six chairs (13 to 18) and two nurses. The two nurses work from 8:00 to 16:00.
Station 4: Station 4 has six chairs (19 to 24) and three nurses. One nurse works from 8:00 to 16:00. Another nurse works from 10:00 to 18:00.
Solarium Station: Solarium Station has six chairs (Solarium Stretcher 1, Solarium Stretcher 2, Isolation, Isolation emergency, Fire Place 1, Fire Place 2). There is only one nurse assigned to this station, who works from 12:00 to 20:00. Nurses from other stations can help when the need arises.
There is one more nurse known as the "float nurse" who works from 11:00 to 19:00. This nurse can work at any station. Table 1 summarises the working hours of chairs and nurses. All treatment stations start at 8:00 and continue until the assigned nurse for that station completes her shift.
Currently, the clinic uses a scheduling template to assign the patients' appointments, but due to the high demand for appointments it is no longer followed. We believe that this template can be improved based on the availability of nurses and chairs. Clinic workload data were collected over 21 days of field observation. The current scheduling template has 10 types of appointment time slot: 15-minute, 1-hour, 1.5-hour, 2-hour, 3-hour, 4-hour, 5-hour, 6-hour, 8-hour and 10-hour, and it is designed to serve 95 patients. When the scheduling template was compared with the 21 days of observations, however, it was found that the clinic is serving more patients than it is designed for. The providers therefore do not usually follow the scheduling template; indeed, they very often break the time slots to accommodate slots that do not exist in the template. Hence, some of the stations are very busy (mostly station 2) and others are underused. If the scheduling template can be improved, it will be possible to bring more patients to the clinic and reduce their waiting time without adding more resources.
To build a simulation model of the existing system, it is necessary to collect the following data:
Types of treatment durations.
Numbers of patients in each treatment type.
Arrival pattern of the patients.
Steps that the patients have to go through in their treatment journey and required time of each step.
Using observations of 2,155 patients over 21 days of historical data, the types of treatment durations and the number of patients in each type were estimated. These data also assisted in determining the arrival rate and the frequency distribution of the patients. The patients were categorised into six types, and the percentage of each type and its associated service-time distribution were determined.
ARENA Rockwell Simulation Software (v13) was used to build the simulation model. Entities of the model were tracked to verify that the patients move as intended. The model was run for 30 replications and statistical data were collected to validate the model. The total number of patients that go through the model was compared with the actual number of served patients during the 21 days of observations.
Improvement Scenarios
After verifying and validating the simulation model, different scenarios were designed and analysed to identify the best scenario, i.e. the one that handles more patients and reduces the average patient waiting time. Based on the clinic observation and discussions with the healthcare providers, the following constraints were stated:
The stations are filled up with treatment chairs. Therefore, it is physically impossible to fit any more chairs in the clinic. Moreover, the stakeholders are not interested in adding extra chairs.
The stakeholders and the caregivers are not interested in changing the layout of the treatment room.
Given these constraints the options that can be considered to design alternative scenarios are:
Changing the arrival pattern of the patients so that it fits the nurses' availability.
Changing the nurses' schedule.
Adding one full time nurse at different starting times of the day.
Figure 2 compares the available number of nurses and the number of patient arrivals during different hours of a day. It can be noticed that there is a rapid growth in patient arrivals (from 13 to 17) between 8:00 and 10:00 even though the clinic has the same number of nurses during this period. At 12:00 there is a sudden drop in patient arrivals even though there are more available nurses. Clearly, there is an imbalance between the number of available nurses and the number of patient arrivals over the hours of the day. Consequently, balancing the demand (arrival rate of patients) against the resources (available number of nurses) will reduce patient waiting times and increase the number of served patients. The alternative scenarios that satisfy the above three constraints are listed in Table 2. These scenarios respect the following rules:
Long treatments (between 4 and 11 hours) have to be scheduled early in the morning to avoid working overtime.
Patients of type 1 (15-minute to 1-hour treatments) are the most common. They can be fitted in at any time of the day because their treatment times are short. Hence, it is recommended to bring these patients in during the middle of the day, when there are more nurses.
Nurses get tired at the end of the clinic day. Therefore, fewer patients should be scheduled at the late hours of the day.
In Scenario 1, the arrival pattern of the patients was changed so that it fits the nurse schedule. This arrival pattern is shown in Table 3. Figure 3 shows the new patient arrival pattern compared with the current arrival pattern. Similar patterns can be developed for the remaining scenarios.
Analysis of Results
ARENA Rockwell Simulation Software (v13) was used to develop the simulation model. There is no warm-up period because the model simulates day-to-day scenarios; the patients of any given day are supposed to be served the same day. The model was run for 30 days (replications) and statistical data were collected to evaluate each scenario. Tables 4 and 5 show a detailed comparison of system performance between the current scenario and Scenario 1. The results are quite interesting: the average throughput of the system increased from 103 to 125 patients per day, and the maximum throughput can reach 135 patients. Although the average waiting time increased, the utilisation of the treatment stations increased by 15.6%. Similar analyses were performed for the remaining scenarios; due to space limitations the detailed results are not given. However, Table 6 gives a summary of the results and a comparison between the different scenarios. Scenario 1 was able to significantly increase the throughput of the system (by 21%) while still yielding an acceptably low average waiting time (13.4 minutes). In addition, it is worth noting that adding a nurse (Scenarios 3, 4 and 5) does not significantly reduce the average wait time or increase the system's throughput. The reason is that when all the chairs are busy, the nurses have to wait until some patients finish their treatment, so the remaining patients have to wait for the commencement of their treatment too. Therefore, hiring a nurse without adding more chairs will not reduce the waiting time or increase the throughput of the system. In this case, the only way to increase the throughput of the system is to adjust the arrival pattern of the patients to the nurses' schedule.
Developing a Scheduling Template based on Scenario 1
Scenario 1 provides the best performance. However, a scheduling template is necessary for the care provider to book the patients. A brief description is therefore provided below of how the scheduling template is developed based on this scenario.
Table 3 gives the number of patients who arrive hourly under Scenario 1. The distribution of each type of patient is shown in Table 7. This distribution is based on the percentage of each type of patient in the collected data. For example, between 8:00 and 9:00, 12 patients arrive, of which 54.85% are of Type 1, 34.55% of Type 2, 15.163% of Type 3, 4.32% of Type 4, 2.58% of Type 5 and the rest of Type 6. It is worth noting that we assume the patients of each type arrive as a group at the beginning of the hourly time slot; for example, all six patients of Type 1 in the 8:00 to 9:00 time slot arrive at 8:00.
The number of patients of each type is distributed in such a way that it respects all the constraints described in Section 1.3. Most of the patients of the clinic are of types 1, 2 and 3, and they require less treatment time than the other types; therefore, they are distributed throughout the day. Patients of types 4, 5 and 6 take longer treatment times and hence are scheduled at the beginning of the day to avoid overtime. Because patients of types 4, 5 and 6 come at the beginning of the day, most type 1 and 2 patients come at mid-day (12:00 to 16:00). Another reason to make the treatment room busier between 12:00 and 16:00 is that the clinic has the maximum number of nurses during this period. Nurses become tired towards the end of the clinic day, which is a reason not to schedule any patient after 19:00.
Based on the patient arrival schedule and nurse availability, a scheduling template was built as shown in Figure 4. To build the template, whenever a nurse is available and there are patients waiting for service, a priority list of these patients is developed. They are prioritised according to their estimated slack time, and secondarily by the shortest service time; the secondary rule breaks the tie if two patients have the same slack. The slack time is calculated using the following equation:
Slack time = Due time - (Arrival time + Treatment time)
Due time is the clinic closing time. To explain how the process works, assume that at hour 8:00 (between 8:00 and 8:15) seven patients in total are scheduled: two patients in station 1 (one 8-hour and one 15-minute patient), two patients in station 2 (two 12-hour patients), two patients in station 3 (one 2-hour and one 15-minute patient) and one patient in station 4 (one 3-hour patient). According to Figure 2, seven nurses are available at 8:00, and it takes 15 minutes to set up a patient. Therefore, it is not possible to schedule more than seven patients between 8:00 and 8:15, and the current schedule also serves seven patients in this interval. The rest of the template can be justified similarly.
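A minimal sketch of this prioritisation rule follows, assuming, as is usual for slack-based dispatch rules, that least slack means highest priority; the field names and the 20:00 due time are illustrative, not taken from the paper.

# Slack = due time - (arrival time + treatment time); least slack first,
# ties broken by the shorter treatment.
from dataclasses import dataclass

CLINIC_CLOSE = 20.0  # 20:00 used as the due time (an assumption)

@dataclass
class Patient:
    name: str
    arrival: float    # hour of day, e.g. 8.25 for 8:15
    treatment: float  # hours

    @property
    def slack(self) -> float:
        return CLINIC_CLOSE - (self.arrival + self.treatment)

waiting = [
    Patient("8-hour", 8.0, 8.0),
    Patient("15-minute", 8.0, 0.25),
    Patient("2-hour", 8.0, 2.0),
]
for p in sorted(waiting, key=lambda p: (p.slack, p.treatment)):
    print(f"{p.name}: slack = {p.slack:.2f} h")
# -> the 8-hour patient (slack 4.00 h) is served first.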
doi:10.4066/AMJ.2011.837
PMCID: PMC3562880  PMID: 23386870
20.  Automated nucleic acid chain tracing in real time 
IUCrJ  2014;1(Pt 6):387-392.
A method is presented for the automatic building of nucleotide chains into electron density which is fast enough to be used in interactive model-building software. Likely nucleotides lying in the vicinity of the current view are located and then grown into connected chains in a fraction of a second. When this development is combined with existing tools, assisted manual model building is as simple as or simpler than for proteins.
The crystallographic structure solution of nucleotides and nucleotide complexes is now commonplace. The resulting electron-density maps are often poorer than for proteins, and as a result interpretation in terms of an atomic model can require significant effort, particularly in the case of large structures. While model building can be performed automatically, as with proteins, the process is time-consuming, taking minutes to days depending on the software and the size of the structure. A method is presented for the automatic building of nucleotide chains into electron density which is fast enough to be used in interactive model-building software, with extended chain fragments built around the current view position in a fraction of a second. The speed of the method arises from the determination of the ‘fingerprint’ of the sugar and phosphate groups in terms of conserved high-density and low-density features, coupled with a highly efficient scoring algorithm. Use cases include the rapid evaluation of an initial electron-density map, addition of nucleotide fragments to prebuilt protein structures, and in favourable cases the completion of the structure while automated model-building software is still running. The method has been incorporated into the Coot software package.
doi:10.1107/S2052252514019290
PMCID: PMC4224457  PMID: 25485119
nucleic acid chain tracing; Coot
21.  Experimental phasing: best practice and pitfalls 
The pitfalls of experimental phasing are described.
Developments in protein crystal structure determination by experimental phasing are reviewed, emphasizing the theoretical continuum between experimental phasing, density modification, model building and refinement. Traditional notions of the composition of the substructure and the best coefficients for map generation are discussed. Pitfalls such as determining the enantiomorph, identifying centrosymmetry (or pseudo-symmetry) in the substructure and crystal twinning are discussed in detail. An appendix introduces combined real–imaginary log-likelihood gradient map coefficients for SAD phasing and their use for substructure completion as implemented in the software Phaser. Supplementary material includes animated probabilistic Harker diagrams showing how maximum-likelihood-based phasing methods can be used to refine parameters in the case of SIR and MIR; it is hoped that these will be useful for those teaching best practice in experimental phasing methods.
doi:10.1107/S0907444910006335
PMCID: PMC2852310  PMID: 20382999
enantiomers; handedness; absolute configuration; chirality; twinning; experimental phasing
22.  Direct phase selection of initial phases from single-wavelength anomalous dispersion (SAD) for the improvement of electron density and ab initio structure determination 
A novel direct phase-selection method has been developed that selects optimized phases from the ambiguous phases of a subset of reflections to replace the corresponding initial SAD phases. With the improved phases, the completeness of the residues built for protein molecules is enhanced, enabling efficient structure determination.
Optimization of the initial phasing has been a decisive factor in the success of the subsequent electron-density modification, model building and structure determination of biological macromolecules using the single-wavelength anomalous dispersion (SAD) method. The two possible phase solutions (ϕ1 and ϕ2) generated from the two symmetric phase triangles in the Harker construction for the SAD method cause the well-known phase ambiguity. A novel direct phase-selection method has been developed that uses the θDS list as a criterion to select optimized phases ϕam from ϕ1 or ϕ2 for a subset of reflections with a high percentage of correct phases, replacing the corresponding initial SAD phases ϕSAD. In this work, reflections with an angle θDS in the range 35–145° are selected for optimized improvement, where θDS is the angle between the initial phase ϕSAD and a preliminary density-modification (DM) phase ϕDM NHL. The results show that adding the direct phase-selection step prior to simple solvent flattening without phase combination using existing DM programs, such as RESOLVE or DM from CCP4, significantly improves the final phases in terms of increased correlation coefficients of electron-density maps and diminished mean phase errors. With the improved phases and density maps from the direct phase-selection method, the completeness of residues of protein molecules built with main chains and side chains is enhanced for efficient structure determination.
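A minimal sketch of the selection step, under the assumption that all phases are supplied as NumPy arrays in degrees and glossing over the paper's full criterion: reflections whose θDS lies in 35–145° are reassigned whichever of the two ambiguous solutions lies closer to the preliminary DM phase.

```python
import numpy as np

def ang_diff(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Smallest unsigned angle between two phases, in degrees (0-180)."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def select_phases(phi_sad, phi_dm, phi_1, phi_2,
                  lo: float = 35.0, hi: float = 145.0):
    """Replace phi_sad by the ambiguous solution closer to phi_dm,
    but only for reflections with lo <= theta_DS <= hi."""
    phi_sad, phi_dm = np.asarray(phi_sad), np.asarray(phi_dm)
    phi_1, phi_2 = np.asarray(phi_1), np.asarray(phi_2)
    theta_ds = ang_diff(phi_sad, phi_dm)
    pick_1 = ang_diff(phi_1, phi_dm) < ang_diff(phi_2, phi_dm)
    phi_am = np.where(pick_1, phi_1, phi_2)
    selected = (theta_ds >= lo) & (theta_ds <= hi)
    return np.where(selected, phi_am, phi_sad)
```

Reflections outside the 35–145° window keep their initial SAD phase, mirroring the strategy of replacing only a subset expected to have a high percentage of correct phases.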
doi:10.1107/S1399004714013868
PMCID: PMC4157445  PMID: 25195747
direct phase selection; ab initio structure determination; electron-density improvement
23.  Finite volume analysis of temperature effects induced by active MRI implants: 2. Defects on active MRI implants causing hot spots 
Background
Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless, inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system; the resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure and may allow diagnostic information about the implant lumen to be obtained (in-stent stenosis or thrombosis, for example). The electromagnetic RF pulses applied during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path caused by material fatigue, or a broken wire with touching surfaces, can produce a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach ¼ of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor and also depends on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence to which the resonator is exposed.
Methods
First, an analytical solution for a hot spot in thermal equilibrium is described. This analytical solution, with a definite hot-spot power loss, represents the worst-case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst-case assumptions, additional, more realistic conditions that may make the results less critical are considered in a numerical simulation. Both the analytical solution and the numerical simulations use the experimentally determined maximum hot-spot power loss of implanted resonators of a definite volume during magnetic resonance imaging investigations. The finite volume analysis calculates the time-dependent temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot-spot power loss is assumed to diffuse into each of the two wire parts at the location of the defect, and the energy is distributed from there by heat conduction. Additionally, the effects of blood perfusion and blood flow are respected in some simulations, because the simultaneous occurrence of all worst-case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants.
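For orientation, the steady-state temperature increase around a point source of power P in an infinite homogeneous medium of thermal conductivity k is the textbook result ΔT(r) = P / (4πkr). The sketch below uses this relation with illustrative parameter values (assumed for the example, not taken from the paper) to estimate the tissue volume heated above a critical threshold:

```python
import math

def delta_t(power_w: float, k: float, r_m: float) -> float:
    """Steady-state temperature increase at distance r from a point
    source of power P in an infinite homogeneous medium:
    dT(r) = P / (4 * pi * k * r)."""
    return power_w / (4.0 * math.pi * k * r_m)

def critical_radius(power_w: float, k: float, dt_crit: float) -> float:
    """Radius inside which the temperature increase exceeds dt_crit."""
    return power_w / (4.0 * math.pi * k * dt_crit)

# Illustrative numbers (assumptions, not values from the paper):
P = 0.05   # hot-spot power loss in watts
k = 0.5    # thermal conductivity of soft tissue, W/(m K)
r_c = critical_radius(P, k, dt_crit=5.0)  # 5 K taken as critical
v_mm3 = (4.0 / 3.0) * math.pi * (r_c * 1e3) ** 3
print(f"critical radius {r_c*1e3:.2f} mm, volume {v_mm3:.1f} mm^3")
```

For these assumed inputs the critical radius is about 1.6 mm, i.e. a volume of roughly 17 mm3, which illustrates how a point-like power loss of modest magnitude can heat a non-negligible tissue volume.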
Results
The analytical solution as the worst-case scenario, as well as the finite volume analysis for near-worst-case situations, shows non-negligible volumes with critical temperature increases for some of the modeled hot-spot situations. MR investigations with a high RF-pulse density lasting less than a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot-spot power loss are the primary factors influencing the volume with critical temperature increases. Wire radius, wire material and the physiological parameters blood perfusion and blood flow inside larger vessels reduce the volume with critical temperature increases, but they do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor.
Conclusion
The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate results for conditions both near to and further from this worst case; in both cases a substantial volume can reach a critical temperature increase within a short time. The analytical solution, as the absolute worst case, shows that resonators with a small product of inductance volume and quality factor (Q·Vind < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore carry no risk of high temperature increases. The finite volume analysis shows that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q·Vind > 10 cm3). Such resonators exclude patients from exactly the MRI investigations these devices are made for.
doi:10.1186/1475-925X-5-35
PMCID: PMC1513583  PMID: 16729878
24.  Peptide model helices in lipid membranes: insertion, positioning, and lipid response on aggregation studied by X-ray scattering 
European Biophysics Journal  2010;40(4):417-436.
Studying membrane-active peptides or protein fragments within the lipid bilayer environment is particularly challenging in the case of synthetically modified, labeled, artificial, or recently discovered native structures. For such samples, the localization and orientation of the molecular species or probe within the lipid bilayer environment is the focus of research prior to an evaluation of their dynamic or mechanistic behavior. X-ray scattering is a powerful method to study peptide/lipid interactions in the fluid, fully hydrated state of a lipid bilayer. On the one hand, the lipid response can be revealed by observing membrane thickening and thinning as well as packing in the membrane plane; on the other, the distinct positions of peptide moieties within lipid membranes can be elucidated at resolutions of up to several angstroms by applying heavy-atom labeling techniques. In this study, we describe a generally applicable X-ray scattering approach that provides robust and quantitative information about peptide insertion and localization as well as peptide/lipid interaction within highly oriented, hydrated multilamellar membrane stacks. To this end, we have studied an artificial, designed β-helical peptide motif in its homodimeric and hairpin variants adopting different states of oligomerization. These peptide–lipid complexes were analyzed by grazing-incidence diffraction (GID) to monitor changes in lateral lipid packing and ordering. In addition, we have applied anomalous reflectivity using synchrotron radiation as well as in-house X-ray reflectivity in combination with iodine labeling in order to determine the electron density distribution ρ(z) along the membrane normal (z axis), and thereby reveal the hydrophobic mismatch situation as well as the position of certain amino acid side chains within the lipid bilayer. In the case of multiple labeling, the latter technique is applicable not only for demonstrating the peptide’s reconstitution but also for generating evidence about the relative peptide orientation with respect to the lipid bilayer.
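As general background (a textbook construction, not this study's analysis pipeline), the electron-density profile ρ(z) of a multilamellar stack is commonly reconstructed by one-dimensional Fourier synthesis from the integrated Bragg intensities; the sketch below does this for invented form factors and an assumed phase combination:

```python
import numpy as np

def density_profile(form_factors, phases, d_spacing, n_points=256):
    """One-dimensional Fourier synthesis for a centrosymmetric
    lamellar stack: rho(z) ~ sum_h nu_h |F_h| cos(2 pi h z / d),
    where nu_h = +/-1 is the phase (sign) of diffraction order h."""
    z = np.linspace(-d_spacing / 2, d_spacing / 2, n_points)
    rho = np.zeros_like(z)
    for h, (f, nu) in enumerate(zip(form_factors, phases), start=1):
        rho += nu * f * np.cos(2.0 * np.pi * h * z / d_spacing)
    return z, rho

# Made-up example: five diffraction orders of a d = 52 A bilayer stack.
F = [1.0, 0.6, 0.4, 0.25, 0.1]   # |F_h|, arbitrary units
nu = [-1, -1, +1, -1, +1]        # assumed sign combination
z, rho = density_profile(F, nu, d_spacing=52.0)
print(f"headgroup peak near z = {z[np.argmax(rho)]:.1f} A")
```

For a centrosymmetric stack the phase problem reduces to choosing the signs nu_h, and heavy-atom labeling of the kind applied here (iodine) is one way such sign ambiguities can be settled.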
Electronic supplementary material
The online version of this article (doi:10.1007/s00249-010-0645-4) contains supplementary material, which is available to authorized users.
doi:10.1007/s00249-010-0645-4
PMCID: PMC3070074  PMID: 21181143
Peptide lipid interactions; Membrane active peptides; Model helices; X-ray scattering; Hydrophobic mismatch; Lipid chain correlation
25.  Meaningful Interpretation of Subdiffusive Measurements in Living Cells (Crowded Environment) by Fluorescence Fluctuation Microscopy 
In a living cell or its nucleus, the motions of molecules are complicated by the large degree of crowding and the expected heterogeneity of the intracellular environment. Randomness in cellular systems can be either spatial (anomalous) or temporal (heterogeneous). In order to separate the two processes, we introduce anomalous random walks on fractals that represent crowded environments. We report the use of numerical simulation and of experimental single-molecule detection data from fluorescence fluctuation microscopy for detecting the resolution limits of different mobile fractions in the crowded environment of living cells. We simulate the time-scale behavior of the diffusion times τD(τ) for one component, e.g. the fast mobile fraction, and a second component, e.g. the slow mobile fraction. The smaller the anomalous exponent α, the higher the geometric crowding of the underlying structure of motion, which is quantified by the ratio of the Hausdorff dimension to the walk exponent, df/dw, and is specific to the type of crowding generator used. The simulated diffusion time decreases for smaller values of α ≠ 1 but increases for a larger time scale τ at a given value of α ≠ 1. The effect of translational anomalous motion is substantially greater if α differs much from 1; an α value close to 1 contributes little to the time dependence of subdiffusive motion. Thus, quantitative determinations of molecular weights from diffusion times and apparent diffusion coefficients measured in temporal auto- and crosscorrelation analyses and in time-dependent fluorescence imaging data are difficult to interpret and biased in the crowded environments of living cells and their cellular compartments; anomalous dynamics on different time scales τ must be coupled with a quantitative analysis of how experimental parameters change with the predictions of simulated subdiffusive dynamics of molecular motion and of mechanistic models. We first demonstrate that the crowding exponent α also determines the resolution of differences in diffusion times between two components, in addition to the photophysical parameters well known for normal motion in dilute solution. The resolution limit between two different kinds of single-molecule species is also analyzed under translational anomalous motion with broken ergodicity. We apply our theoretical predictions of diffusion times and of lower limits for the time resolution of two components to fluorescence images of human prostate cancer cells transfected with GFP-Ago2 and GFP-Ago1. In order to mimic heterogeneous behavior in the crowded environments of living cells, we introduce so-called continuous time random walks (CTRW), which were originally performed on regular lattices. This purely stochastic molecular behavior leads to subdiffusive motion with broken ergodicity in our simulations. For the first time, we are able to quantitatively differentiate between anomalous motion without broken ergodicity and anomalous motion with broken ergodicity in time-dependent fluorescence microscopy data sets of living cells. Since the experimental conditions for measuring one and the same molecule in living cells, or even in dilute solution, over the extended periods of time on which biology takes place are very restrictive, we perform the time average over a subpopulation of different single molecules of the same kind. For such time averages over subpopulations of single molecules, the temporal auto- and crosscorrelation functions are first obtained.
Knowing the crowding parameter α for the cell type or cellular compartment type, the heterogeneity parameter γ can be obtained from measurements in the presence of the interacting reaction partner, e.g. a ligand, with the same α value. The product α·γ = γ̃ is not a simple fitting parameter in the temporal auto- and two-color crosscorrelation functions, because it is related to proper physical models of anomalous (spatial) and heterogeneous (temporal) randomness in cellular systems. We have already derived an analytical solution for γ̃ in the special case of γ = 3/2. In the case of two-color crosscorrelation and/or two-color fluorescence imaging (co-localization experiments), the second component is also a two-color species gr, for example a different molecular complex with an additional ligand. Here, we first show that plausible biological mechanisms inferred from FCS/FCCS and fluorescence imaging in living cells are highly questionable without proper quantitative physical models of subdiffusive motion and temporal randomness. At best, such quantitative FCS/FCCS and fluorescence imaging data are difficult to interpret under crowded and heterogeneous conditions. The challenge is to translate proper physical models of anomalous (spatial) and heterogeneous (temporal) randomness in living cells and their cellular compartments, such as the nucleus, into biological models of the cell-biological process under study that are testable by single-molecule approaches. Otherwise, quantitative FCS/FCCS and fluorescence imaging measurements in living cells are not well described and cannot be interpreted in a meaningful way.
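To make the ergodicity distinction concrete, the following self-contained sketch (illustrative parameters only, not those used in the study) simulates one-dimensional CTRWs with power-law waiting times, a standard generator of subdiffusion with broken ergodicity, and contrasts the ensemble-averaged MSD with the scatter of time-averaged MSDs across single trajectories:

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHA = 0.7      # anomalous exponent, 0 < alpha < 1
N_STEPS = 2000   # jumps per trajectory
N_TRAJ = 200     # trajectories in the ensemble
T_MAX = 1e4      # observation time

def ctrw_trajectory():
    """1D CTRW: unit jumps separated by Pareto (power-law) waiting
    times, psi(t) ~ t^-(1+alpha); sampled on a regular time grid."""
    waits = rng.pareto(ALPHA, N_STEPS) + 1.0
    times = np.cumsum(waits)
    pos = np.cumsum(rng.choice([-1.0, 1.0], N_STEPS))
    grid = np.linspace(0, T_MAX, 500)
    # position on the grid = position after the last jump before t
    idx = np.searchsorted(times, grid) - 1
    return np.where(idx >= 0, pos[np.clip(idx, 0, None)], 0.0)

trajs = np.array([ctrw_trajectory() for _ in range(N_TRAJ)])

# Ensemble-averaged MSD grows sublinearly (~ t^alpha) ...
emsd = (trajs ** 2).mean(axis=0)
# ... while time-averaged MSDs of single trajectories scatter widely
# from trajectory to trajectory: the signature of broken ergodicity.
lag = 50
tamsd = ((trajs[:, lag:] - trajs[:, :-lag]) ** 2).mean(axis=1)
print("ensemble MSD at t_max:", emsd[-1])
print("time-averaged MSD spread:", tamsd.min(), tamsd.max())
```

The trajectory-to-trajectory scatter of the time-averaged MSD, despite identical dynamics, is the weak-ergodicity-breaking fingerprint discussed above.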
doi:10.2174/138920110791591454
PMCID: PMC3583073  PMID: 20553227
Anomalous motion; broken ergodicity; Continuous Time Random Walks (CTRW); Continuous Time Random Walks (CTRW) on fractal supports; cellular crowding; Cytoplasmic Assembly of Nuclear RISC; ergodicity; FCS; FCCS; Fluorescence Fluctuation Microscopy; GFP-Ago1; GFP-Ago2; heterogeneity; living cells; meaningful interpretation of subdiffusive measurements; microRNA trafficking; physical model of crowding; physical model of heterogeneity; random walks on fractal supports; resolution limits of measured diffusion times for two components; RNA Activation (RNAa); Single Molecule; Small Activating RNA (saRNA); Temporal autocorrelation; Temporal two-color crosscorrelation; Fluorescence imaging; Time dependence of apparent diffusion coefficients.
