1.  Flexible torsion-angle noncrystallographic symmetry restraints for improved macromolecular structure refinement 
Flexible torsion angle-based NCS restraints have been implemented in phenix.refine, allowing improved model refinement at all resolutions. Rotamer correction and rotamer consistency checks between NCS-related amino-acid side chains further improve the final model quality.
One of the great challenges in refining macromolecular crystal structures is a low data-to-parameter ratio. Historically, knowledge from chemistry has been used to help improve this ratio. When a macromolecule crystallizes with more than one copy in the asymmetric unit, the noncrystallographic symmetry (NCS) relationships can be exploited to provide additional restraints when refining the working model. However, although globally similar, NCS-related chains often have local differences. To allow for local differences between NCS-related molecules, flexible torsion-angle NCS restraints have been introduced, coupled with intelligent rotamer handling for protein chains, and are available in phenix.refine for refinement of models at all resolutions.
doi:10.1107/S1399004714003277
PMCID: PMC4014122  PMID: 24816103
macromolecular crystallography; noncrystallographic symmetry; NCS; refinement; automation
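As a rough illustration of the flexible torsion-angle NCS restraints described above, the sketch below restrains torsions between two NCS-related chains but drops pairs whose difference exceeds a threshold, so genuine local differences escape the penalty. This is a toy calculation, not the phenix.refine implementation; the 15° limit and unit weight are illustrative only.

```python
import numpy as np

def torsion_ncs_residual(angles_a, angles_b, limit=15.0, weight=1.0):
    """Sum of squared torsion-angle differences between NCS-related chains;
    pairs differing by more than `limit` degrees are excluded so that
    genuine local deviations are not forced together."""
    # Wrap differences into [-180, 180) before comparing.
    delta = (np.asarray(angles_a) - np.asarray(angles_b) + 180.0) % 360.0 - 180.0
    restrained = np.abs(delta) < limit       # keep only locally similar torsions
    return weight * np.sum(delta[restrained] ** 2)

# Two copies that agree except for one flipped side-chain torsion:
chain_a = [-60.0, -45.0, 180.0, 65.0]
chain_b = [-58.0, -47.0, 178.0, -70.0]      # last torsion differs by ~135 degrees
print(torsion_ncs_residual(chain_a, chain_b))  # 12.0; the outlier is excluded
```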
2.  Automated identification of elemental ions in macromolecular crystal structures 
The solvent-picking procedure in phenix.refine has been extended and combined with Phaser anomalous substructure completion and analysis of coordination geometry to identify and place elemental ions.
Many macromolecular model-building and refinement programs can automatically place solvent atoms in electron density at moderate-to-high resolution. This process frequently builds water molecules in place of elemental ions, the identification of which must be performed manually. The solvent-picking algorithms in phenix.refine have been extended to build common ions based on an analysis of the chemical environment as well as physical properties such as occupancy, B factor and anomalous scattering. The method is most effective for heavier elements such as calcium and zinc, for which a majority of sites can be placed with few false positives in a diverse test set of structures. At atomic resolution, it is observed that it can also be possible to identify tightly bound sodium and magnesium ions. A number of challenges that contribute to the difficulty of completely automating the process of structure completion are discussed.
doi:10.1107/S1399004714001308
PMCID: PMC3975891  PMID: 24699654
refinement; ions; PHENIX
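The chemical-environment analysis described above can be caricatured with a few decision rules. The cutoffs below are rough textbook coordination values, not the criteria used by phenix.refine, and the Site fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Site:
    coordination: int   # number of inner-shell contacts
    mean_dist: float    # mean contact distance (angstroms)
    b_factor: float     # isotropic B refined as a water
    anom_peak: float    # anomalous difference peak height (sigma)

def classify(site: Site) -> str:
    """Assign an element from coordination geometry and anomalous signal."""
    if site.anom_peak > 5.0 and site.coordination == 4 and site.mean_dist < 2.4:
        return "ZN"     # tetrahedral, short contacts, strong anomalous peak
    if site.coordination >= 6 and 2.0 <= site.mean_dist <= 2.2:
        return "MG"     # tight octahedral coordination
    if site.coordination >= 6 and 2.3 <= site.mean_dist <= 2.5:
        return "CA"
    return "HOH"        # default: keep it as water

print(classify(Site(coordination=4, mean_dist=2.3, b_factor=12.0, anom_peak=8.1)))
```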
3.  Automating crystallographic structure solution and refinement of protein–ligand complexes 
A software system for automated protein–ligand crystallography has been implemented in the Phenix suite. This significantly reduces the manual effort required in high-throughput crystallographic studies.
High-throughput drug-discovery and mechanistic studies often require the determination of multiple related crystal structures that only differ in the bound ligands, point mutations in the protein sequence and minor conformational changes. If performed manually, solution and refinement requires extensive repetition of the same tasks for each structure. To accelerate this process and minimize manual effort, a pipeline encompassing all stages of ligand building and refinement, starting from integrated and scaled diffraction intensities, has been implemented in Phenix. The resulting system is able to successfully solve and refine large collections of structures in parallel without extensive user intervention prior to the final stages of model completion and validation.
doi:10.1107/S139900471302748X
PMCID: PMC3919266  PMID: 24419387
protein–ligand complexes; automation; crystallographic structure solution and refinement
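The orchestration pattern behind such a pipeline can be sketched as below: one per-dataset function chains the stages, and a process pool fans it out over many crystals. The stage functions and file names here are stubs, not the actual Phenix pipeline components.

```python
from concurrent.futures import ProcessPoolExecutor

# Stub stages standing in for molecular replacement, ligand fitting and
# refinement; each takes and returns a simple dict "model".
def place_model(ds):   return {"dataset": ds, "placed": True}
def fit_ligand(m):     m["ligand"] = "LIG"; return m
def refine(m):         m["rfree"] = 0.25; return m

def solve_one(ds):
    """Run the whole per-dataset pipeline with no user intervention."""
    return refine(fit_ligand(place_model(ds)))

if __name__ == "__main__":
    datasets = [f"xtal_{i:03d}.mtz" for i in range(8)]   # hypothetical files
    with ProcessPoolExecutor() as pool:                  # solve in parallel
        results = list(pool.map(solve_one, datasets))
    print(results[0])
```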
4.  Model morphing and sequence assignment after molecular replacement 
A procedure for model building is described that combines morphing a model to match a density map, trimming the morphed model and aligning the model to a sequence.
A procedure termed ‘morphing’ for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.
doi:10.1107/S0907444913017770
PMCID: PMC3817698  PMID: 24189236
morphing; model building; sequence assignment; model–map correlation; loop-building
5.  Modelling dynamics in protein crystal structures by ensemble refinement 
eLife  2012;1:e00311.
Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation–libration–screw (TLS) model. Modeling of 20 protein datasets at 1.1–3.1 Å resolution reduced cross-validated Rfree values by 0.3–4.9%, indicating that ensemble models fit the X-ray data better than single structures. The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a ‘molten core’ likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order–disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure–dynamics–function relationships.
eLife digest
It has been clear since the early days of structural biology in the late 1950s that proteins and other biomolecules are continually changing shape, and that these changes have an important influence on both the structure and function of the molecules. X-ray diffraction can provide detailed information about the structure of a protein, but only limited information about how its structure fluctuates over time. Detailed information about the dynamic behaviour of proteins is essential for a proper understanding of a variety of processes, including catalysis, ligand binding and protein–protein interactions, and could also prove useful in drug design.
Currently most of the X-ray crystal structures in the Protein Data Bank are ‘snap-shots’ with limited or no information about protein dynamics. However, X-ray diffraction patterns are affected by the dynamics of the protein, and also by distortions of the crystal lattice, so three-dimensional (3D) models of proteins ought to take these phenomena into account. Molecular-dynamics (MD) computer simulations transform 3D structures into 4D ‘molecular movies’ by predicting the movement of individual atoms.
Combining MD simulations with crystallographic data has the potential to produce more realistic ensemble models of proteins in which the atomic fluctuations are represented by multiple structures within the ensemble. Moreover, in addition to improved structural information, this process—which is called ensemble refinement—can provide dynamical information about the protein. Earlier attempts to do this ran into problems because the number of model parameters needed was greater than the number of observed data points. Burnley et al. now overcome this problem by modelling local molecular vibrations with MD simulations and, at the same time, using a coarse-grained model to describe global disorder on longer length scales.
Ensemble refinement of high-resolution X-ray diffraction datasets for 20 different proteins from the Protein Data Bank produced a better fit to the data than single structures for all 20 proteins. Ensemble refinement also revealed that 3 of the 20 proteins had a ‘molten core’, rather than the well-ordered core found in most proteins: this is likely to be important in various biological functions including ligand binding, filament formation and enzymatic function. Burnley et al. also showed that an HIV enzyme underwent an order–disorder transition that is likely to influence how this enzyme works, and that similar transitions might influence the interactions between the small-molecule drug Imatinib (also known as Gleevec) and the enzymes it targets. Ensemble refinement could be applied to the majority of crystallography data currently being collected, or collected in the past, so further insights into the properties and interactions of a variety of proteins and other biomolecules can be expected.
doi:10.7554/eLife.00311
PMCID: PMC3524795  PMID: 23251785
protein; crystallography; structure; function; dynamics
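A minimal sketch of the time-averaged restraint at the heart of ensemble refinement: the calculated amplitudes compared against the data are an exponentially decaying average over MD frames rather than a single-structure value. Using amplitudes instead of complex structure factors, and the 0.9 memory constant, are simplifications for illustration.

```python
import numpy as np

def ensemble_r_factor(f_obs, f_calc_frames, memory=0.9):
    """R factor against an exponentially time-averaged |Fcalc| accumulated
    over MD frames; `memory` stands in for the relaxation time."""
    f_avg = np.zeros_like(f_obs)
    for f_calc in f_calc_frames:               # one |Fcalc| array per frame
        f_avg = memory * f_avg + (1.0 - memory) * np.abs(f_calc)
    scale = np.sum(f_obs * f_avg) / np.sum(f_avg ** 2)
    return np.sum(np.abs(f_obs - scale * f_avg)) / np.sum(f_obs)

rng = np.random.default_rng(0)
f_obs = rng.uniform(1.0, 10.0, 100)
frames = [f_obs + rng.normal(0.0, 0.5, 100) for _ in range(50)]
print(ensemble_r_factor(f_obs, frames))        # averaging over frames lowers R
```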
6.  The Phenix Software for Automated Determination of Macromolecular Structures 
Methods (San Diego, Calif.)  2011;55(1):94-106.
X-ray crystallography is a critical tool in the study of biological systems. It is able to provide information that has been a prerequisite to understanding the fundamentals of life. It is also a method that is central to the development of new therapeutics for human disease. Significant time and effort are required to determine and optimize many macromolecular structures because of the need for manual interpretation of complex numerical data, often using many different software packages, and the repeated use of interactive three-dimensional graphics. The Phenix software package has been developed to provide a comprehensive system for macromolecular crystallographic structure solution with an emphasis on automation. This has required the development of new algorithms that minimize or eliminate subjective input in favour of built-in expert-systems knowledge, the automation of procedures that are traditionally performed by hand, and the development of a computational framework that allows a tight integration between the algorithms. The application of automated methods is particularly appropriate in the field of structural proteomics, where high throughput is desired. Features in Phenix for the automation of experimental phasing with subsequent model building, molecular replacement, structure refinement and validation are described and examples given of running Phenix from both the command line and graphical user interface.
doi:10.1016/j.ymeth.2011.07.005
PMCID: PMC3193589  PMID: 21821126
Macromolecular Crystallography; Automation; Phenix; X-ray; Diffraction; Python
7.  Improved crystallographic models through iterated local density-guided model deformation and reciprocal-space refinement 
A density-based procedure is described for improving a homology model that is locally accurate but differs globally. The model is deformed to match the map and refined, yielding an improved starting point for density modification and further model-building.
An approach is presented for addressing the challenge of model rebuilding after molecular replacement in cases where the placed template is very different from the structure to be determined. The approach takes advantage of the observation that a template and target structure may have local structures that can be superimposed much more closely than can their complete structures. A density-guided procedure for deformation of a properly placed template is introduced. A shift in the coordinates of each residue in the structure is calculated based on optimizing the match of model density within a 6 Å radius of the center of that residue with a prime-and-switch electron-density map. The shifts are smoothed and applied to the atoms in each residue, leading to local deformation of the template that improves the match of map and model. The model is then refined to improve the geometry and the fit of model to the structure-factor data. A new map is then calculated and the process is repeated until convergence. The procedure can extend the routine applicability of automated molecular replacement, model building and refinement to search models with over 2 Å r.m.s.d. representing 65–100% of the structure.
doi:10.1107/S0907444912015636
PMCID: PMC3388814  PMID: 22751672
molecular replacement; automation; macromolecular crystallography; structure similarity; modeling; Phenix; morphing
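The 'smooth the shifts, then apply them' step of the morphing procedure above can be illustrated in a few lines. The density-matching step that would produce the raw per-residue shifts is omitted, and the five-residue window is an arbitrary choice for the sketch.

```python
import numpy as np

def morph_shifts(raw_shifts, window=5):
    """Smooth per-residue coordinate shifts along the chain before applying
    them, so the template deforms locally but stays continuous (the 6 A
    density-matching step that produces `raw_shifts` is omitted here)."""
    raw = np.asarray(raw_shifts, dtype=float)        # shape (n_residues, 3)
    kernel = np.ones(window) / window
    smooth = np.empty_like(raw)
    for axis in range(3):                            # smooth x, y, z separately
        smooth[:, axis] = np.convolve(raw[:, axis], kernel, mode="same")
    return smooth

rng = np.random.default_rng(1)
shifts = rng.normal(0.0, 1.0, size=(20, 3))          # noisy per-residue shifts
print(morph_shifts(shifts)[:3])
```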
8.  Graphical tools for macromolecular crystallography in PHENIX  
Journal of Applied Crystallography  2012;45(Pt 3):581-586.
The foundations and current features of a widely used graphical user interface for macromolecular crystallography are described.
A new Python-based graphical user interface for the PHENIX suite of crystallography software is described. This interface unifies the command-line programs and their graphical displays, simplifying the development of new interfaces and avoiding duplication of function. With careful design, graphical interfaces can be displayed automatically, instead of being manually constructed. The resulting package is easily maintained and extended as new programs are added or modified.
doi:10.1107/S0021889812017293
PMCID: PMC3359726  PMID: 22675231
macromolecular crystallography; graphical user interfaces; PHENIX
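The design point above, generating interfaces from parameter definitions rather than hand-building them, can be mimicked with a toy parameter spec. The dict-based spec and widget names below are hypothetical; PHENIX itself derives its displays from its own parameter language.

```python
# Toy parameter spec; a real system would parse this from program metadata.
PARAMS = {"resolution": {"type": float, "default": 2.0},
          "use_ncs":    {"type": bool,  "default": False}}

def build_form(params):
    """Map each parameter type to a widget kind automatically, so adding a
    parameter to a program adds a control to its GUI with no UI code."""
    widget_for = {float: "number_field", bool: "checkbox", str: "text_field"}
    return {name: (widget_for[spec["type"]], spec["default"])
            for name, spec in params.items()}

print(build_form(PARAMS))
```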
9.  Towards automated crystallographic structure refinement with phenix.refine  
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods.
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. It has several automation features and is also highly flexible. Several hundred parameters enable extensive customizations for complex use cases. Multiple user-defined refinement strategies can be applied to specific parts of the model in a single refinement run. An intuitive graphical user interface is available to guide novice users and to assist advanced users in managing refinement projects. X-ray or neutron diffraction data can be used separately or jointly in refinement. phenix.refine is tightly integrated into the PHENIX suite, where it serves as a critical component in automated model building, final structure refinement, structure validation and deposition to the wwPDB. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods.
doi:10.1107/S0907444912001308
PMCID: PMC3322595  PMID: 22505256
structure refinement; PHENIX; joint X-ray/neutron refinement; maximum likelihood; TLS; simulated annealing; subatomic resolution; real-space refinement; twinning; NCS
10.  Use of knowledge-based restraints in phenix.refine to improve macromolecular refinement at low resolution 
Recent developments in PHENIX are reported that allow the use of reference-model torsion restraints, secondary-structure hydrogen-bond restraints and Ramachandran restraints for improved macromolecular refinement in phenix.refine at low resolution.
Traditional methods for macromolecular refinement often have limited success at low resolution (3.0–3.5 Å or worse), producing models that score poorly on crystallographic and geometric validation criteria. To improve low-resolution refinement, knowledge from macromolecular chemistry and homology was used to add three new coordinate-restraint functions to the refinement program phenix.refine. Firstly, a ‘reference-model’ method uses an identical or homologous higher resolution model to add restraints on torsion angles to the geometric target function. Secondly, automatic restraints for common secondary-structure elements in proteins and nucleic acids were implemented that can help to preserve the secondary-structure geometry, which is often distorted at low resolution. Lastly, we have implemented Ramachandran-based restraints on the backbone torsion angles. In this method, a ϕ,ψ term is added to the geometric target function to minimize a modified Ramachandran landscape that smoothly combines favorable peaks identified from nonredundant high-quality data with unfavorable peaks calculated using a clash-based pseudo-energy function. All three methods show improved MolProbity validation statistics, typically complemented by a lowered Rfree and a decreased gap between Rwork and Rfree.
doi:10.1107/S0907444911047834
PMCID: PMC3322597  PMID: 22505258
macromolecular crystallography; low resolution; refinement; automation
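The Ramachandran term can be pictured as an extra ϕ,ψ energy added to the geometric target. The single Gaussian well below, centred near the α-helical region, only shows the shape of the idea; the actual landscape combines peaks from high-quality data with clash-based penalties.

```python
import numpy as np

def rama_energy(phi, psi, centre=(-63.0, -43.0), width=20.0, weight=1.0):
    """Toy phi/psi pseudo-energy: one favourable well; low near the centre,
    rising toward 1 elsewhere. Centre/width values are illustrative."""
    dphi = (phi - centre[0] + 180.0) % 360.0 - 180.0
    dpsi = (psi - centre[1] + 180.0) % 360.0 - 180.0
    return weight * (1.0 - np.exp(-(dphi**2 + dpsi**2) / (2.0 * width**2)))

def target_with_rama(geometry_term, phis, psis, weight=1.0):
    """Add the Ramachandran term to an existing geometric target."""
    return geometry_term + sum(rama_energy(p, s, weight=weight)
                               for p, s in zip(phis, psis))

# First residue sits in the well (small penalty); second does not.
print(target_with_rama(0.0, [-60.0, -120.0], [-45.0, 130.0]))
```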
11.  phenix.mr_rosetta: molecular replacement and model rebuilding with Phenix and Rosetta 
The combination of algorithms from the structure-modeling field with those of crystallographic structure determination can broaden the range of templates that are useful for structure determination by the method of molecular replacement. Automated tools in phenix.mr_rosetta simplify the application of these combined approaches by integrating Phenix crystallographic algorithms and Rosetta structure-modeling algorithms and by systematically generating and evaluating models with a combination of these methods. The phenix.mr_rosetta algorithms can be used to automatically determine challenging structures. The approaches used in phenix.mr_rosetta are described along with examples that show roles that structure-modeling can play in molecular replacement.
doi:10.1007/s10969-012-9129-3
PMCID: PMC3375004  PMID: 22418934
Molecular replacement; Automation; Macromolecular crystallography; Rosetta; Phenix
12.  Joint X-ray and neutron refinement with phenix.refine  
The implementation of crystallographic structure-refinement procedures that include both X-ray and neutron data (separate or jointly) in the PHENIX system is described.
Approximately 85% of the structures deposited in the Protein Data Bank have been solved using X-ray crystallography, making it the leading method for three-dimensional structure determination of macromolecules. One of the limitations of the method is that the typical data quality (resolution) does not allow the direct determination of H-atom positions. Most hydrogen positions can be inferred from the positions of other atoms and therefore can be readily included into the structure model as a priori knowledge. However, this may not be the case in biologically active sites of macromolecules, where the presence and position of hydrogen is crucial to the enzymatic mechanism. This makes the application of neutron crystallography in biology particularly important, as H atoms can be clearly located in experimental neutron scattering density maps. Without exception, when a neutron structure is determined the corresponding X-ray structure is also known, making it possible to derive the complete structure using both data sets. Here, the implementation of crystallographic structure-refinement procedures that include both X-ray and neutron data (separate or jointly) in the PHENIX system is described.
doi:10.1107/S0907444910026582
PMCID: PMC2967420  PMID: 21041930
structure refinement; neutrons; joint X-ray and neutron refinement; PHENIX
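Joint refinement reduces, schematically, to minimizing one weighted sum over both experiments plus the geometry restraints. The least-squares data terms below stand in for the maximum-likelihood targets used in practice, and the weights are illustrative:

```python
import numpy as np

def lsq_term(f_obs, f_calc):
    """Simple least-squares data term per data set, with a fitted linear
    scale (a stand-in for the maximum-likelihood targets used in practice)."""
    scale = np.sum(f_obs * f_calc) / np.sum(f_calc ** 2)
    return np.sum((f_obs - scale * f_calc) ** 2)

def joint_target(fx_obs, fx_calc, fn_obs, fn_calc, geometry, wx=1.0, wn=1.0):
    """One model, two experiments: weighted sum of the X-ray and neutron
    residuals plus the geometry restraints."""
    return wx * lsq_term(fx_obs, fx_calc) + wn * lsq_term(fn_obs, fn_calc) + geometry

rng = np.random.default_rng(2)
fx, fn = rng.uniform(1, 10, 50), rng.uniform(1, 10, 40)
print(joint_target(fx, fx * 1.1, fn, fn * 0.9, geometry=0.0))  # ~0: scale absorbed
```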
13.  Coupling of Receptor Conformation and Ligand Orientation Determine Graded Activity 
Nature chemical biology  2010;6(11):837-843.
Small molecules stabilize specific protein conformations from a larger ensemble, enabling molecular switches that control diverse cellular functions. We show here that the converse also holds true, where the conformational state of the estrogen receptor can direct distinct orientations of the bound ligand. “Gain of allostery” mutations that mimic the effects of ligand in driving protein conformation allowed crystallization of the partial agonist ligand WAY-169916 with both the canonical active and inactive conformations of the estrogen receptor. The intermediate transcriptional activity induced by WAY-169916 is associated with the ligand binding differently to the active and inactive conformations of the receptor. Analyses of a series of chemical derivatives demonstrated that altering the ensemble of ligand binding orientations changes signaling output. The coupling of different ligand binding orientations to distinct active and inactive protein conformations defines a novel mechanism for titrating allosteric signaling activity.
doi:10.1038/nchembio.451
PMCID: PMC2974172  PMID: 20924370
14.  Recent developments in phasing and structure refinement for macromolecular crystallography 
Central to crystallographic structure solution is obtaining accurate phases in order to build a molecular model, ultimately followed by refinement of that model to optimize its fit to the experimental diffraction data and prior chemical knowledge. Recent advances in phasing and model refinement and validation algorithms make it possible to arrive at better electron density maps and more accurate models.
doi:10.1016/j.sbi.2009.07.014
PMCID: PMC2763973  PMID: 19700309
15.  Evidence of Functional Protein Dynamics from X-Ray Crystallographic Ensembles 
PLoS Computational Biology  2010;6(8):e1000911.
It is widely recognized that representing a protein as a single static conformation is inadequate to describe the dynamics essential to the performance of its biological function. We contrast the amino acid displacements below and above the protein dynamical transition temperature, TD ∼ 215 K, of hen egg white lysozyme using X-ray crystallography ensembles that are analyzed by molecular dynamics simulations as a function of temperature. We show that measuring structural variations across an ensemble of X-ray derived models captures the activation of conformational states that are of functional importance just above TD, and they remain virtually identical to structural motions measured at 300 K. Our results highlight the ability to observe functional structural variations across an ensemble of X-ray crystallographic data, and that residue fluctuations measured in MD simulations at room temperature are in quantitative agreement with the experimental observable.
Author Summary
There is a well-recognized gap between the dynamical motions of proteins required to execute function and the experimental techniques capable of capturing that motion at the atomic level. We show that much experimental detail of dynamical motion is already present in X-ray crystallographic data: structures solved by different research groups using different methodologies under different crystallization conditions together capture an ensemble of structures whose variations can be quantified on a residue-by-residue level using local density correlations. We contrast the amino acid displacements below and above the protein dynamical transition temperature, TD ∼ 215 K, of hen egg white lysozyme by comparing the X-ray ensemble to MD ensembles as a function of temperature. We show that measuring structural variations across an ensemble of X-ray derived models captures the activation of conformational states that are of functional importance just above TD, and that these remain virtually identical to structural motions measured at 300 K. This approach provides a novel analysis of large X-ray ensembles that should be useful to the broader structural biology community.
doi:10.1371/journal.pcbi.1000911
PMCID: PMC2928775  PMID: 20865158
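Quantifying structural variation across an ensemble of aligned models is essentially a per-residue fluctuation calculation, sketched below (superposition of the models is assumed to have been done already):

```python
import numpy as np

def per_residue_rmsf(coords):
    """Root-mean-square fluctuation of each residue (e.g. its CA atom)
    across an ensemble of aligned models; coords has shape
    (n_models, n_residues, 3)."""
    coords = np.asarray(coords)
    mean = coords.mean(axis=0)                       # average position
    return np.sqrt(((coords - mean) ** 2).sum(axis=2).mean(axis=0))

rng = np.random.default_rng(3)
base = rng.uniform(0, 10, size=(5, 3))               # 5 residues
noise = rng.normal(0, 0.2, size=(10, 5, 3))          # 10 models, rigid core
noise[:, 3, :] = rng.normal(0, 1.5, size=(10, 3))    # residue 4 is mobile
print(per_residue_rmsf(base + noise))                # residue 4 stands out
```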
16.  phenix.model_vs_data: a high-level tool for the calculation of crystallographic model and data statistics 
Journal of Applied Crystallography  2010;43(Pt 4):669-676.
Application of phenix.model_vs_data to the contents of the Protein Data Bank shows that the vast majority of deposited structures can be automatically analyzed to reproduce the reported quality statistics. However, the small fraction of structures that elude automated re-analysis highlight areas where new software developments can help retain valuable information for future analysis.
phenix.model_vs_data is a high-level command-line tool for the computation of crystallographic model and data statistics, and the evaluation of the fit of the model to data. Analysis of all Protein Data Bank structures that have experimental data available shows that in most cases the reported statistics, in particular R factors, can be reproduced within a few percentage points. However, there are a number of outliers where the recomputed R values are significantly different from those originally reported. The reasons for these discrepancies are discussed.
doi:10.1107/S0021889810015608
PMCID: PMC2906258  PMID: 20648263
PHENIX; Protein Data Bank; data quality; model quality; structure validation; R factors
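The headline statistic recomputed by such a tool is the conventional R factor. A bare-bones version with a single linear scale (omitting the bulk-solvent and anisotropic scaling a real calculation applies) looks like this:

```python
import numpy as np

def r_factor(f_obs, f_calc):
    """R = sum |Fobs - k*Fcalc| / sum Fobs, with k a fitted linear scale."""
    f_obs, f_calc = np.asarray(f_obs), np.abs(np.asarray(f_calc))
    k = np.sum(f_obs * f_calc) / np.sum(f_calc ** 2)
    return np.sum(np.abs(f_obs - k * f_calc)) / np.sum(f_obs)

rng = np.random.default_rng(4)
f_obs = rng.uniform(1, 100, 1000)
print(r_factor(f_obs, f_obs * 2.0))                      # perfect model: R = 0
print(r_factor(f_obs, f_obs + rng.normal(0, 10, 1000)))  # noisy model: R > 0
```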
17.  PHENIX: a comprehensive Python-based system for macromolecular structure solution 
The PHENIX software for macromolecular structure determination is described.
Macromolecular X-ray crystallography is routinely applied to understand biological processes at a molecular level. How­ever, significant time and effort are still required to solve and complete many of these structures because of the need for manual interpretation of complex numerical data using many software packages and the repeated use of interactive three-dimensional graphics. PHENIX has been developed to provide a comprehensive system for macromolecular crystallo­graphic structure solution with an emphasis on the automation of all procedures. This has relied on the development of algorithms that minimize or eliminate subjective input, the development of algorithms that automate procedures that are traditionally performed by hand and, finally, the development of a framework that allows a tight integration between the algorithms.
doi:10.1107/S0907444909052925
PMCID: PMC2815670  PMID: 20124702
PHENIX; Python; macromolecular crystallography; algorithms
18.  On the use of logarithmic scales for analysis of diffraction data 
Conventional and free R factors and their difference, as well as the ratio of the number of measured reflections to the number of atoms in the crystal, were studied as functions of the resolution at which the structures were reported. When the resolution was taken uniformly on a logarithmic scale, the most frequent values of these functions were quasi-linear over a large resolution range.
Predictions of the possible model parameterization and of the values of model characteristics such as R factors are important for macromolecular refinement and validation protocols. One of the key parameters defining these and other values is the resolution of the experimentally measured diffraction data. The higher the resolution, the larger the number of diffraction data Nref, the larger its ratio to the number Nat of non-H atoms, the more parameters per atom can be used for modelling and the more precise and detailed a model can be obtained. The ratio Nref/Nat was calculated for models deposited in the Protein Data Bank as a function of the resolution at which the structures were reported. The most frequent values for this distribution depend essentially linearly on resolution when the latter is expressed on a uniform logarithmic scale. This defines simple analytic formulae for the typical Matthews coefficient and for the typically allowed number of parameters per atom for crystals diffracting to a given resolution. This simple dependence makes it possible in many cases to estimate the expected resolution of the experimental data for a crystal with a given Matthews coefficient. When expressed using the same logarithmic scale, the most frequent values for R and Rfree factors and for their difference are also essentially linear across a large resolution range. The minimal R-factor values are practically constant at resolutions better than 3 Å, below which they begin to grow sharply. This simple dependence on the resolution allows the prediction of expected R-factor values for unknown structures and may be used to guide model refinement and validation.
doi:10.1107/S0907444909039638
PMCID: PMC2789003  PMID: 19966414
resolution; logarithmic scale; R factor; data-to-parameter ratio
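The paper's empirical observation reduces to a simple functional form: typical values of quantities such as Nref/Nat or Rfree are linear in log d, with d the resolution in Å. The slope and intercept below are made-up placeholders showing the form, not the fitted constants:

```python
import numpy as np

def typical_value(d, slope, intercept):
    """Most-frequent value of a quantity as a linear function of log(d);
    slope/intercept would be fitted to PDB-wide distributions."""
    return slope * np.log(d) + intercept

for d in (1.0, 1.5, 2.0, 3.0):
    # e.g. an Nref/Nat-like trend: more data per atom at higher resolution
    print(d, typical_value(d, slope=-8.0, intercept=10.0))
```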
19.  Averaged kick maps: less noise, more signal…and probably less bias 
Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation.
Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σA) weighted. Analysis shows that they are comparable and correspond better to the final model than σA and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
doi:10.1107/S0907444909021933
PMCID: PMC2733881  PMID: 19690370
kick maps; OMIT maps; density-map calculation; model bias; maximum likelihood
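The averaged-kick recipe is direct to express: perturb the coordinates randomly, recompute the map, repeat, and average. Below, a 1D Gaussian-blob 'map calculation' stands in for the structure-factor/FFT step; the kick amplitude is illustrative.

```python
import numpy as np

def averaged_kick_map(coords, map_from_model, n_kicks=50, amplitude=0.3,
                      rng=None):
    """Average the maps computed from many randomly 'kicked' copies of the
    model; `map_from_model` can be any function coords -> density array."""
    rng = rng or np.random.default_rng()
    total = None
    for _ in range(n_kicks):
        kicked = coords + rng.uniform(-amplitude, amplitude, coords.shape)
        m = map_from_model(kicked)
        total = m if total is None else total + m
    return total / n_kicks

grid = np.linspace(0, 10, 200)
def toy_map(coords):
    # One Gaussian blob per "atom" on a 1D grid.
    return sum(np.exp(-(grid - x) ** 2 / 0.1) for x in coords.ravel())

ak = averaged_kick_map(np.array([[3.0], [7.0]]), toy_map)
print(float(ak.max()))   # slightly broadened, averaged peaks
```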
20.  Automatic multiple-zone rigid-body refinement with a large convergence radius 
Journal of Applied Crystallography  2009;42(Pt 4):607-615.
Systematic investigation of a large number of trial rigid-body refinements leads to an optimized multiple-zone protocol with a larger convergence radius.
Rigid-body refinement is the constrained coordinate refinement of one or more groups of atoms that each move (rotate and translate) as a single body. The goal of this work was to establish an automatic procedure for rigid-body refinement which implements a practical compromise between runtime requirements and convergence radius. This has been achieved by analysis of a large number of trial refinements for 12 classes of random rigid-body displacements (that differ in magnitude of introduced errors), using both least-squares and maximum-likelihood target functions. The results of these tests led to a multiple-zone protocol. The final parameterization of this protocol was optimized empirically on the basis of a second large set of test refinements. This multiple-zone protocol is implemented as part of the phenix.refine program.
doi:10.1107/S0021889809023528
PMCID: PMC2712840  PMID: 19649324
rigid-body refinement; multiple-zone protocols
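The multiple-zone idea can be sketched as refinement against progressively higher-resolution zones of the data, which is what buys the large convergence radius. The zone limits and toy refinement step below are illustrative, not the optimized phenix.refine protocol:

```python
def multiple_zone_refine(refine_step, d_min_zones=(8.0, 6.0, 4.5, 3.5)):
    """Run rigid-body refinement against coarse data first, then add
    progressively higher-resolution zones."""
    params = None
    for d_min in d_min_zones:                 # coarse -> fine
        params = refine_step(params, d_min)   # refine against data to d_min
    return params

def toy_step(params, d_min):
    residual = 0.5 * (params or 1.0)          # pretend each zone halves the error
    print(f"zone to {d_min} A: residual {residual:.3f}")
    return residual

multiple_zone_refine(toy_step)
```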
21.  Decision-making in structure solution using Bayesian estimates of map quality: the PHENIX AutoSol wizard 
Ten measures of experimental electron-density-map quality are examined and the skewness of electron density is found to be the best indicator of actual map quality. A Bayesian approach to estimating map quality is developed and used in the PHENIX AutoSol wizard to make decisions during automated structure solution.
Estimates of the quality of experimental maps are important in many stages of structure determination of macromolecules. Map quality is defined here as the correlation between a map and the corresponding map obtained using phases from the final refined model. Here, ten different measures of experimental map quality were examined using a set of 1359 maps calculated by re-analysis of 246 solved MAD, SAD and MIR data sets. A simple Bayesian approach to estimation of map quality from one or more measures is presented. It was found that a Bayesian estimator based on the skewness of the density values in an electron-density map is the most accurate of the ten individual Bayesian estimators of map quality examined, with a correlation between estimated and actual map quality of 0.90. A combination of the skewness of electron density with the local correlation of r.m.s. density gives a further improvement in estimating map quality, with an overall correlation coefficient of 0.92. The PHENIX AutoSol wizard carries out automated structure solution based on any combination of SAD, MAD, SIR or MIR data sets. The wizard is based on tools from the PHENIX package and uses the Bayesian estimates of map quality described here to choose the highest quality solutions after experimental phasing.
doi:10.1107/S0907444909012098
PMCID: PMC2685735  PMID: 19465773
structure solution; scoring; Protein Data Bank; phasing; decision-making; PHENIX; experimental electron-density maps
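The best single map-quality indicator found in this work, the skewness of the density distribution, is cheap to compute: good experimental maps have a long positive tail of strong peaks over a flat solvent background, so the third moment rises with map quality. A sketch on synthetic densities:

```python
import numpy as np

def map_skewness(density):
    """Skewness (standardized third moment) of the map's density values."""
    rho = np.asarray(density).ravel()
    rho = (rho - rho.mean()) / rho.std()
    return np.mean(rho ** 3)

rng = np.random.default_rng(5)
noise_map = rng.normal(size=100_000)                 # pure noise: skew ~ 0
good_map = np.where(rng.random(100_000) < 0.1,       # 10% strong positive peaks
                    rng.exponential(3.0, 100_000),
                    rng.normal(size=100_000))
print(map_skewness(noise_map), map_skewness(good_map))
```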
22.  Generalized X-ray and neutron crystallographic analysis: more accurate and complete structures for biological macromolecules 
X-ray and neutron crystallographic data have been combined in a joint structure-refinement procedure that has been developed using recent advances in modern computational methodologies, including cross-validated maximum-likelihood target functions with gradient-based optimization and simulated annealing.
X-ray and neutron crystallographic techniques provide complementary information on the structure and function of biological macromolecules. X-ray and neutron (XN) crystallographic data have been combined in a joint structure-refinement procedure that has been developed using recent advances in modern computational methodologies, including cross-validated maximum-likelihood target functions with gradient-based optimization and simulated annealing. The XN approach for complete (including hydrogen) macromolecular structure analysis provides more accurate and complete structures, as demonstrated for diisopropyl fluorophosphatase, photoactive yellow protein and human aldose reductase. Furthermore, this method has several practical advantages, including the easier determination of the orientation of water molecules, hydroxyl groups and some amino-acid side chains.
doi:10.1107/S0907444909011548
PMCID: PMC2685734  PMID: 19465771
joint X-ray and neutron crystallography; structure refinement
23.  Crystallographic model quality at a glance 
The representation of crystallographic model characteristics in the form of a polygon allows the quick comparison of a model with a set of previously solved structures.
A crystallographic macromolecular model is typically characterized by a list of quality criteria, such as R factors, deviations from ideal stereochemistry and average B factors, which are usually provided as tables in publications or in structural databases. In order to facilitate a quick model-quality evaluation, a graphical representation is proposed. Each key parameter such as R factor or bond-length deviation from ‘ideal values’ is shown graphically as a point on a ‘ruler’. These rulers are plotted as a set of lines with the same origin, forming a hub and spokes. Different parts of the rulers are coloured differently to reflect the frequency (red for a low frequency, blue for a high frequency) with which the corresponding values are observed in a reference set of structures determined previously. The points for a given model marked on these lines are connected to form a polygon. A polygon that is strongly compressed or dilated along some axes reveals unusually low or high values of the corresponding characteristics. Polygon vertices in ‘red zones’ indicate parameters which lie outside typical values.
doi:10.1107/S0907444908044296
PMCID: PMC2651759  PMID: 19237753
model quality; PDB; validation; refinement; PHENIX
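The polygon itself is straightforward to draw once each metric is normalized onto its 'ruler'. The reference ranges below are invented for the sketch; in the paper they come from distributions over previously solved structures.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical metrics and reference ranges (low, high).
metrics = {"Rwork": (0.12, 0.35), "Rfree": (0.15, 0.40),
           "bond rmsd": (0.002, 0.030), "angle rmsd": (0.5, 3.0),
           "mean B": (5.0, 80.0)}
model = {"Rwork": 0.19, "Rfree": 0.23, "bond rmsd": 0.008,
         "angle rmsd": 1.1, "mean B": 32.0}

# One spoke per metric: normalize the model's value into its reference
# range, place it along the spoke, then join the vertices into a polygon.
angles = np.linspace(0.0, 2.0 * np.pi, len(metrics), endpoint=False)
radii = [(model[k] - lo) / (hi - lo) for k, (lo, hi) in metrics.items()]
ax = plt.subplot(projection="polar")
ax.plot(np.append(angles, angles[0]), radii + [radii[0]], marker="o")
ax.set_xticks(angles)
ax.set_xticklabels(list(metrics))
plt.savefig("quality_polygon.png")
```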
24.  Iterative-build OMIT maps: map improvement by iterative model building and refinement without model bias 
An OMIT procedure is presented that has the benefits of iterative model building density modification and refinement yet is essentially unbiased by the atomic model that is built.
A procedure for carrying out iterative model building, density modification and refinement is presented in which the density in an OMIT region is essentially unbiased by an atomic model. Density from a set of overlapping OMIT regions can be combined to create a composite ‘iterative-build’ OMIT map that is everywhere unbiased by an atomic model but also everywhere benefiting from the model-based information present elsewhere in the unit cell. The procedure may have applications in the validation of specific features in atomic models as well as in overall model validation. The procedure is demonstrated with a molecular-replacement structure and with an experimentally phased structure and a variation on the method is demonstrated by removing model bias from a structure from the Protein Data Bank.
doi:10.1107/S0907444908004319
PMCID: PMC2424225  PMID: 18453687
model building; model validation; macromolecular models; Protein Data Bank; refinement; OMIT maps; bias; structure refinement; PHENIX
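The composite 'iterative-build' OMIT construction can be sketched as tiling the cell with regions and keeping, inside each region, only density computed with that region's atoms omitted (and rebuilt, in the iterative variant). The stitching step is below; the omit-map calculation itself is a stub.

```python
import numpy as np

def composite_omit(density_with_model, omit_density_for_region, regions):
    """Stitch a composite map: inside each region, use density computed
    with that region's atoms omitted, so no grid point is biased by the
    atoms that once occupied it."""
    composite = np.zeros_like(density_with_model)
    for region in regions:                      # region: boolean mask array
        omit_map = omit_density_for_region(region)
        composite[region] = omit_map[region]    # keep only the unbiased part
    return composite

# Toy 1D example: two half-grid regions; the 'omit' map is just a stand-in.
full = np.arange(10, dtype=float)
regions = [np.arange(10) < 5, np.arange(10) >= 5]
print(composite_omit(full, lambda r: full + 0.1, regions))
```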
