PMC

Results 1-19 (19)
 

1.  Flexible torsion-angle noncrystallographic symmetry restraints for improved macromolecular structure refinement 
Flexible torsion angle-based NCS restraints have been implemented in phenix.refine, allowing improved model refinement at all resolutions. Rotamer correction and rotamer consistency checks between NCS-related amino-acid side chains further improve the final model quality.
One of the great challenges in refining macromolecular crystal structures is a low data-to-parameter ratio. Historically, knowledge from chemistry has been used to help improve this ratio. When a macromolecule crystallizes with more than one copy in the asymmetric unit, the noncrystallographic symmetry (NCS) relationships can be exploited to provide additional restraints when refining the working model. However, although globally similar, NCS-related chains often have local differences. To allow for local differences between NCS-related molecules, flexible torsion-based NCS restraints have been introduced, coupled with intelligent rotamer handling for protein chains, and are available in phenix.refine for refinement of models at all resolutions.
doi:10.1107/S1399004714003277
PMCID: PMC4014122  PMID: 24816103
macromolecular crystallography; noncrystallographic symmetry; NCS; refinement; automation
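
To make the "flexible" idea concrete, here is a minimal numpy sketch of a top-out torsion-angle NCS restraint: NCS-related torsions are pulled toward their circular mean, but differences beyond a limit are left unrestrained, so genuine local differences survive. This illustrates the concept only and is not the phenix.refine implementation; the sigma and limit values are placeholders.

```python
import numpy as np

def angle_diff_deg(a, b):
    """Smallest signed difference between two angles in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def flexible_ncs_torsion_penalty(torsions, sigma=2.5, limit=15.0):
    """Top-out NCS restraint sketch. torsions: (n_copies, n_angles),
    matching torsion angles in degrees across the NCS copies."""
    t = np.radians(np.asarray(torsions, float))
    # circular mean over the NCS copies for each torsion
    mean = np.degrees(np.arctan2(np.sin(t).mean(axis=0),
                                 np.cos(t).mean(axis=0)))
    d = angle_diff_deg(np.asarray(torsions, float), mean)
    d[np.abs(d) > limit] = 0.0        # beyond the limit: no restraint
    return np.sum((d / sigma) ** 2)   # harmonic penalty inside the limit
```
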
2.  Automated identification of elemental ions in macromolecular crystal structures 
The solvent-picking procedure in phenix.refine has been extended and combined with Phaser anomalous substructure completion and analysis of coordination geometry to identify and place elemental ions.
Many macromolecular model-building and refinement programs can automatically place solvent atoms in electron density at moderate-to-high resolution. This process frequently builds water molecules in place of elemental ions, the identification of which must be performed manually. The solvent-picking algorithms in phenix.refine have been extended to build common ions based on an analysis of the chemical environment as well as physical properties such as occupancy, B factor and anomalous scattering. The method is most effective for heavier elements such as calcium and zinc, for which a majority of sites can be placed with few false positives in a diverse test set of structures. At atomic resolution it can also be possible to identify tightly bound sodium and magnesium ions. A number of challenges that contribute to the difficulty of completely automating the process of structure completion are discussed.
doi:10.1107/S1399004714001308
PMCID: PMC3975891  PMID: 24699654
refinement; ions; PHENIX
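
The chemical-environment screening described above can be illustrated with a toy scoring function: a solvent site is promoted to a candidate ion when its coordination number, ligand distances and B factor fit an ion better than a water. All profiles and thresholds below are illustrative placeholders, not the values used by phenix.refine.

```python
# element: (typical coordination number, ideal ligand distance in Å);
# illustrative values only
ION_PROFILES = {"ZN": (4, 2.1), "CA": (6, 2.4), "MG": (6, 2.1), "NA": (5, 2.4)}

def ion_candidate_score(element, n_coord, mean_dist, b_site, b_env):
    """Toy score in [0, 1]: coordination geometry, ligand distances, and
    a B factor suspiciously low for a water (b_env: mean B of nearby atoms)."""
    cn, d0 = ION_PROFILES[element]
    s_geom = max(0.0, 1.0 - abs(n_coord - cn) / cn)
    s_dist = max(0.0, 1.0 - abs(mean_dist - d0) / 0.5)
    s_b = 1.0 if b_site < 0.8 * b_env else 0.0
    return (s_geom + s_dist + s_b) / 3.0
```
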
3.  Automating crystallographic structure solution and refinement of protein–ligand complexes 
A software system for automated protein–ligand crystallography has been implemented in the Phenix suite. This significantly reduces the manual effort required in high-throughput crystallographic studies.
High-throughput drug-discovery and mechanistic studies often require the determination of multiple related crystal structures that differ only in the bound ligands, point mutations in the protein sequence and minor conformational changes. If performed manually, solution and refinement require extensive repetition of the same tasks for each structure. To accelerate this process and minimize manual effort, a pipeline encompassing all stages of ligand building and refinement, starting from integrated and scaled diffraction intensities, has been implemented in Phenix. The resulting system is able to successfully solve and refine large collections of structures in parallel without extensive user intervention prior to the final stages of model completion and validation.
doi:10.1107/S139900471302748X
PMCID: PMC3919266  PMID: 24419387
protein–ligand complexes; automation; crystallographic structure solution and refinement
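
The pipeline's batch character lends itself to a simple orchestration sketch; solve_and_refine and the input file names below are hypothetical stand-ins for the Phenix ligand-pipeline steps (molecular replacement with the common protein model, ligand fitting, refinement), not its actual API.

```python
from concurrent.futures import ProcessPoolExecutor

def solve_and_refine(mtz_path):
    # placeholder for one structure's run: molecular replacement with
    # the shared protein model, ligand fitting, restrained refinement
    return f"refined model for {mtz_path}"

if __name__ == "__main__":
    datasets = ["xtal_001.mtz", "xtal_002.mtz"]   # illustrative inputs
    with ProcessPoolExecutor() as pool:
        for result in pool.map(solve_and_refine, datasets):
            print(result)
```
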
4.  Model morphing and sequence assignment after molecular replacement 
A procedure for model building is described that combines morphing a model to match a density map, trimming the morphed model and aligning the model to a sequence.
A procedure termed ‘morphing’ for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.
doi:10.1107/S0907444913017770
PMCID: PMC3817698  PMID: 24189236
morphing; model building; sequence assignment; model–map correlation; loop-building
5.  Improved crystallographic models through iterated local density-guided model deformation and reciprocal-space refinement 
A density-based procedure is described for improving a homology model that is locally accurate but globally different from the target. The model is deformed to match the map and refined, yielding an improved starting point for density modification and further model building.
An approach is presented for addressing the challenge of model rebuilding after molecular replacement in cases where the placed template is very different from the structure to be determined. The approach takes advantage of the observation that a template and target structure may have local structures that can be superimposed much more closely than can their complete structures. A density-guided procedure for deformation of a properly placed template is introduced. A shift in the coordinates of each residue in the structure is calculated based on optimizing the match of model density within a 6 Å radius of the center of that residue with a prime-and-switch electron-density map. The shifts are smoothed and applied to the atoms in each residue, leading to local deformation of the template that improves the match of map and model. The model is then refined to improve the geometry and the fit of model to the structure-factor data. A new map is then calculated and the process is repeated until convergence. The procedure can extend the routine applicability of automated molecular replacement, model building and refinement to search models with over 2 Å r.m.s.d. representing 65–100% of the structure.
doi:10.1107/S0907444912015636
PMCID: PMC3388814  PMID: 22751672
molecular replacement; automation; macromolecular crystallography; structure similarity; modeling; Phenix; morphing
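
A minimal sketch of the per-residue deformation loop described above, assuming a hypothetical local_shift_fn that returns the translation best matching model density to the prime-and-switch map within ~6 Å of a residue center; the raw shifts are then smoothed along the chain, as in the paper, so the deformation stays continuous.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def morph_shifts(residue_centers, local_shift_fn, smooth_sigma=2.0):
    """For each residue, local_shift_fn (hypothetical) returns the
    translation that best matches model density to the target map near
    that residue; the raw shifts are smoothed along the chain."""
    raw = np.array([local_shift_fn(c) for c in residue_centers])  # (n, 3)
    return gaussian_filter1d(raw, sigma=smooth_sigma, axis=0)

# application: shifted_xyz = atom_xyz + shifts[residue_index_of_atom]
```
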
6.  Towards automated crystallographic structure refinement with phenix.refine  
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods.
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. It has several automation features and is also highly flexible. Several hundred parameters enable extensive customizations for complex use cases. Multiple user-defined refinement strategies can be applied to specific parts of the model in a single refinement run. An intuitive graphical user interface is available to guide novice users and to assist advanced users in managing refinement projects. X-ray or neutron diffraction data can be used separately or jointly in refinement. phenix.refine is tightly integrated into the PHENIX suite, where it serves as a critical component in automated model building, final structure refinement, structure validation and deposition to the wwPDB. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods.
doi:10.1107/S0907444912001308
PMCID: PMC3322595  PMID: 22505256
structure refinement; PHENIX; joint X-ray/neutron refinement; maximum likelihood; TLS; simulated annealing; subatomic resolution; real-space refinement; twinning; NCS
7.  Use of knowledge-based restraints in phenix.refine to improve macromolecular refinement at low resolution 
Recent developments in PHENIX are reported that allow the use of reference-model torsion restraints, secondary-structure hydrogen-bond restraints and Ramachandran restraints for improved macromolecular refinement in phenix.refine at low resolution.
Traditional methods for macromolecular refinement often have limited success at low resolution (3.0–3.5 Å or worse), producing models that score poorly on crystallographic and geometric validation criteria. To improve low-resolution refinement, knowledge from macromolecular chemistry and homology was used to add three new coordinate-restraint functions to the refinement program phenix.refine. Firstly, a ‘reference-model’ method uses an identical or homologous higher-resolution model to add restraints on torsion angles to the geometric target function. Secondly, automatic restraints for common secondary-structure elements in proteins and nucleic acids were implemented that can help to preserve the secondary-structure geometry, which is often distorted at low resolution. Lastly, we have implemented Ramachandran-based restraints on the backbone torsion angles. In this method, a ϕ,ψ term is added to the geometric target function to minimize a modified Ramachandran landscape that smoothly combines favorable peaks identified from nonredundant high-quality data with unfavorable peaks calculated using a clash-based pseudo-energy function. All three methods show improved MolProbity validation statistics, typically complemented by a lowered Rfree and a decreased gap between Rwork and Rfree.
doi:10.1107/S0907444911047834
PMCID: PMC3322597  PMID: 22505258
macromolecular crystallography; low resolution; refinement; automation
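
The Ramachandran term can be pictured as one extra summand in the geometric target. The sketch below assumes a rama_landscape callable that evaluates the paper's smoothed ϕ,ψ pseudo-energy surface; the weighting is illustrative.

```python
def refinement_target(e_geom, phi_psi_list, rama_landscape, weight=1.0):
    """e_geom: usual geometric target; phi_psi_list: backbone (phi, psi)
    pairs in degrees; rama_landscape: hypothetical callable returning the
    smoothed pseudo-energy at (phi, psi)."""
    e_rama = sum(rama_landscape(phi, psi) for phi, psi in phi_psi_list)
    return e_geom + weight * e_rama
```
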
8.  Joint X-ray and neutron refinement with phenix.refine  
The implementation of crystallographic structure-refinement procedures that include both X-ray and neutron data (separate or jointly) in the PHENIX system is described.
Approximately 85% of the structures deposited in the Protein Data Bank have been solved using X-ray crystallography, making it the leading method for three-dimensional structure determination of macromolecules. One of the limitations of the method is that the typical data quality (resolution) does not allow the direct determination of H-atom positions. Most hydrogen positions can be inferred from the positions of other atoms and therefore can be readily included into the structure model as a priori knowledge. However, this may not be the case in biologically active sites of macromolecules, where the presence and position of hydrogen is crucial to the enzymatic mechanism. This makes the application of neutron crystallography in biology particularly important, as H atoms can be clearly located in experimental neutron scattering density maps. Without exception, when a neutron structure is determined the corresponding X-ray structure is also known, making it possible to derive the complete structure using both data sets. Here, the implementation of crystallographic structure-refinement procedures that include both X-ray and neutron data (separate or jointly) in the PHENIX system is described.
doi:10.1107/S0907444910026582
PMCID: PMC2967420  PMID: 21041930
structure refinement; neutrons; joint X-ray and neutron refinement; PHENIX
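
At its core, joint refinement minimizes one combined target over a single model. A schematic version with illustrative weights; the actual weighting scheme in phenix.refine is more involved.

```python
def joint_target(t_xray, t_neutron, t_geom, wx=1.0, wn=1.0, wg=1.0):
    """One model, one target: the X-ray and neutron likelihood terms and
    the geometry restraints are combined and minimized together. H/D
    positions are driven mostly by the neutron term, heavier atoms by
    both data sets."""
    return wx * t_xray + wn * t_neutron + wg * t_geom
```
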
9.  PHENIX: a comprehensive Python-based system for macromolecular structure solution 
The PHENIX software for macromolecular structure determination is described.
Macromolecular X-ray crystallography is routinely applied to understand biological processes at a molecular level. However, significant time and effort are still required to solve and complete many of these structures because of the need for manual interpretation of complex numerical data using many software packages and the repeated use of interactive three-dimensional graphics. PHENIX has been developed to provide a comprehensive system for macromolecular crystallographic structure solution with an emphasis on the automation of all procedures. This has relied on the development of algorithms that minimize or eliminate subjective input, the development of algorithms that automate procedures that are traditionally performed by hand and, finally, the development of a framework that allows a tight integration between the algorithms.
doi:10.1107/S0907444909052925
PMCID: PMC2815670  PMID: 20124702
PHENIX; Python; macromolecular crystallography; algorithms
10.  On the use of logarithmic scales for analysis of diffraction data 
Conventional and free R factors and their difference, as well as the ratio of the number of measured reflections to the number of atoms in the crystal, were studied as functions of the resolution at which the structures were reported. When the resolution was taken uniformly on a logarithmic scale, the most frequent values of these functions were quasi-linear over a large resolution range.
Predictions of the possible model parameterization and of the values of model characteristics such as R factors are important for macromolecular refinement and validation protocols. One of the key parameters defining these and other values is the resolution of the experimentally measured diffraction data. The higher the resolution, the larger the number of diffraction data Nref, the larger its ratio to the number Nat of non-H atoms, the more parameters per atom can be used for modelling and the more precise and detailed a model can be obtained. The ratio Nref/Nat was calculated for models deposited in the Protein Data Bank as a function of the resolution at which the structures were reported. The most frequent values for this distribution depend essentially linearly on resolution when the latter is expressed on a uniform logarithmic scale. This defines simple analytic formulae for the typical Matthews coefficient and for the typically allowed number of parameters per atom for crystals diffracting to a given resolution. This simple dependence makes it possible in many cases to estimate the expected resolution of the experimental data for a crystal with a given Matthews coefficient. When expressed using the same logarithmic scale, the most frequent values for R and Rfree factors and for their difference are also essentially linear across a large resolution range. The minimal R-factor values are practically constant at resolutions better than 3 Å, below which they begin to grow sharply. This simple dependence on the resolution allows the prediction of expected R-factor values for unknown structures and may be used to guide model refinement and validation.
doi:10.1107/S0907444909039638
PMCID: PMC2789003  PMID: 19966414
resolution; logarithmic scale; R factor; data-to-parameter ratio
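
The quasi-linear behaviour on a logarithmic resolution scale amounts to fits of the form y(d) = a + b·log d. A sketch, with the coefficients left as placeholders since the paper derives them from PDB-wide statistics:

```python
import numpy as np

def log_linear(d, a, b):
    """Most-frequent value of a statistic (e.g. Nref/Nat or Rfree) as a
    function of the high-resolution limit d in Å; a and b come from fits
    to PDB-wide distributions (placeholders here)."""
    return a + b * np.log(d)

# e.g. expected_rfree = log_linear(2.0, a_fit, b_fit) for a 2.0 Å data set
```
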
11.  Averaged kick maps: less noise, more signal…and probably less bias 
Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation.
Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σA) weighted. Analysis shows that they are comparable and correspond better to the final model than σA and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
doi:10.1107/S0907444909021933
PMCID: PMC2733881  PMID: 19690370
kick maps; OMIT maps; density-map calculation; model bias; maximum likelihood
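
The AK-map recipe itself is short enough to sketch directly; map_from_model below is a hypothetical helper that computes a density map from coordinates, and the kick size and count are illustrative.

```python
import numpy as np

def averaged_kick_map(xyz, map_from_model, n_kicks=50, max_shift=0.3,
                      seed=0):
    """xyz: (n_atoms, 3) coordinates; map_from_model: hypothetical
    helper computing a density map (ndarray) from coordinates."""
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n_kicks):
        kick = rng.uniform(-max_shift, max_shift, size=xyz.shape)
        m = map_from_model(xyz + kick)      # one individual kick map
        acc = m if acc is None else acc + m
    return acc / n_kicks                    # the averaged kick (AK) map
```
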
12.  Decision-making in structure solution using Bayesian estimates of map quality: the PHENIX AutoSol wizard 
Ten measures of experimental electron-density-map quality are examined and the skewness of electron density is found to be the best indicator of actual map quality. A Bayesian approach to estimating map quality is developed and used in the PHENIX AutoSol wizard to make decisions during automated structure solution.
Estimates of the quality of experimental maps are important in many stages of structure determination of macromolecules. Map quality is defined here as the correlation between a map and the corresponding map obtained using phases from the final refined model. Here, ten different measures of experimental map quality were examined using a set of 1359 maps calculated by re-analysis of 246 solved MAD, SAD and MIR data sets. A simple Bayesian approach to estimation of map quality from one or more measures is presented. It was found that a Bayesian estimator based on the skewness of the density values in an electron-density map is the most accurate of the ten individual Bayesian estimators of map quality examined, with a correlation between estimated and actual map quality of 0.90. A combination of the skewness of electron density with the local correlation of r.m.s. density gives a further improvement in estimating map quality, with an overall correlation coefficient of 0.92. The PHENIX AutoSol wizard carries out automated structure solution based on any combination of SAD, MAD, SIR or MIR data sets. The wizard is based on tools from the PHENIX package and uses the Bayesian estimates of map quality described here to choose the highest quality solutions after experimental phasing.
doi:10.1107/S0907444909012098
PMCID: PMC2685735  PMID: 19465773
structure solution; scoring; Protein Data Bank; phasing; decision-making; PHENIX; experimental electron-density maps
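
The headline statistic, density skewness, is simple to compute: a good experimental map has a long positive tail of high density, while a noise map is nearly symmetric. A minimal sketch:

```python
import numpy as np
from scipy.stats import skew

def map_skewness(density):
    """Skewness of the density histogram: near zero for a noise map,
    clearly positive for a good experimental map."""
    return skew(np.asarray(density, float).ravel())
```
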
13.  Generalized X-ray and neutron crystallographic analysis: more accurate and complete structures for biological macromolecules 
X-ray and neutron crystallographic data have been combined in a joint structure-refinement procedure that has been developed using recent advances in modern computational methodologies, including cross-validated maximum-likelihood target functions with gradient-based optimization and simulated annealing.
X-ray and neutron crystallographic techniques provide complementary information on the structure and function of biological macromolecules. X-ray and neutron (XN) crystallographic data have been combined in a joint structure-refinement procedure that has been developed using recent advances in modern computational methodologies, including cross-validated maximum-likelihood target functions with gradient-based optimization and simulated annealing. The XN approach for complete (including hydrogen) macromolecular structure analysis provides more accurate and complete structures, as demonstrated for diisopropyl fluorophosphatase, photoactive yellow protein and human aldose reductase. Furthermore, this method has several practical advantages, including the easier determination of the orientation of water molecules, hydroxyl groups and some amino-acid side chains.
doi:10.1107/S0907444909011548
PMCID: PMC2685734  PMID: 19465771
joint X-ray and neutron crystallography; structure refinement
14.  Crystallographic model quality at a glance 
The representation of crystallographic model characteristics in the form of a polygon allows the quick comparison of a model with a set of previously solved structures.
A crystallographic macromolecular model is typically characterized by a list of quality criteria, such as R factors, deviations from ideal stereochemistry and average B factors, which are usually provided as tables in publications or in structural databases. In order to facilitate a quick model-quality evaluation, a graphical representation is proposed. Each key parameter such as R factor or bond-length deviation from ‘ideal values’ is shown graphically as a point on a ‘ruler’. These rulers are plotted as a set of lines with the same origin, forming a hub and spokes. Different parts of the rulers are coloured differently to reflect the frequency (red for a low frequency, blue for a high frequency) with which the corresponding values are observed in a reference set of structures determined previously. The points for a given model marked on these lines are connected to form a polygon. A polygon that is strongly compressed or dilated along some axes reveals unusually low or high values of the corresponding characteristics. Polygon vertices in ‘red zones’ indicate parameters which lie outside typical values.
doi:10.1107/S0907444908044296
PMCID: PMC2651759  PMID: 19237753
model quality; PDB; validation; refinement; PHENIX
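
The polygon itself is essentially a radar chart. A minimal matplotlib sketch with illustrative statistics and normalization ranges; the frequency-based colouring of the rulers described above is omitted for brevity.

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["R", "Rfree", "bond dev", "angle dev", "mean B"]
values = [0.19, 0.23, 0.010, 1.2, 35.0]          # one model's statistics
scale  = [0.40, 0.40, 0.050, 3.0, 100.0]         # rough "ruler" lengths
norm   = [v / s for v, s in zip(values, scale)]

ang = np.linspace(0.0, 2.0 * np.pi, len(labels), endpoint=False)
ax = plt.subplot(polar=True)
ax.plot(np.concatenate([ang, ang[:1]]), norm + norm[:1])  # close polygon
ax.set_xticks(ang)
ax.set_xticklabels(labels)
plt.show()
```
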
15.  Iterative-build OMIT maps: map improvement by iterative model building and refinement without model bias
An OMIT procedure is presented that has the benefits of iterative model building, density modification and refinement yet is essentially unbiased by the atomic model that is built.
A procedure for carrying out iterative model building, density modification and refinement is presented in which the density in an OMIT region is essentially unbiased by an atomic model. Density from a set of overlapping OMIT regions can be combined to create a composite ‘iterative-build’ OMIT map that is everywhere unbiased by an atomic model but also everywhere benefiting from the model-based information present elsewhere in the unit cell. The procedure may have applications in the validation of specific features in atomic models as well as in overall model validation. The procedure is demonstrated with a molecular-replacement structure and with an experimentally phased structure and a variation on the method is demonstrated by removing model bias from a structure from the Protein Data Bank.
doi:10.1107/S0907444908004319
PMCID: PMC2424225  PMID: 18453687
model building; model validation; macromolecular models; Protein Data Bank; refinement; OMIT maps; bias; structure refinement; PHENIX
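
The composite map can be sketched as a mask-weighted stitch over overlapping OMIT regions; omit_map_for_region below is a hypothetical helper that runs the iterative build with one region omitted and returns that region's mask and map.

```python
import numpy as np

def composite_omit_map(regions, omit_map_for_region):
    """regions: the overlapping OMIT regions covering the cell;
    omit_map_for_region: hypothetical helper running the iterative
    build/refine protocol with one region omitted, returning that
    region's mask (1 inside, 0 outside) and the resulting map."""
    total = weight = None
    for region in regions:
        mask, m = omit_map_for_region(region)
        if total is None:
            total, weight = np.zeros_like(m), np.zeros_like(m)
        total += mask * m           # keep only the unbiased density
        weight += mask
    return total / np.maximum(weight, 1e-9)   # average in overlaps
```
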
16.  Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard
The highly automated PHENIX AutoBuild wizard is described. The procedure can be applied equally well to phases derived from isomorphous/anomalous and molecular-replacement methods.
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
doi:10.1107/S090744490705024X
PMCID: PMC2394820  PMID: 18094468
model building; model completion; macromolecular models; Protein Data Bank; structure refinement; PHENIX
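
The wizard's core loop alternates building, refinement and density modification. A schematic version, with build, refine and density_modify as hypothetical callables standing in for the RESOLVE and phenix.refine steps:

```python
def autobuild_loop(dmap, model, build, refine, density_modify, n_cycles=3):
    """build, refine and density_modify are hypothetical callables
    standing in for the RESOLVE model-building, phenix.refine and
    RESOLVE density-modification steps."""
    for _ in range(n_cycles):
        model = build(dmap, model)            # (re)build into current map
        model = refine(model)                 # reciprocal-space refinement
        dmap = density_modify(dmap, model)    # model-informed new map
    return model, dmap
```
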
17.  On macromolecular refinement at subatomic resolution with interatomic scatterers
Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules.
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
doi:10.1107/S0907444907046148
PMCID: PMC2808317  PMID: 18007035
structure refinement; subatomic resolution; deformation density; interatomic scatterers; PHENIX
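
The idea can be caricatured in a few lines: augment the spherical-atom model with small extra scatterers placed along bonds to absorb the deformation density. Placing them at bond midpoints, as below, is a deliberate simplification of the actual placement rules.

```python
import numpy as np

def ias_sites(xyz, bonds):
    """xyz: (n_atoms, 3) coordinates; bonds: list of (i, j) index pairs.
    Returns positions for the extra scatterers; real IAS placement also
    handles lone pairs and ring centers, omitted here."""
    xyz = np.asarray(xyz, float)
    return np.array([(xyz[i] + xyz[j]) / 2.0 for i, j in bonds])
```
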
18.  Interpretation of ensembles created by multiple iterative rebuilding of macromolecular models
Heterogeneity in ensembles generated by independent model rebuilding principally reflects the limitations of the data and of the model-building process rather than the diversity of structures in the crystal.
Automation of iterative model building, density modification and refinement in macromolecular crystallography has made it feasible to carry out this entire process multiple times. By using different random seeds in the process, a number of different models compatible with experimental data can be created. Sets of models were generated in this way using real data for ten protein structures from the Protein Data Bank and using synthetic data generated at various resolutions. Most of the heterogeneity among models produced in this way is in the side chains and loops on the protein surface. Possible interpretations of the variation among models created by repetitive rebuilding were investigated. Synthetic data were created in which a crystal structure was modelled as the average of a set of ‘perfect’ structures and the range of models obtained by rebuilding a single starting model was examined. The standard deviations of coordinates in models obtained by repetitive rebuilding at high resolution are small, while those obtained for the same synthetic crystal structure at low resolution are large, so that the diversity within a group of models cannot generally be a quantitative reflection of the actual structures in a crystal. Instead, the group of structures obtained by repetitive rebuilding reflects the precision of the models, and the standard deviation of coordinates of these structures is a lower bound estimate of the uncertainty in coordinates of the individual models.
doi:10.1107/S0907444907009791
PMCID: PMC2483474  PMID: 17452785
model building; model completion; coordinate errors; models; Protein Data Bank; convergence; reproducibility; heterogeneity; precision; accuracy
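
The paper's take-away statistic is easy to compute: the per-atom coordinate spread across independently rebuilt models, read as a lower bound on coordinate uncertainty rather than as crystal heterogeneity. A sketch assuming the models are already superposed and atom-matched:

```python
import numpy as np

def per_atom_spread(ensemble):
    """ensemble: (n_models, n_atoms, 3) superposed, atom-matched
    coordinates. Returns the r.m.s. distance of each atom from its
    ensemble mean: a lower-bound estimate of coordinate uncertainty."""
    e = np.asarray(ensemble, float)
    dev2 = ((e - e.mean(axis=0)) ** 2).sum(axis=2)   # (n_models, n_atoms)
    return np.sqrt(dev2.mean(axis=0))
```
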
19.  A robust bulk-solvent correction and anisotropic scaling procedure
A robust method for determining bulk-solvent and anisotropic scaling parameters for macromolecular refinement is described. A maximum-likelihood target function for determination of flat bulk-solvent model parameters and overall anisotropic scale factor is also proposed.
A reliable method for the determination of bulk-solvent model parameters and an overall anisotropic scale factor is of increasing importance as structure determination becomes more automated. Current protocols require the manual inspection of refinement results in order to detect errors in the calculation of these parameters. Here, a robust method for determining bulk-solvent and anisotropic scaling parameters in macromolecular refinement is described. The implementation of a maximum-likelihood target function for determining the same parameters is also discussed. The formulas and corresponding derivatives of the likelihood function with respect to the solvent parameters and the components of anisotropic scale matrix are presented. These algorithms are implemented in the CCTBX bulk-solvent correction and scaling module.
doi:10.1107/S0907444905007894
PMCID: PMC2808320  PMID: 15983406
bulk-solvent correction; anisotropic scaling
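
The flat bulk-solvent model underlying this work is compact enough to state directly. A sketch of the structure-factor expression, with typical (illustrative) parameter values and the overall anisotropic scale left out:

```python
import numpy as np

def f_model(f_calc, f_mask, s_sq, k_sol=0.35, b_sol=46.0):
    """Flat bulk-solvent contribution: F_model is F_calc plus the mask
    structure factors damped by k_sol * exp(-B_sol * s^2 / 4), where
    s = 1/d. The overall (anisotropic) scale applied on top is omitted;
    k_sol and b_sol are typical values, not fitted ones."""
    return f_calc + k_sol * np.exp(-b_sol * np.asarray(s_sq) / 4.0) * f_mask
```
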
