Abnormal expression of or mutations in Ras proteins have been found in up to 30% of cancer cell types, making them excellent protein models for probing structure–function relationships in the cell-signaling processes that mediate cell transformation. Yet few therapies have been developed to tackle Ras-related disease states. The development of small molecules that target Ras proteins, with the potential to inhibit abnormal Ras-stimulated cell signaling, has been conceptualized, and some progress has been made over roughly the last 16 years. Here, we briefly review studies characterizing Ras protein–small molecule interactions to show the importance and potential that these small molecules may have for Ras-related drug discovery. We summarize recent results, highlighting small molecules that can be directly targeted to Ras using structure-based drug design (SBDD) and fragment-based lead discovery (FBLD) methods. The inactivation of oncogenic Ras signaling in vitro by small molecules is currently an attractive hurdle to clear in attacking the oncogenic state. In this regard, important features of previously characterized small-molecule Ras targets, as well as the current understanding of the conformational and dynamic changes seen in Ras mutants relative to wild type, must be taken into account as newer small-molecule design strategies toward Ras are developed.
Ras [Rat Sarcoma]; Small Molecule Target; Structure-Based Drug Design; Fragment-Based Drug Design; GTP Hydrolysis; Guanine Nucleotide Exchange Factors [GEF]
Special methods are required to interpret sparse diffraction patterns collected from peptide crystals at X-ray free-electron lasers. Bragg spots can be indexed from composite-image powder rings, with crystal orientations then deduced from a very limited number of spot positions.
Still diffraction patterns from peptide nanocrystals with small unit cells are challenging to index using conventional methods owing to the limited number of spots and the lack of crystal orientation information for individual images. New indexing algorithms have been developed as part of the Computational Crystallography Toolbox (cctbx) to overcome these challenges. Accurate unit-cell information derived from an aggregate data set from thousands of diffraction patterns can be used to determine a crystal orientation matrix for individual images with as few as five reflections. These algorithms are potentially applicable not only to amyloid peptides but also to any set of diffraction patterns with sparse properties, such as low-resolution virus structures or high-throughput screening of still images captured by raster-scanning at synchrotron sources. As a proof of concept for this technique, successful integration of X-ray free-electron laser (XFEL) data to 2.5 Å resolution for the amyloid segment GNNQQNY from the Sup35 yeast prion is presented.
XFEL; Sup35 yeast prion; indexing methods; crystallography
The dioxygen we breathe is formed from water by its light-induced oxidation in photosystem II. O2 formation takes place at a catalytic manganese cluster within milliseconds after the photosystem II reaction center is excited by three single-turnover flashes. Here we present combined X-ray emission spectra and diffraction data from two-flash (2F) and three-flash (3F) photosystem II samples, and from a transient 3F′ state (250 μs after the third flash), collected under functional conditions using an X-ray free-electron laser. The spectra show that the initial O-O bond formation, coupled to Mn reduction, does not yet occur within 250 μs after the third flash. Diffraction data for all states studied exhibit an anomalous scattering signal from Mn but show no significant structural changes at the present resolution of 4.5 Å. This study represents the initial frames in a molecular movie of the structural changes during the catalytic reaction in photosystem II.
Picoinjection is a promising technique to add reagents into pre-formed emulsion droplets on chip; however, it is sensitive to pressure fluctuation, making stable operation of the picoinjector challenging. We present a chip architecture using a simple pressure stabilizer for consistent and highly reproducible picoinjection in multi-step biochemical assays with droplets. Incorporation of the stabilizer immediately upstream of a picoinjector or a combination of injectors greatly reduces pressure fluctuations enabling reproducible and effective picoinjection in systems where the pressure varies actively during operation. We demonstrate the effectiveness of the pressure stabilizer for an integrated platform for on-demand encapsulation of bacterial cells followed by picoinjection of reagents for lysing the encapsulated cells. The pressure stabilizer was also used for picoinjection of multiple displacement amplification (MDA) reagents to achieve genomic DNA amplification of lysed bacterial cells.
Flexible torsion angle-based NCS restraints have been implemented in phenix.refine, allowing improved model refinement at all resolutions. Rotamer correction and rotamer consistency checks between NCS-related amino-acid side chains further improve the final model quality.
One of the great challenges in refining macromolecular crystal structures is a low data-to-parameter ratio. Historically, knowledge from chemistry has been used to help improve this ratio. When a macromolecule crystallizes with more than one copy in the asymmetric unit, the noncrystallographic symmetry (NCS) relationships can be exploited to provide additional restraints when refining the working model. However, although globally similar, NCS-related chains often differ locally. To allow for these local differences, flexible torsion-based NCS restraints have been introduced, coupled with intelligent rotamer handling for protein chains; they are available in phenix.refine for refinement of models at all resolutions.
macromolecular crystallography; noncrystallographic symmetry; NCS; refinement; automation
X-ray free-electron laser crystallography relies on the collection of still-shot diffraction patterns. New methods are developed for optimal modeling of the crystals’ orientations and mosaic block properties.
X-ray diffraction patterns from still crystals are inherently difficult to process because the crystal orientation is not uniquely determined by measuring the Bragg spot positions. Only one of the three rotational degrees of freedom is directly coupled to spot positions; the other two rotations move Bragg spots in and out of the reflecting condition but do not change the direction of the diffracted rays. This hinders the ability to recover accurate structure factors from experiments that are dependent on single-shot exposures, such as femtosecond diffract-and-destroy protocols at X-ray free-electron lasers (XFELs). Here, additional methods are introduced to optimally model the diffraction. The best orientation is obtained by requiring, for the brightest observed spots, that each reciprocal-lattice point be placed into the exact reflecting condition implied by Bragg’s law with a minimal rotation. This approach reduces the experimental uncertainties in noisy XFEL data, improving the crystallographic R factors and sharpening anomalous differences that are near the level of the noise.
X-ray free-electron lasers; single-shot exposures
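The minimal-rotation criterion described above can be sketched numerically: for a reciprocal-lattice point q and incident beam vector s0 (with |s0| = 1/λ), find the smallest rotation about the axis q × s0 that satisfies the reflecting condition |q + s0| = |s0|. The following is an illustrative sketch, not the cctbx implementation; the bracketing interval assumes the spot starts within 90° of the reflecting condition.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.spatial.transform import Rotation

def min_rotation_angle(q, s0):
    """Smallest rotation angle (about the axis q x s0) that brings the
    reciprocal-lattice point q onto the Ewald sphere |q + s0| = |s0|."""
    axis = np.cross(q, s0)
    axis = axis / np.linalg.norm(axis)

    def residual(theta):
        q_rot = Rotation.from_rotvec(theta * axis).apply(q)
        return np.linalg.norm(q_rot + s0) - np.linalg.norm(s0)

    # The bracket assumes the observed spot lies within 90 degrees of
    # the reflecting condition, as expected for spots near the sphere.
    return brentq(residual, -np.pi / 2, np.pi / 2)
```

Applied to each of the brightest spots, such angles quantify how far a trial orientation is from placing the lattice in the diffracting condition, and the orientation minimizing them is retained.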
Identifying errors and alternate conformers, and modeling multiple main-chain conformers in poorly ordered regions, are overarching problems in crystallographic structure determination that have limited automation efforts and structure quality. Here, we show that implementation of a full-factorial designed set of standard refinement approaches, which we call ExCoR (Extensive Combinatorial Refinement), significantly improves structural models compared with the traditional linear-tree approach, in which individual algorithms are tested sequentially and incorporated only if the model improves. ExCoR markedly improved maps and models, and revealed building errors and alternate conformations that were masked by traditional refinement approaches. Surprisingly, an algorithm that worsens a model in isolation could still be necessary to produce the best overall model, suggesting that model distortion allows escape from local minima of the optimization target function; entrapment in such minima is shown here to be a hallmark limitation of the traditional approach. ExCoR thus provides a simple approach to improving structure determination.
Refinement of macromolecular structures against low-resolution crystallographic data is limited by the ability of current methods to converge on a structure with realistic geometry. We developed a low-resolution crystallographic refinement method that combines the Rosetta sampling methodology and energy function with reciprocal-space X-ray refinement in Phenix. On a set of difficult low-resolution cases, the method yielded improved model geometry and lower free R factors than alternate refinement methods.
Acyltransferases determine which extender units are incorporated into polyketide and fatty acid products. The ping-pong acyltransferase mechanism utilizes a serine in a conserved GHSxG motif. However, the role of the conserved histidine in this motif is poorly understood. We observed that a histidine-to-alanine mutation (H640A) in the GHSxG motif of the malonyl-CoA-specific yersiniabactin acyltransferase results in an approximately seven-fold higher hydrolysis rate than the wild-type enzyme, while retaining transacylation activity. We propose two possibilities for how H640 limits the hydrolysis rate in the wild-type enzyme: either H640 structurally stabilizes the protein by hydrogen bonding with a conserved asparagine in the ferredoxin-like subdomain of the protein, or a water-mediated hydrogen bond between H640 and the malonyl moiety stabilizes the malonyl-O-AT ester intermediate.
Rank scaling of Fourier syntheses leads to new tools for the comparison of crystallographic contour maps. The new metrics are in better agreement with a visual map analysis than the conventional map correlation coefficient.
Numerical comparison of crystallographic contour maps is used extensively in structure solution and model refinement, analysis and validation. However, traditional metrics such as the map correlation coefficient (map CC, real-space CC or RSCC) sometimes contradict the results of visual assessment of the corresponding maps. This article explains such apparent contradictions and suggests new metrics and tools to compare crystallographic contour maps. The key to the new methods is rank scaling of the Fourier syntheses. The new metrics are complementary to the usual map CC and can be more helpful in map comparison, in particular when only some of their aspects, such as regions of high density, are of interest.
Fourier syntheses; crystallographic contour maps; map comparison; sigma scale; rank scaling; correlation coefficients
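The idea of rank scaling can be illustrated in a few lines: each map value is replaced by its normalized rank, after which the correlation of the rank-scaled maps (a Spearman-type coefficient) becomes insensitive to any monotonic rescaling of either map. This is a toy sketch of the underlying idea, not the metrics implemented in the article.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def rank_scale(m):
    """Replace each map value by its normalized rank in [0, 1]."""
    r = rankdata(m.ravel())
    return ((r - 1.0) / (r.size - 1.0)).reshape(m.shape)

def rank_cc(m1, m2):
    """Correlation of rank-scaled maps (a Spearman-type coefficient)."""
    return pearsonr(rank_scale(m1).ravel(), rank_scale(m2).ravel())[0]
```

Because ranks are preserved under any monotone transformation, two maps that agree visually but differ by a nonlinear intensity scaling still score 1.0, whereas a plain map CC can be dominated by a few extreme density values.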
The translation–libration–screw (TLS) model of rigid-body harmonic displacements, introduced into crystallography by Schomaker & Trueblood (1968), is now a routine tool in macromolecular studies and is a feature of most modern crystallographic structure refinement packages. In this review we consider a number of simple examples that illustrate important features of the TLS model. Based on these examples, simplified formulae are given for several special cases that may occur in structure modeling and refinement. The derivation of general TLS formulae from basic principles is also provided. This review describes the principles of TLS modeling, as well as selected algorithmic details for practical application. An extensive list of references to applications of TLS in macromolecular crystallographic refinement is provided.
TLS; translation libration screw model; ADP; atomic displacement parameter; rigid body motion; structure refinement
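For reference, the atomic displacement parameters implied by a TLS model follow a closed form: for an atom at position r relative to the TLS origin, U = T + A L Aᵀ + A S + (A S)ᵀ, where A is the antisymmetric matrix built from r. A minimal sketch follows; note that sign conventions for A and S differ between references, and this follows one common choice.

```python
import numpy as np

def tls_u(T, L, S, r, origin=None):
    """ADP matrix implied by a TLS model for an atom at position r:
    U = T + A L A^T + A S + (A S)^T, with A the antisymmetric matrix
    built from r - origin."""
    if origin is None:
        origin = np.zeros(3)
    x, y, z = np.asarray(r) - origin
    A = np.array([[0.0, z, -y],
                  [-z, 0.0, x],
                  [y, -x, 0.0]])
    return T + A @ L @ A.T + A @ S + (A @ S).T
```

Each term is symmetric by construction, so the resulting U is a valid anisotropic displacement matrix; with L = S = 0 the model reduces to pure translation, U = T.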
A software system for automated protein–ligand crystallography has been implemented in the Phenix suite. This significantly reduces the manual effort required in high-throughput crystallographic studies.
High-throughput drug-discovery and mechanistic studies often require the determination of multiple related crystal structures that only differ in the bound ligands, point mutations in the protein sequence and minor conformational changes. If performed manually, solution and refinement requires extensive repetition of the same tasks for each structure. To accelerate this process and minimize manual effort, a pipeline encompassing all stages of ligand building and refinement, starting from integrated and scaled diffraction intensities, has been implemented in Phenix. The resulting system is able to successfully solve and refine large collections of structures in parallel without extensive user intervention prior to the final stages of model completion and validation.
protein–ligand complexes; automation; crystallographic structure solution and refinement
Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation–libration–screw (TLS) model. Modeling of 20 protein datasets at 1.1–3.1 Å resolution reduced cross-validated Rfree values by 0.3–4.9%, indicating that ensemble models fit the X-ray data better than single structures. The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a ‘molten core’ likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order–disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure–dynamics–function relationships.
It has been clear since the early days of structural biology in the late 1950s that proteins and other biomolecules are continually changing shape, and that these changes have an important influence on both the structure and function of the molecules. X-ray diffraction can provide detailed information about the structure of a protein, but only limited information about how its structure fluctuates over time. Detailed information about the dynamic behaviour of proteins is essential for a proper understanding of a variety of processes, including catalysis, ligand binding and protein–protein interactions, and could also prove useful in drug design.
Currently, most of the X-ray crystal structures in the Protein Data Bank are ‘snapshots’ with limited or no information about protein dynamics. However, X-ray diffraction patterns are affected by the dynamics of the protein, and also by distortions of the crystal lattice, so three-dimensional (3D) models of proteins ought to take these phenomena into account. Molecular-dynamics (MD) computer simulations transform 3D structures into 4D ‘molecular movies’ by predicting the movement of individual atoms.
Combining MD simulations with crystallographic data has the potential to produce more realistic ensemble models of proteins in which the atomic fluctuations are represented by multiple structures within the ensemble. Moreover, in addition to improved structural information, this process, which is called ensemble refinement, can provide dynamical information about the protein. Earlier attempts to do this ran into problems because the number of model parameters needed was greater than the number of observed data points. Burnley et al. now overcome this problem by modelling local molecular vibrations with MD simulations and, at the same time, using a coarse-grained model to describe global disorder over longer length scales.
Ensemble refinement of high-resolution X-ray diffraction datasets for 20 different proteins from the Protein Data Bank produced a better fit to the data than single structures for all 20 proteins. Ensemble refinement also revealed that 3 of the 20 proteins had a ‘molten core’, rather than the well-ordered core found in most proteins: this is likely to be important in various biological functions including ligand binding, filament formation and enzymatic function. Burnley et al. also showed that an HIV enzyme underwent an order–disorder transition that is likely to influence how this enzyme works, and that similar transitions might influence the interactions between the small-molecule drug Imatinib (also known as Gleevec) and the enzymes it targets. Ensemble refinement could be applied to the majority of crystallography data currently being collected, or collected in the past, so further insights into the properties and interactions of a variety of proteins and other biomolecules can be expected.
protein; crystallography; structure; function; dynamics
The study of intracellular metabolic fluxes and inter-species metabolite exchange for microbial communities is of crucial importance to understand and predict their behaviour. The most authoritative method of measuring intracellular fluxes, 13C Metabolic Flux Analysis (13C MFA), uses the labeling pattern obtained from metabolites (typically amino acids) during 13C labeling experiments to derive intracellular fluxes. However, these metabolite labeling patterns cannot easily be obtained for each of the members of the community. Here we propose a new type of 13C MFA that infers fluxes based on peptide labeling, instead of amino acid labeling. The advantage of this method resides in the fact that the peptide sequence can be used to identify the microbial species it originates from and, simultaneously, the peptide labeling can be used to infer intracellular metabolic fluxes. Peptide identity and labeling patterns can be obtained in a high-throughput manner from modern proteomics techniques. We show that, using this method, it is theoretically possible to recover intracellular metabolic fluxes in the same way as through the standard amino acid based 13C MFA, and quantify the amount of information lost as a consequence of using peptides instead of amino acids. We show that by using a relatively small number of peptides we can counter this information loss. We computationally tested this method with a well-characterized simple microbial community consisting of two species.
Microbial communities underlie a variety of important biochemical processes ranging from underground cave formation to gold mining or the onset of obesity. Metabolic fluxes describe how carbon and energy flow through the microbial community and therefore provide insights that are rarely captured by other techniques, such as metatranscriptomics or metaproteomics. The most authoritative method to measure fluxes for pure cultures consists of feeding the cells a labeled carbon source and deriving the fluxes from the ensuing metabolite labeling pattern (typically of amino acids). Since we cannot easily separate the cells or metabolites of each species in a community, this approach is not generally applicable to microbial communities. Here we present a method to derive fluxes from the labeling of peptides, instead of amino acids. This approach has the advantage that peptides can be assigned to each species in a community in a high-throughput fashion through modern proteomic methods. We show that, by using this method, it is theoretically possible to recover the same amount of information as through the standard approach, if enough peptides are used. We computationally tested this method with a well-characterized simple microbial community consisting of two species.
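The link between amino-acid and peptide labeling that this method exploits can be written down directly: if residues label independently, the mass-isotopomer distribution (MID) of a peptide is the convolution of the MIDs of its constituent residues. The sketch below illustrates only that forward relation under the independence assumption; it is not the flux-fitting machinery of 13C MFA itself.

```python
import numpy as np

def peptide_mid(residue_mids):
    """Mass-isotopomer distribution (MID) of a peptide as the convolution
    of its residues' MIDs, assuming residues label independently."""
    mid = np.array([1.0])
    for m in residue_mids:
        mid = np.convolve(mid, m)
    return mid
```

For example, two residues that are each 50% unlabeled and 50% singly labeled yield a peptide MID of [0.25, 0.5, 0.25] over 0, 1 and 2 heavy atoms.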
Major efforts in bioenergy research have focused on producing fuels that can directly replace petroleum-derived gasoline and diesel fuel through metabolic engineering of microbial fatty acid biosynthetic pathways. Typically, growth and pathway induction are conducted under aerobic conditions, but for operational efficiency in an industrial context, anaerobic culture conditions would be preferred to obviate the need to maintain specific dissolved oxygen concentrations and to maximize the proportion of reducing equivalents directed to biofuel biosynthesis rather than ATP production. A major concern with fermentative growth conditions is elevated NADH levels, which can adversely affect cell physiology. The purpose of this study was to identify homologs of Escherichia coli FabG, an essential reductase involved in fatty acid biosynthesis, that display a higher preference for NADH than for NADPH as a cofactor. Four potential NADH-dependent FabG variants were identified through bioinformatic analyses supported by crystallographic structure determination (1.3- to 2.0-Å resolution). In vitro assays of cofactor (NADH/NADPH) preference in the four variants showed up to ∼35-fold preference for NADH, which was observed with the Cupriavidus taiwanensis FabG variant. In addition, FabG homologs were overexpressed in fatty acid- and methyl ketone-overproducing E. coli host strains under anaerobic conditions, and the C. taiwanensis variant led to a 60% higher free fatty acid titer and 75% higher methyl ketone titer relative to the titers of the control strains. With further engineering, this work could serve as a starting point for establishing a microbial host strain for production of fatty acid-derived biofuels (e.g., methyl ketones) under anaerobic conditions.
The ability to solubilize lignocellulose makes certain ionic liquids (ILs) very effective reagents for pretreating biomass prior to its saccharification for biofuel fermentation. However, residual IL in the aqueous sugar solution can inhibit the growth and function of biofuel-producing microorganisms. In E. coli this toxicity can be partially overcome by the heterologous expression of an IL efflux pump encoded by eilA from Enterobacter lignolyticus. In the present work, we used microarray analysis to identify native E. coli IL-inducible promoters and develop control systems for regulating eilA gene expression. Three candidate promoters, PmarR’, PydfO’, and PydfA’, were selected and compared to the IPTG-inducible PlacUV5 system for controlling expression of eilA. The PydfA’ and PmarR’ based systems are as effective as PlacUV5 in their ability to rescue E. coli from typically toxic levels of IL, thereby eliminating the need to use an IPTG-based system for such tolerance engineering. We present a mechanistic model indicating that inducible control systems reduce target gene expression when IL levels are low. Selected-reaction monitoring mass spectrometry analysis revealed that at high IL concentrations EilA protein levels were significantly elevated under the control of PydfA’ and PmarR’ in comparison to the other promoters. Further, in a pooled culture competition designed to determine fitness, the strain containing pPmarR’-eilA outcompeted strains with other promoter constructs, most significantly at IL concentrations above 150 mM. These results indicate that native promoters such as PmarR’ can provide effective systems for regulating the expression of heterologous genes in host engineering and simplify the development of industrially useful strains.
Three lignocellulosic pretreatment techniques (ammonia fiber expansion (AFEX), dilute acid and ionic liquid) are compared with respect to saccharification efficiency, particle size and biomass composition. In particular, the effects of switchgrass particle size (32–200) on each pretreatment regime are examined. Physical properties of untreated and pretreated samples are characterized using crystallinity, surface accessibility measurements and scanning electron microscopy (SEM) imaging. At every particle size tested, ionic liquid (IL) pretreatment results in greater cell wall disruption, reduced crystallinity, increased accessible surface area, and higher saccharification efficiencies compared with dilute acid and AFEX pretreatments. The advantages of using IL pretreatment are greatest at larger particle sizes (>75 µm).
X-ray free-electron laser (XFEL) sources enable the use of crystallography to solve three-dimensional macromolecular structures under native conditions and free from radiation damage. Results to date, however, have been limited by the challenge of deriving accurate Bragg intensities from a heterogeneous population of microcrystals, while at the same time modeling the X-ray spectrum and detector geometry. Here we present a computational approach designed to extract statistically significant high-resolution signals from fewer diffraction measurements.
The solvent-picking procedure in phenix.refine has been extended and combined with Phaser anomalous substructure completion and analysis of coordination geometry to identify and place elemental ions.
Many macromolecular model-building and refinement programs can automatically place solvent atoms in electron density at moderate-to-high resolution. This process frequently builds water molecules in place of elemental ions, the identification of which must be performed manually. The solvent-picking algorithms in phenix.refine have been extended to build common ions based on an analysis of the chemical environment as well as physical properties such as occupancy, B factor and anomalous scattering. The method is most effective for heavier elements such as calcium and zinc, for which a majority of sites can be placed with few false positives in a diverse test set of structures. At atomic resolution, tightly bound sodium and magnesium ions can often be identified as well. A number of challenges that contribute to the difficulty of completely automating structure completion are discussed.
refinement; ions; PHENIX
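As a toy illustration of environment-based filtering, a candidate site can be screened by counting coordinating atoms and checking their distances against an element's typical first-shell range. The cutoff values below are rough textbook coordination distances and thresholds chosen for illustration only; they are not the criteria phenix.refine actually applies.

```python
import numpy as np

# Illustrative first-shell distance ranges (angstroms) from textbook
# coordination chemistry -- NOT the criteria used by phenix.refine.
TYPICAL_RANGE = {"ZN": (1.9, 2.4), "CA": (2.2, 2.7)}

def plausible_ion(element, coord_distances, min_partners=4, min_frac=0.75):
    """Accept a candidate ion site if it has enough coordinating atoms
    and most of their distances fall in the element's typical range."""
    lo, hi = TYPICAL_RANGE[element]
    d = np.asarray(coord_distances)
    if d.size < min_partners:
        return False
    return np.mean((d >= lo) & (d <= hi)) >= min_frac
```

A real classifier additionally weighs occupancy, B factors and anomalous signal, as the abstract describes, which is what makes the heavier elements the easiest cases.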
Our understanding of the contribution of Golgi proteins to cell wall and wood formation in any woody plant species is limited. Currently, little Golgi proteomics data exists for wood-forming tissues. In this study, we attempted to address this issue by generating and analyzing Golgi-enriched membrane preparations from developing xylem of compression wood from the conifer Pinus radiata. Developing xylem samples from 3-year-old pine trees were harvested for this purpose at a time of active growth and subjected to a combination of density centrifugation followed by free flow electrophoresis, a surface charge separation technique used in the enrichment of Golgi membranes. This combination of techniques was successful in achieving an approximately 200-fold increase in the activity of the Golgi marker galactan synthase and represents a significant improvement for proteomic analyses of the Golgi from conifers. A total of thirty known Golgi proteins were identified by mass spectrometry including glycosyltransferases from gene families involved in glucomannan and glucuronoxylan biosynthesis. The free flow electrophoresis fractions of enriched Golgi were highly abundant in structural proteins (actin and tubulin) indicating a role for the cytoskeleton during compression wood formation. The mass spectrometry proteomics data associated with this study have been deposited to the ProteomeXchange with identifier PXD000557.
A new module, Guided Ligand Replacement (GLR), has been developed in Phenix to increase the ease and success rate of ligand placement when prior protein-ligand complexes are available.
The process of iterative structure-based drug design involves the X-ray crystal structure determination of upwards of 100 ligands with the same general scaffold (i.e. chemotype) complexed with very similar, if not identical, protein targets. In conjunction with insights from computational models and assays, this collection of crystal structures is analyzed to improve potency, to achieve better selectivity and to reduce liabilities such as absorption, distribution, metabolism, excretion and toxicology. Current methods for modeling ligands into electron-density maps typically do not utilize information on how similar ligands bound in related structures. Even if the electron density is of sufficient quality and resolution to allow de novo placement, the process can take considerable time as the size, complexity and torsional degrees of freedom of the ligands increase. A new module, Guided Ligand Replacement (GLR), was developed in Phenix to increase the ease and success rate of ligand placement when prior protein–ligand complexes are available. At the heart of GLR is an algorithm based on graph theory that associates atoms in the target ligand with analogous atoms in the reference ligand. Based on this correspondence, a set of coordinates is generated for the target ligand. GLR is especially useful in two situations: (i) modeling a series of large, flexible, complicated or macrocyclic ligands in successive structures and (ii) modeling ligands as part of a refinement pipeline that can automatically select a reference structure. Even in those cases for which no reference structure is available, if there are multiple copies of the bound ligand per asymmetric unit GLR offers an efficient way to complete the model after the first ligand has been placed. In all of these applications, GLR leverages prior knowledge from earlier structures to facilitate ligand placement in the current structure.
ligand placement; guided ligand-replacement method; GLR
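The graph-theoretic correspondence step can be sketched with a generic subgraph-isomorphism search: represent each ligand as an element-labelled bond graph and take the first consistent atom-to-atom mapping. This sketch uses networkx for illustration; GLR's actual matching algorithm is the one described in the article.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def atom_correspondence(ref, target):
    """Map target-ligand atoms onto reference-ligand atoms by matching
    element-labelled bond graphs (subgraph isomorphism)."""
    gm = isomorphism.GraphMatcher(
        ref, target,
        node_match=isomorphism.categorical_node_match("element", None))
    for mapping in gm.subgraph_isomorphisms_iter():
        return mapping  # first consistent reference -> target assignment
    return None
```

Given such a mapping, coordinates for the matched target atoms can be seeded from the corresponding reference atoms, which is the essence of transferring a binding pose between related ligands.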
Production of biofuels via enzymatic hydrolysis of complex plant polysaccharides is a subject of intense global interest. Microbial communities are known to express a wide range of enzymes necessary for the saccharification of lignocellulosic feedstocks and serve as a powerful reservoir for enzyme discovery. However, the growth temperature and conditions that yield high cellulase activity vary widely, and the throughput of identifying optimal conditions has been limited by slow sample handling and conventional analysis methods. A rapid method that uses small volumes of isolate culture to resolve specific enzyme activity is needed. In this work, a high-throughput nanostructure-initiator mass spectrometry (NIMS)-based approach was developed for screening a thermophilic cellulolytic actinomycete, Thermobispora bispora, for β-glucosidase production under various growth conditions. Media that produced high β-glucosidase activity were found to be I/S + glucose or microcrystalline cellulose (MCC), Medium 84 + rolled oats, and M9TE + MCC at 45°C. Supernatants of cell cultures grown in M9TE + 1% MCC cleaved 2.5 times more substrate at 45°C than at all other temperatures. While T. bispora is reported to grow optimally at 60°C in Medium 84 + rolled oats and M9TE + 1% MCC, approximately 40% more conversion was observed at 45°C. This high-throughput NIMS approach may provide an important tool in the discovery and characterization of enzymes from environmental microbes for industrial and biofuel applications.
NIMS; high throughput; β-glucosidase; enzymatic activity screening; microbial communities
Ionic liquid pretreatment of biomass has been shown to greatly reduce the recalcitrance of lignocellulosic biomass, resulting in improved sugar yields after enzymatic saccharification. However, even under these improved saccharification conditions the cost of enzymes still represents a significant proportion of the total cost of producing sugars and ultimately fuels from lignocellulosic biomass. Much of the high cost of enzymes is due to the low catalytic efficiency and stability of lignocellulolytic enzymes, especially cellulases, under conditions that include high temperatures and the presence of residual pretreatment chemicals, such as acids, organic solvents, bases, or ionic liquids. Improving the efficiency of the saccharification process on ionic liquid pretreated biomass will facilitate reduced enzyme loading and cost. Thermophilic cellulases have been shown to be stable and active in ionic liquids, but their activity there is typically reduced. Cel5A_Tma, a thermophilic endoglucanase from Thermotoga maritima, is highly active on cellulosic substrates and is stable in ionic liquid environments. Here, our motivation was to engineer mutants of Cel5A_Tma with higher activity on 1-ethyl-3-methylimidazolium acetate ([C2mim][OAc]) pretreated biomass. We developed a robotic platform to screen a random mutagenesis library of Cel5A_Tma. Twelve mutants with 25–42% improvement in specific activity on carboxymethyl cellulose and up to 30% improvement on ionic-liquid pretreated switchgrass were successfully isolated and characterized from a library of twenty thousand variants. Interestingly, most of the mutations in the improved variants are located distally to the active site on the protein surface and are not directly involved with substrate binding.
A low-flow-rate liquid microjet method for the delivery of hydrated protein crystals to X-ray lasers is presented. Data from the Linac Coherent Light Source demonstrate serial femtosecond protein crystallography with micrograms of protein, a reduction in sample consumption of several orders of magnitude.
An electrospun liquid microjet has been developed that delivers protein microcrystal suspensions at flow rates of 0.14–3.1 µl min⁻¹ to perform serial femtosecond crystallography (SFX) studies with X-ray lasers. Thermolysin microcrystals flowed at 0.17 µl min⁻¹ and diffracted to beyond 4 Å resolution, producing 14 000 indexable diffraction patterns, or four per second, from 140 µg of protein. Nanoflow electrospinning extends SFX to biological samples that require minimal sample consumption.
serial femtosecond crystallography; nanoflow electrospinning
A procedure for model building is described that combines morphing a model to match a density map, trimming the morphed model and aligning the model to a sequence.
A procedure termed ‘morphing’ for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.
morphing; model building; sequence assignment; model–map correlation; loop-building