Single-step electron tunnelling reactions can transport charges over distances of 15–20 Å in proteins. Longer-range transfer requires multi-step tunnelling processes along redox chains, often referred to as hopping. Long-range hopping via oxidized radicals of tryptophan and tyrosine, which has been identified in several natural enzymes, has been demonstrated in artificial constructs of the blue copper protein azurin. Tryptophan and tyrosine serve as hopping way stations in high-potential charge transport processes. It may be no coincidence that these two residues occur with greater-than-average frequency in O2- and H2O2-reactive enzymes. We suggest that appropriately placed tyrosine and/or tryptophan residues prevent damage from high-potential reactive intermediates by reduction followed by transfer of the oxidizing equivalent to less harmful sites or out of the protein altogether.
electron transfer; hopping; protein radical; azurin; cytochrome P450
Platinum compounds are a mainstay of cancer chemotherapy, with over 50% of patients receiving platinum. But there is a great need for improvement. Major features of the cisplatin mechanism of action involve cancer cell entry, formation mainly of intrastrand cross-links that bend and unwind nuclear DNA, transcription inhibition and induction of cell-death programmes while evading repair. Recently, we discovered that platinum cross-link formation is not essential for activity. Monofunctional Pt compounds such as phenanthriplatin, which make only a single bond to DNA nucleobases, can be far more active and effective against a range of tumour types. Without a cross-link-induced bend, monofunctional complexes can be accommodated in the major groove of DNA. Their biological mechanism of action is similar to that of cisplatin. These discoveries opened the door to a large family of heavy metal-based drug candidates, including those of Os and Re, as will be described.
platinum; transition metal; anti-cancer; monofunctional; osmium
Biological information encoded in genomes is fundamentally different from and effectively orthogonal to Shannon entropy. The biologically relevant concept of information has to do with ‘meaning’, i.e. encoding various biological functions with various degrees of evolutionary conservation. Apart from direct experimentation, the meaning, or biological information content, can be extracted and quantified from alignments of homologous nucleotide or amino acid sequences but generally not from a single sequence, using appropriately modified information theoretical formulae. In short, information encoded in genomes is defined vertically but not horizontally. Informally but substantially, biological information density seems to be equivalent to the ‘meaning’ of genomic sequences, which spans the entire range from sharply defined, universal meaning to effective meaninglessness. Large fractions of genomes, up to 90% in some plants, belong within the domain of fuzzy meaning. The sequences with fuzzy meaning can be recruited for various functions, with the meaning subsequently fixed, and can also perform generic functional roles that do not require sequence conservation. Biological meaning is continuously transferred between the genomes of selfish elements and hosts in the process of their coevolution. Thus, in order to adequately describe genome function and evolution, the concepts of information theory have to be adapted to incorporate the notion of meaning that is central to biology.
information; meaning; evolution; selfish elements
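The alignment-based quantification of information described above can be illustrated with a minimal sketch. The snippet below computes per-column information content as log2(alphabet size) minus the column's Shannon entropy; this is the standard textbook formula (with no small-sample correction), not necessarily the modified formulae the authors have in mind, and the toy alignment is invented for illustration.

```python
import math
from collections import Counter

def column_information(alignment, alphabet_size=4):
    """Per-column information content (bits) of a sequence alignment:
    log2(alphabet size) minus the Shannon entropy of the residue
    frequencies observed in that column."""
    info = []
    for j in range(len(alignment[0])):
        counts = Counter(seq[j] for seq in alignment)
        total = sum(counts.values())
        entropy = -sum((c / total) * math.log2(c / total)
                       for c in counts.values())
        info.append(math.log2(alphabet_size) - entropy)
    return info

# Fully conserved DNA columns carry the maximum 2 bits; a column in
# which all four bases are equally frequent carries 0 bits.
alignment = ["ACGT", "ACGA", "ACGC", "ACGG"]
print(column_information(alignment))  # → [2.0, 2.0, 2.0, 0.0]
```

Note that the quantity is defined only ‘vertically’, over the columns of many homologous sequences: a single sequence gives every column a trivial entropy of zero and hence uniform, uninformative values.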
On the one hand, biology, chemistry and also physics tell us how the process of translating the genetic information into life could possibly work, but we are still very far from a complete understanding of this process. On the other hand, mathematics and statistics give us methods to describe such natural systems—or parts of them—within a theoretical framework. Also, they provide us with hints and predictions that can be tested at the experimental level. Furthermore, there are peculiar aspects of the management of genetic information that are intimately related to information theory and communication theory. This theme issue is aimed at fostering the discussion on the problem of genetic coding and information through the presentation of different innovative points of view. The aim of the editors is to stimulate discussions and scientific exchange that will lead to new research on why and how life can exist from the point of view of the coding and decoding of genetic information. The present introduction represents the point of view of the editors on the main aspects that could be the subject of future scientific debate.
DNA; information; genomics; genetic code; life
Information is a precise concept that can be defined mathematically, but its relationship to what we call ‘knowledge’ is not always made clear. Furthermore, the concepts ‘entropy’ and ‘information’, while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
entropy; information; Bayesian inference
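The prediction-based reading of information advocated above can be made concrete in a few lines. The sketch below computes mutual information, the reduction in uncertainty about X obtained by observing Y, from a discrete joint distribution; the example distributions are invented for illustration.

```python
import math

def entropy(p):
    """Shannon entropy H = -sum p log2 p, in bits (zero terms skipped)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y): how much observing Y reduces
    our uncertainty about X (and vice versa)."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    pxy = [p for row in joint for p in row]
    return entropy(px) + entropy(py) - entropy(pxy)

# A fair coin copied perfectly: observing Y tells us everything about X,
# so the information equals the full 1 bit of entropy.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # → 1.0
# Two independent fair coins: observing Y predicts nothing about X.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # → 0.0
```

The two extremes illustrate the distinction the abstract insists on: both systems have maximal marginal entropy, but only the first carries information in the predictive sense.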
Large-scale central facilities such as Diamond Light Source fulfil an increasingly pivotal role in many large-scale scientific research programmes. We illustrate these developments by reference to energy-centred projects at the University of Nottingham, the progress of which depends crucially on access to these facilities. Continuing access to beamtime has now become a major priority for those who direct such programmes.
central facilities; research programmes; structural information; metal-organic frameworks; gas cell studies
The remarkable advances in structural biology in the past three decades have led to the determination of increasingly complex structures that lie at the heart of many important biological processes. Many of these advances have been made possible by the use of X-ray crystallography using synchrotron radiation. In this short article, some of the challenges and prospects that lie ahead will be summarized.
synchrotron radiation; macromolecular crystallography; structural biology
The process of chromosome division, termed mitosis, involves a complex sequence of events that is tightly controlled to ensure that the faithful segregation of duplicated chromosomes is coordinated with each cell division cycle. The large macromolecular complex responsible for regulating this process is the anaphase-promoting complex or cyclosome (APC/C). In humans, the APC/C is assembled from 20 subunits derived from 15 different proteins. The APC/C functions to ubiquitinate cell cycle regulatory proteins, thereby targeting them for destruction by the proteasome. This review describes our research aimed at understanding the structure and mechanism of the APC/C. We have determined the crystal structures of individual subunits and subcomplexes that provide atomic models to interpret density maps of the whole complex derived from single particle cryo-electron microscopy. With this information, we are generating pseudo-atomic models of functional states of the APC/C that provide insights into its overall architecture and mechanisms of substrate recognition, catalysis and regulation by inhibitory complexes.
anaphase-promoting complex or cyclosome; chromosome division; cell cycle
In communications, the obstacle to high bandwidth and reliable transmission is usually the interconnections, not the links. Nowhere is this more evident than on the Internet, where broadband connections to homes, offices and now mobile smart phones are a frequent source of frustration, and the interconnections between the roughly 50 000 subnetworks (autonomous systems or ASes) from which it is formed, even more so. The structure of the AS graph that is formed by these interconnections is unspecified, undocumented and only guessed at through measurement, but it shows surprising efficiencies. Under recent pressures for network neutrality and openness or ‘transparency’, operators, several classes of users and regulatory bodies have a good chance of realizing these efficiencies, but they need improved measurement technology to manage this under continued growth. A long-standing vision, an Internet that measures itself, in which every intelligent port takes a part in monitoring, can make this possible and may now be within reach.
Internet measurement; monitoring; complex systems; graph topology
Most of the digital data transmitted are carried by optical fibres, which form the greater part of the national and international communications infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity.
optical fibre communications; optical nonlinearities; Kerr effect; channel modelling; signal processing; optical networks
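The intuition behind the nonlinear capacity limit mentioned above can be sketched with a toy model in which nonlinear interference grows as the cube of launch power, so that the effective SNR, and hence the Shannon spectral efficiency log2(1 + SNR), peaks at a finite power instead of growing without bound. The cubic form echoes common Gaussian-noise models of the Kerr nonlinearity, but the model and the parameter values below are illustrative assumptions, not the channel model of the paper.

```python
import math

def capacity_bits_per_symbol(p_signal, p_ase=1e-4, eta=1e-3):
    """Spectral efficiency C = log2(1 + SNR) under an illustrative
    nonlinear-noise model: Kerr-induced interference is assumed to grow
    as the cube of launch power, SNR = P / (P_ASE + eta * P**3).
    p_ase and eta are arbitrary normalized values, not fibre data."""
    snr = p_signal / (p_ase + eta * p_signal ** 3)
    return math.log2(1 + snr)

# Unlike the linear channel, capacity rises with power, peaks, then
# falls as nonlinear interference overtakes the signal.
for p in [0.01, 0.1, 0.5, 1.0, 5.0]:
    print(f"P = {p:5.2f}  C = {capacity_bits_per_symbol(p):.2f} bit/symbol")
```

The research directions described in the paper can be read as attempts either to lower the effective eta (better fibres, nonlinearity compensation) or to escape the single-channel model entirely.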
Researchers are within a factor of about 2 of realizing the maximum practical transmission capacity of conventional single-mode fibre transmission technology. It is therefore timely to consider new technological approaches offering the potential for more cost-effective scaling of network capacity than simply installing more and more conventional single-mode systems in parallel. In this paper, I review physical layer options that can be considered to address this requirement, including the potential for reduction in both fibre loss and nonlinearity for single-mode fibres, the development of ultra-broadband fibre amplifiers and finally the use of space division multiplexing.
optical communications; optical fibres; optical amplifiers
Computer architectures have reached a watershed: the quantity of network data generated by user applications now exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per-system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers.
computer architecture; networking; interconnect; performance guarantees
This issue of Philosophical Transactions of the Royal Society A represents a summary of the recent discussion meeting ‘Communication networks beyond the capacity crunch’. The purpose of the meeting was to establish the nature of the capacity crunch, to estimate the time scales associated with it and to begin to find solutions to enable continued growth in a post-crunch era. The meeting confirmed that, in addition to a capacity shortage within a single optical fibre, many other ‘crunches’ are foreseen in the field of communications, both societal and technical. Technical crunches identified included the nonlinear Shannon limit, wireless spectrum and the distribution of 5G signals (front haul and back haul), while societal influences included net neutrality, creative content generation and distribution, latency, and finally energy and cost. The meeting concluded with the observation that these many crunches are genuine and may influence our future use of technology, but encouragingly noted that research and business practice are already moving to alleviate many of the negative consequences.
optical communications; capacity limits; mobile communications; energy
The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison of experimental results of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake.
wind turbines; wakes; actuator line; large eddy simulation
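The proper orthogonal decomposition of wake snapshots mentioned above is commonly computed from the singular value decomposition of a mean-subtracted snapshot matrix, each column holding one flow-field snapshot. A minimal sketch on synthetic data follows; the ‘wake’ fields, grid sizes and two-wave structure are invented for illustration and are not the paper's actuator line data.

```python
import numpy as np

def pod_modes(snapshots, n_modes=3):
    """POD via SVD of the mean-subtracted snapshot matrix.
    Returns the leading spatial modes (columns) and the fraction of
    fluctuation energy captured by each mode."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluctuations = snapshots - mean   # modes describe fluctuations only
    u, s, _ = np.linalg.svd(fluctuations, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)  # singular values -> energy shares
    return u[:, :n_modes], energy[:n_modes]

# Synthetic data: two travelling-wave structures plus weak noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 50)
snaps = (np.outer(np.sin(x), np.cos(2 * t))
         + 0.3 * np.outer(np.sin(2 * x), np.sin(5 * t))
         + 0.01 * rng.standard_normal((200, 50)))
modes, energy = pod_modes(snaps)
print(energy)  # the two planted structures dominate the energy
```

In wake analysis, inspecting how quickly the energy spectrum decays, and the spatial shape of the leading modes, is what reveals coherent wake structures in the numerically generated snapshots.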
The paper proposes a methodology for reliable design and maintenance of wind turbine rotor blades using a condition monitoring approach and a damage tolerance index coupling the material and structure. By improving the understanding of the material properties that control damage propagation, it will be possible to combine damage tolerant structural design, monitoring systems, inspection techniques and modelling to manage the life cycle of the structures. This will allow efficient operation of the wind turbine in terms of load alleviation and limited maintenance and repair, leading to more effective exploitation of offshore wind.
damage tolerance; smart structure; offshore wind
Understanding of the dynamic behaviour of offshore wind floating substructures is extremely important in relation to the design, operation, maintenance and management of floating wind farms. This paper presents an assessment of the nonlinear signatures of the dynamic responses of a scaled tension-leg platform (TLP) in a wave tank exposed to different regular wave conditions and to sea states characterized by the Bretschneider, the Pierson–Moskowitz and the JONSWAP spectra. Dynamic responses of the TLP were monitored at different locations using load cells, a camera-based motion recognition system and a laser Doppler vibrometer. The analysis of variability of the TLP responses and statistical quantification of their linearity or nonlinearity, as non-destructive means of structural monitoring from the output-only condition, remains a challenging problem. In this study, the delay vector variance (DVV) method is used to statistically study the degree of nonlinearity of measured response signals from a TLP. DVV is observed to create a marker estimating the degree to which a change in signal nonlinearity reflects real-time behaviour of the structure and also to establish the sensitivity of the instruments employed to these changes. The findings can be helpful in establishing monitoring strategies and control strategies for undesirable levels or types of dynamic response and can help to better estimate changes in system characteristics over the life cycle of the structure.
offshore wind energy; tension-leg platform; structural dynamics; delay vector variance; signal nonlinearity
A free-vortex wake (FVW) model is developed in this paper to analyse the unsteady aerodynamic performance of offshore floating wind turbines. A time-marching algorithm of third-order accuracy is applied in the FVW model. Owing to the complex motions of the floating platform, the blade inflow conditions and the initial positions of the vortex filaments, which differ from those of a fixed wind turbine, are modified in the implemented model. A three-dimensional rotational effect model and a dynamic stall model are coupled into the FVW model to improve the aerodynamic performance prediction in the unsteady conditions. The effects of floating platform motions in the simulation model are validated by comparison between calculation and experiment for a small-scale rigid test wind turbine coupled with a floating tension leg platform (TLP). The dynamic inflow effect carried by the FVW method itself is confirmed and the results agree well with the experimental data of a pitching transient on another test turbine. Also, the flapping moment at the blade root in yaw on the same test turbine is calculated and compares well with the experimental data. Then, the aerodynamic performance is simulated in a yawed condition of steady wind and in an unyawed condition of turbulent wind, respectively, for a large-scale wind turbine coupled with the floating TLP motions, demonstrating obvious differences in rotor performance and blade loading from those of the fixed wind turbine. The non-dimensional magnitudes of the loading changes due to the floating platform motions decrease from the blade root to the blade tip.
offshore floating wind turbine; free-vortex wake; time-marching algorithm; unsteady aerodynamics
The paper presents a novel method for numerically modelling fluid–structure interactions. The method consists of solving the fluid-dynamics equations on an extended domain, where the computational mesh covers both fluid and solid structures. The fluid and solid velocities are relaxed to one another through a penalty force. The latter acts on a thin shell surrounding the solid structures. Additionally, the shell is represented on the extended domain by a non-zero shell-concentration field, which is obtained by conservatively mapping the shell mesh onto the extended mesh. The paper outlines the theory underpinning this novel method, referred to as the immersed-shell approach. It also shows how the coupling between a fluid- and a structural-dynamics solver is achieved. At this stage, results are shown for cases of fundamental interest.
fluid–structure interactions; immersed-body approach; aerodynamics
The key current challenge in the floating offshore wind turbine industry and research is designing economic floating systems that can compete with fixed-bottom offshore turbines in terms of levelized cost of energy. Preliminary platform design and early experimental design assessment are critical elements in the overall design process. In this contribution, a brief review of current floating offshore wind turbine platform pre-design and scaled testing methodologies is provided, with a focus on their ability to accommodate the coupled dynamic behaviour of floating offshore wind systems. The exemplary design and testing methodology for a monolithic concrete spar platform as performed within the European KIC AFOSP project is presented. Results from the experimental tests are compared with numerical simulations and show very good agreement for the relevant basic dynamic platform properties. Extreme and fatigue loads and cost analysis of the AFOSP system confirm the viability of the presented design process. In summary, the exemplary application of the reduced design and testing methodology for AFOSP confirms that it represents a viable procedure during pre-design of floating offshore wind turbine platforms.
floating offshore wind turbines; design; monolithic concrete spar buoy; combined wind wave testing; AFOSP project
This article summarizes and reviews recent progress in the development of catalysts for the ring-opening copolymerization of carbon dioxide and epoxides. The copolymerization is an interesting method to add value to carbon dioxide, including from waste sources, and to reduce pollution associated with commodity polymer manufacture. The selection of the catalyst is of critical importance to control the composition, properties and applications of the resultant polymers. This review highlights and exemplifies some key recent findings and hypotheses, in particular using examples drawn from our own research.
catalysis; polycarbonate; ring-opening copolymerization; CO2
The effectiveness of Mg as a promoter of Co-Ru/γ-Al2O3 Fischer–Tropsch catalysts depends on how and when the Mg is added. When the Mg is impregnated into the support before the Co and Ru addition, some Mg is incorporated into the support in the form of MgxAl2O3+x if the material is calcined at 550°C or 800°C after the impregnation, while the remainder is present as amorphous MgO/MgCO3 phases. After subsequent Co-Ru impregnation MgxCo3−xO4 is formed which decomposes on reduction, leading to Co(0) particles intimately mixed with Mg, as shown by high-resolution transmission electron microscopy. The process of impregnating Co into an Mg-modified support results in dissolution of the amorphous Mg, and it is this Mg which is then incorporated into MgxCo3−xO4. Acid washing or higher temperature calcination after Mg impregnation can remove most of this amorphous Mg, resulting in lower values of x in MgxCo3−xO4. Catalytic testing of these materials reveals that Mg incorporation into the Co oxide phase is severely detrimental to the site-time yield, while Mg incorporation into the support may provide some enhancement of activity at high temperature.
Fischer–Tropsch; catalysis; cobalt; alumina; XRD
Quasi-stationarity is ubiquitous in complex dynamical systems. In brain dynamics, there is ample evidence that event-related potentials (ERPs) reflect such quasi-stationary states. In order to detect them from time series, several segmentation techniques have been proposed. In this study, we elaborate a recent approach for detecting quasi-stationary states as recurrence domains by means of recurrence analysis and subsequent symbolization methods. We address two pertinent problems of contemporary recurrence analysis: optimizing the size of recurrence neighbourhoods and identifying symbols from different realizations for sequence alignment. As possible solutions for these problems, we suggest a maximum entropy criterion and a Hausdorff clustering algorithm. The resulting recurrence domains for single-subject ERPs are obtained as partition cells reflecting quasi-stationary brain states.
recurrence analysis; symbolic dynamics; electroencephalography; brain microstates; language processing
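As a rough illustration of the ingredients above, the sketch below builds a binary recurrence matrix for a scalar series and selects a neighbourhood size by an entropy criterion. Note the hedges: maximizing the binary entropy of the overall recurrence rate is a deliberate toy simplification, not the maximum entropy criterion of the study, and the sinusoidal test series and candidate radii are invented.

```python
import numpy as np

def recurrence_matrix(series, epsilon):
    """Binary recurrence matrix: R[i, j] = 1 when states i and j lie
    within distance epsilon of each other."""
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    return (dist <= epsilon).astype(int)

def max_entropy_epsilon(series, candidates):
    """Toy neighbourhood-size selection (a hypothetical simplification):
    pick the epsilon whose recurrence rate p maximizes the binary
    entropy -p*log2(p) - (1-p)*log2(1-p), i.e. the most 'informative'
    recurrence matrix, neither empty nor saturated."""
    best_eps, best_h = None, -1.0
    for eps in candidates:
        p = recurrence_matrix(series, eps).mean()
        if 0 < p < 1:
            h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
            if h > best_h:
                best_eps, best_h = eps, h
    return best_eps

series = np.sin(np.linspace(0, 8 * np.pi, 100))
eps = max_entropy_epsilon(series, [0.05, 0.2, 0.5, 1.0])
R = recurrence_matrix(series, eps)
print(eps, R.mean())
```

Recurrence domains would then be read off as blocks of the matrix, and their symbolization across realizations is where the alignment problem addressed in the study arises.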
Ordinal symbolic analysis opens an interesting and powerful perspective on time-series analysis. Here, we review this relatively new approach and highlight its relation to symbolic dynamics and representations. Our exposition reaches from the general ideas up to recent developments, with special emphasis on its applications to biomedical recordings. The latter will be illustrated with epilepsy data.
time-series analysis; symbolic dynamics; ordinal patterns; permutation entropy
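The core of ordinal symbolic analysis can be stated compactly: map each length-m window of the series to the permutation that sorts it, then compute the Shannon entropy of the resulting pattern distribution. A minimal sketch of this normalized permutation entropy follows; the example series are invented.

```python
import math

def permutation_entropy(series, order=3):
    """Normalized permutation entropy: the Shannon entropy of the
    distribution of ordinal patterns of the given order, divided by
    log2(order!) so the result lies in [0, 1]."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # the ordinal pattern is the argsort of the window
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(math.factorial(order))

# A monotone series shows a single ordinal pattern (entropy 0), while
# an irregular series visits many patterns (entropy approaching 1).
print(permutation_entropy([1, 2, 3, 4, 5, 6]))  # → 0.0
print(permutation_entropy([4, 7, 9, 10, 6, 11, 3, 5, 8, 2]))
```

Because only the ordering within each window matters, the measure is invariant to monotone amplitude distortions, one reason the approach is attractive for noisy biomedical recordings such as the epilepsy data discussed here.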
Myocardial ischaemia is hypothesized to stimulate the cardiac sympathetic excitatory afferents and, therefore, the spontaneous changes of heart period (approximated as the RR interval), and the QT interval in ischaemic dilated cardiomyopathy (IDC) patients might reflect this sympathetic activation. Symbolic analysis is a nonlinear and powerful tool for the extraction and classification of patterns in time-series analysis, which implies a transformation of the original series into symbols and the construction of patterns with the symbols. The aim of this work was to investigate whether symbolic transformations of RR and QT cardiac series can provide a better separation between IDC patients and healthy control (HC) subjects compared with traditional linear measures. The variability of these cardiac series was studied during daytime and night-time periods and also during the complete 24 h recording over windows of short data sequences of approximately 5 min. The IDC group was characterized by an increase in the occurrence rate of patterns without variations (0V%) and a reduction in the occurrence rate of patterns with one variation (1V%) and two variations (2V%). Concerning the RR variability during the daytime, the highest number of patterns had 0V%, whereas the rates of 1V% and 2V% were lower. During the night, 1V% and 2V% increased at the expense of diminishing 0V%. Patterns with and without variations between consecutive symbols were able to increase the separation between the IDC and HC groups, allowing accuracies higher than 80%. With regard to entropy measures, an increase in RR regularity was associated with cardiac disease described by accuracy >70% in the RR series and by accuracy >60% in the QTc series. These results could be associated with an increase in the sympathetic tone in IDC patients.
complexity; heart rate variability; ischaemic dilated cardiomyopathy; QT; symbolic dynamics
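A minimal sketch of the symbolic transformation described above: quantize the series into six uniform levels, form three-beat patterns, and classify each pattern by its number of variations between consecutive symbols (0V, 1V, 2V). This follows the abstract's description in spirit only; details such as the quantization scheme and the common splitting of 2V into like/unlike subclasses are simplified, and the RR values (in ms) are invented.

```python
def symbolic_pattern_rates(rr, n_levels=6):
    """Classify three-beat patterns of a quantized RR series by the
    number of variations between consecutive symbols, returning the
    occurrence rate (%) of 0V, 1V and 2V patterns."""
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / n_levels or 1.0  # guard against a constant series
    symbols = [min(int((v - lo) / width), n_levels - 1) for v in rr]
    counts = {"0V": 0, "1V": 0, "2V": 0}
    total = 0
    for i in range(len(symbols) - 2):
        a, b, c = symbols[i:i + 3]
        variations = (a != b) + (b != c)  # 0, 1 or 2 symbol changes
        counts[f"{variations}V"] += 1
        total += 1
    return {k: 100.0 * v / total for k, v in counts.items()}

# A constant series yields only 0V patterns; strict beat-to-beat
# alternation yields only 2V patterns.
print(symbolic_pattern_rates([800] * 10))      # → {'0V': 100.0, '1V': 0.0, '2V': 0.0}
print(symbolic_pattern_rates([800, 900] * 5))  # → {'0V': 0.0, '1V': 0.0, '2V': 100.0}
```

Real recordings fall between these two extremes, and it is shifts in the balance of these rates, such as the elevated 0V% reported for the IDC group, that carry the discriminative information.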