1.  The Use of the Personal Digital Assistant (PDA) Among Personnel and Students in Health Care: A Review 
Background
Health care personnel need access to updated information anywhere and at any time, and a Personal Digital Assistant (PDA) has the potential to meet these requirements. A PDA is a mobile tool that has been employed widely for various purposes in health care practice, and the level of its use is expected to increase. Loaded with suitable functions and software applications, a PDA might qualify as the tool that personnel and students in health care need. Despite Sweden's leadership role in mobile technologies, PDAs are not commonly used there today, and suitable functions and software applications are lacking.
Objective
The aim of the present review was to obtain an overview of existing research on the use of PDAs among personnel and students in health care.
Methods
The literature search included original peer-reviewed research articles written in English and published from 1996 to 2008. All study designs were considered for inclusion. We excluded reviews and studies focusing on the use of PDAs in classroom situations. From March 2006 to the last update in May 2008, we searched PubMed, CINAHL, Cochrane, IngentaConnect, and a local search engine (ELIN@Kalmar). We conducted a content analysis, using Nielsen’s Model of System Acceptability as a theoretical framework in structuring and presenting the results.
Results
From the 900 references initially screened, 172 articles were selected and critically assessed until 48 articles remained. The majority originated in North America (USA: n=24, Canada: n=11). The categories that emerged from our content analysis coincided to a certain extent with Nielsen’s Model of System Acceptability (social and practical acceptability), including usefulness (utility and usability) with subcategories such as learnability, efficiency, errors, and satisfaction. The studies showed that health care personnel and students used PDAs in patient care with varied frequency. Most of the users were physicians. There is some evidence that the use of a PDA in health care settings might improve decision-making, reduce the number of medical errors, and enhance learning for both students and professionals, but the evidence is not strong, with most studies being descriptive and only six randomized controlled trials. Several special software programs have been created and tested for PDAs, and a wide range of situations for their use has been reported for different patient groups. Drug and medical information were commonly accessed by PDA users, and the PDA was often viewed as the preferred tool when compared to paper-based documents. Some users regarded the PDA as easy to operate, while others found it difficult in the beginning.
Conclusions
This overview of the use of PDAs revealed a positive attitude towards the PDA, which was regarded as a feasible and convenient tool. The possibility of immediate access to medical information has the potential to improve patient care. The PDA seems to be a valuable tool for personnel and students in health care, but there is a need for further intervention studies, randomized controlled trials, action research, and studies with various health care groups in order to identify its appropriate functions and software applications.
doi:10.2196/jmir.1038
PMCID: PMC2629360  PMID: 18957381
Informatics; medical informatics; computers, handheld; health personnel; students, health occupations; personal digital assistant
2.  The Reno-Vascular A2B Adenosine Receptor Protects the Kidney from Ischemia 
PLoS Medicine  2008;5(6):e137.
Background
Acute renal failure from ischemia significantly contributes to morbidity and mortality in clinical settings, and strategies to improve renal resistance to ischemia are urgently needed. Here, we identified a novel pathway of renal protection from ischemia using ischemic preconditioning (IP).
Methods and Findings
For this purpose, we utilized a recently developed model of renal ischemia and IP via a hanging weight system that allows repeated and atraumatic occlusion of the renal artery in mice, followed by measurements of specific parameters of renal function. Studies in gene-targeted mice for each individual adenosine receptor (AR) confirmed renal protection by IP in A1−/−, A2A−/−, or A3AR−/− mice. In contrast, protection from ischemia was abolished in A2BAR−/− mice. This protection was associated with corresponding changes in tissue inflammation and nitric oxide production. In accordance, the A2BAR-antagonist PSB1115 blocked renal protection by IP, while treatment with the selective A2BAR-agonist BAY 60–6583 dramatically improved renal function and histology following ischemia alone. Using an A2BAR-reporter model, we found exclusive expression of A2BARs within the reno-vasculature. Studies using A2BAR bone-marrow chimeras attributed kidney protection selectively to renal A2BARs.
Conclusions
These results identify the A2BAR as a novel therapeutic target for providing potent protection from renal ischemia.
Using gene-targeted mice, Holger Eltzschig and colleagues identify the A2B adenosine receptor as a novel therapeutic target for providing protection from renal ischemia.
Editors' Summary
Background.
Throughout life, the kidneys perform the essential task of filtering waste products and excess water from the blood to make urine. Each kidney contains about a million small structures called nephrons, each of which contains a filtration unit consisting of a glomerulus (a small blood vessel) intertwined with a urine-collecting tube called a tubule. If the nephrons stop working for any reason, the rate at which the blood is filtered (the glomerular filtration rate or GFR) decreases and dangerous amounts of waste products such as creatinine build up in the blood. Most kidney diseases destroy the nephrons slowly over years, producing an irreversible condition called chronic renal failure. But the kidneys can also stop working suddenly because of injury or poisoning. One common cause of “acute” renal failure in hospital patients is ischemia—an inadequate blood supply to an organ that results in the death of part of that organ. Heart surgery and other types of surgery in which the blood supply to the kidneys is temporarily disrupted are associated with high rates of acute renal failure.
Why Was This Study Done?
Although the kidneys usually recover from acute failure within a few weeks if the appropriate intensive treatment (for example, dialysis) is provided, acute renal failure after surgery can be fatal. Thus, new strategies to protect the kidneys from ischemia are badly needed. Like other organs, the kidneys can be protected from lethal ischemia by pre-exposure to several short, nonlethal episodes of ischemia. It is not clear how this “ischemic preconditioning” increases renal resistance to ischemia, but some data suggest that the protection of tissues from ischemia might involve a signaling molecule called extracellular adenosine. This molecule binds to proteins called receptors on the surface of cells and sends signals into them that change their behavior. There are four different adenosine receptors—A1AR, A2AAR, A2BAR, and A3AR—and in this study, the researchers use ischemic preconditioning as an experimental strategy to investigate which of these receptors protects the kidneys from ischemia in mice, information that might provide clues about how to protect the kidneys from ischemia.
What Did the Researchers Do and Find?
The researchers first asked whether ischemic preconditioning protects the kidneys of mouse strains that lack the genes for individual adenosine receptors (A1AR−/−, A2AAR−/−, A2BAR−/−, and A3AR−/− mice) from subsequent ischemia. Using a hanging-weight system, they intermittently blocked the renal artery of these mice before exposing them to a longer period of renal ischemia. Twenty-four hours later, they assessed the renal function of the mice by measuring their blood creatinine levels, GFRs, and urine production. Ischemic preconditioning protected all the mice from ischemia-induced loss of kidney function except the A2BAR−/− mice. It also prevented ischemia-induced structural damage and inflammation in the kidneys of wild-type but not A2BAR−/− mice. These results suggest that A2BAR may help to protect the kidneys from ischemia. Consistent with this idea, ischemic preconditioning did not prevent ischemia-induced renal damage in wild-type mice treated with a compound that specifically blocks the activity of A2BAR. However, wild-type mice (but not A2BAR−/− mice) treated with an A2BAR agonist (which activates the receptor) retained their kidney function after renal ischemia without ischemic preconditioning. Finally, the researchers report that A2BAR has to be present on the blood vessels in the kidney to prevent ischemia-induced acute renal failure.
What Do These Findings Mean?
These findings suggest that the protection of the kidneys from ischemia and the renal resistance to ischemia that is provided by ischemic preconditioning involve adenosine signaling through A2BAR. They also suggest that adenosine might provide protection against ischemia-induced damage by blocking inflammation in the kidney, although other possible mechanisms of action need to be investigated. Importantly, these findings suggest that A2BAR might be a therapeutic target for the prevention of renal ischemia. However, results obtained in animals do not always reflect the situation in people, so before A2BAR agonists can be used to reduce the chances of patients developing acute renal failure after surgery, these results need confirming in people and the safety of A2BAR agonists needs to be thoroughly investigated.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050137.
The US National Institute of Diabetes and Digestive and Kidney Diseases provides information on how the kidneys work and what can go wrong with them, including a list of links to further information about kidney disease
The MedlinePlus encyclopedia has a page on acute kidney failure (in English and Spanish)
Wikipedia has pages on acute renal failure, ischemia, ischemic preconditioning, and adenosine (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.0050137
PMCID: PMC2504049  PMID: 18578565
3.  Computational reconstruction of tissue-specific metabolic models: application to human liver metabolism 
The first computational approach for the rapid generation of genome-scale tissue-specific models from a generic species model.
A genome-scale model of human liver metabolism, which is comprehensively tested and validated using cross-validation and the ability to carry out complex hepatic metabolic functions.
The model's flux predictions are shown to correlate with flux measurements across a variety of hormonal and dietary conditions, and are successfully used to predict biomarker changes in genetic metabolic disorders, both with higher accuracy than the generic human model.
The study of normal human metabolism and its alterations is central to the understanding and treatment of a variety of human diseases, including diabetes, metabolic syndrome, neurodegenerative disorders, and cancer. A promising systems biology approach for studying human metabolism is through the development and analysis of large-scale stoichiometric network models of human metabolism. The reconstruction of these network models has followed two main paths: the first is the reconstruction of generic (non-tissue-specific) models, characterizing the complete metabolic potential of human cells, based mostly on genomic data to trace enzyme-coding genes (Duarte et al, 2007; Ma et al, 2007); the second is the reconstruction of cell type- and tissue-specific models (Wiback and Palsson, 2002; Chatziioannou et al, 2003; Vo et al, 2004), based on a similar methodology but with the extra complexity of manually curating literature evidence for the cell- or system-specificity of metabolic enzymes and pathways.
Against this background, we present in this study what is, to the best of our knowledge, the first computational approach for the rapid generation of genome-scale tissue-specific models. The method relies on integrating the previously reconstructed generic human models with a variety of high-throughput molecular ‘omics’ data, including transcriptomic, proteomic, metabolomic, and phenotypic data, as well as literature-based knowledge characterizing the tissue at hand (Figure 1). Hence, it can readily be used to rapidly build and use a large array of human tissue-specific models. The resulting model satisfies stoichiometric, mass-balance, and thermodynamic constraints. It serves as a functional metabolic network that can then be used to explore the metabolic state of a tissue under various genetic and physiological conditions, simulating enzymatic inhibition or drug application through standard constraint-based modeling methods, without requiring additional context-specific molecular data.
We applied this approach to build a genome-scale model of liver metabolism, which we then comprehensively tested and validated. The model is shown to simulate complex hepatic metabolic functions, as well as the pathological alterations caused by urea cycle deficiencies. The liver model was applied to predict measured intracellular metabolic fluxes given measured metabolite uptake and secretion rates under different hepatic metabolic conditions. The predictions were tested against a comprehensive set of flux measurements performed by Chan et al (2003), showing that the liver model produced more accurate predictions than the original, generic human model (an overall prediction accuracy of 0.67 versus 0.46). Furthermore, it was applied to identify metabolic biomarkers for hepatic inborn errors of metabolism, once again outperforming the predictions generated by the generic human model (accuracy of 0.67 versus 0.59).
From a biotechnological standpoint, the liver model generated here can serve as a basis for future studies aiming to optimize the functioning of bioartificial liver devices. The application of the method to rapidly construct metabolic models of other human tissues can obviously lead to many other important clinical insights, e.g., concerning means for metabolic salvage of ischemic heart and brain tissues. Last but not least, the application of the new method is not limited to the realm of human modeling; it can be used to generate tissue models for any multi-tissue organism for which a generic model exists, such as Mus musculus (Quek and Nielsen, 2008; Sheikh et al, 2005) and the model plant Arabidopsis thaliana (Poolman et al, 2009).
The computational study of human metabolism has been advanced with the advent of the first generic (non-tissue specific) stoichiometric model of human metabolism. In this study, we present a new algorithm for rapid reconstruction of tissue-specific genome-scale models of human metabolism. The algorithm generates a tissue-specific model from the generic human model by integrating a variety of tissue-specific molecular data sources, including literature-based knowledge, transcriptomic, proteomic, metabolomic and phenotypic data. Applying the algorithm, we constructed the first genome-scale stoichiometric model of hepatic metabolism. The model is verified using standard cross-validation procedures, and through its ability to carry out hepatic metabolic functions. The model's flux predictions correlate with flux measurements across a variety of hormonal and dietary conditions, and improve upon the predictive performance obtained using the original, generic human model (prediction accuracy of 0.67 versus 0.46). Finally, the model better predicts biomarker changes in genetic metabolic disorders than the generic human model (accuracy of 0.67 versus 0.59). The approach presented can be used to construct other human tissue-specific models, and be applied to other organisms.
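The "standard constraint-based modeling methods" this summary refers to can be illustrated with a minimal flux-balance-style calculation. The sketch below is a generic illustration of that technique, not the authors' tissue-specific model-building algorithm; the three-reaction toy network, its bounds, and its objective are invented for the example.

```python
# Minimal sketch of constraint-based (flux-balance) analysis.
# The toy network R1: -> A, R2: A -> B, R3: B -> is invented for illustration
# and is not taken from the liver model described above.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows = metabolites, columns = reactions).
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])

# Flux bounds (lower, upper) per reaction, e.g. reflecting uptake limits.
bounds = [(0, 10), (0, 1000), (0, 1000)]

# Objective: maximize flux through R3 (linprog minimizes, so negate).
c = np.array([0, 0, -1])

# Steady-state mass balance: S @ v = 0.
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes:", res.x)   # expected: [10, 10, 10]
```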
doi:10.1038/msb.2010.56
PMCID: PMC2964116  PMID: 20823844
constraint based; hepatic; liver; metabolism
4.  THE FUNCTIONAL EFFECT OF EXPERIMENTAL INTRASPINAL INJECTIONS OF SERA WITH AND WITHOUT PRESERVATIVES 
The monkey (Macacus rhesus) usually tolerates readily the repeated intraspinal injection of large doses of 0.3 per cent. tricresol antimeningitis serum. The spontaneous respiration is generally not disturbed. Doses of 0.3 per cent. tricresol serum as large as 8 c.c. per kilo were injected intraspinally with subsequent recovery, even when the monkey had a partial pneumothorax. Dangerous alterations of the respiration and blood pressure in the monkey after 0.3 per cent. tricresol serum given by syringe are apparently largely due to increased intraspinal pressure, for the mere reduction of this pressure has sufficed to bring about a prompt and complete recovery. The medullary centers of the monkey (vagus, respiratory, and vasomotor) are highly resistant to the action of sera when injected intraspinally, strikingly more so than those of the dog. Occasionally the mere introduction of a hypodermic needle into the spinal dural sac of non-anesthetized, unoperated monkeys which have already received injections of 0.3 per cent. tricresol serum, may produce a severe collapse. A preceding partial asphyxia seems to be a necessary condition. Large quantities of sera are rapidly absorbed from the spinal dural sac of monkeys, and the clotting time of the blood is decreased. The spinal meninges of the monkey are resistant to infection; even primitive precautions during intraspinal injections apparently suffice to prevent infection. Dogs are much more sensitive to the intraspinal injection of 0.3 per cent. tricresol serum than monkeys; nevertheless they may tolerate as much as 6 c.c. per kilo provided that intratracheal insufflation is maintained for some time after each injection. The chief danger in dogs after intraspinal injections of 0.3 per cent. tricresol is a cessation of the respiration; for this reason artificial respiration is necessary. The blood pressure in the dog may be profoundly lowered by 0.3 per cent. tricresol serum, yet recovery is usually obtained if intratracheal insufflation is maintained. The effects of 0.3 per cent. tricresol serum upon the medullary centers are interpreted to be the result of either excitatory or inhibitory stimuli. No evidence was found that either the respiratory, vasomotor, or vagus center is paralyzed. The local application of 0.3 per cent. tricresol serum upon the exposed medulla of dogs does not produce the same effect upon the respiration and blood pressure as intraspinal injection of the same serum. A solution of 0.3 per cent. tricresol serum applied locally to the medulla of dogs occasionally produces a transient respiratory stoppage, without markedly affecting the blood pressure even when intratracheal insufflation is stopped. Increased intraspinal pressure was found to be an important factor in the production of respiratory and blood pressure changes in the dog after intraspinal injection of 0.3 per cent. tricresol serum. Both in the monkey and in the dog 0.3 per cent. chloroform serum, 0.3 per cent. ether serum, or plain horse serum produced in general a smaller effect upon the medullary centers than 0.3 per cent. tricresol serum. The ideal preservative for therapeutic sera would seem to be one which could be removed before injection. Ether in this respect is better than chloroform. The opsonins in antimeningitis serum are about equally affected by 0.3 per cent. tricresol, 0.3 per cent. chloroform, or 0.3 per cent. ether when tested after one week, one month, and three months.
When intraspinal injections are given in the human being it would seem advisable to be prepared to withdraw part of the injected fluid and to administer artificial respiration, if necessary. For a safe withdrawal of fluid the gravity method is the best; for artificial respiration Meltzer's apparatus for pharyngeal insufflation is recommended.
PMCID: PMC2125268  PMID: 19867851
5.  Usability of a Patient Education and Motivation Tool Using Heuristic Evaluation 
Background
Computer-mediated educational applications can provide a self-paced, interactive environment to deliver educational content to individuals about their health condition. These programs have been used to deliver health-related information about a variety of topics, including breast cancer screening, asthma management, and injury prevention. We have designed the Patient Education and Motivation Tool (PEMT), an interactive computer-based educational program based on behavioral, cognitive, and humanistic learning theories. The tool is designed to educate users and has three key components: screening, learning, and evaluation.
Objective
The objective of this tutorial is to illustrate a heuristic evaluation using a computer-based patient education program (PEMT) as a case study. The aims were to improve the usability of PEMT through heuristic evaluation of the interface; to report the results of these usability evaluations; to make changes based on the findings of the usability experts; and to describe the benefits and limitations of applying usability evaluations to PEMT.
Methods
PEMT was evaluated by three usability experts using Nielsen’s usability heuristics while reviewing the interface to produce a list of heuristic violations with severity ratings. The violations were sorted by heuristic and ordered from most to least severe within each heuristic.
Results
A total of 127 violations were identified, with a median severity of 3 (range 0 to 4, where 0 = no problem and 4 = catastrophic problem). Results showed 13 violations for visibility (median severity = 2), 38 violations for match between system and real world (median severity = 2), 6 violations for user control and freedom (median severity = 3), 34 violations for consistency and standards (median severity = 2), 11 violations for error prevention (median severity = 3), 1 violation for recognition rather than recall (median severity = 3), 7 violations for flexibility and efficiency (median severity = 2), 9 violations for aesthetic and minimalist design (median severity = 2), 4 violations for help users recognize, diagnose, and recover from errors (median severity = 3), and 4 violations for help and documentation (median severity = 4).
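The tallying step described in the Methods (grouping violations by heuristic, ordering them from most to least severe, and summarizing severity) can be sketched in a few lines. The records below are hypothetical placeholders, not findings from the PEMT evaluation.

```python
# Group heuristic violations, sort each group from most to least severe,
# and report the count and median severity per heuristic.
from collections import defaultdict
from statistics import median

violations = [  # (heuristic, severity 0-4, note) -- invented sample records
    ("Consistency and standards", 3, "button labels differ across screens"),
    ("Visibility of system status", 2, "no progress indicator on quiz load"),
    ("Consistency and standards", 1, "inconsistent font sizes"),
    ("Help and documentation", 4, "no help available from quiz screen"),
]

by_heuristic = defaultdict(list)
for heuristic, severity, note in violations:
    by_heuristic[heuristic].append((severity, note))

for heuristic, items in by_heuristic.items():
    items.sort(reverse=True)  # most to least severe within each heuristic
    print(f"{heuristic}: {len(items)} violation(s), "
          f"median severity {median(s for s, _ in items)}")
```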
Conclusion
We describe the heuristic evaluation method employed to assess the usability of PEMT, a method which uncovers heuristic violations in the interface design in a quick and efficient manner. Bringing together usability experts and health professionals to evaluate a computer-mediated patient education program can help to identify problems in a timely manner. This makes this method particularly well suited to the iterative design process when developing other computer-mediated health education programs. Heuristic evaluations provided a means to assess the user interface of PEMT.
doi:10.2196/jmir.1244
PMCID: PMC2802560  PMID: 19897458
Computers; health; education; usability; heuristic
6.  AB 74. Recurrence of pulmonary tuberculosis in a patient with undiagnosed kidney tuberculosis 
Journal of Thoracic Disease  2012;4(Suppl 1):AB74.
Background
Tuberculosis of the urinary system commonly complicates post-primary tuberculosis. In patients with active kidney tuberculosis, coincident lung disease often dominates the clinical picture, so the diagnosis of urinary disease is often missed. We present an interesting case of lung and kidney tuberculosis.
Patients and Methods
A 32-year-old female was referred to our clinic for investigation of possible kidney and lung tuberculosis. The patient reported two years of dysuric symptoms, with recurrent episodes of painless macroscopic haematuria. In her recent history, the patient reported two hospitalizations within a month due to fever and malaise, with laboratory findings of normochromic normocytic anemia, high ESR, microscopic hematuria, and aseptic pyuria with normal renal function. Chest CT revealed scattered infiltrates in the middle and upper lung fields, ground-glass opacities, and a tree-in-bud pattern. CT scan of the abdomen showed hydronephrosis and hypodense areas in the left and right kidney, findings we also confirmed by kidney ultrasonography. The patient presented to our clinic with low-grade fever and satisfactory respiratory function, while findings from the physical examination of the chest were insignificant. The Mantoux test showed an induration of 15 mm. Direct sputum examination by Ziehl-Neelsen staining and PCR for M. tuberculosis were negative. Ultimately, the diagnosis of lung and urinary tuberculosis was confirmed by positive Gen-Probe results of gastric fluid and urine specimens and subsequent positive cultures. Prompt initiation of antituberculous therapy was followed by the patient’s marked clinical improvement within a few days and subsequent negative urine examination by Ziehl-Neelsen staining.
Results
Due to suspected hematogenous spread, the patient underwent fundoscopy and a CT scan of the brain, which revealed no pathological findings. To address the hydronephrosis, a pig-tail catheter was inserted into the left kidney. The patient received anti-tuberculous therapy for a year. During the follow-up period she remained in excellent general condition, with negative sputum and urine cultures and improvement of the imaging findings in the lungs and kidneys.
Conclusions
The initial presentation of urinary tract tuberculosis may be vague and the false interpretation of symptoms can result in great harm to the patient.
doi:10.3978/j.issn.2072-1439.2012.s074
PMCID: PMC3537391
7.  Anæsthesia in Chest Surgery, with Special Reference to Controlled Respiration and Cyclopropane 
Problems in chest surgery: Cases with prolonged toxæmia or amyloid disease require an anæsthetic agent of low toxicity. When sputum or blood is present in the tracheobronchial tree, the anæsthesia should abolish reflex disturbances, and excessive sputum should be removed by suction. The technique should permit the use of a high oxygen atmosphere; controlled respiration with cyclopropane or ether fulfils these requirements. Open pneumothorax is present when a wound of the chest wall allows air to pass in and out of the pleural cavity. The lung on the affected side collapses and the mediastinum moves over and partly compresses the other lung.
The dangers of an open pneumothorax: (1) Paradoxical respiration—the lung on the affected side partially inflates on expiration and collapses on inspiration. Part of the air entering the good lung has been shuttled back from the lung on the affected side and is therefore vitiated. Full expansion of the sound lung is handicapped by the initial displacement of the mediastinum which increases on inspiration. The circulation becomes embarrassed.
(2) Vicious circle coughing. During a paroxysm of coughing dyspnœa will occur. This accentuates paradoxical respiration and starts a vicious circle. Death from asphyxia may result.
Special duties of the anæsthetist: (1) To carry out or supervise continuous circulatory resuscitation. During a thoracotomy a drip blood transfusion maintains normal blood-pressure and pulse-rate.
(2) To maintain efficient respiration.
Positive pressure anæsthesia: Risk of impacting secretions in smaller bronchi with subsequent atelectasis; eventual risk of CO2 poisoning without premonitory signs.
Controlled respiration: (1) How it is produced. (2) Its uses in chest surgery.
Controlled respiration means that the anæsthetist, having abolished the active respiratory efforts of the patient, maintains an efficient tidal exchange by rhythmic squeezing of the breathing bag. This may be done mechanically by Crafoord's modification of Frenkner's spiropulsator or by hand.
Active respiration will cease (i) if the patient's CO2 is lowered sufficiently by hyperventilation, (ii) if the patient's respiratory centre is depressed sufficiently by sedative and anæsthetic drugs, and (iii) by a combination of (i) and (ii) of less degree.
The author uses the second method, depressing the respiratory centre with omnopon-scopolamine, pentothal sodium, and then cyclopropane. The CO2 absorption method is essential for this technique, and this and controlled respiration should be mastered by the anæsthetist with a familiar agent and used at first only in uncomplicated cases.
The significance of cardiac arrhythmias occurring with cyclopropane is discussed.
The place of the other available anæsthetic agents is discussed, particularly the advisability of using local anæsthesia for the drainage of empyema or lung abscess.
Pharyngeal airway or endotracheal tube? Anæsthesia may be maintained with a pharyngeal airway in many cases but intubation must be used when tracheobronchial suction may be necessary and when there may be difficulty in maintaining an unobstructed airway.
A one-lung anæsthesia is ideal for pneumonectomy. This may be obtained by endotracheal anæsthesia after bronchial tamponage of the affected side (Crafoord, v. fig. 6b) or by an endobronchial intubation of the sound side (v. figs. 9b and 9c). Endobronchial placing of the breathing tube may be performed “blind”. Before deciding on blind bronchial intubation, the anæsthetist must examine X-ray films for any abnormality deviating the trachea or bronchi. Though the right bronchus may be easily intubated blindly as a rule, there is the risk of occluding the orifice of the upper lobe bronchus (fig. 9d) when the patient will become cyanosed. If the tube bevel is facing its orifice the risk of occlusion will be decreased (fig. 9c).
Greater accuracy in placing the tube can be effected by inserting it under direct vision. Instruments for performing this manœuvre are described.
In lobectomy for bronchiectasis the anæsthetist must try to prevent the spread of infection to other parts. Ideally, the bronchus of the affected lobe should be plugged with ribbon gauze (Crafoord, v. fig. 6c) or a suction catheter with a baby balloon on it placed in the affected bronchus. In the presence of a large bronchopleural fistula controlled respiration cannot be established during operation. As the surgeon is rarely able to plug the fistula, if pneumonectomy is to be performed intubation for a one-lung anæsthesia is the best method. During other procedures it is essential to maintain quiet respiration.
In war casualties it is almost always possible, with the technique described, to leave the lung on the affected side fully expanded and thus frequently to restore normal respiratory physiology. Co-operation between surgeon and anæsthetist is essential.
PMCID: PMC1998132  PMID: 19992357
8.  Intracellular energetic units in healthy and diseased hearts 
BACKGROUND:
The present review examines the role of intracellular compartmentation of energy metabolism in vivo.
OBJECTIVE:
To compare the kinetics of the activation of mitochondrial respiration in skinned cardiac fibres by exogenous and endogenous adenine nucleotides, in relation to the modulation of cellular structure and contraction.
METHODS:
Saponin-permeabilized cardiac fibres or cells were analyzed using oxygraphy and confocal microscopy.
RESULTS:
Mitochondrial respiration in fibres or cells was upregulated by cumulative additions of ADP to the medium with an apparent Km of 200 μM to 300 μM. When respiration was stimulated by endogenous ADP produced by intracellular ATPases, a near maximum respiration rate was achieved at an ADP concentration of less than 20 μM in the medium. A powerful ADP-consuming system, consisting of pyruvate kinase and phosphoenolpyruvate, that totally suppressed the activation of respiration by exogenous ADP, failed to abolish the stimulation of respiration by endogenous ADP, but did inhibit respiration after the cells were treated with trypsin. The addition of up to 4 μM of free Ca2+ to the actively respiring fibres resulted in reversible hypercontraction associated with a decreased apparent Km for exogenous ADP. These changes were fully abolished in fibres after the removal of myosin by KCl treatment.
CONCLUSIONS:
Mitochondria and ATPases, together with cytoskeletal proteins that establish the structural links between mitochondria and sarcomeres, form complexes – intracellular energetic units (ICEUs) – in cardiac cells. Within the ICEUs, the mitochondria and ATPases interact via specialized energy transfer systems, such as the creatine kinase- and adenylate kinase-phosphotransfer networks, and direct ATP channelling. Disintegration of the structure and function of ICEUs results in dyscompartmentation of adenine nucleotides and may represent a basis for cardiac diseases.
PMCID: PMC2716248  PMID: 19641684
ATPases; Compartmentation; Intracellular energetic units; Mitochondria; Regulation of respiration
9.  High Nitrate Concentrations in Vacuolate, Autotrophic Marine Beggiatoa spp 
Massive accumulations of very large Beggiatoa spp. are found at a Monterey Canyon cold seep and at Guaymas Basin hydrothermal vents. Both environments are characterized by high sediment concentrations of soluble sulfide and low levels of dissolved oxygen in surrounding waters. These filamentous, sulfur-oxidizing bacteria accumulate nitrate intracellularly at concentrations of 130 to 160 mM, 3,000- to 4,000-fold higher than ambient levels. Average filament widths range from 24 to 122 μm, and individual cells of all widths possess a central vacuole. These findings plus recent parallel discoveries for Thioploca spp. (H. Fossing, V. A. Gallardo, B. B. Jorgensen, M. Huttel, L. P. Nielsen, H. Schulz, D. E. Canfield, S. Forster, R. N. Glud, J. K. Gundersen, J. Kuver, N. B. Ramsing, A. Teske, B. Thamdrup, and O. Ulloa, Nature (London) 374:713-715, 1995) suggest that nitrate accumulation may be a universal property of vacuolate, filamentous sulfur bacteria. Ribulose bisphosphate carboxylase-oxygenase and 2-oxoglutarate dehydrogenase activities in the Beggiatoa sp. from Monterey Canyon suggest in situ autotrophic growth of these bacteria. Nitrate reductase activity is much higher in the Monterey Beggiatoa sp. than in narrow, laboratory-grown strains of Beggiatoa spp., and the activity is found primarily in the membrane fraction, suggesting that the vacuolate Beggiatoa sp. can reduce nitrate coupled to electron flow through an electron transport system. Nitrate-concentrating and respiration potentials of these chemolithoautotrophs suggest that the Beggiatoa spp. described here are an important link between the sulfur, nitrogen, and carbon cycles at the Monterey Canyon seeps and the Guaymas Basin hydrothermal vents where they are found.
PMCID: PMC1388807  PMID: 16535282
10.  Seasonal Variation in CO2 Efflux of Stems and Branches of Norway Spruce Trees 
Annals of Botany  2007;101(3):469-477.
Background and Aims
Stem and branch respiration, important components of total forest ecosystem respiration, were measured on Norway spruce (Picea abies) trees from May to October in four consecutive years in order (1) to evaluate the influence of temperature on woody tissue CO2 efflux with special focus on variation in Q10 (change in respiration rate resulting from a 10 °C increase in temperature) within and between seasons, and (2) to quantify the contribution of above-ground woody tissue (stem and branch) respiration to the carbon balance of the forest ecosystem.
Methods
Stem and branch CO2 efflux were measured, using an IRGA and a closed gas exchange system, 3–4 times per month on 22-year-old trees under natural conditions. Measurements of ecosystem CO2 fluxes were also determined during the whole experiment by using the eddy covariance system. Stem and branch temperatures were monitored at 10-min intervals during the whole experiment.
Key Results
The temperature of the woody tissue of stems and branches explained up to 68 % of their CO2 efflux. The mean annual Q10 values ranged from 2·20 to 2·32 for stems and from 2·03 to 2·25 for branches. The mean annual normalized respiration rate, R10, for stems and branches ranged from 1·71 to 2·12 µmol CO2 m−2 s−1 and from 0·24 to 0·31 µmol CO2 m−2 s−1, respectively. The annual contributions of stem and branch CO2 efflux to total ecosystem respiration were, respectively, 8·9 and 8·1 % in 1999, 9·2 and 9·2 % in 2000, 7·6 and 8·6 % in 2001, and 8·6 and 7·9 % in 2002. The standard deviation for both components ranged from 3 to 8 % of the mean.
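For readers unfamiliar with the Q10 and R10 parameters reported above, they fit together in the conventional exponential temperature-response form shown below. This is the standard convention relating the two quantities, not necessarily the exact regression model fitted in this study; R(T) denotes woody-tissue CO2 efflux at tissue temperature T in °C.

```latex
% Conventional Q10 form: R_{10} is the efflux normalized to 10 degrees C and
% Q_{10} is the factor by which efflux changes per 10 degree C warming.
R(T) = R_{10}\,Q_{10}^{(T-10)/10}
```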
Conclusions
Stem and branch CO2 efflux varied diurnally and seasonally, and were related to the temperature of the woody tissue and to growth. The proportion of CO2 efflux from stems and branches is a significant component of the total forest ecosystem respiration, approx. 8 % over the 4 years, and predictive models must take their contribution into account.
doi:10.1093/aob/mcm304
PMCID: PMC2701814  PMID: 18057065
Stem respiration; branch respiration; Picea abies; seasonal variation; temperature; Q10; R10
11.  Peptide Nucleic Acid-Mediated PCR Clamping as a Useful Supplement in the Determination of Microbial Diversity 
Peptide nucleic acid (PNA)-mediated PCR clamping (H. Ørum, P. E. Nielsen, M. Egholm, R. H. Berg, O. Buchardt, and C. Stanley, Nucleic Acids Res. 21:5332–5336, 1993) was introduced as a novel procedure to selectively amplify ribosomal DNAs (rDNAs) which are not frequently found in clone libraries generated by standard PCR from complex microbial consortia. Three different PNA molecules were used; two of these molecules (PNA-ALF and PNA-EUB353) overlapped with one of the amplification primers, whereas PNA-1114F hybridized to the middle of the amplified region. Thus, PCR clamping was achieved either by competitive binding between the PNA molecules and the forward or reverse primers (competitive clamping) or by hindering polymerase readthrough (elongation arrest). Gene libraries generated from mixed rDNA templates by using PCR clamping are enriched for clones that do not contain sequences homologous to the appropriate PNA oligomer. This effect of PCR clamping was exploited in the following two ways: (i) analysis of gene libraries generated by PCR clamping with PNA-ALF together with standard libraries reduced the number of clones which had to be analyzed to detect all of the different sequences present in an artificial rDNA mixture; and (ii) PCR clamping with PNA-EUB353 and PNA-1114F was used to selectively recover rDNA sequences which represented recently described phylogenetic groups (NKB19, TM6, cluster related to green nonsulfur bacteria) from an anaerobic, dechlorinating consortium described previously. We concluded that PCR clamping might be a useful supplement to standard PCR amplification in rDNA-based studies of microbial diversity and could be used to selectively recover members of undescribed phylogenetic clusters from complex microbial communities.
PMCID: PMC91862  PMID: 10653717
12.  Mechanism Targeted Discovery of Antitumor Marine Natural Products 
Current medicinal chemistry  2004;11(13):1725-1756.
Antitumor drug discovery programs aim to identify chemical entities for use in the treatment of cancer. Many strategies have been used to achieve this objective. Natural products have always played a major role in anticancer medicine and the unique metabolites produced by marine organisms have increasingly become major players in antitumor drug discovery. Rapid advances have occurred in the understanding of tumor biology and molecular medicine. New insights into mechanisms responsible for neoplastic disease are significantly changing the general philosophical approach towards cancer treatment. Recently identified molecular targets have created exciting new means for disrupting tumor-specific cell signaling, cell division, energy metabolism, gene expression, drug resistance, and blood supply. Such tumor-specific treatments could someday decrease our reliance on traditional cytotoxicity-based chemotherapy and provide new less toxic treatment options with significantly fewer side effects. Novel molecular targets and state-of-the-art molecular mechanism-based screening methods have revitalized antitumor research and these changes are becoming an ever-increasing component of modern antitumor marine natural products research. This review describes marine natural products identified using tumor-specific mechanism-based assays for regulators of angiogenesis, apoptosis, cell cycle, macromolecule synthesis, mitochondrial respiration, mitosis, multidrug efflux, and signal transduction. Special emphasis is placed on natural products directly discovered using molecular mechanism-based screening.
PMCID: PMC2908268  PMID: 15279579
molecular-target; anticancer; mechanism-based assays; screening methods
13.  THE INTRAVENOUS INJECTION OF MAGNESIUM SULPHATE FOR ANESTHESIA IN ANIMALS 
These experiments justify the following general conclusions. By the intravenous injection of M/4 magnesium sulphate into dogs at a certain rate, a stage can be reached where the abdominal walls are completely relaxed and when section of the abdomen and stimulation of sensitive parts of the parietal peritoneum do not produce pain or elicit any reaction of the animal. At the same time spontaneous respiration may still be maintained within normal limits and the lid reflex be fair or even normal. In this stage intratracheal intubation for artificial respiration can be easily accomplished. This stage may be attained in 12 to 14 minutes when the rate of injection is about 3 cc. per minute. When this stage is once attained the rate of injection should gradually be reduced, otherwise, sooner or later, spontaneous respiration will be abolished, and by a further maintenance of the rate of injection all the skeletal muscles may become paralyzed. When the injection of magnesium is continued for a longer period, the paralytic effects of the magnesium injection will set in, even when administered at a slow rate. The paralysis of the respiratory function is readily met by intrapharyngeal insufflation, which is easily executed even without training in this procedure, or by the method of intratracheal insufflation, if executed by one trained in its management. When the respiration of the animal is accomplished by insufflation, the paralytic effect of the magnesium may be abolished fairly rapidly by an intravenous injection of about 10 cc. of an M/8 calcium chloride solution; or it may disappear slowly, after the infusion of the magnesium solution is discontinued for some time. The latter mode of disappearance may be favorably accelerated by an intravenous infusion of 60 to 100 cc. of an M/4 solution of sodium sulphate. The production of anesthesia by intravenous injection of magnesium sulphate should not be undertaken unless an apparatus for intrapharyngeal insufflation is at hand, because in exceptional cases the disappearance of spontaneous respiration may be one of the earliest consequences of the magnesium injection. The injection of calcium chloride should not be employed in cases in which the subject shows cardiac insufficiency. In such instances, moreover, injections of magnesium should not be used for the purpose of anesthesia; at least not until greater experience has been acquired in the employment of this method.
PMCID: PMC2125448  PMID: 19868013
14.  Artificial Respiration and Artificial Circulation 
A training program in the newer methods of treatment of acute cardiopulmonary emergencies which was developed at the University Hospital, University of Saskatchewan, is reported. Artificial respiration by the chance rescuer, primary and secondary resuscitation, and post-resuscitation measures involving the use of special drugs and equipment by trained personnel are described. Figures and tables designed for wall-mounting and ready reference in an emergency situation are presented. First-aid ventilatory adjuncts for use by trained personnel are classified and critically appraised, and the propriety of their use is emphasized. A plea is made to the medical profession and allied agencies to assume the responsibility of spreading knowledge of the new techniques more widely. Unless effective treatment is instituted early enough to prevent death or permanent anoxic damage to heart and brain, follow-through therapy will often be fruitless.
PMCID: PMC1928740  PMID: 14339303
15.  Cellular mechanisms of mutations in Kv7.1: auditory functions in Jervell and Lange-Nielsen syndrome vs. Romano–Ward syndrome 
As a result of cell-specific functions of voltage-activated K+ channels, such as Kv7.1, mutations in this channel produce profound cardiac and auditory defects. At the same time, the massive diversity of K+ channels allows for compensatory substitution of mutant channels by other functional channels of their type to minimize defective phenotypes. Kv7.1 represents a clear example of such functional dichotomy. While several point mutations in the channel result in a cardio-auditory syndrome called Jervell and Lange-Nielsen syndrome (JLNS), about 100-fold more mutations result in long QT syndrome (LQTS) denoted as Romano–Ward syndrome (RWS), which has an intact auditory phenotype. To determine whether the cellular mechanisms for the diverse phenotypic outcome of Kv7.1 mutations are dependent on the tissue-specific function of the channel and/or specialized functions of the channel, we made a series of point mutations in hKv7.1 ascribed to JLNS and RWS. For JLNS mutations, all except W248F yielded non-functional channels when expressed alone. Although W248F at the end of the S4 domain yielded a functional current, it underwent marked inactivation at positive voltages, rendering the channel non-functional. We demonstrate that by definition, none of the JLNS mutants operated in a dominant negative (DN) fashion. Instead, the JLNS mutants have impaired membrane trafficking, becoming trapped in the endoplasmic reticulum (ER) and cis-Golgi. The RWS mutants exhibited varied functional phenotypes. However, they can be summed up as exhibiting DN effects. Phenotypic differences between JLNS and RWS may stem from tissue-specific functional requirements of cardiac vs. inner ear non-sensory cells.
doi:10.3389/fncel.2015.00032
PMCID: PMC4319400
genetic diseases; membrane trafficking; mutant; potassium channels; hearing loss
16.  Pseudospirochaetosis of the urinary bladder 
Journal of Clinical Pathology  2005;58(4):437-438.
This report describes an elderly patient with urinary symptoms who showed surface colonisation of the transitional mucosa of the bladder by an unusual haematoxophilic microorganism superficially resembling the “blue fuzz” seen in colonic biopsies showing intestinal spirochaetosis. Special stains showed that the organisms were Gram and Giemsa positive, weakly argyrophilic, and Ziehl-Neelsen negative. Immunostains were negative for Helicobacter pylori and electron microscopy revealed curious curved bodies, which were difficult to classify. Therefore, this condition was described as pseudospirochaetosis of the urinary bladder. The urinary symptoms regressed on treatment with ciprofloxacin. The clinicopathological relevance of these findings is discussed in the report.
doi:10.1136/jcp.2004.020529
PMCID: PMC1770639  PMID: 15790716
lower urinary tract symptoms; pseudospirochaetosis; spirochaetes; urinary tract infection
17.  THE STATUS OF RESPIRATION IN THE METHODS OF DIFFERENTIAL PRESSURE COMPARED WITH THAT UNDER THE METHOD OF INTRATRACHEAL INSUFFLATION 
The maintenance of life of an individual with an open double pneumothorax under differential pressure depends essentially upon the normal position of the lower lobes of the lungs and their close approximation to the diaphragm, especially of the posterior parts of the lobes. A complete dislodgment of both lower lobes leads invariably to the death of the individual, which may occur in a very short time, or after fifteen to twenty-five minutes. In all cases the respiration is affected first; it slows almost at once and stops invariably before the heart. The result is the same whether the vagi are intact or both nerves are cut. Exceptionally, respiration may continue even after the separation of the lungs from the diaphragm, but only by having all the lobes well approximated to the thoracic walls. When by dislodgment of the lower lobes the respiration is stopped and the heart is feeble and slow, or stopped completely, it is rarely possible to restore life by artificial respiration or by other appropriate means. The extent of the exchange of gases occurring in normal respiration, with closed thoracic cavity, exceeds greatly the need for the maintenance of life, since normal respiration is provided with an abundance of factors of safety. Under differential pressure, however, life is carried on with an exchange of gases which amounts to a small fraction only of the extent of the exchange that takes place in normal respiration; respiration under differential pressure is, therefore, deprived of all factors of safety and is incapable of resisting the dangers of exceptional incidents. Deaths occurring in connection with the differential pressure have their cause essentially in this unguarded state of the function of respiration. Under tracheal insufflation, the function of respiration is surrounded with effective safeguards, at least as much as is normal respiration. Dislodgment of the lungs has no detrimental effect. After complete collapse of the lungs, capillary adhesions within the alveoli and the small bronchi become an additional obstacle to the redistension and respiration. Differential pressure holds this obstacle in abeyance. When the lungs become collapsed during an open pneumothorax, it should be kept in mind that the force which is required for redistension is greater than that which is sufficient to keep the lungs continually distended. At the beginning of a redistension, therefore, a higher pressure should be employed for a short time.
PMCID: PMC2124845  PMID: 19867498
18.  A Methodology for Adaptable and Robust Ecosystem Services Assessment 
PLoS ONE  2014;9(3):e91001.
Ecosystem Services (ES) are an established conceptual framework for attributing value to the benefits that nature provides to humans. As the promise of robust ES-driven management is put to the test, shortcomings in our ability to accurately measure, map, and value ES have surfaced. On the research side, mainstream methods for ES assessment still fall short of addressing the complex, multi-scale biophysical and socioeconomic dynamics inherent in ES provision, flow, and use. On the practitioner side, application of methods remains onerous due to data and model parameterization requirements. Further, it is increasingly clear that the dominant “one model fits all” paradigm is often ill-suited to address the diversity of real-world management situations that exist across the broad spectrum of coupled human-natural systems. This article introduces an integrated ES modeling methodology, named ARIES (ARtificial Intelligence for Ecosystem Services), which aims to improve on these fronts. To improve conceptual detail and representation of ES dynamics, it adopts a uniform conceptualization of ES that gives equal emphasis to their production, flow, and use by society, while keeping model complexity low enough to enable rapid and inexpensive assessment in many contexts and for multiple services. To improve fit to diverse application contexts, the methodology is assisted by model integration technologies that allow assembly of customized models from a growing model base. By using computer learning and reasoning, model structure may be specialized for each application context without requiring costly expertise. In this article we discuss the founding principles of ARIES, both its innovative aspects for ES science and its role as an example of a new strategy to support more accurate decision making in diverse application contexts.
doi:10.1371/journal.pone.0091001
PMCID: PMC3953216  PMID: 24625496
19.  γ-MYN: a new algorithm for estimating Ka and Ks with consideration of variable substitution rates 
Biology Direct  2009;4:20.
Background
Over the past two decades, several approximate methods that adopt different mutation models have been proposed for estimating nonsynonymous and synonymous substitution rates (Ka and Ks) based on protein-coding sequences across species or even different evolutionary lineages. Among them, the MYN method (a Modified version of the Yang-Nielsen method) considers three major dynamic features of evolving DNA sequences (bias in the transition/transversion rate, nucleotide frequency, and unequal transitional substitutions) but leaves out another important feature: unequal substitution rates among different sites or nucleotide positions.
Results
We incorporated a new feature for analyzing evolving DNA sequences, unequal substitution rates among different sites, into the MYN method and proposed a modified version, namely γ (gamma)-MYN, based on the assumption that the evolutionary rate at each site follows a γ-distribution. We applied γ-MYN to analyze the key estimator of selective pressure ω (Ka/Ks) and other relevant parameters in comparison to two other related methods, YN and MYN, and found that neglecting the variation of substitution rates among different sites may lead to biased estimations of ω. Our new method appears to have minimal deviations when relevant parameters vary within normal ranges defined by empirical data.
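The rate-variation assumption named here (the evolutionary rate at each site following a γ-distribution, conventionally with mean 1) can be illustrated with a short simulation. The sketch below only draws relative site rates; it is not the γ-MYN estimator itself, and the shape parameter value is an arbitrary example, not one estimated in the paper.

```python
# Draw gamma-distributed relative site rates with mean 1
# (shape alpha, scale 1/alpha), as in standard among-site rate variation models.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5                      # smaller alpha => stronger among-site variation
n_sites = 10_000
site_rates = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n_sites)

print("mean relative rate (should be ~1):", site_rates.mean().round(3))
print("share of nearly invariant sites (rate < 0.1):",
      (site_rates < 0.1).mean().round(3))
```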
Conclusion
Our results indicate that unequal substitution rates among different sites have variable influences on ω under different evolutionary rates, while both the transition/transversion rate ratio and unequal nucleotide frequencies affect Ka and Ks and thus the selective pressure ω.
Reviewers
This paper was reviewed by Kateryna Makova, David A. Liberles (nominated by David H Ardell), Zhaolei Zhang (nominated by Mark Gerstein), and Shamil Sunyaev.
doi:10.1186/1745-6150-4-20
PMCID: PMC2702329  PMID: 19531225
20.  Vast Volatility Matrix Estimation using High Frequency Data for Portfolio Selection* 
Portfolio allocation with a gross-exposure constraint is an effective method to increase the efficiency and stability of portfolio selection among a vast pool of assets, as demonstrated in Fan et al. (2011). The required high-dimensional volatility matrix can be estimated using high-frequency financial data. This enables us to better adapt to the local volatilities and local correlations among a vast number of assets and to increase significantly the sample size for estimating the volatility matrix. This paper studies volatility matrix estimation using high-dimensional high-frequency data from the perspective of portfolio selection. Specifically, we propose the use of “pairwise-refresh time” and “all-refresh time” methods, based on the concept of “refresh time” proposed by Barndorff-Nielsen et al. (2008), for estimating vast covariance matrices, and compare their merits in portfolio selection. We establish concentration inequalities for the estimates, which guarantee desirable properties of the estimated volatility matrix in vast asset allocation with gross-exposure constraints. Extensive numerical studies are made via carefully designed simulations. Compared with methods based on low-frequency daily data, our methods can capture the most recent trend of the time-varying volatility and correlation, and hence provide more accurate guidance for the portfolio allocation in the next time period. The advantage of using high-frequency data is significant in our simulation and empirical studies, which consist of 50 simulated assets and the 30 constituent stocks of the Dow Jones Industrial Average index.
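As a rough illustration of the "refresh time" idea credited to Barndorff-Nielsen et al. (2008), the sketch below synchronizes asynchronous tick times at refresh points (the first instants by which every asset has a new observation) and forms a realized covariance matrix from the synchronized log returns. The tick data are invented, and the sketch omits the microstructure-noise corrections and the pairwise-refresh variant discussed in the paper.

```python
import numpy as np

def refresh_times(times):
    """times: list of sorted 1-D arrays of tick times, one per asset."""
    t = max(ts[0] for ts in times)          # first time all assets have traded
    out = [t]
    while True:
        nxt = []
        for ts in times:
            j = np.searchsorted(ts, t, side="right")
            if j == len(ts):                # an asset has no further ticks
                return np.array(out)
            nxt.append(ts[j])
        t = max(nxt)                        # next time all assets have refreshed
        out.append(t)

def realized_cov(times, logprices):
    """logprices[k][i] is the log price of asset k at times[k][i]."""
    rts = refresh_times(times)
    sampled = np.array([                     # latest price at or before each refresh time
        [lp[np.searchsorted(ts, t, side="right") - 1] for t in rts]
        for ts, lp in zip(times, logprices)
    ])                                       # shape: (n_assets, n_refresh_times)
    rets = np.diff(sampled, axis=1)          # synchronized log returns
    return rets @ rets.T                     # realized covariance matrix

# Tiny two-asset example with asynchronous, invented tick times.
t1 = np.array([0.0, 1.0, 2.5, 4.0]); p1 = np.log([100.0, 101.0, 100.5, 102.0])
t2 = np.array([0.5, 2.0, 3.0, 4.5]); p2 = np.log([50.0, 50.4, 50.2, 50.9])
print(realized_cov([t1, t2], [p1, p2]))
```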
doi:10.1080/01621459.2012.656041
PMCID: PMC3526073  PMID: 23264708
Volatility matrix estimation; high frequency data; concentration inequalities; portfolio allocation; risk assessment; refresh time
21.  Computing Ka and Ks with a consideration of unequal transitional substitutions 
Background
Approximate methods for estimating nonsynonymous and synonymous substitution rates (Ka and Ks) among protein-coding sequences have adopted different mutation (substitution) models. Several methods have been proposed in the past two decades, but they have not considered unequal transitional substitutions (between the two purines, A and G, or the two pyrimidines, T and C), which become apparent when the sequence data to be compared are vast and significantly diverged.
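The distinction drawn here, between the two transition classes (A↔G and T↔C) and transversions, is easy to make concrete. The short sketch below classifies substitutions between two aligned sequences; the sequences themselves are invented examples, not data from the paper.

```python
# Classify each mismatch as a purine transition (A<->G), a pyrimidine
# transition (T<->C), or a transversion, so the two transition classes
# can be modeled with different rates.
from collections import Counter

PURINES, PYRIMIDINES = {"A", "G"}, {"T", "C"}

def substitution_type(x, y):
    if x == y:
        return "identical"
    if {x, y} <= PURINES:
        return "transition_A<->G"
    if {x, y} <= PYRIMIDINES:
        return "transition_T<->C"
    return "transversion"

seq1 = "ATGGCGTACAAG"   # invented aligned sequences
seq2 = "ATAGCGCACGAG"
print(Counter(substitution_type(a, b) for a, b in zip(seq1, seq2)))
```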
Results
We propose a new method (MYN), a modified version of the Yang-Nielsen algorithm (YN), for evolutionary analysis of protein-coding sequences in general. MYN adopts the Tamura-Nei model, which considers the differences among rates of transitional and transversional substitutions, and factors in codon frequency bias. We evaluated the performance of MYN by comparing it to other methods, especially YN, and show that MYN has minimal deviations when parameters vary within normal ranges defined by empirical data.
Conclusion
Our comparative results, derived from consistency analysis, computer simulations, and authentic datasets, indicate that ignoring unequal transitional rates may lead to serious biases and that MYN performs well in most of the tested cases. These results also suggest that obtaining reliable synonymous and nonsynonymous substitution rates depends primarily on less biased estimates of the transition/transversion rate ratio.
doi:10.1186/1471-2148-6-44
PMCID: PMC1552089  PMID: 16740169
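To make the quantity at issue in the abstract above concrete, here is a minimal, hypothetical sketch (not the MYN algorithm) that simply tallies the two kinds of transitions, purine (A<->G) and pyrimidine (C<->T), separately from transversions in a pairwise alignment; it is the inequality between these two transition counts that the Tamura-Nei model, and hence MYN, is designed to accommodate.

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def classify_differences(seq1, seq2):
    """Count A<->G transitions, C<->T transitions, and transversions between two aligned sequences."""
    counts = {"transition_AG": 0, "transition_CT": 0, "transversion": 0}
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b or a not in "ACGT" or b not in "ACGT":
            continue  # skip identical, gapped, or ambiguous positions
        pair = {a, b}
        if pair == PURINES:
            counts["transition_AG"] += 1
        elif pair == PYRIMIDINES:
            counts["transition_CT"] += 1
        else:
            counts["transversion"] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical toy alignment
    s1 = "ATGACCGTA"
    s2 = "ACGGCTGCA"
    print(classify_differences(s1, s2))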
22.  Probe Microphone Measurements: 20 Years of Progress 
Trends in Amplification  2001;5(2):35-68.
Probe-microphone testing was conducted in the laboratory as early as the 1940s (e.g., the classic work of Wiener and Ross, reported in 1946); however, it was not until the late 1970s that a "dispenser friendly" system was available for testing hearing aids in the real ear. In this case, the term "dispenser friendly" is used somewhat loosely. The 1970s equipment that I'm referring to was first described in a paper presented by Earl Harford, Ph.D., in September of 1979 at the International Ear Clinics' Symposium in Minneapolis. At this meeting, Earl reported on his clinical experiences of testing hearing aids in the real ear using a miniature (by 1979 standards) Knowles microphone. The microphone was coupled to an interfacing impedance-matching system (developed by David Preves, Ph.D., who at the time worked at Starkey Laboratories), which could be used with existing hearing aid analyzer systems (see Harford, 1980, for a review of this early work). Unlike today's probe tube microphone systems, this early method of clinical real-ear measurement involved putting the entire microphone (about 4 mm by 5 mm by 2 mm) in the ear canal, down by the eardrum of the patient. If you think cerumen is a problem with probe-mic measurements today, you should have seen the condition of this microphone after a day's work!
While this early instrumentation was a bit cumbersome, we quickly learned the advantages that probe-microphone measures provided in the fitting of hearing aids. We frequently ran into calibration and equalization problems, not to mention a yelp or two from the patient, but the resulting information was worth the trouble.
Help soon arrived. In the early 1980s, the first computerized probe-tube microphone system, the Rastronics CCI-10 (developed in Denmark by Steen Rasmussen), entered the U.S. market (Nielsen and Rasmussen, 1984). This system had a silicone tube attached to the microphone (the transmission of sound through this tube was part of the calibration process), which eliminated the need to place the microphone itself in the ear canal. By early 1985, three or four different manufacturers had introduced this new type of computerized probe-microphone equipment, and this hearing aid verification procedure became part of the standard protocol for many audiology clinics. At this time, the POGO (Prescription Of Gain and Output) and Libby 1/3 prescriptive fitting methods were at the peak of their popularity, and a revised NAL (National Acoustic Laboratories) procedure was just being introduced. All three of these methods were based on functional gain, but insertion gain could easily be substituted, and therefore manufacturers included calculation of these prescriptive targets as part of the probe-microphone equipment software. Audiologists, frustrated with the tedious and unreliable functional gain procedure they had been using, soon developed a fascination with matching real-ear results to prescriptive targets on a computer monitor.
In some ways, not a lot has changed since those early days of probe-microphone measurements. Most people who use this equipment simply run a gain curve for a couple of inputs and see if it's close to the prescriptive target, something that could be accomplished using the equipment from 1985. Contrary to the predictions of many, probe-mic measures have not become the "standard hearing aid verification procedure" (Mueller and Strouse, 1995). There also has been little or no increase in the use of this equipment in recent years. In 1998, I reported on a survey conducted by The Hearing Journal regarding the use of probe-microphone measures (Mueller, 1998). We first looked at what percentage of people dispensing hearing aids own (or have immediate access to) probe-microphone equipment. Our results showed that 23% of hearing instrument specialists and 75% of audiologists have this equipment. Among audiologists, ownership varied among work settings: 91% for hospitals/clinics, 73% for audiologists working for physicians, and 69% for audiologists in private practice. More important, and a bit puzzling, was the finding that nearly one half of the people who fit hearing aids and have access to this equipment seldom or never use it.
I doubt that the use rate of probe-microphone equipment has changed much in the last three years, and if anything, I suspect it has gone down. Why do I say that? As programmable hearing aids have become the standard fitting in many clinics, it is tempting to become enamored with the simulated gain curves on the fitting screen, somehow believing that this is what is really happening in the real ear. Additionally, some dispensers have been told that you can't do reliable probe-mic testing with modern hearing aids; this, of course, is not true, and we'll address this issue in the Frequently Asked Questions portion of this paper.
The infrequent use of probe-mic testing among dispensers is discouraging, and let's hope that probe-mic equipment does not suffer the fate of the rowing machine stored in your garage. A lot has changed over the years with the equipment itself, and there are also expanded clinical applications and procedures. We have new manufacturers, procedures, acronyms, and noises. We have test procedures that allow us to accurately predict the output of a hearing aid in an infant's ear. We now have digital hearing aids, which give us the opportunity to conduct real-ear measures of the effects of digital noise reduction, speech enhancement, adaptive feedback, expansion, and all the other features. Directional microphone hearing aids have grown in popularity, and what better way to assess real-ear directivity than with probe-mic measures? The array of assistive listening devices has expanded, and so has the role of real-ear assessment of these products. And finally, with today's PC-based systems, we can program our hearing aids and simultaneously observe the resulting real-ear effects on the same fitting screen, or even conduct an automated target fitting using ear canal monitoring of the output. There have been a lot of changes, and we'll talk about all of them in this issue of Trends.
doi:10.1177/108471380100500202
PMCID: PMC4168927  PMID: 25425897
23.  Codon-based tests of positive selection, branch lengths, and the evolution of mammalian immune system genes 
Immunogenetics  2008;60(9):495-506.
Using basic probability theory, we show that there is a substantial likelihood that, even in the presence of strong purifying selection, there will be a number of codons in which the number of non-synonymous nucleotide substitutions per site (dN) exceeds the number of synonymous nucleotide substitutions per site (dS). In an empirical study, we examined the numbers of synonymous (bS) and non-synonymous substitutions (bN) along branches of the phylogenies of 69 single-copy orthologous genes from seven species of mammals. A pattern of bN>bS was most commonly seen in the shortest branches of the tree and was associated with a high coefficient of variation in both bN and bS, suggesting that high stochastic error in bN and bS on short branches, rather than positive Darwinian selection, is the explanation of most cases where bN is greater than bS on a given branch (a simple simulation illustrating this stochastic effect follows this record). The branch-site method of Zhang et al. (Zhang, Nielsen, Yang, Mol Biol Evol, 22:2472–2479, 2005) identified 117 codons on 35 branches as "positively selected," but a majority of these codons lacked synonymous substitutions, while in the others, synonymous and non-synonymous differences per site occurred at approximately equal frequencies. Thus, it was impossible to rule out the hypothesis that chance variation in the pattern of mutation across sites, rather than positive selection, accounted for the observed pattern. Our results showed that bN/bS was consistently elevated in immune system genes, but neither the search for branches with bN>bS nor the branch-site method revealed this trend.
doi:10.1007/s00251-008-0304-4
PMCID: PMC2837078  PMID: 18581108
Immune system evolution; Non-synonymous substitution; Positive Darwinian selection; Stochastic error; Synonymous substitution
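The stochastic-error argument in the abstract above can be illustrated with a small simulation. The sketch below is hypothetical (the site counts, branch length, and true dN/dS of 0.2 are arbitrary choices, and this is not the authors' calculation): it draws Poisson substitution counts on a short branch under purifying selection and reports how often the per-site estimates nonetheless come out as bN > bS purely by chance.

import math
import random

def poisson(rng, lam):
    """Simple Poisson sampler (Knuth's method; adequate for small lambda)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def simulate(n_replicates=100_000, n_sites=750, s_sites=250,
             true_ds=0.01, true_omega=0.2, seed=1):
    """Fraction of replicates where the per-site estimate bN exceeds bS by chance."""
    rng = random.Random(seed)
    true_dn = true_omega * true_ds
    exceed = 0
    for _ in range(n_replicates):
        n_subs = poisson(rng, n_sites * true_dn)  # non-synonymous substitutions on the branch
        s_subs = poisson(rng, s_sites * true_ds)  # synonymous substitutions on the branch
        if n_subs / n_sites > s_subs / s_sites:
            exceed += 1
    return exceed / n_replicates

if __name__ == "__main__":
    print("P(bN > bS by chance under purifying selection):", simulate())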
24.  Isolation with Migration Models for More Than Two Populations 
Molecular Biology and Evolution  2009;27(4):905-920.
A method for studying the divergence of multiple closely related populations is described and assessed. The approach of Hey and Nielsen (2007, Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics. Proc Natl Acad Sci USA. 104:2785–2790) for fitting an isolation-with-migration model was extended to the case of multiple populations with a known phylogeny. Analysis of simulated data sets reveals the kinds of history that are accessible with a multipopulation analysis. Necessarily, processes associated with older time periods in a phylogeny are more difficult to estimate, and histories with high levels of gene flow are particularly difficult to study with more than two populations. However, for histories with modest levels of gene flow, or for very large data sets, it is possible to study large, complex divergence problems that involve multiple closely related populations or species.
doi:10.1093/molbev/msp296
PMCID: PMC2877539  PMID: 19955477
divergence population genetics; coalescent; gene flow; speciation
25.  Phenotype, origin and estimated prevalence of a common long QT syndrome mutation: a clinical, genealogical and molecular genetics study including Swedish R518X/KCNQ1 families 
Background
The R518X/KCNQ1 mutation is a common cause of autosomal recessive (Jervell and Lange-Nielsen syndrome, JLNS) and autosomal dominant long QT syndrome (LQTS) worldwide. In Sweden, p.R518X accounts for the majority of JLNS cases and is the second most common cause of LQTS. Here we investigate the clinical phenotype and origin of Swedish carriers of the p.R518X mutation.
Methods
The study included 19 Swedish p.R518X index families, ascertained by molecular genetics methods (101 mutation carriers, of whom 15 were JLNS cases and 86 were LQTS cases). In all families, analyses included assessment of clinical data (symptoms, medications, and manually measured electrocardiograms), genealogy (census records), and haplotype (microsatellite markers), as well as estimation of mutation age and associated prevalence (using the ESTIAGE and DMLE computer software).
Results
The clinical phenotype ranged from expectedly severe in JLNS to surprisingly benign in LQTS (QTc 576 ± 61 ms vs. 462 ± 34 ms; cumulative incidence of (aborted) cardiac arrest 47% vs. 1%; annual non-medicated incidence rate of (aborted) cardiac arrest 4% vs. 0.04%).
A common northern origin was found for 1701/1929 ancestors born 1650-1950, with historical geographical clustering in the coastal area of the Pite River valley. A shared haplotype spanning the KCNQ1 gene was seen in 17/19 families. The mutation age was estimated at 28 generations (95% CI 19-41), and a high prevalence of Swedish p.R518X heterozygotes was suggested (~1:2000-4000) (a back-of-envelope sketch of age estimation from haplotype decay follows this record).
Conclusions
R518X/KCNQ1 occurs as a common founder mutation in Sweden and is associated with an unexpectedly benign phenotype in heterozygous carriers.
doi:10.1186/1471-2261-14-22
PMCID: PMC3942207  PMID: 24552659
Long QT Syndrome; Genotype-phenotype correlations; Clinical phenotype; Founder mutation; Mutation age; Prevalence estimate
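As a very rough illustration of how a founder mutation's age relates to haplotype sharing (this is a back-of-envelope moment estimate, not the ESTIAGE likelihood method used in the study, and all numbers are hypothetical): if a proportion p of carrier chromosomes still carry the ancestral allele at a marker lying at recombination fraction theta from the mutation, then p decays roughly as (1 - theta)^g over g generations, so g can be read off as ln(p) / ln(1 - theta).

import math

def generations_from_decay(p_ancestral, theta):
    """Moment estimate of mutation age (in generations) from single-marker haplotype decay."""
    return math.log(p_ancestral) / math.log(1.0 - theta)

if __name__ == "__main__":
    # Hypothetical example: 80% of carrier chromosomes share the ancestral allele
    # at a marker about 1 cM (theta ~ 0.01) from the mutation
    print(f"estimated age: {generations_from_decay(0.8, 0.01):.0f} generations")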
