1.  Exploration of transitional life events in individuals with Friedreich ataxia: Implications for genetic counseling 
Abstract
Background
Human development is a process of change, adaptation and growth. Throughout this process, transitional events mark important points in time when one's life course is significantly altered. This study captures transitional life events brought about or altered by Friedreich ataxia, a progressive chronic illness leading to disability, and the impact of these events on an affected individual's life course.
Methods
Forty-two adults with Friedreich ataxia (18-65y) were interviewed regarding their perceptions of transitional life events. Data from the interviews were coded and analyzed thematically using an iterative process.
Results
Identified transitions were either a direct outcome of Friedreich ataxia, or a developmental event altered by having the condition. Specifically, an awareness of symptoms, fear of falling and changes in mobility status were the most salient themes from the experience of living with Friedreich ataxia. Developmental events primarily influenced by the condition were one's relationships and life's work.
Conclusions
Friedreich ataxia increased the complexity and magnitude of transitional events for study participants. Transitional events commonly represented significant loss and presented challenges to self-esteem and identity. Findings from this study help alert professionals of potentially challenging times in patients' lives, which are influenced by chronic illness or disability. Implications for developmental counseling approaches are suggested for genetic counseling.
Background
Human development can be described in terms of key transitional events, or significant times of change. Transitional events initiate shifts in the meaning or direction of life and require the individual to develop skills or utilize coping strategies to adapt to a novel situation [1,2]. A successful transition has been defined as the development of a sense of mastery over the changed event [3].
Transitions can be influenced by a variety of factors including one's stage of development, such as graduation from high school, historical events, including war, and idiosyncratic factors, such as health status [4,5]. Of particular interest in the present study are transitional life events, brought about or altered by progressive chronic illness and disability, and the impact of these events on the lives of affected individuals.
It has been recognized that the clinical characteristics of a chronic illness or disability may alter the course and timing of many developmentally-related transitional events [6]. For example, conditions associated with a shortened lifespan may cause an individual to pursue a career with a shorter course of training [6]. Specific medical manifestations may also promote a lifestyle incongruent with developmental needs [6,7]. For example, an adolescent with a disability may have difficulty achieving autonomy because of his/her physical dependence on others.
In addition to the aforementioned effects of chronic illness and disability on developmentally-related transitional events, a growing body of literature has described disease-related transitional events: those changes that are a direct result of chronic illness and disability. Diagnosis has received attention as being a key disease-related transitional event [8,9]. Studies have also noted other disease transitions related to illness trajectory [10], as the clinical features of the disease may require the individual to make specific adaptations. Disease-related events have also been described in terms of accompanying psychological processes, such as one's awareness of differences brought about by illness [11].
While disease-related events are seemingly significant, the patient's perception of the events is varied. Some events may be perceived as positive experiences for the individual. For example, a diagnosis may end years of uncertainty. Some individuals may perceive these transitional events as insignificant, as they have accommodated to the continual change brought about by a chronic disease [12,13].
The aforementioned impact of disability and chronic illness on transitional events may create psychological stress. Developed by Lazarus and Folkman, the Transitional Model of Stress and Coping describes the process of adaptation to a health condition [14]. This model purports that individuals first appraise a stressor and then utilize a variety of coping strategies in order to meet the stressor's demands [14]. Thus, in the context of chronic illness, the ability of the individual to cope successfully with the stress of a health threat contributes to the process of overall adaptation to the condition.
The process of adaptation can be more complex when the chronic illness or disability is progressive. Each transition brought about or altered by the disability may also represent additional loss, including the loss of future plans, freedom in social life and the ability to participate in hobbies [15]. These losses may be accompanied by grief, uncertainty, and a continual need for adaptation [16,17].
Friedreich ataxia (FRDA) is one example of a progressive disorder, leading to adolescent and adult onset disability. To better understand patients' perceptions of key transitional events and the factors perceived to facilitate progression through these events, individuals with FRDA were interviewed.
FRDA is a rare, progressive, neurodegenerative disorder affecting approximately one in 30,000 people in the United States [18]. It equally affects both men and women. Individuals with FRDA experience progressive muscle weakness and loss of coordination in the arms and legs. For most patients, ataxia leads to motor incapacitation and full-time use of a wheelchair, commonly by the late teens or early twenties. Other complications such as vision and hearing impairment, dysarthria, scoliosis, diabetes mellitus and hypertrophic cardiomyopathy may occur [19,20]. Cardiomyopathy and respiratory difficulties often lead to premature death at an average age of 37 years [21]. Currently, there are no treatments or cures for FRDA. Little is known about the specific psychological or psychosocial effects of the condition.
FRDA is an autosomal recessive condition. The typical molecular basis of Friedreich ataxia is the expansion of a GAA trinucleotide repeat in both copies of the FXN gene [22]. Age of onset usually occurs in late childhood or early adolescence. However, the availability of genetic testing has identified affected individuals with an adult form of the condition. This late-onset form is thought to represent approximately 10-15% of the total FRDA population [23].
Health care providers of individuals with progressive, neurodegenerative disorders can help facilitate their patients' progression through transitional events. Data suggest that improvements should be made in the care of these individuals. Shaw et al. [24] found that individualized care that helps to prepare patients for transition is beneficial. Beisecker et al. [25] found that patients desire not only physical care from their providers, but also emotional and psychosocial support.
Genetic counselors have an important opportunity to help patients with neuromuscular disorders progress through transitional events, as several of these conditions have a genetic etiology. Genetic counselors in pediatric and adult settings often develop long-term relationships with patients, due to follow-up care. This extended relationship is becoming increasingly common as genetic counselors move into various medical sub-specialties, such as neurology, ophthalmology, oncology and cardiology.
The role of the genetic counselor in addressing the psychosocial needs of patients has been advocated, but rarely framed in the context of developmental events [26]. Data suggest that patients may not expect a genetic counselor to address psychosocial needs [27]. In a survey of genetic counseling patients, Wertz [28] found a majority of respondents understood genetic conditions to have a moderate to serious effect on family life and finances, while almost half perceived there to be an effect on the spouse, quality of life, and the relationship between home and work. However, these topics were reportedly not discussed within genetic counseling sessions [27,28]. Overall, there is limited information about the experiences of transitional life events in FRDA, as well as a lack of recommendations for genetic counselors and other health care providers to assist patients through these events.
Our study investigated the perceptions of patients with Friedreich ataxia in order to: 1) identify key transitional events and the specific needs associated with these events; 2) describe perceived factors that facilitate progression through the identified events; and 3) explore the actual or potential role of the health care provider in facilitating adaptation to the identified events. Data were used to make suggestions for developmental genetic counseling approaches in the context of ongoing care of clients with hereditary, progressive, neurodegenerative conditions.
doi:10.1186/1744-9081-6-65
PMCID: PMC2987979  PMID: 20979606
2.  Cost effectiveness analysis of different approaches of screening for familial hypercholesterolaemia 
BMJ: British Medical Journal 2002;324(7349):1303.
Objectives
To assess the cost effectiveness of strategies to screen for and treat familial hypercholesterolaemia.
Design
Cost effectiveness analysis. A care pathway for each patient was delineated and the associated probabilities, benefits, and costs were calculated.
Participants
Simulated population aged 16-54 years in England and Wales.
Interventions
Identification and treatment of patients with familial hypercholesterolaemia by universal screening, opportunistic screening in primary care, screening of people admitted to hospital with premature myocardial infarction, or tracing family members of affected patients.
Main outcome measure
Cost effectiveness calculated as cost per life year gained (extension of life expectancy resulting from intervention) including estimated costs of screening and treatment.
Results
Tracing of family members was the most cost effective strategy (£3097 (€5066, $4479) per life year gained) as 2.6 individuals need to be screened to identify one case at a cost of £133 per case detected. If the genetic mutation was known within the family then the cost per life year gained (£4914) was only slightly increased by genetic confirmation of the diagnosis. Universal population screening was least cost effective (£13 029 per life year gained) as 1365 individuals need to be screened at a cost of £9754 per case detected. For each strategy it was more cost effective to screen younger people and women. Targeted strategies were more expensive per person screened, but the cost per case detected was lower. Population screening of 16 year olds only was as cost effective as family tracing (£2777 with a clinical confirmation).
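The trade-off reported above, where targeted strategies cost more per person screened but less per case detected, follows directly from the reported figures. As an illustrative back-calculation (not part of the paper's model), the implied cost per person screened is the cost per case detected divided by the number of people screened per case:

```python
# Illustrative arithmetic only, back-calculated from the figures reported
# in the abstract above; these per-person costs are not quoted in the paper.

def implied_cost_per_person(cost_per_case_detected, people_screened_per_case):
    """Implied per-person screening cost = cost per case / number needed to screen."""
    return cost_per_case_detected / people_screened_per_case

# Family tracing: 2.6 people screened per case, at 133 pounds per case detected.
family_tracing = implied_cost_per_person(133, 2.6)
# Universal screening: 1365 people screened per case, at 9754 pounds per case detected.
universal = implied_cost_per_person(9754, 1365)

print(f"Family tracing: ~£{family_tracing:.0f} per person screened")
print(f"Universal screening: ~£{universal:.0f} per person screened")
```

On these figures, family tracing costs roughly £51 per person screened versus roughly £7 for universal screening, yet its cost per case detected is about 70 times lower.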
Conclusions
Screening family members of people with familial hypercholesterolaemia is the most cost effective option for detecting cases across the whole population.
What is already known on this topic
In the United Kingdom there are an estimated 110 000 men and women with familial hypercholesterolaemia, only a small percentage of whom have been identified to date
Without identification and treatment, over half of these people will have a fatal or non-fatal coronary heart disease event by the age of 50 (men) or 60 (women)
Effective treatment of high cholesterol concentrations reduces total and coronary heart disease mortality
No recommended screening strategy currently exists in the United Kingdom for familial hypercholesterolaemia
What this study adds
Computer modelling has shown that the earlier familial hypercholesterolaemia is diagnosed, the more cost effective the screening strategy becomes
Identifying relatives of people with familial hypercholesterolaemia is the most cost effective screening option for all age groups
As technology improves and the cost of statins falls, all strategies will become more cost effective
PMCID: PMC113765  PMID: 12039822
3.  Computer support for recording and interpreting family histories of breast and ovarian cancer in primary care (RAGs): qualitative evaluation with simulated patients 
BMJ: British Medical Journal 1999;319(7201):32-36.
Objectives
To explore general practitioners’ attitudes towards and use of a computer program for assessing genetic risk of cancer in primary care.
Design
Qualitative analysis of semistructured interviews and video recordings of simulated consultations.
Participants
Purposive sample of 15 general practitioners covering a range of computer literacy, interest in genetics, age, and sex.
Interventions
Each doctor used the program in two consultations in which an actor played a woman concerned about her family history of cancer. Consultations were videotaped and followed by interviews with the video as a prompt to questioning.
Main outcome measures
Use of computer program in the consultation.
Results
The program was viewed as an appropriate application of information technology because of the complexity of cancer genetics and a sense of “guideline chaos” in primary care. Doctors found the program easy to use, but it often affected their control of the consultation. They needed to balance their desire to share the computer screen with the patient, driven by their concerns about the effect of the computer on doctor-patient communication, against the risk of premature disclosure of bad news.
Conclusions
This computer program could provide the necessary support to assist assessment of genetic risk of cancer in primary care. The potential impact of computer software on the consultation should not be underestimated. This study highlights the need for careful evaluation when developing medical information systems.
Key messages
General practitioners are under increasing pressure to advise their patients about genetic predisposition to various diseases
Computers could help doctors to give genetic advice by simplifying the construction and assessment of family trees and implementing referral guidelines
This qualitative evaluation explored the context in which a computer program for assessing genetic risk of cancer would be used in general practice, and issues surrounding its integration into a consultation
Most of the doctors found the program easy to use, but it affected their control of the consultation because of their desire to share the computer screen with the patient and their inability to anticipate the information that would be displayed
The study identified important issues relating to the use of computers in consultations which may be of use in testing software for primary care in the future
PMCID: PMC28153  PMID: 10390458
4.  Knowledge, attitudes and preferences regarding genetic testing for smoking cessation. A cross-sectional survey among Dutch smokers 
BMJ Open  2012;2(1):e000321.
Objectives
Recent research strongly suggests that genetic variation influences smokers' ability to stop. Therefore, the use of (pharmaco)genetic testing may increase cessation rates. This study aims to assess smokers' intention to undergo genetic testing for smoking cessation and their knowledge, attitudes and preferences regarding this subject.
Design
Online cross-sectional survey.
Setting
Database of an internet research company, of which every inhabitant of the Netherlands aged ≥12 years with an email address and able to understand Dutch can become a member.
Participants
587 of 711 Dutch smokers aged ≥18 years who had smoked daily for ≥5 years and smoked on average ≥10 cigarettes/day (response rate 83%).
Primary and secondary outcome measures
Smokers' knowledge, attitudes and preferences and their intention to undergo genetic testing for smoking cessation.
Results
Knowledge on the influence of genetic factors in smoking addiction and cessation was found to be low. Smokers underestimated their chances of having a genetic predisposition and the influence of this on smoking cessation. Participants perceived few disadvantages, some advantages and showed moderate self-efficacy towards undergoing a genetic test and dealing with the results. Smokers were mildly interested in receiving information and participating in genetic testing, especially when offered by their general practitioner (GP).
Conclusions
For successful implementation of genetic testing for smoking in general practice, several issues should be addressed, such as the knowledge on smoking cessation, genetics and genetic testing (including advantages and disadvantages) and the influence of genetics on smoking addiction and cessation. Furthermore, smokers allocate their GPs a crucial role in the provision of information and the delivery of a genetic test for smoking; however, it is unclear whether GPs will be able and willing to take on this role.
Article summary
Article focus
Intention of smokers to undergo genetic testing for smoking cessation.
Smokers' knowledge, attitudes and preferences regarding genetic testing for smoking.
To aid decisions on the most appropriate strategies for counselling patients and communicating their test results with regard to a genetic test for smoking.
Key messages
Smokers are mildly interested in receiving more information and participating in genetic testing for smoking cessation, especially when offered by their general practitioner.
Knowledge on smoking cessation, genetics and genetic testing (including advantages and disadvantages) and the influence of genetics on smoking cessation is low.
Strengths and limitations of this study
This study provides valuable information on the needs and attitudes of smokers regarding genetic testing for smoking cessation, which can aid decisions for future implementation.
Under-representation of smokers intending to stop smoking might have led to an underestimation of the proportion of smokers interested in genetic testing.
Low knowledge level on genetic testing for smoking cessation and genetics in general might have influenced participants' ability to answer the questions.
Interest in undergoing genetic testing may reflect a generally positive attitude towards genetic testing rather than actual uptake.
Selection bias might have occurred due to the non-representative nature of the internet population and the self-selection of participants (volunteer effect); however, this is unlikely given the high response rate (83%).
doi:10.1136/bmjopen-2011-000321
PMCID: PMC3253420  PMID: 22223839
Nicotine dependence; smoking cessation; (pharmaco) genetic testing; knowledge; attitudes; preferences; pharmacogenetics; smoking
5.  Cancer Screening with Digital Mammography for Women at Average Risk for Breast Cancer, Magnetic Resonance Imaging (MRI) for Women at High Risk 
Executive Summary
Objective
The purpose of this review is to determine the effectiveness of 2 separate modalities, digital mammography (DM) and magnetic resonance imaging (MRI), relative to film mammography (FM), in the screening of women asymptomatic for breast cancer. A third analysis assesses the effectiveness and safety of the combination of MRI plus mammography (MRI plus FM) in screening of women at high risk. An economic analysis was also conducted.
Research Questions
How do the sensitivity and specificity of DM compare with those of FM?
How do the sensitivity and specificity of MRI compare with those of FM?
How do the recall rates compare among these screening modalities, and what effect might this have on radiation exposure? What are the risks associated with radiation exposure?
How do the sensitivity and specificity of the combination of MRI plus FM compare with those of either MRI or FM alone?
What are the economic considerations?
Clinical Need
The effectiveness of FM with respect to breast cancer mortality in the screening of asymptomatic average-risk women over the age of 50 has been established. However, based on a Medical Advisory Secretariat review completed in March 2006, screening is not recommended for women between the ages of 40 and 49 years. Guidelines published by the Canadian Task Force on Preventive Care recommend mammography screening every 1 to 2 years for women aged 50 years and over; hence the inclusion of such women in organized breast cancer screening programs. In addition to the uncertainty of the effectiveness of mammography screening from the age of 40 years, there is concern over the risks associated with mammographic screening for the 10 years between the ages of 40 and 49 years.
The lack of effectiveness of mammography screening starting at the age of 40 years (with respect to breast cancer mortality) is based on the assumption that the ability to detect cancer decreases with increased breast tissue density. As breast density is highest in the premenopausal years (approximately 23% of postmenopausal and 53% of premenopausal women having at least 50% of the breast occupied by high density), mammography screening is not promoted in Canada nor in many other countries for women under the age of 50 at average risk for breast cancer. It is important to note, however, that screening of premenopausal women (i.e., younger than 50 years of age) at high risk for breast cancer by virtue of a family history of cancer or a known genetic predisposition (e.g., having tested positive for the breast cancer genes BRCA1 and/or BRCA2) is appropriate. Thus, this review will assess the effectiveness of breast cancer screening with modalities other than film mammography, specifically DM and MRI, for both pre/perimenopausal and postmenopausal age groups.
International estimates of the epidemiology of breast cancer show that the incidence of breast cancer is increasing for all ages combined whereas mortality is decreasing, though at a slower rate. The observed decreases in mortality rates may be attributable to screening, in addition to advances in breast cancer therapy over time. Decreases in mortality attributable to screening may be a result of the earlier detection and treatment of invasive cancers, in addition to the increased detection of ductal carcinoma in situ (DCIS), of which certain subpathologies are less lethal. Evidence from the Surveillance, Epidemiology and End Results (better known as SEER) cancer registry in the United States, indicates that the age-adjusted incidence of DCIS has increased almost 10-fold over a 20 year period, from 2.7 to 25 per 100,000.
There is a 4-fold lower incidence of breast cancer in the 40 to 49 year age group than in the 50 to 69 year age group (approximately 140 per 100,000 versus 500 per 100,000 women, respectively). The sensitivity of FM is also lower among younger women (approximately 75%) than for women aged over 50 years (approximately 85%). Specificity is approximately 80% for younger women versus 90% for women over 50 years. The increased density of breast tissue in younger women is likely responsible for the decreased accuracy of FM.
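The figures above, where lower prevalence combines with lower sensitivity and specificity, determine how informative a positive screen is. A minimal sketch of the positive predictive value (PPV) calculation, treating the quoted annual incidence as the per-round prevalence (a simplifying assumption; the review itself does not report PPVs):

```python
# Illustrative Bayes'-rule calculation using the approximate figures
# quoted above for film mammography; not a result from the review.

def ppv(prevalence, sensitivity, specificity):
    """P(cancer | positive screen): true positives over all positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Women aged 40-49: ~140 cases per 100,000, sensitivity ~75%, specificity ~80%.
younger = ppv(140 / 100_000, 0.75, 0.80)
# Women aged 50-69: ~500 cases per 100,000, sensitivity ~85%, specificity ~90%.
older = ppv(500 / 100_000, 0.85, 0.90)

print(f"PPV, ages 40-49: {younger:.1%}")  # well under 1%
print(f"PPV, ages 50-69: {older:.1%}")
```

Because the lower prevalence and lower specificity compound, a positive result in the younger group is several times less likely to indicate cancer, which is one way to see why screening yield differs so sharply between the two age groups.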
Treatment options for breast cancer vary with the stage of disease (based on tumor size, involvement of surrounding tissue, and number of affected axillary lymph nodes) and its pathology, and may include a combination of surgery, chemotherapy and/or radiotherapy. Surgery is the first-line intervention for biopsy-confirmed tumors. The subsequent use of radiation, chemotherapy or hormonal treatments is dependent on the histopathologic characteristics of the tumor and the type of surgery. There is controversy regarding the optimal treatment of DCIS, which is considered a noninvasive tumour.
Women at high risk for breast cancer are defined as genetic carriers of the more commonly known breast cancer genes (BRCA1, BRCA2, TP53), first degree relatives of carriers, women with varying degrees of high risk family histories, and/or women with greater than 20% lifetime risk for breast cancer based on existing risk models. Genetic carriers for this disease, primarily women with BRCA1 or BRCA2 mutations, have a lifetime probability of approximately 85% of developing breast cancer. Preventive options for these women include surgical interventions such as prophylactic mastectomy and/or oophorectomy, i.e., removal of the breasts and/or ovaries. Therefore, it is important to evaluate the benefits and risks of different screening modalities, to identify additional options for these women.
This Medical Advisory Secretariat review is the second of 2 parts on breast cancer screening, and concentrates on the evaluation of both DM and MRI relative to FM, the standard of care. Part I of this review (March 2006) addressed the effectiveness of screening mammography in 40 to 49 year old average-risk women. The overall objective of the present review is to determine the optimal screening modality based on the evidence.
Evidence Review Strategy
The Medical Advisory Secretariat followed its standard procedures and searched the following electronic databases: Ovid MEDLINE, EMBASE, Ovid MEDLINE In-Process & Other Non-Indexed Citations, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews and The International Network of Agencies for Health Technology Assessment database. The subject headings and keywords searched included breast cancer, breast neoplasms, mass screening, digital mammography, magnetic resonance imaging. The detailed search strategies can be viewed in Appendix 1.
This review includes articles specific to screening; it does not include evidence on diagnostic mammography. The search was further restricted to English-language articles published between January 1996 and April 2006. Excluded were case reports, comments, editorials, nonsystematic reviews, and letters.
Digital Mammography: In total, 224 articles specific to DM screening were identified. These were examined against the inclusion/exclusion criteria described below, resulting in the selection and review of 5 health technology assessments (HTAs) (plus 1 update) and 4 articles specific to screening with DM.
Magnetic Resonance Imaging: In total, 193 articles specific to MRI were identified. These were examined against the inclusion/exclusion criteria described below, resulting in the selection and review of 2 HTAs and 7 articles specific to screening with MRI.
The evaluation of the addition of FM to MRI in the screening of women at high risk for breast cancer was also conducted within the context of the standard search procedures of the Medical Advisory Secretariat, as outlined above. The subject headings and keywords searched included the concepts of breast cancer, magnetic resonance imaging, mass screening, and high risk/predisposition to breast cancer. The search was further restricted to English-language articles published between September 2007 and January 15, 2010. Case reports, comments, editorials, nonsystematic reviews, and letters were not excluded.
MRI plus mammography: In total, 243 articles specific to MRI plus FM screening were identified. These were examined against the inclusion/exclusion criteria described below, resulting in the selection and review of 2 previous HTAs, and 1 systematic review of 11 paired design studies.
Inclusion Criteria
English-language articles, and English or French-language HTAs published from January 1996 to April 2006, inclusive.
Articles specific to screening of women with no personal history of breast cancer.
Studies in which DM or MRI were compared with FM, and where the specific outcomes of interest were reported.
Randomized controlled trials (RCTs) or paired studies only for assessment of DM.
Prospective, paired studies only for assessment of MRI.
Exclusion Criteria
Studies in which outcomes were not specific to those of interest in this report.
Studies in which women had been previously diagnosed with breast cancer.
Studies in which the intervention (DM or MRI) was not compared with FM.
Studies assessing DM with a sample size of less than 500.
Intervention
Digital mammography.
Magnetic resonance imaging.
Comparator
Screening with film mammography.
Outcomes of Interest
Breast cancer mortality (although no studies were found with such long follow-up).
Sensitivity.
Specificity.
Recall rates.
Summary of Findings
Digital Mammography
There is moderate quality evidence that DM is significantly more sensitive than FM in the screening of asymptomatic women aged less than 50 years, those who are premenopausal or perimenopausal, and those with heterogeneously or extremely dense breast tissue (regardless of age).
It is not known what effect these differences in sensitivity will have on the more important effectiveness outcome measure of breast cancer mortality, as there was no evidence of such an assessment.
Other factors have been cited in favour of DM, for example, recall rates and reading and examination times. Our analysis did not show that recall rates were necessarily improved with DM, though examination times were lower than for FM. Other factors, including the storage and retrieval of screens, were not the subject of this analysis.
Magnetic Resonance Imaging
There is moderate quality evidence that the sensitivity of MRI is significantly higher than that of FM in the screening of women at high risk for breast cancer based on genetic or familial factors, regardless of age.
Radiation Risk Review
Cancer Care Ontario conducted a review of the evidence on radiation risk in mammography screening of women at high risk for breast cancer. From this review of recent literature and a risk assessment that considered the potential impact of screening mammography in cohorts of women who start screening at an earlier age or who are at increased risk of developing breast cancer due to genetic susceptibility, the following conclusions can be drawn:
For women over 50 years of age, the benefits of mammography greatly outweigh the risk of radiation-induced breast cancer irrespective of the level of a woman’s inherent breast cancer risk.
Annual mammography for women aged 30-39 years who carry a breast cancer susceptibility gene or who have a strong family breast cancer history (defined as a first degree relative diagnosed in their thirties) has a favourable benefit:risk ratio. Mammography is estimated to detect 16 to 18 breast cancer cases for every one induced by radiation (Table 1). Initiation of screening at age 35 for this same group would increase the benefit:risk ratio to an even more favourable level of 34-50 cases detected for each one potentially induced.
Mammography for women under 30 years of age has an unfavourable benefit:risk ratio due to the challenges of detecting cancer in younger breasts, the aggressiveness of cancers at this age, the potential for radiation susceptibility at younger ages and a greater cumulative radiation exposure.
Mammography used in combination with MRI for women who carry a strong breast cancer susceptibility (e.g., BRCA1/2 carriers), if begun at age 35 and continued for 35 years, may confer a greatly improved benefit:risk ratio, estimated at about 220 to one.
While there is considerable uncertainty in the risk of radiation-induced breast cancer, the risk expressed in published studies is almost certainly conservative as the radiation dose absorbed by women receiving mammography recently has been substantially reduced by newer technology.
A CCO update of the mammography radiation risk literature for 2008 and 2009 identified one article, by Barrington de Gonzales et al., published in 2009 (JNCI, vol. 101: 205-209). This article focuses on estimating the risk of radiation-induced breast cancer from mammographic screening of young women at high risk for breast cancer (with BRCA gene mutations). Assuming a mortality reduction from mammography of 15% to 25% or less in these high-risk women, the authors conclude that such a reduction is not substantially greater than the risk of radiation-induced breast cancer mortality when screening before the age of 34 years. That is, there would be no net benefit from annual mammographic screening of BRCA mutation carriers at ages 25-29 years; the net benefit would be zero or small with screening at ages 30-34 years; and there would be some net benefit from age 35 years onward.
The Addition of Mammography to Magnetic Resonance Imaging
The effects of adding film mammography (FM) to MRI screening of high risk women were also assessed, with inclusion and exclusion criteria as follows:
Inclusion Criteria
English-language articles and English or French-language HTAs published from September 2007 to January 15, 2010.
Articles specific to screening of women at high risk for breast cancer, regardless of the definition of high risk.
Studies in which accuracy data for the combination of MRI plus FM were available for comparison with MRI alone and FM alone.
RCTs or prospective, paired studies only.
Studies in which women were previously diagnosed with breast cancer were also included.
Exclusion Criteria
Studies in which outcomes were not specific to those of interest in this report.
Studies in which there was insufficient data on the accuracy of MRI plus FM.
Intervention
Both MRI and FM.
Comparators
Screening with MRI alone and FM alone.
Outcomes of Interest
Sensitivity.
Specificity.
Summary of Findings
Magnetic Resonance Imaging Plus Mammography
Moderate GRADE Level Evidence that the sensitivity of MRI plus mammography is significantly higher than that of MRI or FM alone, although the specificity remains either unchanged or decreases in the screening of women at high risk for breast cancer based on genetic/familial factors, regardless of age.
These studies include women at high risk defined as BRCA1/2 or TP53 carriers, first degree relatives of carriers, women with varying degrees of high risk family histories, and/or >20% lifetime risk based on existing risk models. This definition of high risk accounts for approximately 2% of the female adult population in Ontario.
PMCID: PMC3377503  PMID: 23074406
6.  Understanding the impact of genetic testing for inherited retinal dystrophy 
European Journal of Human Genetics  2013;21(11):1209-1213.
The capability of genetic technologies is expanding rapidly in the field of inherited eye disease. New genetic testing approaches will deliver a step change in the ability to diagnose and extend the possibility of targeted treatments. However, evidence is lacking about the benefits of genetic testing to support service planning. Here, we report qualitative data about retinal dystrophy families' experiences of genetic testing in the United Kingdom. The data were part of a wider study examining genetic eye service provision. Twenty interviewees from families in which a causative mutation had been identified by a genetic eye clinic were recruited to the study. Fourteen interviewees had chosen to have a genetic test and five had not; one was uncertain. In-depth telephone interviews were conducted, allowing a thorough exploration of interviewees' views and experiences of the benefits of genetic counselling and testing. Transcripts were analysed using thematic analysis. Both affected and unaffected interviewees expressed mainly positive views about genetic testing, highlighting benefits such as diagnostic confirmation, risk information, and better preparation for the future. Negative consequences included the burden of knowledge, moral dilemmas around reproduction, and potential impact on insurance. The offer of genetic testing was often taken up, but was felt unnecessary in some cases. Interviewees in the study reported many benefits, suggesting genetic testing should be available to this patient group. The benefits and risks identified will inform future evaluation of models of service delivery.
doi:10.1038/ejhg.2013.19
PMCID: PMC3798830  PMID: 23403902
retinal dystrophy; genetic testing; service delivery; qualitative interviews
7.  Clinical Utility of Serologic Testing for Celiac Disease in Ontario 
Executive Summary
Objective of Analysis
The objective of this evidence-based evaluation is to assess the accuracy of serologic tests in the diagnosis of celiac disease in subjects with symptoms consistent with this disease. Furthermore, the impact of these tests on the diagnostic pathway and on decision making was also evaluated.
Celiac Disease
Celiac disease is an autoimmune disease that develops in genetically predisposed individuals. The immunological response is triggered by ingestion of gluten, a protein that is present in wheat, rye, and barley. The treatment consists of strict lifelong adherence to a gluten-free diet (GFD).
Patients with celiac disease may present with a myriad of symptoms such as diarrhea, abdominal pain, weight loss, iron deficiency anemia, dermatitis herpetiformis, among others.
Serologic Testing in the Diagnosis of Celiac Disease
There are a number of serologic tests used in the diagnosis of celiac disease.
Anti-gliadin antibody (AGA)
Anti-endomysial antibody (EMA)
Anti-tissue transglutaminase antibody (tTG)
Anti-deamidated gliadin peptides antibodies (DGP)
Serologic tests are automated, with the exception of the EMA test, which is more time-consuming and operator-dependent than the other tests. For each serologic test, either immunoglobulin A (IgA) or immunoglobulin G (IgG) antibodies can be measured; however, IgA is the standard antibody measured in celiac disease.
Diagnosis of Celiac Disease
According to celiac disease guidelines, the diagnosis of celiac disease is established by small bowel biopsy. Serologic tests are used to initially detect and to support the diagnosis of celiac disease. A small bowel biopsy is indicated in individuals with a positive serologic test. In some cases an endoscopy and small bowel biopsy may be required even with a negative serologic test. The diagnosis of celiac disease must be performed on a gluten-containing diet since the small intestine abnormalities and the serologic antibody levels may resolve or improve on a GFD.
Since IgA measurement is the standard for the serologic celiac disease tests, false negatives may occur in IgA-deficient individuals.
Incidence and Prevalence of Celiac Disease
The incidence and prevalence of celiac disease in the general population and in subjects with symptoms consistent with or at higher risk of celiac disease based on systematic reviews published in 2004 and 2009 are summarized below.
Incidence of Celiac Disease in the General Population
Adults or mixed population: 1 to 17/100,000/year
Children: 2 to 51/100,000/year
In one of the studies, a stratified analysis showed that there was a higher incidence of celiac disease in younger children compared to older children, i.e., 51 cases/100,000/year in 0 to 2 year-olds, 33/100,000/year in 2 to 5 year-olds, and 10/100,000/year in children 5 to 15 years old.
Prevalence of Celiac Disease in the General Population
The prevalence of celiac disease reported in population-based studies identified in the 2004 systematic review varied between 0.14% and 1.87% (median: 0.47%, interquartile range: 0.25%, 0.71%). According to the authors of the review, the prevalence did not vary by age group, i.e., adults and children.
Prevalence of Celiac Disease in High Risk Subjects
Type 1 diabetes (adults and children): 1 to 11%
Autoimmune thyroid disease: 2.9 to 3.3%
First degree relatives of patients with celiac disease: 2 to 20%
Prevalence of Celiac Disease in Subjects with Symptoms Consistent with the Disease
The prevalence of celiac disease in subjects with symptoms consistent with the disease varied widely among studies, i.e., 1.5% to 50% in adult studies, and 1.1% to 17% in pediatric studies. Differences in prevalence may be related to the referral pattern as the authors of a systematic review noted that the prevalence tended to be higher in studies whose population originated from tertiary referral centres compared to general practice.
Research Questions
What are the sensitivity and specificity of serologic tests in the diagnosis of celiac disease?
What is the clinical validity of serologic tests in the diagnosis of celiac disease? The clinical validity was defined as the ability of the test to change diagnosis.
What is the clinical utility of serologic tests in the diagnosis of celiac disease? The clinical utility was defined as the impact of the test on decision making.
What is the budget impact of serologic tests in the diagnosis of celiac disease?
What is the cost-effectiveness of serologic tests in the diagnosis of celiac disease?
Methods
Literature Search
A literature search was performed on November 13th, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1st, 2003 to November 13th, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, then a group of epidemiologists, until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Inclusion Criteria
Studies that evaluated diagnostic accuracy, i.e., both sensitivity and specificity of serology tests in the diagnosis of celiac disease.
Study population consisted of untreated patients with symptoms consistent with celiac disease.
Studies in which both serologic celiac disease tests and small bowel biopsy (gold standard) were used in all subjects.
Systematic reviews, meta-analyses, randomized controlled trials, prospective observational studies, and retrospective cohort studies.
At least 20 subjects included in the celiac disease group.
English language.
Human studies.
Studies published from 2000 on.
Clearly defined cut-off value for the serology test. If more than one test was evaluated, only those tests for which a cut-off was provided were included.
Description of the small bowel biopsy procedure clearly outlined (location, number of biopsies per patient), unless it was specified that celiac disease diagnosis guidelines were followed.
Patients in the treatment group had untreated CD.
Exclusion Criteria
Studies on screening of the general asymptomatic population.
Studies that evaluated rapid diagnostic kits for use either at home or in physician’s offices.
Studies that evaluated diagnostic modalities other than serologic tests such as capsule endoscopy, push enteroscopy, or genetic testing.
Cut-off for serologic tests defined based on controls included in the study.
Study population defined based on positive serology or subjects pre-screened by serology tests.
Celiac disease status known before study enrolment.
Sensitivity or specificity estimates based on repeated testing for the same subject.
Non-peer-reviewed literature such as editorials and letters to the editor.
Population
The population consisted of adults and children with untreated, undiagnosed celiac disease with symptoms consistent with the disease.
Serologic Celiac Disease Tests Evaluated
Anti-gliadin antibody (AGA)
Anti-endomysial antibody (EMA)
Anti-tissue transglutaminase antibody (tTG)
Anti-deamidated gliadin peptides antibody (DGP)
Combinations of some of the serologic tests listed above were evaluated in some studies.
Both IgA and IgG antibodies were evaluated for the serologic tests listed above.
Outcomes of Interest
Sensitivity
Specificity
Positive and negative likelihood ratios
Diagnostic odds ratio (OR)
Area under the sROC curve (AUC)
Small bowel biopsy was used as the gold standard in order to estimate the sensitivity and specificity of each serologic test.
Statistical Analysis
Pooled estimates of sensitivity, specificity, and diagnostic odds ratios (DORs) for the different serologic tests were calculated using a bivariate, binomial generalized linear mixed model. Statistical significance for differences in sensitivity and specificity between serologic tests was defined by P values less than 0.05, with "false discovery rate" adjustments made for multiple hypothesis testing. The bivariate regression analyses were performed using SAS version 9.2 (SAS Institute Inc.; Cary, NC, USA). Using the bivariate model parameters, summary receiver operating characteristic (sROC) curves were produced using Review Manager 5.0.22 (The Nordic Cochrane Centre, The Cochrane Collaboration, 2008). The area under the sROC curve (AUC) was estimated within a bivariate mixed-effects binary regression modelling framework; model specification, estimation, and prediction were carried out with xtmelogit in Stata release 10 (Statacorp, 2007). Statistical tests for the differences in AUC estimates could not be carried out.
The study results were stratified according to patient or disease characteristics, such as age and severity of Marsh grade abnormalities, where reported in the studies. Because the literature indicates that the diagnostic accuracy of serologic tests for celiac disease may be affected in patients with chronic liver disease, the studies identified through the systematic literature review that evaluated the diagnostic accuracy of serologic tests in this population were summarized. The effect of the GFD in patients diagnosed with celiac disease was also summarized where reported in the studies eligible for the analysis.
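As a sketch of the inputs to this pooling, each eligible study contributes a 2x2 table of serologic test results against the small bowel biopsy gold standard, from which per-study sensitivity and specificity are derived. The Python fragment below is illustrative only, with hypothetical counts; the actual analyses used the SAS and Stata models described above.

```python
def accuracy_from_2x2(tp, fp, fn, tn):
    """Per-study sensitivity and specificity from a 2x2 table in which
    small bowel biopsy is the gold standard."""
    sensitivity = tp / (tp + fn)  # biopsy-positive subjects detected by serology
    specificity = tn / (tn + fp)  # biopsy-negative subjects correctly negative
    return sensitivity, specificity

# Hypothetical study: 46 true positives, 5 false positives,
# 4 false negatives, 95 true negatives.
sens, spec = accuracy_from_2x2(tp=46, fp=5, fn=4, tn=95)
print(round(sens, 2), round(spec, 2))  # 0.92 0.95
```

Per-study pairs like these are what the bivariate model pools, accounting jointly for the correlation between sensitivity and specificity across studies.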
Summary of Findings
Published Systematic Reviews
Five systematic reviews of studies that evaluated the diagnostic accuracy of serologic celiac disease tests were identified through our literature search. Seventeen individual studies identified in adults and children were eligible for this evaluation.
In general, the included studies evaluated the sensitivity and specificity of at least one serologic test in subjects with symptoms consistent with celiac disease. The gold standard used to confirm the celiac disease diagnosis was small bowel biopsy. Serologic tests evaluated included tTG, EMA, AGA, and DGP, using either IgA or IgG antibodies. Indirect immunofluorescence was used for the EMA serologic tests, whereas enzyme-linked immunosorbent assay (ELISA) was used for the other serologic tests.
Common symptoms described in the studies were chronic diarrhea, abdominal pain, bloating, unexplained weight loss, unexplained anemia, and dermatitis herpetiformis.
The main conclusions of the published systematic reviews are summarized below.
IgA tTG and/or IgA EMA have a high accuracy (pooled sensitivity: 90% to 98%, pooled specificity: 95% to 99% depending on the pooled analysis).
Most reviews found that AGA (IgA or IgG) are not as accurate as IgA tTG and/or EMA tests.
A 2009 systematic review concluded that DGP (IgA or IgG) seems to have an accuracy similar to that of tTG; however, since only two of the studies identified evaluated its accuracy, the authors believe that additional data are required to draw firm conclusions.
Two systematic reviews also concluded that combining two serologic celiac disease tests contributes little to the accuracy of the diagnosis.
MAS Analysis
Sensitivity
The pooled analysis performed by MAS showed that IgA tTG has a sensitivity of 92.1% [95% confidence interval (CI) 88.0, 96.3], compared to 89.2% (83.3, 95.1, p=0.12) for IgA DGP, 85.1% (79.5, 94.4, p=0.07) for IgA EMA, and 74.9% (63.6, 86.2, p=0.0003) for IgA AGA. Among the IgG-based tests, the results suggest that IgG DGP has a sensitivity of 88.4% (95% CI: 82.1, 94.6), 44.7% (30.3, 59.2) for tTG, and 69.1% (56.0, 82.2) for AGA. The difference was significant when IgG DGP was compared to IgG tTG but not IgG AGA. Combining serologic celiac disease tests yielded a slightly higher sensitivity compared to individual IgA-based serologic tests.
IgA deficiency
The prevalence of total or severe IgA deficiency was low in the studies identified, varying between 0% and 1.7%, as reported in three studies in which IgA deficiency was not used as a referral indication for celiac disease serologic testing. As reported in four studies, IgG-based serologic tests were positive in all patients with IgA deficiency in whom celiac disease was confirmed by small bowel biopsy.
Specificity
The MAS pooled analysis indicates a high specificity across the different serologic tests, including the combination strategy; pooled estimates ranged from 90.1% to 98.7% depending on the test.
Likelihood Ratios
According to the likelihood ratio estimates, both IgA tTG and serologic test combinations were considered very useful tests (positive likelihood ratio above ten and negative likelihood ratio below 0.1).
Moderately useful tests included IgA EMA, IgA DGP, and IgG DGP (positive likelihood ratio between five and ten and the negative likelihood ratio between 0.1 and 0.2).
Somewhat useful tests: IgA AGA, IgG AGA, generating small but sometimes important changes from pre- to post-test probability (positive LR between 2 and 5 and negative LR between 0.2 and 0.5)
Not Useful: IgG tTG, altering pre- to post-test probability to a small and rarely important degree (positive LR between 1 and 2 and negative LR between 0.5 and 1).
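These categories follow directly from sensitivity and specificity: LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. A minimal Python sketch (not part of the original analysis; the 95% specificity used below for IgA tTG is an assumed value within the pooled range of 90.1% to 98.7% reported for the MAS analysis):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test accuracy."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def usefulness(lr_pos, lr_neg):
    # Thresholds as used in the classification above.
    if lr_pos > 10 and lr_neg < 0.1:
        return "very useful"
    if 5 <= lr_pos <= 10 and 0.1 <= lr_neg <= 0.2:
        return "moderately useful"
    if 2 <= lr_pos <= 5 and 0.2 <= lr_neg <= 0.5:
        return "somewhat useful"
    return "not useful"

# IgA tTG: pooled sensitivity 92.1%; specificity of 95% assumed for illustration.
lp, ln = likelihood_ratios(0.921, 0.95)
print(usefulness(lp, ln))  # prints "very useful"
```

With these inputs, LR+ is about 18 and LR- about 0.08, consistent with the "very useful" rating reported for IgA tTG.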
Diagnostic Odds Ratios (DOR)
Among the individual serologic tests, IgA tTG had the highest DOR, 136.5 (95% CI: 51.9, 221.2). The statistical significance of the difference in DORs among tests was not calculated, however, considering the wide confidence intervals obtained, the differences may not be statistically significant.
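For reference, the DOR is the ratio of the positive to the negative likelihood ratio, which reduces to (sensitivity × specificity) / ((1 - sensitivity) × (1 - specificity)). A small Python illustration (the 92.1% specificity paired here with the pooled 92.1% sensitivity is an assumption chosen only to show the magnitude of the reported estimate, not a value from the studies):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec), i.e. LR+ / LR-."""
    return (sensitivity * specificity) / ((1 - sensitivity) * (1 - specificity))

# Assumed inputs for illustration: sensitivity 92.1%, specificity 92.1%.
print(round(diagnostic_odds_ratio(0.921, 0.921)))  # 136
```

This is the same order as the pooled IgA tTG DOR of 136.5 reported above.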
Area Under the sROC Curve (AUC)
The sROC AUCs obtained ranged between 0.93 and 0.99 for most IgA-based tests with the exception of IgA AGA, with an AUC of 0.89.
Sensitivity and Specificity of Serologic Tests According to Age Groups
Serologic test accuracy did not seem to vary according to age (adults or children).
Sensitivity and Specificity of Serologic Tests According to Marsh Criteria
Four studies observed a trend towards higher sensitivity of serologic celiac disease tests when Marsh 3c grade abnormalities were found on small bowel biopsy compared to Marsh 3a or 3b. The sensitivity of serologic tests was much lower when Marsh 1 grade abnormalities were found compared to Marsh 3 grade abnormalities. The statistical significance of these findings was not reported in the studies.
Diagnostic Accuracy of Serologic Celiac Disease Tests in Subjects with Chronic Liver Disease
A total of 14 observational studies that evaluated the specificity of serologic celiac disease tests in subjects with chronic liver disease were identified. All studies evaluated the frequency of false positive results (1-specificity) of IgA tTG, however, IgA tTG test kits using different substrates were used, i.e., human recombinant, human, and guinea-pig substrates. The gold standard, small bowel biopsy, was used to confirm the result of the serologic tests in only 5 studies. The studies do not seem to have been designed or powered to compare the diagnostic accuracy among different serologic celiac disease tests.
The results of the studies identified in the systematic literature review suggest that there is a trend towards a lower frequency of false positive results if the IgA tTG test using human recombinant substrate is used compared to the guinea pig substrate in subjects with chronic liver disease. However, the statistical significance of the difference was not reported in the studies. When IgA tTG with human recombinant substrate was used, the number of false positives seems to be similar to what was estimated in the MAS pooled analysis for IgA-based serologic tests in a general population of patients. These results should be interpreted with caution since most studies did not use the gold standard, small bowel biopsy, to confirm or exclude the diagnosis of celiac disease, and since the studies were not designed to compare the diagnostic accuracy among different serologic tests. The sensitivity of the different serologic tests in patients with chronic liver disease was not evaluated in the studies identified.
Effects of a Gluten-Free Diet (GFD) in Patients Diagnosed with Celiac Disease
Six studies identified evaluated the effects of GFD on clinical, histological, or serologic improvement in patients diagnosed with celiac disease. Improvement was observed in 51% to 95% of the patients included in the studies.
Grading of Evidence
Overall, the quality of the evidence ranged from moderate to very low depending on the serologic celiac disease test. Reasons to downgrade the quality of the evidence included the use of a surrogate endpoint (diagnostic accuracy), since none of the studies evaluated clinical outcomes, as well as inconsistencies among study results, imprecise estimates, and sparse data. The quality of the evidence was considered moderate for IgA tTG and IgA EMA, low for IgA DGP and serologic test combinations, and very low for IgA AGA.
Clinical Validity and Clinical Utility of Serologic Testing in the Diagnosis of Celiac Disease
The clinical validity of serologic tests in the diagnosis of celiac disease was considered high in subjects with symptoms consistent with this disease due to
High accuracy of some serologic tests.
Serologic tests detect possible celiac disease cases and avoid unnecessary small bowel biopsy if the test result is negative, unless an endoscopy/small bowel biopsy is necessary due to the clinical presentation.
Serologic tests support the results of small bowel biopsy.
The clinical utility of serologic tests for the diagnosis of celiac disease, defined as their impact on decision making, was also considered high in subjects with symptoms consistent with this disease, given the considerations listed above and since a celiac disease diagnosis leads to treatment with a gluten-free diet.
Economic Analysis
A decision analysis was constructed to compare costs and outcomes between the tests based on the sensitivity, specificity and prevalence summary estimates from the MAS Evidence-Based Analysis (EBA). A budget impact was then calculated by multiplying the expected costs and volumes in Ontario. The outcome of the analysis was expected costs and false negatives (FN). Costs were reported in 2010 CAD$. All analyses were performed using TreeAge Pro Suite 2009.
Four strategies made up the efficiency frontier: IgG tTG, IgA tTG, EMA, and small bowel biopsy. All other strategies were dominated. IgG tTG was the least costly and least effective strategy ($178.95, FN avoided = 0). Small bowel biopsy was the most costly and most effective strategy ($396.60, FN avoided = 0.1553). The costs per FN avoided were $293, $369, and $1,401 for EMA, IgA tTG, and small bowel biopsy, respectively. One-way sensitivity analyses did not change the ranking of strategies.
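The cost-per-FN-avoided figures are incremental ratios between strategies along the frontier. As an illustrative Python sketch (assuming, for this example, a comparison against the least costly frontier strategy), dividing the incremental cost by the incremental FNs avoided reproduces the reported $1,401 figure for small bowel biopsy:

```python
def cost_per_fn_avoided(cost_a, fn_avoided_a, cost_b, fn_avoided_b):
    """Incremental cost per additional false negative avoided,
    moving from strategy B to the more effective strategy A."""
    return (cost_a - cost_b) / (fn_avoided_a - fn_avoided_b)

# Small bowel biopsy ($396.60, 0.1553 FN avoided) vs. IgG tTG ($178.95, 0).
icer = cost_per_fn_avoided(396.60, 0.1553, 178.95, 0.0)
print(round(icer))  # 1401
```

The same calculation, applied pairwise along the frontier, yields the per-strategy figures quoted above.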
All strategies that combine serologic testing with small bowel biopsy are cheaper than biopsy alone; however, they also result in more FNs. The most cost-effective strategy will depend on the decision makers' willingness to pay. Findings suggest that IgA tTG was the most cost-effective and feasible strategy, based on its incremental cost-effectiveness ratio (ICER) and the convenience of conducting the test. The potential impact of the IgA tTG test in the province of Ontario would be $10.4M, $11.0M, and $11.7M in the following three years, respectively, based on past volumes and trends in the province and base-case expected costs.
The panel of tests is the strategy commonly used in the province of Ontario; its impact on the system would therefore be $13.6M, $14.5M, and $15.3M in the next three years, respectively, based on past volumes and trends in the province and base-case expected costs.
Conclusions
The clinical validity and clinical utility of serologic tests for celiac disease was considered high in subjects with symptoms consistent with this disease as they aid in the diagnosis of celiac disease and some tests present a high accuracy.
The study findings suggest that IgA tTG is the most accurate and the most cost-effective test.
The AGA test (IgA) has a lower accuracy compared to other IgA-based tests.
Serologic test combinations appear to be more costly with little gain in accuracy. In addition, there may be problems with the generalizability of the results of the studies included in this review if different test combinations are used in clinical practice.
IgA deficiency seems to be uncommon in patients diagnosed with celiac disease.
The generalizability of study results is contingent on performing both the serologic test and small bowel biopsy in subjects on a gluten-containing diet as was the case in the studies identified, since the avoidance of gluten may affect test results.
PMCID: PMC3377499  PMID: 23074399
8.  Unravelling fears of genetic discrimination: an exploratory study of Dutch HCM families in an era of genetic non-discrimination acts 
European Journal of Human Genetics  2012;20(10):1018-1023.
Since the 1990s, many countries in Europe and the United States have enacted genetic non-discrimination legislation to prevent people from deferring genetic tests for fear that insurers or employers would discriminate against them based on that information. Although evidence for genetic discrimination exists, little is known about the origins and backgrounds of fears of discrimination and how they affect decisions about the uptake of genetic testing. The aim of this article is to gain a better understanding of these fears and their possible impact on the uptake of testing by studying the case of hypertrophic cardiomyopathy (HCM). In a qualitative study, we followed six Dutch extended families involved in genetic testing for HCM for three-and-a-half years. Semi-structured interviews were conducted with 57 members of these families. Based on the narratives of the families, we suggest that fears of discrimination have to be situated in the broader social and life-course context of family and kin. We describe the processes by which families developed meaningful interpretations of genetic discrimination and how these interpretations affected family members' decisions to undergo genetic testing. Our findings show that fears of genetic discrimination stem not so much from the opportunity of genetic testing as from earlier experiences of discrimination of diseased family members. These results help identify the possible limitations of genetic non-discrimination regulations and provide direction to clinicians supporting their clients as they confront issues of genetic testing and genetic discrimination.
doi:10.1038/ejhg.2012.53
PMCID: PMC3449067  PMID: 22453290
genetic discrimination; fear; insurance; family experiences; hypertrophic cardiomyopathy
9.  A Genetic Association Study of Serum Acute-Phase C-Reactive Protein Levels in Rheumatoid Arthritis: Implications for Clinical Interpretation 
PLoS Medicine  2010;7(9):e1000341.
A genetic association study by Timothy Vyse and colleagues suggests that there is a significant association between CRP variants and acute-phase serum CRP concentrations in patients with rheumatoid arthritis, including those with chronic inflammation.
Background
The acute-phase increase in serum C-reactive protein (CRP) is used to diagnose and monitor infectious and inflammatory diseases. Little is known about the influence of genetics on acute-phase CRP, particularly in patients with chronic inflammation.
Methods and Findings
We studied two independent sets of patients with chronic inflammation due to rheumatoid arthritis (total 695 patients). A tagSNP approach captured common variation at the CRP locus and the relationship between genotype and serum CRP was explored by linear modelling. Erythrocyte sedimentation rate (ESR) was incorporated as an independent marker of inflammation to adjust for the varying levels of inflammatory disease activity between patients. Common genetic variants at the CRP locus were associated with acute-phase serum CRP (for the most associated haplotype: p = 0.002, p<0.0005, p<0.0005 in patient sets 1, 2, and the combined sets, respectively), translating into an approximately 3.5-fold change in expected serum CRP concentrations between carriers of two common CRP haplotypes. For example, when ESR = 50 mm/h the expected geometric mean CRP (95% confidence interval) concentration was 43.1 mg/l (32.1–50.0) for haplotype 1 and 14.2 mg/l (9.5–23.2) for haplotype 4.
Conclusions
Our findings raise questions about the interpretation of acute-phase serum CRP. In particular, failure to take into account the potential for genetic effects may result in the inappropriate reassurance or suboptimal treatment of patients simply because they carry low-CRP–associated genetic variants. CRP is increasingly being incorporated into clinical algorithms to compare disease activity between patients and to predict future clinical events: our findings impact on the use of these algorithms. For example, where access to effective, but expensive, biological therapies in rheumatoid arthritis is rationed on the basis of a DAS28-CRP clinical activity score, then two patients with identical underlying disease severity could be given, or denied, treatment on the basis of CRP genotype alone. The accuracy and utility of these algorithms might be improved by using a genetically adjusted CRP measurement.
Please see later in the article for the Editors' Summary
Editors' Summary
C-reactive protein (CRP) is a serum marker for inflammation or infection and acts by binding to a chemical (phosphocholine) found on the surface of dead or dying cells (and some types of bacteria) in order to activate the immune system (via the complement system). Fat cells release factors that stimulate the liver to produce CRP, and serum levels greater than 10 mg/l are generally considered indicative of an infectious or inflammatory process. After an inflammatory stimulus, serum CRP levels may exceed 500 times baseline, so CRP is used in all medical specialities to help diagnose inflammation and infection. Although patients with chronic inflammatory diseases, such as rheumatoid arthritis, have raised levels of CRP, levels of CRP are still highly variable. Some studies have suggested that there may be genetic variations of CRP (CRP variants) that determine the magnitude of the acute-phase CRP response, a finding that has important clinical implications: CRP thresholds are used as a diagnostic component of formal clinical algorithms and play an important role in a clinician's decision-making process when diagnosing inflammatory disease and choosing treatment options. Therefore, it is possible that false reassurance could be given to a patient with disease, or optimal treatment withheld, because some patients are genetically predisposed to have only a modest increase in acute-phase CRP.
Why Was This Study Done?
Although some studies have looked at the CRP gene variant response, few, if any, studies have examined the CRP gene variant response in the context of chronic inflammation, such as in rheumatoid arthritis. Therefore, this study aimed to determine whether CRP gene variants could also influence CRP serum levels in rheumatoid arthritis.
What Did the Researchers Do and Find?
The authors studied two independent sets of patients with chronic inflammation due to rheumatoid arthritis (total 695 patients): one patient set used a cohort of 281 patients in the UK, and the other patient set (used for replication) consisted of 414 patients from New Zealand and Australia. A genetic technique (a tagSNP approach) was used to capture common variations at the CRP locus (haplotype association analysis) at both the population and the individual level. The relationship between genotype and serum CRP was explored by linear modeling. The researchers found that common genetic variants at the CRP locus were associated with acute-phase serum CRP in both patient sets translating into an approximate 3.5-fold change in expected serum CRP between carriers of two common CRP variants. For example, when ESR = 50 mm/h the expected CRP serum level for one common CRP variant was 43.1 mg/l and for another CRP variant was 14.2 mg/l.
What Do These Findings Mean?
The findings of this study raise questions about the interpretation of acute-phase serum CRP, as they suggest that there is a significant association between CRP variants and acute-phase serum CRP concentrations in a group of patients with rheumatoid arthritis, including those with chronic active inflammation. The size of the genetic effect may be large enough to have a clinically relevant impact on the assessment of inflammatory disease activity, which in turn may influence therapeutic decision making. Failure to take into account the potential for genetic effects may result in the inappropriate reassurance or undertreatment of patients simply because they carry low-CRP–associated genetic variants. CRP is increasingly being incorporated into clinical algorithms to compare disease activity between patients and to predict future clinical events, so these findings impact on the use of such algorithms. The accuracy and utility of these algorithms might be improved by using a genetically adjusted CRP measurement.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000341
Lab Test Online provides information on CRP
The Wellcome Trust provides a glossary of genetic terms
Learn.Genetics provides access to the Genetic Science Learning Center, which is part of the human genome project
doi:10.1371/journal.pmed.1000341
PMCID: PMC2943443  PMID: 20877716
10.  Experiences with Policing among People Who Inject Drugs in Bangkok, Thailand: A Qualitative Study 
PLoS Medicine  2013;10(12):e1001570.
Using thematic analysis, Kerr and colleagues document the experiences of policing among people who inject drugs in Bangkok and examine how interactions with police can affect drug-using behaviors and health care access.
Please see later in the article for the Editors' Summary
Background
Despite Thailand's commitment to treating people who use drugs as “patients” not “criminals,” Thai authorities continue to emphasize criminal law enforcement for drug control. In 2003, Thailand's drug war received international criticism due to extensive human rights violations. However, few studies have since investigated the impact of policing on drug-using populations. Therefore, we sought to examine experiences with policing among people who inject drugs (PWID) in Bangkok, Thailand, between 2008 and 2012.
Methods and Findings
Between July 2011 and June 2012, semi-structured, in-depth interviews were conducted with 42 community-recruited PWID participating in the Mitsampan Community Research Project in Bangkok. Interviews explored PWID's encounters with police during the past three years. Audio-recorded interviews were transcribed verbatim, and a thematic analysis was conducted to document the character of PWID's experiences with police. Respondents indicated that policing activities had noticeably intensified since rapid urine toxicology screening became available to police. Respondents reported various forms of police misconduct, including false accusations, coercion of confessions, excessive use of force, and extortion of money. However, respondents were reluctant to report misconduct to the authorities in the face of social and structural barriers to seeking justice. Respondents' strategies to avoid police impeded access to health care and facilitated transitions towards the misuse of prescribed pharmaceuticals. The study's limitations relate to the transferability of the findings, including the potential biases associated with the small convenience sample.
Conclusions
This study suggests that policing in Bangkok has involved injustices, human rights abuses, and corruption, and policing practices in this setting appeared to have increased PWID's vulnerability to poor health through various pathways. Novel to this study are findings pertaining to the use of urine drug testing by police, which highlight the potential for widespread abuse of this emerging technology. These findings raise concern about ongoing policing practices in this setting.
Editors' Summary
Background
In many countries, the dominant strategy used to control illegal drugs such as heroin and methamphetamine is criminal law enforcement, a strategy that sometimes results in human rights abuses such as ill-treatment by police, extrajudicial killings, and arbitrary detention. Moreover, growing evidence suggests that aggressive policing of illicit drug use can have adverse public-health consequences. For example, the fear engendered by intensive policing may cause people who inject drugs (PWID) to avoid services such as needle exchanges, thereby contributing to the HIV/AIDS epidemic. One country with major epidemics of illicit drug use and of HIV/AIDS among PWID is Thailand. Although Thailand reclassified drug users as “patients” instead of “criminals” in 2002, possession and consumption of illicit drugs remain criminal offenses. The 2002 legislation also created a system of compulsory drug detention centers, most of which lack evidence-based addiction treatment services. In 2003, the Thai government launched a campaign to suppress drug trafficking and to enrol 300,000 people who use drugs into treatment. This campaign received international criticism because it involved extensive human rights violations, including more than 2,800 extrajudicial killings of suspected drug users and dealers.
Why Was This Study Done?
Drug-related arrests and compulsory detention of drug users are increasing in Thailand, but what is the impact of current policing practices on drug users and on public health? In this qualitative study (a study that aims for an in-depth understanding of human behavior), the researchers use thematic analysis informed by Rhodes' Risk Environment Framework to document the social and structural factors that led to encounters with the police among PWID in Bangkok between 2008 and 2012, the policing tactics employed during these encounters, and the associated health consequences of these encounters. The Risk Environment Framework posits that a range of social, political, economic, and physical environmental factors interact with each other and shape the production of drug-related harm.
What Did the Researchers Do and Find?
Between July 2011 and June 2012, the researchers conducted in-depth interviews with a convenience sample (a non-random sample from a nearby population) of 42 participants in the Mitsampan Community Research Project, an investigation of drug-using behavior, health care access, and drug-related harms among PWID in Bangkok. Respondents reported that policing activities had intensified since rapid urine toxicology screening became widely available and since the initiation of a crackdown on drug users in 2011. They described various forms of violence and misconduct that they had experienced during confrontations with police, including false accusations, degrading stop and search procedures, and excessive use of force. Urine drug testing was identified as a key tool used by the police, with some respondents describing how police caused unnecessary humiliation by requesting urine samples in public places. It was also reported that the police used positive test results as a means of extortion. Finally, some respondents reported feeling powerless in relation to the police and cited fear of retaliation as an important barrier to obtaining redress for police corruption. Others reported that they had adopted strategies to avoid the police such as staying indoors, a strategy likely to impede access to health care, or changing their drug-using behavior by, for example, injecting midazolam rather than methamphetamine, a practice associated with an increased risk of injection-related complications.
What Do These Findings Mean?
These findings suggest that the policing of PWID in Bangkok between 2008 and 2012 involved injustices, human rights abuses, and corruption, and highlight the potential for widespread misuse of urine drug testing. Moreover, they suggest that policing practices in this setting may have increased the vulnerability of PWID to poor health by impeding their access to health care and by increasing the occurrence of risky drug-using behaviors. Because this study involved a small convenience sample of PWID, these findings may not be generalizable to other areas of Bangkok or Thailand and do not indicate whether police misconduct and corruption are highly prevalent across all police departments in Bangkok. Nevertheless, these findings suggest that multilevel structural changes and interventions are needed to mitigate the harm associated with policing of illicit drug use in Bangkok. These changes will need to ensure full accountability for police misconduct and access to legal services for victims of this misconduct. They will also need to include ethical guidelines for urine drug testing and the reform of policies that promote repressive policing and compulsory detention.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001570.
This study is further discussed in a PLOS Medicine Perspective by Burris and Koester
Human Rights Watch, a global organization dedicated to defending and protecting human rights, has information about drug policy and human rights, which includes information on Thailand
The Global Commission on Drug Policy published a report in June 2012 entitled “The War on Drugs and HIV/AIDS: How the Criminalization of Drug Use Fuels the Global Pandemic” (available in several languages)
The Global Commission on HIV and the Law published a report in July 2012 entitled “HIV and the Law: Risk, Rights and Health” (available in several languages), the Open Society Foundations have prepared a briefing on this report
More information about the Mitsampan Community Research Project is available
doi:10.1371/journal.pmed.1001570
PMCID: PMC3858231  PMID: 24339753
11.  Knowledge and attitudes towards genetic testing in those affected with Parkinson’s disease 
Journal of Community Genetics  2013;5(2):167-177.
Advances in genetic testing provide valuable information for clinicians and patients about the risks and inheritance of Parkinson's disease (PD); however, it is unclear whether those affected by or at risk of PD will want genetic testing, particularly given that no preventive or disease-modifying therapies are currently available. This study sought to determine knowledge of and attitudes toward genetic testing among those affected with PD. A cross-sectional study was undertaken using a standardized questionnaire with six multi-choice genetic knowledge items and 17 multi-choice attitude items. Participants were selected from a registry of people affected with PD living in Queensland, Australia. Half of the selected index cases had a family history of PD. Ordinal regression was used to evaluate the association between support for genetic testing and demographic, knowledge, and other attitudinal factors. The level of genetic knowledge was relatively low (37 % correct responses). The vast majority supported diagnostic testing (97 %), and 90 % would undertake a genetic test themselves. Support for predictive testing was lower (78 %), and prenatal genetic testing had the least support (58 %). Identified benefits of testing included knowing a child's risk, seeking therapies, and helping science find a cure. Concerns about genetic testing included potential emotional reactions and test accuracy. Genetic knowledge was not significantly associated with attitudes towards genetic testing. Patients with PD have a strong interest in genetic testing for themselves, with support for diagnostic testing but less support for predictive and prenatal testing.
Electronic supplementary material
The online version of this article (doi:10.1007/s12687-013-0168-7) contains supplementary material, which is available to authorized users.
doi:10.1007/s12687-013-0168-7
PMCID: PMC3955457  PMID: 24018619
Genetic counseling; Genetic testing; Australia; Attitudes; Knowledge; Parkinson’s disease
12.  What’s at Stake? Genetic Information from the Perspective of People with Epilepsy and their Family Members 
Social science & medicine (1982)  2011;73(5):645-654.
Substantial progress has been made in identifying genes that raise risk for epilepsy, and genetic testing for some of these genes is increasingly being used in clinical practice. However, almost no empirical data are available from the perspective of people with epilepsy and their family members about the impact of genetic information and potential benefits and harms of genetic testing. To address this gap we conducted in-depth qualitative interviews with 40 individuals (22 with epilepsy, 18 unaffected) in the USA from families containing multiple affected individuals who had participated in epilepsy genetics research. The interviews were coded and analyzed using the principles of grounded theory. Several major themes emerged from these interviews. Participants expressed “personal theories of inheritance” that emphasized commonalities among relatives and the idea that disease risk is most shared by family members who share physical or personality traits. Most participants said they would have genetic testing if it were offered. They cited many potential benefits, including learning what caused epilepsy in their family, being better able to care and advocate for children at risk, reducing guilt and blame, providing an increased sense of control, and relieving anxiety in unaffected individuals who test negative. The influence of genetic information on reproduction was a particularly salient theme. Although respondents believed genetic testing would be useful for informing their reproductive choices, they also expressed fear that it could lead to external pressures to modify these choices. 
Other concerns about the potential negative impact of genetic information included increased blame and guilt, increased stigma and discrimination in employment and insurance, self-imposed limitations on life goals, and alterations in fundamental conceptions of “what epilepsy is.” Consideration of the perspectives of people with epilepsy and their family members is critical to understanding the implications of contemporary epilepsy genetic research and testing.
doi:10.1016/j.socscimed.2011.06.043
PMCID: PMC3163050  PMID: 21831495
genetics; stigma; epilepsy; reproductive decision making; USA; family
13.  Outcome of case finding among relatives of patients with known heterozygous familial hypercholesterolaemia 
BMJ : British Medical Journal  2000;321(7275):1497.
Objectives
To assess the feasibility of detecting new cases of heterozygous familial hypercholesterolaemia by using a nurse led genetic register.
Design
Case finding among relatives of patients with familial hypercholesterolaemia.
Setting
Two lipid clinics in central and south Manchester.
Subjects
259 (137 men and 122 women) probands and 285 first degree relatives.
Results
Of the 200 first degree relatives tested, 121 (60%) had inherited familial hypercholesterolaemia. The newly diagnosed patients were younger than the probands and were generally detected before they had clinically overt atherosclerosis. Concentrations of serum cholesterol were, respectively, 8.4 (1.7 SD) mmol/l and 8.1 (1.9 SD) mmol/l in affected men and women and 5.6 (1.0 SD) mmol/l and 5.6 (1.1 SD) mmol/l in unaffected men and women. Screening for risk factors as recommended in recent guidelines for coronary heart disease prevention would have failed to identify most of the affected relatives in whom hypertension, diabetes mellitus, cigarette smoking, and obesity were uncommon.
Conclusions
By performing cholesterol tests on 200 relatives, 121 new patients with familial hypercholesterolaemia were discovered. Because 1 in 500 people in the UK are affected by this condition, to detect a similar number by population screening over 60 000 tests would be required, and only a few of these patients would have been detected had cholesterol testing been restricted to those with other risk factors for coronary heart disease. A case exists for organising a genetic register approach, linking lipid clinics nationally.
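The screening arithmetic in this conclusion can be checked directly. This sketch uses only the figures given in the abstract: 121 new cases from 200 relative tests, and an assumed UK prevalence of 1 in 500.

```python
# Case finding among relatives vs unselected population screening,
# using the figures reported in the study.
cases_found = 121              # new FH diagnoses among tested relatives
relative_tests = 200           # cholesterol tests performed on relatives
prevalence_denominator = 500   # assumed UK prevalence: 1 case per 500 people

detection_rate = cases_found / relative_tests                    # 0.605
population_tests_needed = cases_found * prevalence_denominator   # 60,500
relative_efficiency = population_tests_needed / relative_tests   # 302.5

print(detection_rate, population_tests_needed, relative_efficiency)
```

Finding the same 121 cases by unselected population screening would require just over 60,000 tests, roughly 300 times as many as the register-based approach.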
PMCID: PMC27551  PMID: 11118175
14.  Genetic Literacy and Patient Perceptions of IBD Testing Utility and Disease Control: A Randomized Vignette Study of Genetic Testing 
Inflammatory bowel diseases  2014;20(5):901-908.
Background
Findings from inflammatory bowel disease (IBD) genome-wide association studies are being translated clinically into prognostic and diagnostic indicators of disease. Yet, patient perception and understanding of these tests and their applicability to providing risk information is unclear. The goal of this study was to determine, using hypothetical scenarios, whether patients with IBD perceive genetic testing to be useful for risk assessment, whether genetic test results impact perceived control, and whether low genetic literacy may be a barrier to patient understanding of these tests.
Methods
Two hundred fifty-seven patients with IBD from the Johns Hopkins gastroenterology clinics were randomized to receive a vignette depicting either a genetic testing scenario or a standard blood testing scenario. Participants were asked questions about the vignette, and responses were compared between groups.
Results
Perceptions of test utility for risk assessment were higher among participants responding to the genetic vignette (P < 0.001). There were no significant differences in perceptions of control over IBD after hypothetical testing between vignettes (P = 0.24). Participant responses were modified by genetic literacy, measured using a scale developed for this study. Participants randomized to the genetic vignette who scored higher on the genetic literacy scale perceived greater utility of testing for risk assessment (P = 0.008) and more control after testing (P = 0.02).
Conclusions
Patients with IBD perceive utility in genetic testing for providing information relevant to family members, and this appreciation is promoted by genetic literacy. Low genetic literacy among patients poses a potential threat to effective translation of genetic and genomic tests.
doi:10.1097/MIB.0000000000000021
PMCID: PMC4141772  PMID: 24691112
genetic literacy; genetic testing; IBD
15.  Risk perception after genetic counseling in patients with increased risk of cancer 
Background
Counselees are increasingly aware of genetics and seek information, reassurance, screening, and genetic testing. Risk counseling is a key component of the genetic counseling process: it helps patients achieve a realistic view of their personal risk, adapt to the medical, psychological, and familial implications of disease, and make informed choices [1,2].
The aim of this study was to conceptualize risk perception and anxiety about cancer in individuals attending genetic counseling.
Methods
The questionnaire study measured risk perception and anxiety about cancer at three time points: before genetic counseling, one week after the initial counseling session, and one year after genetic investigations were completed. Eligibility criteria were designed to include only index patients without a previous genetic consultation in the family. A total of 215 individuals were included. Data were collected over a three-year period.
Results
Before genetic counseling, all of the unaffected participants subjectively estimated their risk as higher than their objective risk. Participants whose risk was similar to that of the general population overestimated their risk the most. All risk groups estimated the risk for their children/siblings to be lower than their own. The benefits of the preventive surveillance program were well understood among unaffected participants.
Subjective risk perception was statistically significantly lower directly after genetic counseling than before, in all risk groups. Differences in perceived risk for children and for the general population were also statistically significant. Anxiety about developing cancer among the unaffected subjects was lower after genetic counseling compared with baseline in all groups. Anxiety about cancer correlated clearly with perceived risk of cancer before and one year after genetic investigations.
The affected participants overestimated their children's risk as well as the risk for anyone in the population. The difference in perceived risk for children/siblings and for the general population was significant between the first and second measurement time points. Among affected participants, anxiety about developing cancer again remained high throughout the investigation.
Conclusion
Participants' accuracy in risk perception was poor, especially among low-risk individuals before genetic counseling. There was a general trend towards more accurate estimation in all risk groups after genetic counseling. The importance of preventive programs was well understood. Cancer anxiety was prevalent and associated with risk perception, but decreased after genetic counseling.
[1] National Society of Genetic Counselors (2005), Genetic Counseling as a Profession. Available at (accessed November 25th 2007)
[2] Julian-Reynier C., Welkenhuysen M., Hagoel L., Decruyenaere M., Hopwood P. (2003) Risk communication strategies: state of the art and effectiveness in the context of cancer genetic services. European Journal of Human Genetics 11, 725-736.
doi:10.1186/1897-4287-7-15
PMCID: PMC2744911  PMID: 19698175
16.  Neoplasms Associated with Germline and Somatic NF1 Gene Mutations 
The Oncologist  2012;17(1):101-116.
Neurofibromatosis 1 is a tumor predisposition genetic syndrome with autosomal dominant inheritance and virtually 100% penetrance by the age of 5 years. NF1 results from a loss-of-function mutation in the NF1 gene, leading to decreased levels of neurofibromin in the cell. Neurofibromin is a negative regulator of various intracellular signaling pathways involved in cellular proliferation. Although loss of heterozygosity at the NF1 gene may predispose NF1 patients to certain malignancies, additional genetic alterations are a prerequisite for their development. The precise nature of these additional alterations is not well defined, making genetic testing of all malignancies in NF1 patients an essential component of future research in this subset of patients. In addition to germline NF1 mutations, alteration of the somatic NF1 gene is associated with sporadic malignancies such as adenocarcinoma of the colon, myelodysplastic syndrome, and anaplastic astrocytoma. The lack of well-defined screening tests for early detection and the nonspecific clinical presentation contribute to a poorer outcome in malignancies associated with NF1. Small study group size, mixed patient populations, and a lack of uniformity in reporting research results make comparison of treatment outcomes for this group difficult. An International Consensus Meeting to address and recommend best practices for screening, diagnosis, management, and follow-up of malignancies associated with NF1 is needed.
Learning Objectives
After completing this course, the reader will be able to:
Describe phenotypic and clinical features associated with neurofibromatosis 1.
Identify malignant tumors associated with neurofibromatosis 1.
This article is available for continuing medical education credit at CME.TheOncologist.com
Materials and Methods.
A comprehensive English and non-English language search for all articles pertinent to malignancies associated with NF1 was conducted using PubMed, a search engine provided by the U.S. National Library of Medicine and the National Institutes of Health. Key words searched included the following: “malignancies associated with NF1”, “tumors associated with NF1”, and “NF1 and malignancies”. A comprehensive analysis of the published data, in terms of age and mode of presentation, investigative and therapeutic modalities, and outcome, was performed and compared with similar information on sporadic cases.
Results.
Malignancies in NF1 patients typically occur at an earlier age and, with an exception of optic pathway gliomas, certain types of malignancies carry a poor prognosis compared with their sporadic counterparts. Malignancies are the leading cause of death in NF1 patients, resulting in a 10- to 15-year decreased life expectancy compared with the general population.
doi:10.1634/theoncologist.2010-0181
PMCID: PMC3267808  PMID: 22240541
NF1; Neurofibromatosis; Malignancy; Cancer
17.  Main Report 
Genetics in Medicine  2006;8(Suppl 1):12S-252S.
Background:
States vary widely in their use of newborn screening tests, with some mandating screening for as few as three conditions and others mandating as many as 43 conditions, including varying numbers of the 40+ conditions that can be detected by tandem mass spectrometry (MS/MS). There has been no national guidance on the best candidate conditions for newborn screening since the National Academy of Sciences report of 19751 and the United States Congress Office of Technology Assessment report of 1988,2 despite rapid developments since then in genetics, in screening technologies, and in some treatments.
Objectives:
In 2002, the Maternal and Child Health Bureau (MCHB) of the Health Resources and Services Administration (HRSA) of the United States Department of Health and Human Services (DHHS) commissioned the American College of Medical Genetics (ACMG) to:
Conduct an analysis of the scientific literature on the effectiveness of newborn screening.
Gather expert opinion to delineate the best evidence for screening for specified conditions and develop recommendations focused on newborn screening, including but not limited to the development of a uniform condition panel.
Consider other components of the newborn screening system that are critical to achieving the expected outcomes in those screened.
Methods:
A group of experts in various areas of subspecialty medicine and primary care, health policy, law, public health, and consumers worked with a steering committee and several expert work groups, using a two-tiered approach to assess and rank conditions. A first step was developing a set of principles to guide the analysis. This was followed by developing criteria by which conditions could be evaluated, and then identifying the conditions to be evaluated. A large and broadly representative group of experts was asked to provide their opinions on the extent to which particular conditions met the selected criteria, relying on supporting evidence and references from the scientific literature. The criteria were distributed among three main categories for each condition:
The availability and characteristics of the screening test;
The availability and complexity of diagnostic services; and
The availability and efficacy of treatments related to the conditions.
A survey process utilizing a data collection instrument was used to gather expert opinion on the conditions in the first tier of the assessment. The data collection format and survey provided the opportunity to quantify expert opinion and to obtain the views of a diverse set of interest groups (necessary due to the subjective nature of some of the criteria). Statistical analysis of data produced a score for each condition, which determined its ranking and initial placement in one of three categories (high scoring, moderately scoring, or low scoring/absence of a newborn screening test). In the second tier of these analyses, the evidence base related to each condition was assessed in depth (e.g., via systematic reviews of reference lists including MedLine, PubMed and others; books; Internet searches; professional guidelines; clinical evidence; and cost/economic evidence and modeling). The fact sheets reflecting these analyses were evaluated by at least two acknowledged experts for each condition.
These experts assessed the data and the associated references related to each criterion and provided corrections where appropriate, assigned a value to the level of evidence and the quality of the studies that established the evidence base, and determined whether there were significant variances from the survey data. Survey results were subsequently realigned with the evidence obtained from the scientific literature during the second-tier analysis for all objective criteria, based on input from at least three acknowledged experts in each condition. The information from these two tiers of assessment was then considered with regard to the overriding principles and other technology or condition-specific recommendations. On the basis of this information, conditions were assigned to one of three categories as described above:
Core Panel;
Secondary Targets (conditions that are part of the differential diagnosis of a core panel condition); and
Not Appropriate for Newborn Screening (either no newborn screening test is available or there is poor performance with regard to multiple other evaluation criteria).
ACMG also considered features of optimal newborn screening programs beyond the tests themselves by assessing the degree to which programs met certain goals (e.g., availability of educational programs, proportions of newborns screened and followed up). Assessments were based on the input of experts serving in various capacities in newborn screening programs and on 2002 data provided by the programs of the National Newborn Screening and Genetics Resource Center (NNSGRC). In addition, a brief cost-effectiveness assessment of newborn screening was conducted.
Results:
Uniform panel
A total of 292 individuals determined to be generally representative of the regional distribution of the United States population and of areas of expertise or involvement in newborn screening provided a total of 3,949 evaluations of 84 conditions. For each condition, the responses of at least three experts in that condition were compared with those of all respondents for that condition and found to be consistent. A score of 1,200 on the data collection instrument provided a logical separation point between high scoring conditions (1,200–1,799 of a possible 2,100) and low scoring (<1,000) conditions. A group of conditions with intermediate scores (1,000–1,199) was identified, all of which were part of the differential diagnosis of a high scoring condition or apparent in the result of the multiplex assay. Some are identified by screening laboratories and others by diagnostic laboratories. This group was designated as a “secondary target” category for which the program must report the diagnostic result.
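The score bands described above can be summarized in a small helper. The band boundaries come from the report, but the function itself (its name, range check, and labels) is only an illustration, not part of the ACMG methodology.

```python
# Score bands from the data collection instrument (maximum 2,100):
# >= 1,200 high scoring; 1,000-1,199 intermediate ("secondary target"
# candidates); < 1,000 low scoring / no newborn screening test.
# Hypothetical helper for illustration only.
def categorize_condition(score: int) -> str:
    if not 0 <= score <= 2100:
        raise ValueError("score outside instrument range 0-2100")
    if score >= 1200:
        return "high scoring"
    if score >= 1000:
        return "secondary target candidate"
    return "low scoring"

print(categorize_condition(1500))  # high scoring
print(categorize_condition(1100))  # secondary target candidate
```

Note that the intermediate band is only a candidate pool: per the report, conditions in it were designated secondary targets because each was part of the differential diagnosis of a high scoring condition or apparent in the multiplex assay result.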
Using the validated evidence base and expert opinion, each condition that had previously been assigned to a category based on scores gathered through the data collection instrument was reconsidered. Again, the factors taken into consideration were: 1) available scientific evidence; 2) availability of a screening test; 3) presence of an efficacious treatment; 4) adequate understanding of the natural history of the condition; and 5) whether the condition was either part of the differential diagnosis of another condition or whether the screening test results related to a clinically significant condition.
The conditions were then assigned to one of three categories as previously described (core panel, secondary targets, or not appropriate for newborn screening).
Among the 29 conditions assigned to the core panel are three hemoglobinopathies associated with a Hb/S allele, six amino acidurias, five disorders of fatty acid oxidation, nine organic acidurias, and six unrelated conditions (congenital hypothyroidism (CH), biotinidase deficiency (BIOT), congenital adrenal hyperplasia (CAH), classical galactosemia (GALT), hearing loss (HEAR) and cystic fibrosis (CF)). Twenty-three of the 29 conditions in the core panel are identified with multiplex technologies such as tandem mass spectrometry (MS/MS) or high pressure liquid chromatography (HPLC). On the basis of the evidence, six of the 35 conditions initially placed in the core panel were moved into the secondary target category, which expanded to 25 conditions. Test results not associated with potential disease in the infant (e.g., carriers) were also placed in the secondary target category. When newborn screening laboratory results definitively establish carrier status, the result should be made available to the health care professional community and families. Twenty-seven conditions were determined to be inappropriate for newborn screening at this time.
Conditions with limited evidence reported in the scientific literature were more difficult to evaluate, quantify and place in one of the three categories. In addition, many conditions were found to occur in multiple forms distinguished by age of onset, severity, or other features. Further, unless a condition was already included in newborn screening programs, there was a potential for bias in the information related to some criteria. In such circumstances, the quality of the studies underlying the data, such as expert opinion that drew on case reports and reasoning from first principles, determined the placement of the conditions into particular categories.
Newborn screening program optimization
Assessment of the activities of newborn screening programs, based on program reports, was done for the six program components: education; screening; follow-up; diagnostic confirmation; management; and program evaluation. Considerable variation was found between programs with regard to whether particular aspects (e.g., prenatal education program availability, tracking of specimen collection and delivery) were included and the degree to which they were provided. Newborn screening program evaluation systems were also assessed to determine their adequacy and uniformity, with the goal of improving interprogram evaluation and comparison to ensure that the expected outcomes of identification through screening are realized.
Conclusions:
The state of the published evidence in the fast-moving worlds of newborn screening and medical genetics has not kept up with the implementation of new technologies, thus requiring the considerable use of expert opinion to develop recommendations about a core panel of conditions for newborn screening. Twenty-nine conditions were identified as primary targets for screening, for which all components of the newborn screening system should be maximized. An additional 25 conditions were listed that could be identified in the course of screening for core panel conditions. Programs are obligated to establish a diagnosis and communicate the result to the health care provider and family. It is recognized that screening may not have been maximized for the detection of these secondary conditions, but some proportion of such cases may be found among those screened for core panel conditions. With additional screening, greater training of primary care health care professionals and subspecialists will be needed, as will the development of an infrastructure for appropriate follow-up and management throughout the lives of children who have been identified as having one of these rare conditions. Recommended actions to overcome barriers to an optimal newborn screening system include: the establishment of a national role in the scientific evaluation of conditions and the technologies by which they are screened; standardization of case definitions and reporting procedures; enhanced oversight of hospital-based screening activities; long-term data collection and surveillance; and consideration of the financial needs of programs to allow them to deliver the appropriate services to the screened population.
doi:10.1097/01.gim.0000223467.60151.02
PMCID: PMC3109899
18.  Detection of Tuberculosis in HIV-Infected and -Uninfected African Adults Using Whole Blood RNA Expression Signatures: A Case-Control Study 
PLoS Medicine  2013;10(10):e1001538.
Using a microarray-based approach, Michael Levin and colleagues develop a disease risk score to distinguish active from latent tuberculosis, as well as tuberculosis from other diseases, using whole blood samples.
Please see later in the article for the Editors' Summary
Background
A major impediment to tuberculosis control in Africa is the difficulty in diagnosing active tuberculosis (TB), particularly in the context of HIV infection. We hypothesized that a unique host blood RNA transcriptional signature would distinguish TB from other diseases (OD) in HIV-infected and -uninfected patients, and that this could be the basis of a simple diagnostic test.
Methods and Findings
Adult case-control cohorts of HIV-infected and -uninfected individuals were established in South Africa and Malawi, consisting of 584 patients with TB (confirmed by culture of Mycobacterium tuberculosis [M.TB] from sputum or tissue sample in a patient under investigation for TB), patients with OD (i.e., TB was considered in the differential diagnosis but then excluded), or healthy individuals with latent TB infection (LTBI). Individuals were randomized into training (80%) and test (20%) cohorts. Blood transcriptional profiles were assessed and minimal sets of significantly differentially expressed transcripts distinguishing TB from LTBI and OD were identified in the training cohort. A 27-transcript signature distinguished TB from LTBI and a 44-transcript signature distinguished TB from OD. To evaluate our signatures, we used a novel computational method to calculate a disease risk score (DRS) for each patient. The classification based on this score was first evaluated in the test cohort, and then validated in an independent publicly available dataset (GSE19491).
In our test cohort, the DRS classified TB from LTBI (sensitivity 95%, 95% CI [87–100]; specificity 90%, 95% CI [80–97]) and TB from OD (sensitivity 93%, 95% CI [83–100]; specificity 88%, 95% CI [74–97]). In the independent validation cohort, TB patients were distinguished both from LTBI individuals (sensitivity 95%, 95% CI [85–100]; specificity 94%, 95% CI [84–100]) and OD patients (sensitivity 100%, 95% CI [100–100]; specificity 96%, 95% CI [93–100]).
Limitations of our study include the use of only culture confirmed TB patients, and the potential that TB may have been misdiagnosed in a small proportion of OD patients despite the extensive clinical investigation used to assign each patient to their diagnostic group.
Conclusions
In our study, blood transcriptional signatures distinguished TB from other conditions prevalent in HIV-infected and -uninfected African adults. Our DRS, based on these signatures, could be developed as a test for TB suitable for use in HIV endemic countries. Further evaluation of the performance of the signatures and DRS in prospective populations of patients with symptoms consistent with TB will be needed to define their clinical value under operational conditions.
Editors' Summary
Background
Tuberculosis (TB), caused by Mycobacterium tuberculosis, is curable and preventable, but according to the World Health Organization (WHO), in 2011, 8.7 million people had symptoms of TB (usually a productive cough and fever) and 1.4 million people—95% from low- and middle-income countries—died from this infection. Worldwide, TB is also the leading cause of death in people with HIV. For over a century, diagnosis of TB has relied on clinical and radiological features, sputum microscopy, and tuberculin skin testing but all of these tests have major disadvantages, especially in people who are also infected with HIV (HIV/TB co-infection) in whom results are often atypical or falsely negative. Furthermore, current tests cannot distinguish between inactive (latent) and active TB infection. Therefore, there is a need to identify biomarkers that can differentiate TB from other diseases common to African populations, where the burden of the HIV/TB pandemic is greatest.
Why Was This Study Done?
Previous studies have suggested that TB may be associated with specific transcriptional profiles (identified by microarray analysis) in the blood of the infected patient (host), which might make it possible to differentiate TB from other conditions. However, these studies have not included people co-infected with HIV and have included in the differential diagnosis diseases that are unrepresentative of the range of conditions common to African patients. In this study of patients from Malawi and South Africa, the researchers investigated whether blood RNA expression could distinguish TB from other conditions prevalent in African populations and form the basis of a diagnostic test for TB based on transcriptional signatures.
What Did the Researchers Do and Find?
The researchers recruited patients with suspected TB attending one clinic in Cape Town, South Africa between 2007 and 2010 and one hospital in Karonga district, Malawi between 2007 and 2009 (the training and test cohorts). Each patient underwent a series of tests for TB (and had a blood test for HIV) and was diagnosed as having TB if there was microbiological evidence confirming the presence of Mycobacterium tuberculosis. At recruitment, each patient also had blood taken for microarray analysis; following this assessment, the researchers selected minimal transcript sets that distinguished TB from latent TB infection and TB from other diseases, even in HIV-infected individuals. To help form the basis of a simple, low-cost diagnostic test, the researchers then developed a statistical method for translating multiple-transcript RNA signatures into a disease risk score, which they then checked using a separate cohort of South African patients (the independent validation cohort).
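One simple way to collapse a multi-transcript signature into a single per-patient score, consistent with the description above, is to sum the expression of transcripts up-regulated in TB and subtract the sum of those down-regulated. This is an illustrative sketch only, not the authors' exact formula (which is given in their paper); the index sets and values below are made up:

```python
def disease_risk_score(expression, up_idx, down_idx):
    """Illustrative disease risk score: total normalized expression of
    transcripts up-regulated in disease minus the total for transcripts
    down-regulated in disease. `expression` is one patient's profile."""
    up = sum(expression[i] for i in up_idx)
    down = sum(expression[i] for i in down_idx)
    return up - down

# Classification then compares the score to a threshold chosen on the
# training cohort. Toy profile: transcripts 0 and 3 up, 1 and 2 down.
score = disease_risk_score([2.0, 0.5, 1.5, 3.0], up_idx=[0, 3], down_idx=[1, 2])
# score = 5.0 - 2.0 = 3.0
```

The appeal of a score like this in resource-limited settings is that it needs only additions and subtractions, with no model weights to store.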
Using these methods, after screening 437 patients in Malawi and 314 in South Africa, the researchers recruited 273 patients to the Malawi cohort and 311 adults to the South African cohort (the training and test cohorts). Following technical failures, 536 microarray samples were available for analysis. The researchers identified a set of 27 transcripts that could distinguish between TB and latent TB and a set of 44 transcripts that could distinguish TB from other diseases. These multi-transcript signatures were then used to calculate a single value disease risk score for every patient. In the test cohorts, the disease risk score had a high sensitivity (95%) and specificity (90%) for distinguishing TB from latent TB infection (sensitivity is a measure of true positives, correctly identified as such and specificity is a measure of true negatives, correctly identified as such) and for distinguishing TB from other diseases (sensitivity 93% and specificity 88%). In the independent validation cohort, the researchers found that patients with TB could be distinguished from patients with latent TB infection (sensitivity 95% and specificity 94%) and also from patients with other diseases (sensitivity 100% and specificity 96%).
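The sensitivity and specificity figures quoted above follow directly from counts of true and false positives and negatives. A minimal sketch (the counts below are illustrative, not the study's):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): the fraction of true cases detected.
    Specificity = TN / (TN + FP): the fraction of non-cases ruled out."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Toy example: 19 of 20 TB cases called positive, 18 of 20 LTBI called negative
sens, spec = sensitivity_specificity(tp=19, fn=1, tn=18, fp=2)
# sens = 0.95, spec = 0.90
```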
What Do These Findings Mean?
These findings suggest that a distinctive set of RNA transcriptional signatures forming a disease risk score might provide the basis of a diagnostic test that can distinguish active TB from latent TB infection (27 signatures) and also from other diseases (44 signatures), such as pneumonia, that are prevalent in African populations. There is a concern that using transcriptional signatures as a clinical diagnostic tool in resource poor settings might not be feasible because they are complex and costly. The relatively small number of transcripts in the signatures described here may increase the potential for using this approach (transcriptional profiling) as a clinical diagnostic tool using a single blood test. In order to make most use of these findings, there is an urgent need for the academic research community and for industry to develop innovative methods to translate multi-transcript signatures into simple, cheap tests for TB suitable for use in African health facilities.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/ 10.1371/journal.pmed.1001538.
Wikipedia has definitions of tests for gene expression (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The National Center for Biotechnology Information has a fact sheet on microarray analysis
MedlinePlus has links to further information about tuberculosis (in English and Spanish)
The World Health Organization has up-to-date information on TB
The Stop TB partnership is working towards tuberculosis elimination; patient stories about tuberculosis/HIV coinfection are available
doi:10.1371/journal.pmed.1001538
PMCID: PMC3805485  PMID: 24167453
19.  The Role of Abcb5 Alleles in Susceptibility to Haloperidol-Induced Toxicity in Mice and Humans 
PLoS Medicine  2015;12(2):e1001782.
Background
We know very little about the genetic factors affecting susceptibility to drug-induced central nervous system (CNS) toxicities, and this has limited our ability to optimally utilize existing drugs or to develop new drugs for CNS disorders. For example, haloperidol is a potent dopamine antagonist that is used to treat psychotic disorders, but 50% of treated patients develop characteristic extrapyramidal symptoms caused by haloperidol-induced toxicity (HIT), which limits its clinical utility. We do not have any information about the genetic factors affecting this drug-induced toxicity. HIT in humans is directly mirrored in a murine genetic model, where inbred mouse strains are differentially susceptible to HIT. Therefore, we genetically analyzed this murine model and performed a translational human genetic association study.
Methods and Findings
A whole genome SNP database and computational genetic mapping were used to analyze the murine genetic model of HIT. Guided by the mouse genetic analysis, we demonstrate that genetic variation within an ABC-drug efflux transporter (Abcb5) affected susceptibility to HIT. In situ hybridization results reveal that Abcb5 is expressed in brain capillaries, and by cerebellar Purkinje cells. We also analyzed chromosome substitution strains, imaged haloperidol abundance in brain tissue sections and directly measured haloperidol (and its metabolite) levels in brain, and characterized Abcb5 knockout mice. Our results demonstrate that Abcb5 is part of the blood-brain barrier; it affects susceptibility to HIT by altering the brain concentration of haloperidol. Moreover, a genetic association study in a haloperidol-treated human cohort indicates that human ABCB5 alleles had a time-dependent effect on susceptibility to individual and combined measures of HIT. Abcb5 alleles are pharmacogenetic factors that affect susceptibility to HIT, but it is likely that additional pharmacogenetic susceptibility factors will be discovered.
Conclusions
ABCB5 alleles alter susceptibility to HIT in mouse and humans. This discovery leads to a new model that (at least in part) explains inter-individual differences in susceptibility to a drug-induced CNS toxicity.
Gary Peltz and colleagues examine the role of ABCB5 alleles in haloperidol-induced toxicity in a murine genetic model and humans treated with haloperidol.
Editors' Summary
Background
The brain is the control center of the human body. This complex organ controls thoughts, memory, speech, and movement, it is the seat of intelligence, and it regulates the function of many organs. The brain comprises many different parts, all of which work together but all of which have their own special functions. For example, the forebrain is involved in intellectual activities such as thinking whereas the hindbrain controls the body’s vital functions and movements. Messages are passed between the various regions of the brain and to other parts of the body by specialized cells called neurons, which release and receive signal molecules known as neurotransmitters. Like all the organs in the body, blood vessels supply the brain with the oxygen, water, and nutrients it needs to function. Importantly, however, the brain is protected from infectious agents and other potentially dangerous substances circulating in the blood by the “blood-brain barrier,” a highly selective permeability barrier that is formed by the cells lining the fine blood vessels (capillaries) within the brain.
Why Was This Study Done?
Although drugs have been developed to treat various brain disorders, more active and less toxic drugs are needed to improve the treatment of many if not most of these conditions. Unfortunately, relatively little is known about how the blood-brain barrier regulates the entry of drugs into the brain or about the genetic factors that affect the brain’s susceptibility to drug-induced toxicities. It is not known, for example, why about half of patients given haloperidol—a drug used to treat psychotic disorders (conditions that affect how people think, feel, or behave)—develop tremors and other symptoms caused by alterations in the brain region that controls voluntary movements. Here, to improve our understanding of how drugs enter the brain and impact its function, the researchers investigate the genetic factors that affect haloperidol-induced toxicity by genetically analyzing several inbred mouse strains (every individual in an inbred mouse strain is genetically identical) with different susceptibilities to haloperidol-induced toxicity and by undertaking a human genetic association study (a study that looks for non-chance associations between specific traits and genetic variants).
What Did the Researchers Do and Find?
The researchers used a database of genetic variants called single nucleotide polymorphisms (SNPs) and a computational genetic mapping approach to show first that variations within the gene encoding Abcb5 affected susceptibility to haloperidol-induced toxicity (indicated by changes in the length of time taken by mice to move their paws when placed on an inclined wire-mesh screen) among inbred mouse strains. Abcb5 is an ATP-binding cassette transporter, a type of protein that moves molecules across cell membranes. The researchers next showed that Abcb5 is expressed in brain capillaries, which is the location of the blood-brain barrier. Abcb5 was also expressed in cerebellar Purkinje cells, which help to control motor (intentional) movements. They also measured the effect of haloperidol and the haloperidol concentration in brain tissue sections in mice that were genetically engineered to make no Abcb5 (Abcb5 knockout mice). Finally, the researchers investigated whether specific alleles (alternative versions) of ABCB5 are associated with haloperidol-induced toxicity in people. Among a group of 85 patients treated with haloperidol for a psychotic illness, one specific ABCB5 allele was associated with haloperidol-induced toxicity during the first few days of treatment.
What Do These Findings Mean?
These findings indicate that Abcb5 is a component of the blood-brain barrier in mice and suggest that genetic variants in the gene encoding this protein underlie, at least in part, the differences in susceptibility to haloperidol-induced toxicity seen among inbred mice strains. Moreover, the human genetic association study indicates that a specific ABCB5 allele also affects the susceptibility of people to haloperidol-induced toxicity. The researchers note that other ABCB5 alleles or other genetic factors that affect haloperidol-induced toxicity in people might emerge if larger groups of patients were studied. However, based on their findings, the researchers propose a new model for the genetic mechanisms that underlie inter-individual and cell type-specific differences in susceptibility to haloperidol-induced brain toxicity. If confirmed in future studies, this model might facilitate the development of more effective and less toxic drugs to treat a range of brain disorders.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001782.
The US National Institute of Neurological Disorders and Stroke provides information about a wide range of brain diseases (in English and Spanish); its fact sheet “Brain Basics: Know Your Brain” is a simple introduction to the human brain; its “Blueprint Neurotherapeutics Network” was established to develop new drugs for disorders affecting the brain and other parts of the nervous system
MedlinePlus provides links to additional resources about brain diseases and their treatment (in English and Spanish)
Wikipedia provides information about haloperidol, about ATP-binding cassette transporters and about genetic association (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001782
PMCID: PMC4315575  PMID: 25647612
20.  Sharing Genetic Test Results in Lynch Syndrome: Communication with Close and Distant Relatives 
Background and Aims
Clinical genetic testing can help direct cancer screening for members of Lynch Syndrome families; however there is limited information about family communication of genetic test results.
Methods
174 probands who had genetic testing for Lynch Syndrome were enrolled through 4 U.S. cancer genetics clinics. Subjects were asked whether they had disclosed their genetic test results to first, second and third-degree relatives. Univariate and multivariate analyses were used to identify clinical and demographic factors associated with informing immediate and extended family of genetic test results.
Results
171/174 probands (98% [95%CI: 95%-100%]) reported they had disclosed their genetic test result to a first-degree relative. Communication of test results to other relatives occurred significantly less often, with only 109 of 162 (67% [95%CI: 59%-74%]) subjects with second or third-degree relatives sharing their results. Individuals with a pathogenic mutation were significantly more likely to inform distant relatives than were subjects with a negative or indeterminate test result (OR 2.49 [95% CI: 1.14-5.40]). Probands' age, gender, and cancer status did not influence communication of genetic test results. Lack of closeness and concerns that relatives would worry or not understand the implications of test results were the primary reasons for not sharing genetic test results.
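An odds ratio with a 95% confidence interval, like the OR 2.49 [95% CI: 1.14-5.40] reported above, is conventionally computed from a 2x2 table on the log scale (Woolf's method). The sketch below uses made-up counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table:
                     disclosed  not disclosed
    pathogenic          a            b
    negative/indet.     c            d
    The 95% CI is computed on the log scale:
    exp(ln(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d)).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Toy counts: OR = (10 * 20) / (10 * 5) = 4.0
or_, lower, upper = odds_ratio_ci(10, 10, 5, 20)
```

An interval that excludes 1, as in the study's result, indicates a statistically significant association at the 5% level.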
Conclusions
Most individuals who undergo genetic testing for Lynch syndrome share their test results with first-degree family members; however these results reach more distant relatives significantly less often. Interventions to improve communication of genetic test results to members of the extended family are necessary to provide the optimal cancer prevention care to at-risk families.
doi:10.1016/j.cgh.2007.12.014
PMCID: PMC2536607  PMID: 18258490
21.  Personalised medicine in Canada: a survey of adoption and practice in oncology, cardiology and family medicine 
BMJ Open  2011;1(1):e000110.
Introduction
In order to provide baseline data on genetic testing as a key element of personalised medicine (PM), Canadian physicians were surveyed to determine roles, perceptions and experiences in this area. The survey measured attitudes, practice, observed benefits and impacts, and barriers to adoption.
Methods
A self-administered survey was provided to Canadian oncologists, cardiologists and family physicians and responses were obtained online, by mail or by fax. The survey was designed to be exploratory. Data were compared across specialties and geography.
Results
The overall response rate was 8.3%. Of the respondents, 43%, 30% and 27% were family physicians, cardiologists and oncologists, respectively. A strong majority of respondents agreed that genetic testing and PM can have a positive impact on their practice; however, only 51% agreed that there is sufficient evidence to order such tests. A low percentage of respondents felt that they were sufficiently informed and confident practicing in this area, although many reported that genetic tests they have ordered have benefited their patients. Half of the respondents agreed that genetic tests that would be useful in their practice are not readily available. A lack of practice guidelines, limited provider knowledge and lack of evidence-based clinical information were cited as the main barriers to practice. Differences across provinces were observed for measures relating to access to testing and the state of practice. Differences across specialties were observed for the state of practice, reported benefits and access to testing.
Conclusions
Canadian physicians recognise the benefits of genetic testing and PM; however, they lack the education, information and support needed to practice effectively in this area. Variability in practice and access to testing across specialties and across Canada was observed. These results support a need for national strategies and resources to facilitate physician knowledge, training and practice in PM.
Article summary
Article focus
Canadian physicians' perceptions and experience relating to genetic testing and personalised medicine (PM).
Practice and impact of genetic testing and PM in Canada and across specialties.
Implications for continued adoption of genetic testing and PM in Canada across specialties.
Key messages
Family physicians, cardiologists and oncologists across Canada are practicing PM and recognise its benefits and potential impacts.
Physicians reported a number of barriers to the adoption of PM that are currently affecting medical practice in Canada.
The practice of and access to genetic testing and PM varies across specialties and provinces, which will have an impact on their continued adoption.
Strengths and limitations of this study
First national survey of physicians on this topic
Allows for a baseline measure of practice for comparison in future studies
Medical discipline specific study
Administration of the survey over the period 26 May to 15 September 2010 may have negatively influenced the response rate.
There may have been differences in respondents based on the medium used to complete the survey (electronic vs paper-based).
The topic of genetic testing and personalised medicine may not have been relevant to all physicians who were sent the survey, which may have negatively affected the response rate.
All survey results were based on physicians' self-reports.
The physician contact information was purchased through a third party and some data were incomplete or inaccurate.
doi:10.1136/bmjopen-2011-000110
PMCID: PMC3191410  PMID: 22021765
22.  Genetic Predisposition to Increased Blood Cholesterol and Triglyceride Lipid Levels and Risk of Alzheimer Disease: A Mendelian Randomization Analysis 
PLoS Medicine  2014;11(9):e1001713.
In this study, Proitsi and colleagues use a Mendelian randomization approach to dissect the causal nature of the association between circulating lipid levels and late onset Alzheimer's Disease (LOAD) and find that genetic predisposition to increased plasma cholesterol and triglyceride lipid levels is not associated with elevated LOAD risk.
Please see later in the article for the Editors' Summary
Background
Although altered lipid metabolism has been extensively implicated in the pathogenesis of Alzheimer disease (AD) through cell biological, epidemiological, and genetic studies, the molecular mechanisms linking cholesterol and AD pathology are still not well understood and contradictory results have been reported. We have used a Mendelian randomization approach to dissect the causal nature of the association between circulating lipid levels and late onset AD (LOAD) and test the hypothesis that genetically raised lipid levels increase the risk of LOAD.
Methods and Findings
We included 3,914 patients with LOAD, 1,675 older individuals without LOAD, and 4,989 individuals from the general population from six genome wide studies drawn from a white population (total n = 10,578). We constructed weighted genotype risk scores (GRSs) for four blood lipid phenotypes (high-density lipoprotein cholesterol [HDL-c], low-density lipoprotein cholesterol [LDL-c], triglycerides, and total cholesterol) using well-established SNPs in 157 loci for blood lipids reported by Willer and colleagues (2013). Both full GRSs using all SNPs associated with each trait at p<5×10−8 and trait specific scores using SNPs associated exclusively with each trait at p<5×10−8 were developed. We used logistic regression to investigate whether the GRSs were associated with LOAD in each study and results were combined together by meta-analysis. We found no association between any of the full GRSs and LOAD (meta-analysis results: odds ratio [OR] = 1.005, 95% CI 0.82–1.24, p = 0.962 per 1 unit increase in HDL-c; OR = 0.901, 95% CI 0.65–1.25, p = 0.530 per 1 unit increase in LDL-c; OR = 1.104, 95% CI 0.89–1.37, p = 0.362 per 1 unit increase in triglycerides; and OR = 0.954, 95% CI 0.76–1.21, p = 0.688 per 1 unit increase in total cholesterol). Results for the trait specific scores were similar; however, the trait specific scores explained much smaller phenotypic variance.
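A weighted genotype risk score of the kind described is, in essence, a dot product of per-SNP risk-allele counts (0, 1, or 2) and published per-allele effect sizes. A minimal sketch with illustrative numbers (not the Willer and colleagues weights):

```python
def weighted_grs(allele_counts, effect_sizes):
    """Weighted genotype risk score for one individual: the sum over SNPs
    of (risk-allele count) * (published per-allele effect size)."""
    if len(allele_counts) != len(effect_sizes):
        raise ValueError("one effect size is needed per SNP")
    return sum(g * w for g, w in zip(allele_counts, effect_sizes))

# Toy example with three SNPs: 0*0.1 + 1*0.2 + 2*0.3 = 0.8
score = weighted_grs([0, 1, 2], [0.1, 0.2, 0.3])
```

In the study design described above, each participant's GRS would then enter a logistic regression as the predictor of LOAD status, with the per-study estimates combined by meta-analysis.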
Conclusions
Genetic predisposition to increased blood cholesterol and triglyceride lipid levels is not associated with elevated LOAD risk. The observed epidemiological associations between abnormal lipid levels and LOAD risk could therefore be attributed to the result of biological pleiotropy or could be secondary to LOAD. Limitations of this study include the small proportion of lipid variance explained by the GRS, biases in case-control ascertainment, and the limitations implicit to Mendelian randomization studies. Future studies should focus on larger LOAD datasets with longitudinal sampled peripheral lipid measures and other markers of lipid metabolism, which have been shown to be altered in LOAD.
Editors' Summary
Background
Currently, about 44 million people worldwide have dementia, a group of brain disorders characterized by an irreversible decline in memory, communication, and other “cognitive” functions. Dementia mainly affects older people and, because people are living longer, experts estimate that more than 135 million people will have dementia by 2050. The commonest form of dementia is Alzheimer disease. In this type of dementia, protein clumps called plaques and neurofibrillary tangles form in the brain and cause its degeneration. The earliest sign of Alzheimer disease is usually increasing forgetfulness. As the disease progresses, affected individuals gradually lose their ability to deal with normal daily activities such as dressing. They may become anxious or aggressive or begin to wander. They may also eventually lose control of their bladder and of other physical functions. At present, there is no cure for Alzheimer disease although some of its symptoms can be managed with drugs. Most people with the disease are initially cared for at home by relatives and other unpaid carers, but many patients end their days in a care home or specialist nursing home.
Why Was This Study Done?
Several lines of evidence suggest that lipid metabolism (how the body handles cholesterol and other fats) is altered in patients whose Alzheimer disease develops after the age of 60 years (late onset Alzheimer disease, LOAD). In particular, epidemiological studies (observational investigations that examine the patterns and causes of disease in populations) have found an association between high amounts of cholesterol in the blood in midlife and the risk of LOAD. However, observational studies cannot prove that abnormal lipid metabolism (dyslipidemia) causes LOAD. People with dyslipidemia may share other characteristics that cause both dyslipidemia and LOAD (confounding) or LOAD might actually cause dyslipidemia (reverse causation). Here, the researchers use “Mendelian randomization” to examine whether lifetime changes in lipid metabolism caused by genes have a causal impact on LOAD risk. In Mendelian randomization, causality is inferred from associations between genetic variants that mimic the effect of a modifiable risk factor and the outcome of interest. Because gene variants are inherited randomly, they are not prone to confounding and are free from reverse causation. So, if dyslipidemia causes LOAD, genetic variants that affect lipid metabolism should be associated with an altered risk of LOAD.
What Did the Researchers Do and Find?
The researchers investigated whether genetic predisposition to raised lipid levels increased the risk of LOAD in 10,578 participants (3,914 patients with LOAD, 1,675 elderly people without LOAD, and 4,989 population controls) using data collected in six genome wide studies looking for gene variants associated with Alzheimer disease. The researchers constructed a genotype risk score (GRS) for each participant using genetic risk markers for four types of blood lipids on the basis of the presence of single nucleotide polymorphisms (SNPs, a type of gene variant) in their DNA. When the researchers used statistical methods to investigate the association between the GRS and LOAD among all the study participants, they found no association between the GRS and LOAD.
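The GRS construction described above amounts to a weighted sum of risk-allele counts. A minimal sketch in Python, using made-up SNP counts and effect sizes for illustration rather than the study's actual weights:

```python
# Minimal sketch of a weighted genotype risk score (GRS).
# The SNP counts and effect sizes below are hypothetical, not the study's.

def weighted_grs(allele_counts, weights):
    """Sum risk-allele counts (0, 1, or 2 per SNP) times per-SNP effect sizes."""
    assert len(allele_counts) == len(weights)
    return sum(c * w for c, w in zip(allele_counts, weights))

# Example: three hypothetical SNPs with per-allele effects on a lipid trait
counts = [2, 0, 1]          # copies of the risk allele carried at each SNP
betas = [0.05, 0.12, 0.08]  # illustrative per-allele effect sizes
score = weighted_grs(counts, betas)
print(score)  # 0.18
```

Each participant's score can then be entered as a predictor in a logistic regression against case/control status, as the researchers did.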
What Do These Findings Mean?
These findings suggest that the genetic predisposition to raised blood levels of four types of lipid is not causally associated with LOAD risk. The accuracy of this finding may be affected by several limitations of this study, including the small proportion of lipid variance explained by the GRS and the validity of several assumptions that underlie all Mendelian randomization studies. Moreover, because all the participants in this study were white, these findings may not apply to people of other ethnic backgrounds. Given their findings, the researchers suggest that the observed epidemiological associations between abnormal blood lipid levels and LOAD risk could be secondary to variation in lipid levels for reasons other than genetics, or to LOAD itself, a possibility that can be investigated by studying blood lipid levels and other markers of lipid metabolism over time in large groups of patients with LOAD. Importantly, however, these findings provide new information about the role of lipids in LOAD development that may eventually lead to new therapeutic and public-health interventions for Alzheimer disease.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001713.
The UK National Health Service Choices website provides information (including personal stories) about Alzheimer's disease
The UK not-for-profit organization Alzheimer's Society provides information for patients and carers about dementia, including personal experiences of living with Alzheimer's disease
The US not-for-profit organization Alzheimer's Association also provides information for patients and carers about dementia and personal stories about dementia
Alzheimer's Disease International is the international federation of Alzheimer disease associations around the world; it provides links to individual associations, information about dementia, and links to World Alzheimer Reports
MedlinePlus provides links to additional resources about Alzheimer's disease (in English and Spanish)
Wikipedia has a page on Mendelian randomization (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001713
PMCID: PMC4165594  PMID: 25226301
23.  Screening Mammography for Women Aged 40 to 49 Years at Average Risk for Breast Cancer 
Executive Summary
Objective
The aim of this review was to determine the effectiveness of screening mammography in women aged 40 to 49 years at average risk for breast cancer.
Clinical Need
The effectiveness of screening mammography in women aged over 50 years has been established, yet the issue of screening in women aged 40 to 49 years is still unsettled. The Canadian Task Force of Preventive Services, which sets guidelines for screening mammography for all provinces, supports neither the inclusion of this screening procedure in, nor its exclusion from, the periodic health examination for 40- to 49-year-old women. In addition, 2 separate reviews, one conducted in Quebec in 2005 and the other in Alberta in 2000, each concluded that there is an absence of convincing evidence on the effectiveness of screening mammography for women in this age group who are at average risk for breast cancer.
In the United States, there is disagreement among organizations on whether population-based mammography should begin at the age of 40 or 50 years. The National Institutes of Health, the American Association for Cancer Research, and the American Academy of Family Physicians recommend against screening women in their 40s, whereas the United States Preventive Services Task Force, the National Cancer Institute, the American Cancer Society, the American College of Radiology, and the American College of Obstetricians and Gynecologists recommend screening mammograms for women aged 40 to 49 years. Furthermore, in comparing screening guidelines between Canada and the United States, it is also important to recognize that “standard care” within a socialized medical system such as Canada’s differs from that of the United States. The National Breast Screening Study (NBSS-1), a randomized screening trial conducted in multiple centres across Canada, has shown no breast cancer mortality benefit from annual mammograms in women randomized between the ages of 40 and 49, relative to standard care (i.e., physical exam and teaching of breast self-examination on entry to the study, with usual community care thereafter).
At present, organized screening programs in Canada systematically screen women starting at 50 years of age, although with a physician’s referral, a screening mammogram is an insured service in Ontario for women under 50 years of age.
International estimates of the epidemiology of breast cancer show that the incidence of breast cancer is increasing for all ages combined, whereas mortality is decreasing, though at a slower rate. These decreasing mortality rates may be attributed to screening and advances in breast cancer therapy over time. Decreases in mortality attributable to screening may be a result of the earlier detection and treatment of invasive cancers, in addition to the increased detection of ductal carcinoma in situ (DCIS), of which certain subpathologies are less lethal. Evidence from the SEER cancer registry in the United States indicates that the age-adjusted incidence of DCIS has increased almost 10-fold over a 20-year period (from 2.7 to 25 per 100,000).
The incidence of breast cancer is lower in women aged 40 to 49 years than in women aged 50 to 69 years (about 140 per 100,000 versus 500 per 100,000 women, respectively), as is the sensitivity (about 75% versus 85% for women aged under and over 50, respectively) and specificity of mammography (about 80% versus 90% for women aged under and over 50, respectively). The increased density of breast tissue in younger women is mainly responsible for the lower accuracy of this procedure in this age group. In addition, because breast cancers that occur before the age of 50 are more likely to be associated with genetic predisposition than those diagnosed after the age of 50, mammography may not be an optimal screening method for younger women.
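The accuracy figures quoted above compound: lower prevalence, sensitivity, and specificity together sharply reduce the positive predictive value of a mammogram in younger women. A rough back-of-the-envelope sketch (using the quoted incidence as a crude stand-in for prevalence, which understates the absolute values but preserves the comparison):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from test accuracy and disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Figures quoted above; incidence used as a rough proxy for prevalence.
younger = ppv(0.75, 0.80, 140 / 100_000)  # women aged 40-49: about 0.5%
older = ppv(0.85, 0.90, 500 / 100_000)    # women aged 50-69: about 4.1%
print(younger, older)
```

On these illustrative numbers, a positive screen in a 40- to 49-year-old woman is roughly eight times less likely to indicate cancer than in a 50- to 69-year-old woman.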
Treatment options vary with the stage of disease (based on tumor size, involvement of surrounding tissue, and number of affected axillary lymph nodes) and its pathology, and may include a combination of surgery, chemotherapy, and/or radiotherapy.
Surgery is the first-line intervention for biopsy confirmed tumours. The subsequent use of radiation, chemotherapy, or hormonal treatments is dependent on the histopathologic characteristics of the tumor and the type of surgery. There is controversy regarding the optimal treatment of DCIS, which is noninvasive.
Given this controversy over the effectiveness of mammography, the potential for women to be overtreated or for actual cancers to be missed, and the increased risk of breast cancer associated with exposure to annual mammograms over a 10-year period, the Ontario Health Technology Advisory Committee requested this review of screening mammography in women aged 40 to 49 years at average risk for breast cancer. This review is the first of 2 parts and concentrates on the effectiveness of screening mammography (i.e., film mammography, FM) for women at average risk aged 40 to 49 years. The second part will be an evaluation of screening by either magnetic resonance imaging or digital mammography, with the objective of determining the optimal screening modality in these younger women.
Review Strategy
The following questions were asked:
Does screening mammography for women aged 40 to 49 years who are at average risk for breast cancer reduce breast cancer mortality?
What is the sensitivity and specificity of mammography for this age group?
What are the risks associated with annual screening from ages 40 to 49?
What are the risks associated with false positive and false negative mammography results?
What are the economic considerations if evidence for effectiveness is established?
The Medical Advisory Secretariat followed its standard procedures and searched these electronic databases: Ovid MEDLINE, EMBASE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews and the International Network of Agencies for Health Technology Assessment.
Keywords used in the search were breast cancer, breast neoplasms, mass screening, and mammography.
In total, the search yielded 6,359 articles specific to breast cancer screening and mammography. This did not include reports on diagnostic mammograms. The search was further restricted to English-language randomized controlled trials (RCTs), systematic reviews, and meta-analyses published between 1995 and 2005. Excluded were case reports, comments, editorials, and letters, which narrowed the results to 516 articles and previous health technology policy assessments.
These were examined against the criteria outlined below. This resulted in the inclusion of 5 health technology assessments, the Canadian Preventive Services Task Force report, the United States Preventive Services Task Force report, 1 Cochrane review, and 8 RCTs.
Inclusion Criteria
English-language articles, and English and French-language health technology policy assessments, conducted by other organizations, from 1995 to 2005
Articles specific to RCTs of screening mammography of women at average risk for breast cancer that included results for women randomized to studies between the ages of 40 and 49 years
Studies in which women were randomized to screening with or without mammography, although women may have had clinical breast examinations and/or may have been conducting breast self-examination.
UK Age Trial results published in December 2006.
Exclusion Criteria
Observational studies, including those nested within RCTs
RCTs that do not include results on women between the ages of 40 and 49 at randomization
Studies in which mammography was compared with other radiologic screening modalities, for example, digital mammography, magnetic resonance imaging or ultrasound.
Studies in which women randomized had a personal history of breast cancer.
Intervention
Film mammography
Comparators
Within RCTs, the comparison group would have been women randomized to not undergo screening mammography, although they may have had clinical breast examinations and/or have been conducting breast self-examination.
Outcomes of Interest
Breast cancer mortality
Summary of Findings
There is Level 1 Canadian evidence that screening women between the ages of 40 and 49 years who are at average risk for breast cancer is not effective, and that the absence of a benefit is sustained over a maximum follow-up period of 16 years.
All remaining studies that reported on women aged under 50 years were based on subset analyses. They provide additional evidence that, when all these RCTs are taken into account, there is no significant reduction in breast cancer mortality associated with screening mammography in women aged 40 to 49 years.
Conclusions
There is Level 1 evidence that screening mammography in women aged 40 to 49 years at average risk for breast cancer is not effective in reducing mortality.
Moreover, risks associated with exposure to mammographic radiation, the increased risk of missed cancers due to lower mammographic sensitivity, and the psychological impact of false positives, are not inconsequential.
The UK Age Trial results published in December 2006 did not change these conclusions.
PMCID: PMC3377515  PMID: 23074501
24.  Communicating accuracy of tests to general practitioners: a controlled study 
BMJ : British Medical Journal  2002;324(7341):824-826.
Objective
To assess the extent to which different forms of summarising diagnostic test information influence general practitioners' ability to estimate disease probabilities.
Design
Controlled questionnaire study.
Setting
Three Swiss conferences in continuous medical education.
Participants
263 general practitioners.
Intervention
Questionnaire with multiple choice questions about terms of test accuracy and a clinical vignette with the results of a diagnostic test described in three different ways (test result only, test result plus test sensitivity and specificity, test result plus the positive likelihood ratio presented in plain language).
Main outcome measures
Doctors' knowledge and application of terms of test accuracy and estimation of disease probability in the clinical vignette.
Results
The correct definitions for sensitivity and predictive value were chosen by 76% and 61% of the doctors respectively, but only 22% chose the correct answer for the post-test probability of a positive screening test. In the clinical vignette doctors given the test result only overestimated its diagnostic value (median attributed likelihood ratio (aLR)=9.0, against 2.54 reported in the literature). Providing the scan's sensitivity and specificity reduced the overestimation (median aLR=6.0) but to a lesser extent than simple wording of the likelihood ratio (median aLR=3.0).
Conclusion
Most general practitioners recognised the correct definitions for sensitivity and positive predictive value but did not apply them correctly. Conveying test accuracy information in simple, non-technical language improved their ability to estimate disease probabilities accurately.
What is already known on this topic
Many doctors confuse the sensitivity of clinical tests and their positive predictive value
Doctors tend to overestimate information derived from such tests and underestimate information from a patient's clinical history
Most primary research on diagnostic accuracy is reported using sensitivity and specificity or likelihood ratios
What this study adds
In a cohort of experienced Swiss general practitioners most were unable to interpret correctly numerical information on the diagnostic accuracy of a screening test
When presented with a positive result alone they grossly overestimated its value
Adding information on the test's sensitivity and specificity moderated these overestimates, and expressing the same numerical information as a positive likelihood ratio in simple, non-technical language brought the estimates still closer to their true values
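The likelihood ratio arithmetic underlying the vignette can be made explicit: post-test odds equal pre-test odds multiplied by the likelihood ratio. A small sketch, using an illustrative 10% pre-test probability (not a figure from the study), contrasting the literature likelihood ratio of 2.54 with the median attributed value of 9.0:

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability via odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Illustrative 10% pre-test probability:
print(round(post_test_probability(0.10, 2.54), 3))  # 0.22 (literature LR)
print(round(post_test_probability(0.10, 9.0), 3))   # 0.5 (median attributed LR)
```

The gap between 22% and 50% post-test probability illustrates how far the doctors' attributed likelihood ratios inflated the test's apparent diagnostic value.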
PMCID: PMC100792  PMID: 11934776
25.  Ethics and Neuropsychiatric Genetics: A Review of Major Issues 
Advances in neuropsychiatric genetics hold great hopes for improved prevention, diagnosis, and treatment. However, the power of genetic testing to identify individuals at increased risk for disorders and to convey information about relatives creates a set of complex ethical issues. Public attitudes are inevitably affected by the shadow of eugenics, with its history of distorting scientific findings to serve socio-political ends. Nonetheless, the growing availability of genetic tests means that more patients will seek genetic information, and physicians must manage the process of informed consent to allow meaningful decisions. Patients should be helped to understand the often-limited predictive power of current knowledge, potential psychological impact, risks of stigma and discrimination, and possible implications for family members. Decisions for predictive testing of children raise additional concerns, including distortions of family dynamics and negative effects on children’s self-image; testing is best deferred until adulthood unless preventive interventions exist. Pharmacogenomic testing, part of personalized medicine, may bring collateral susceptibility information for which patients should be prepared. The implications of genetic findings for families raise the question of whether physicians have duties to inform family members of implications for their health. Finally, participation in research in neuropsychiatric genetics evokes a broad range of ethical concerns, including the contentious issue of the extent to which results should be returned to individual subjects. As genetic science becomes more widely applied, the public will become more sophisticated and will be likely to demand a greater role in determining social policy on these issues.
doi:10.1017/S1461145711001982
PMCID: PMC3359421  PMID: 22272758
ethics; genetics; consent
