Epidemiologists have used case-control studies to investigate enteric disease outbreaks for many decades. Increasingly, case-control studies are also used to investigate risk factors for sporadic (not outbreak-associated) disease. While the same basic approach is used, there are important differences between outbreak and sporadic disease settings that need to be considered in the design and implementation of the case-control study for sporadic disease. Through the International Collaboration on Enteric Disease “Burden of Illness” Studies (the International Collaboration), we reviewed 79 case-control studies of sporadic enteric infections caused by nine pathogens that were conducted in 22 countries and published from 1990 through to 2009. We highlight important methodological and study design issues (including case definition, control selection, and exposure assessment) and discuss how approaches to the study of sporadic enteric disease have changed over the last 20 years (e.g., making use of more sensitive case definitions, databases of controls, and computer-assisted interviewing). As our understanding of sporadic enteric infections grows, methods and topics for case-control studies are expected to continue to evolve; for example, advances in understanding of the role of immunity can be used to improve control selection, the apparent protective effects of certain foods can be further explored, and case-control studies can be used to provide population-based measures of the burden of disease.
In the isolated population of Sardinia, a Mediterranean island, ~25% of ALS cases carry either a p.A382T mutation of the TARDBP gene or a GGGGCC hexanucleotide repeat expansion in the first intron of the C9ORF72 gene.
To describe the co-presence of two genetic mutations in two Sardinian ALS patients.
We identified two index ALS cases carrying both the p.A382T missense mutation of TARDBP gene and the hexanucleotide repeat expansion of C9ORF72 gene.
The index case of Family A had bulbar ALS and frontotemporal dementia (FTD) at age 43. His father, who carried the hexanucleotide repeat expansion of the C9ORF72 gene, had spinal ALS and FTD at age 64, and his mother, who carried the TARDBP p.A382T missense mutation, had spinal ALS and FTD at age 69. The index case of Family B developed spinal ALS without FTD at age 35 and had a rapid course to respiratory failure. His parents are healthy at 62 and 63. The two patients share the known founder risk haplotypes across both the C9ORF72 9p21 locus and the TARDBP 1p36.22 locus.
Our data show that mutations in rare neurodegeneration-causing genes can co-exist within the same individual and are associated with a more severe disease course.
Purpose of the study
To determine the intra-rater and inter-rater reliability of the Dynamic Rotator Stability Test (DRST), and its concurrent validity against the University of Pennsylvania Shoulder Score (PENN) scale.
Material and method
Forty subjects of either gender, aged 18–70 years, with painful shoulder conditions of musculoskeletal origin were selected through convenience sampling. Tester 1 and tester 2 administered the DRST and PENN scale in random order. In a subgroup of 20 subjects, the DRST was administered by both testers to determine inter-rater reliability. A 180° standard universal goniometer was used to take measurements.
For intra-rater reliability, all test variables showed highly significant correlations (r=0.94–1). For inter-rater reliability with tester 2, test variables such as position, ROM, force, direction of abnormal translation, pain during the test and compensatory movement during the test were significant (r=0.71–1). Only some variables of the DRST showed significant correlation with the PENN scale (r=0.32–0.45).
The Dynamic Rotator Stability Test has good intra-rater and moderate inter-rater reliability. Its concurrent validity against the PENN Shoulder Score was found to be poor.
Dynamic Rotator Stability Test; Shoulder; Instability; Rotator cuff
To evaluate the validity of the International Classification of Diseases, 10th Revision (ICD-10) diagnosis code for hyponatraemia (E87.1) in two settings: at presentation to the emergency department and at hospital admission.
Population-based retrospective validation study.
Twelve hospitals in Southwestern Ontario, Canada, from 2003 to 2010.
Patients aged 66 years and older with serum sodium laboratory measurements at presentation to the emergency department (n=64 581) and at hospital admission (n=64 499).
Main outcome measures
Sensitivity, specificity, positive predictive value and negative predictive value comparing various ICD-10 diagnostic coding algorithms for hyponatraemia to serum sodium laboratory measurements (reference standard). Median serum sodium values comparing patients who were code positive and code negative for hyponatraemia.
The sensitivity of hyponatraemia (defined by a serum sodium ≤132 mmol/l) for the best-performing ICD-10 coding algorithm was 7.5% at presentation to the emergency department (95% CI 7.0% to 8.2%) and 10.6% at hospital admission (95% CI 9.9% to 11.2%). Both specificities were greater than 99%. In the two settings, the positive predictive values were 96.4% (95% CI 94.6% to 97.6%) and 82.3% (95% CI 80.0% to 84.4%), while the negative predictive values were 89.2% (95% CI 89.0% to 89.5%) and 87.1% (95% CI 86.8% to 87.4%). In patients who were code positive for hyponatraemia, the median (IQR) serum sodium measurements were 123 (119–126) mmol/l and 125 (120–130) mmol/l in the two settings. In code negative patients, the measurements were 138 (136–140) mmol/l and 137 (135–139) mmol/l.
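The diagnostic-accuracy measures reported above all derive from a 2×2 cross-tabulation of ICD-10 code status against the serum sodium reference standard. A minimal sketch of those calculations, with hypothetical counts chosen only to mimic the low-sensitivity/high-specificity pattern reported (none of these numbers come from the study):

```python
# Sketch: diagnostic-accuracy metrics from a 2x2 table of ICD-10 code
# status versus the laboratory reference standard. Counts are invented
# for illustration and are NOT taken from the study data.

def accuracy_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV for a 2x2 table."""
    sensitivity = tp / (tp + fn)   # code positive among lab-confirmed cases
    specificity = tn / (tn + fp)   # code negative among lab-negative patients
    ppv = tp / (tp + fp)           # lab-confirmed among code positives
    npv = tn / (tn + fn)           # lab-negative among code negatives
    return sensitivity, specificity, ppv, npv

# Hypothetical cohort: 1000 lab-confirmed cases, 9000 lab-negative patients.
sens, spec, ppv, npv = accuracy_metrics(tp=80, fp=10, fn=920, tn=8990)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} "
      f"ppv={ppv:.1%} npv={npv:.1%}")
```

With counts like these, the code misses most lab-confirmed cases (low sensitivity) yet almost never flags a lab-negative patient (specificity >99%), which is exactly how a highly specific but insensitive administrative code behaves.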
The ICD-10 diagnostic code for hyponatraemia differentiates between two groups of patients with distinct serum sodium measurements at both presentation to the emergency department and at hospital admission. However, these codes underestimate the true incidence of hyponatraemia due to low sensitivity.
Epidemiology; Adult nephrology < Nephrology
Evaluate the validity of the International Classification of Diseases, 10th revision (ICD-10) code for hyperkalaemia (E87.5) in two settings: at presentation to an emergency department and at hospital admission.
Population-based validation study.
12 hospitals in Southwestern Ontario, Canada, from 2003 to 2010.
Elderly patients with serum potassium values at presentation to an emergency department (n=64 579) and at hospital admission (n=64 497).
Sensitivity, specificity, positive-predictive value and negative-predictive value. Serum potassium values in patients with and without a hyperkalaemia code (code positive and code negative, respectively).
The sensitivity of the best-performing ICD-10 coding algorithm for hyperkalaemia (defined by serum potassium >5.5 mmol/l) was 14.1% (95% CI 12.5% to 15.9%) at presentation to an emergency department and 14.6% (95% CI 13.3% to 16.1%) at hospital admission. Both specificities were greater than 99%. In the two settings, the positive-predictive values were 83.2% (95% CI 78.4% to 87.1%) and 62.0% (95% CI 57.9% to 66.0%), while the negative-predictive values were 97.8% (95% CI 97.6% to 97.9%) and 96.9% (95% CI 96.8% to 97.1%). In patients who were code positive for hyperkalaemia, median (IQR) serum potassium values were 6.1 (5.7 to 6.8) mmol/l at presentation to an emergency department and 6.0 (5.1 to 6.7) mmol/l at hospital admission. For code-negative patients median (IQR) serum potassium values were 4.0 (3.7 to 4.4) mmol/l and 4.1 (3.8 to 4.5) mmol/l in each of the two settings, respectively.
Patients with hospital encounters who were ICD-10 E87.5 code positive and code negative had distinctly higher and lower serum potassium values, respectively. However, owing to the very low sensitivity of the code, the incidence of hyperkalaemia is underestimated.
Management of a patient's diabetes is entirely dependent upon the type of diabetes they are deemed to have. Patients with Type 1 diabetes are insulin deficient so require multiple daily insulin injections, whereas patients with Type 2 diabetes still have some endogenous insulin production, so insulin treatment is only required when diet and tablets fail to achieve good glycaemic control. Despite the importance of a correct diagnosis, classification of diabetes is based on aetiology and relies on clinical judgement. There are no clinical guidelines on how to determine whether a patient has Type 1 or Type 2 diabetes. We aim to systematically review the literature to derive evidence-based clinical criteria for the classification of the major subtypes of diabetes.
Methods and analysis
We will perform a systematic review of diagnostic accuracy studies to establish clinical criteria that predict the subsequent development of absolute insulin deficiency seen in Type 1 diabetes. Insulin deficiency will be determined by reference standard C-peptide concentrations. Synthesis of criteria identified will be undertaken using hierarchical summary receiver operating characteristic curves.
Ethics and dissemination
As this is a systematic review, there will be no ethical issues. We will disseminate results by writing up the final systematic review and synthesis for publication in a peer-reviewed journal and will present at national and international diabetes-related meetings.
Statistics & Research Methods
To explore the potential impact of plain cigarette packaging on the smoking perceptions and behavioural intentions of Flemish adolescents.
We performed a cross-sectional study using the qualitative method of focus group discussions.
We performed eight focus group discussions in which 55 adolescents took part, 32 female and 23 male. Inclusion criteria were: Flemish males and females aged 15–16 or 17–18 years, attending regular high-school education or vocational training, who were current smokers or had ever smoked.
Outcome measure (planned as well as measured)
The opinions and perceptions of young Flemish smokers regarding the impact of cigarette packaging on their smoking behaviour.
Young people perceive plain packages as less attractive, cheap and unreliable. Because of the unattractiveness of the plain packaging, the health warnings catch the eye much more strongly.
In this first scientific study in Flanders on this topic, it emerged that plain packaging could be a strong policy tool to reduce the number of adolescents starting smoking. Validation of these findings by conducting a quantitative survey in the same target group is recommended.
Public Health; Qualitative Research; Preventive Medicine
In this study we explore the ethical issues around unlinked anonymous testing (UAT) of blood, a method of seroprevalence surveillance for infectious diseases. Our study focused on UAT for HIV, although UAT can be used for other infectious diseases. The objectives of the research were to gain a better understanding of the views of key informants in countries adopting different UAT testing strategies, and to use the findings of the research to inform health policy.
Qualitative study using in-depth interviews and ethical analysis.
Four countries using different strategies around UAT of blood for HIV (the UK, the USA, the Netherlands and Norway).
Twenty-three key informants in the four countries.
Participants from the four countries have different views on UAT of blood, and the approaches and policies on UAT adopted by different countries have been historically and culturally determined. We use our findings to explore the relationship between public health policy and ethics, framing our discussion in relation to two important contemporary debates: informed consent for participation in medical and public health research; and the balance between the individual good and the public good.
Qualitative research and ethical analysis of UAT of blood in different countries has yielded important findings for consideration by policy makers. The policy of UAT of blood for HIV and other diseases in the UK needs reconsideration in the light of these findings.
Public Health; Medical Ethics
The purpose of this study was to identify organisational processes and structures that are associated with nurse-reported patient safety and quality of nursing.
This is an observational cross-sectional study using survey methods.
Respondents from 31 Norwegian hospitals with more than 85 beds were included in the survey.
All registered nurses working in direct patient care in a position of 20% or more were invited to answer the survey. In this study, 3618 nurses from surgical and medical wards responded (response rate 58.9%). Nurses' practice environment was defined as organisational processes and measured by the Nursing Work Index Revised and items from the Hospital Survey on Patient Safety Culture.
Nurses' assessments of patient safety, quality of nursing, confidence in how their patients manage after discharge and frequency of adverse events were used as outcome measures.
Quality system, nurse–physician relations, patient safety management and staff adequacy were process measures associated with nurse-reported work-related and patient-related outcomes, but we found no associations with nurse participation, education and career, or ward leadership. Most organisational structures were non-significant in the multilevel model except for nurses' affiliation to a medical department and hospital type.
Organisational structures may have minor impact on how nurses perceive work-related and patient-related outcomes, but the findings in this study indicate that there is a considerable potential to address organisational design in improvement of patient safety and quality of care.
The main objective of this study was to determine the effectiveness of smoking cessation interventions (SCIs) for increasing cessation rates in smokers with cerebrovascular disease.
Systematic review. Two independent reviewers searched information sources and assessed studies for inclusion/exclusion criteria.
Eligibility criteria for included studies
Randomised controlled trials, conducted prior to 22 May 2012, investigating SCIs in smokers with cerebrovascular disease were included. No age or ethnicity limitations were applied, in order to be as inclusive as possible.
We followed the PRISMA statement approach to identify relevant randomised controlled studies. Because of the variability of the interventions used in the reported studies, a meta-analysis was not conducted.
Of 852 identified articles, 4 fit the inclusion criteria, describing outcomes in 354 patients. The overall cessation rate with an SCI was 23.9% (42 of 176), while without one it was 20.8% (37 of 178).
There are a limited number of reported intervention studies that explore this area of secondary stroke prevention. Furthermore, of those intervention studies that were found, only two implemented evidence-based approaches to smoking cessation. A meta-analysis was not conducted because of the variability of interventions in the reported studies. Larger studies with homogeneous interventions are needed to determine how effective SCIs are in increasing cessation in smokers with established cerebrovascular disease.
epidemiology; systematic review; smoking cessation; stroke < neurology
To describe physician perspectives on the causes of obesity and solutions for improving obesity care, and to identify differences in these perspectives by number of years since completion of medical school.
National cross-sectional online survey from 9 February to 1 March 2011.
500 primary care physicians.
We evaluated physician perspectives on: (1) causes of obesity, (2) competence in treating obese patients, (3) perspectives on the health professional most qualified to help obese patients lose or maintain weight and (4) solutions for improving obesity care.
Primary care physicians overwhelmingly supported additional training (such as nutrition counselling) and practice-based changes (such as having scales report body mass index) to help them improve their obesity care. They also identified nutritionists/dietitians as the most qualified providers to care for obese patients. Physicians with fewer than 20 years since completion of medical school were more likely to identify lack of information about good eating habits and lack of access to healthy food as important causes of obesity. They also reported feeling relatively more successful helping obese patients lose weight. The response rate for the survey was 25.6%.
Our results indicate a perceived need for improved medical education related to obesity care.
Medical Education & Training; Internal Medicine
Rock and pop fame is associated with risk taking, substance use and premature mortality. We examine relationships between fame and premature mortality and test how such relationships vary with type of performer (eg, solo or band member) and nationality and whether cause of death is linked with prefame (adverse childhood) experiences.
A retrospective cohort analysis based on biographical data. An actuarial methodology compares postfame mortality to matched general populations. Cox survival and logistic regression techniques examine risk and protective factors for survival and links between adverse childhood experiences and cause of death, respectively.
North America and Europe.
1489 rock and pop stars reaching fame between 1956 and 2006.
Stars’ postfame mortality relative to age-, sex- and ethnicity-matched populations (USA and UK); variations in survival with performer type, and in cause of mortality with exposure to adverse childhood experiences.
Rock/pop star mortality increases relative to the general population with time since fame. Increases are greater in North American stars and those with solo careers. Relative mortality begins to recover 25 years after fame in European but not North American stars. Those reaching fame from 1980 onwards have better survival rates. For deceased stars, cause of death was more likely to be substance use or risk-related in those with more adverse childhood experiences.
Relationships between fame and mortality vary with performers’ characteristics. Adverse experiences in early life may leave some predisposed to health-damaging behaviours, with fame and extreme wealth providing greater opportunities to engage in risk-taking. Millions of youths wish to emulate their icons. It is important they recognise that substance use and risk-taking may be rooted in childhood adversity rather than seeing them as symbols of success.
Public Health; Epidemiology; Occupational & Industrial Medicine
Cognitive behaviour therapy delivered in the format of guided self-help via the internet has been found to be effective for a range of conditions, including depression and anxiety disorders. Recent results indicate that guided self-help via the internet is a promising treatment format also for psychodynamic therapy. However, to date and to our knowledge, no study has evaluated internet-delivered psychodynamic therapy as a transdiagnostic treatment. The affect-phobia model of psychopathology by McCullough et al provides a psychodynamic conceptualisation of a range of psychiatric disorders. The aim of this study will be to test the effects of a transdiagnostic guided self-help treatment based on the affect-phobia model in a sample of clients with depression and anxiety.
Methods and analysis
This study will be a randomised controlled trial with a total sample size of 100 participants. The treatment group receives a 10-week, psychodynamic, guided self-help treatment based on the transdiagnostic affect-phobia model of psychopathology. The treatment consists of eight text-based treatment modules and includes therapist contact in a secure online environment. Participants in the control group receive similar online therapist support without any treatment modules. Outcome measures are the 9-item Patient Health Questionnaire Depression Scale and the 7-item Generalised Anxiety Disorder Scale (GAD-7). Process measures that concerns emotional processing and mindfulness are included. All outcome and process measures will be administered weekly via the internet and at 6-month follow-up.
This trial will add to the body of knowledge on internet-delivered psychological treatments in general and to psychodynamic treatments in particular. We also hope to provide new insights in the effectiveness and working mechanisms of psychodynamic therapy based on the affect-phobia model.
Accurate and full reporting of evaluation of interventions in health research is needed for evidence synthesis and informed decision-making. Evidence suggests that biases and incomplete reporting affect the assessment of study validity and the ability to include this data in secondary research. The Transparent Reporting of Evaluations with Non-randomised Designs (TREND) reporting guideline was developed to improve the transparency and accuracy of the reporting of behavioural and public health evaluations with non-randomised designs. Evaluations of reporting guidelines have shown that they can be effective in improving reporting completeness. Although TREND occupies a niche within reporting guidelines, and despite it being 8 years since publication, no study yet has assessed its impact on reporting completeness or investigated what factors affect its use by authors and journal editors. This protocol describes two studies that aim to redress this.
Methods and analysis
Study 1 will use an observational design to examine the uptake and use of TREND by authors, and by journals in their instructions to authors. A comparison of reporting completeness and study quality of papers that do and do not use TREND to inform reporting will be made. Study 2 will use a cross-sectional survey to investigate what factors inhibit or facilitate authors’ and journal editors’ use of TREND. Semistructured interviews will also be conducted with a subset of authors and editors to explore findings from study 1 and the surveys in greater depth.
Ethics and dissemination
These studies will generate evidence of how implementation and dissemination of the TREND guideline has affected reporting completeness in studies with experimental, non-randomised designs within behavioural and public health research. The project has received ethics approval from the Research Ethics Committee of the Peninsula College of Medicine and Dentistry, Universities of Exeter and Plymouth.
Public Health; Qualitative Research
Background and objective
The increasing prevalence of childhood obesity has led to interest in its prevention, particularly through school-based and family-based interventions in the early years. Most evidence reviews, to date, have focused on individual behaviour change rather than the ‘obesogenic environment’.
This paper reviews the evidence on the influence of the food environment on overweight and obesity in children up to 8 years.
Electronic databases (including MEDLINE, EMBASE, Cochrane Controlled Trials Register (CCTR), DARE, CINAHL and Psycho-Info) and reference lists of original studies and reviews were searched for all papers published up to 31 August 2011.
Study designs included were either population-based intervention studies or longitudinal studies. Studies were included if the majority of the children studied were under 9 years of age, if they related to diet and if they focused on prevention rather than treatment in clinical settings.
Data included in the tables were characteristics of participants, aim, and key outcome results. Quality assessment of the selected studies was carried out to identify potential bias and an evidence ranking exercise carried out to prioritise areas for future public health interventions.
Thirty-five studies (twenty-five intervention studies and ten longitudinal studies) were selected for the review. There was moderately strong evidence to support interventions on food promotion, large portion sizes and sugar-sweetened soft drinks.
Reducing food promotion to young children, increasing the availability of smaller portions and providing alternatives to sugar-sweetened soft drinks should be considered in obesity prevention programmes aimed at younger children. These environment-level interventions would support individual and family-level behaviour change.
Preventive Medicine; Public Health
In Bangladesh, private healthcare is common and popular, regardless of income or area of residence, making the private sector an important player in health service provision. Although the private sector offers a good range of health services, tuberculosis (TB) care in the private sector is poor. We conducted research in Dhaka, between 2004 and 2008, to develop and evaluate a public–private partnership (PPP) model to involve private medical practitioners (PMPs) within the National TB Control Programme (NTP)'s activities. Since 2008, this PPP model has been scaled up in two other big cities, Chittagong and Sylhet. This paper reports the results of this development, evaluation and scale-up.
Mixed method, observational study design. We used NTP service statistics to compare the TB control outcomes between intervention and control areas. To capture detailed insights of PMPs and TB managers about the process and outcomes of the study, we conducted in-depth interviews, focus group discussions and workshops.
Urban setting, piloted in four areas in Dhaka city; later scaled up in other areas of Dhaka and in two major cities.
The partnership with PMPs yielded significantly increased case finding of sputum smear-positive TB cases. Between 2004 and 2010, 703 participating PMPs referred 3959 sputum smear-positive TB cases to the designated Directly Observed Treatment, Short-course (DOTS) centres, contributing about 36% of all TB cases in the project areas. There was a steady increase in case notification rates in the project areas following implementation of the partnership.
The PPP model was highly effective in improving access and quality of TB care in urban settings.
We examined the potential association between prior chronic obstructive pulmonary disease (COPD) and edentulism, and whether the association varied by COPD severity using data from the Dental Atherosclerosis Risk in Communities Study.
Community dwelling subjects from four US communities.
Participants and measurements
Cases were identified as edentulous (without teeth) and subjects with one or more natural teeth were identified as dentate. COPD cases were defined by spirometry measurements that showed the ratio of forced expiratory volume (1 s) to vital capacity to be less than 0.7. The severity of COPD cases was also determined using a modified Global Initiative for Chronic Obstructive Lung Disease classification criteria (GOLD stage I–IV). Multiple logistic regression was used to examine the association between COPD and edentulism, while adjusting for age, gender, centre/race, ethnicity, education level, income, diabetes, hypertension, coronary heart disease and congestive heart failure, body mass index, smoking, smokeless tobacco use and alcohol consumption.
13 465 participants were included in this analysis (2087 edentulous; 11 378 dentate). Approximately 28.3% of edentulous participants had prior COPD compared with 19.6% of dentate participants (p<0.0001). After adjustment for potential confounders, we observed a 1.3-fold (95% CI 1.08 to 1.62) and a 2.5-fold (95% CI 1.68 to 3.63) increased risk of edentulism among GOLD II and GOLD III/IV COPD cases, respectively, compared with the non-COPD/dentate referent. Given the short period of time between the measurements of COPD (visit 2) and dentate status (visit 4) relative to the natural history of both diseases, neither temporality nor the directionality of the association can be ascertained.
We found a statistically significant association between prior COPD and edentulism, with evidence of a positive incremental effect seen with increasing GOLD classification.
Pulmonary disease; Edentulism; Chronic Obstructive disease; bronchitis
Traditional microbiology identification takes 48–72 h to complete. This lag forces clinicians to rely on broad-spectrum empiric coverage. To address this gap, manufacturers are developing rapid molecular diagnostics (RMDs). We hypothesised that the accuracy of RMDs depends more on the population's risk of harbouring the culprit pathogen than on the tests' sensitivity and specificity.
A mathematical model.
Setting and participants
We used the range of risks (5–50%) for methicillin-resistant Staphylococcus aureus (MRSA) among patients hospitalised with complicated skin and skin structure infections (cSSSI), pneumonia or sepsis.
Main outcome measures
We modelled the impact of changing a test's characteristics on its positive (PPV) and negative (NPV) predictive values, and hence the risk of overtreatment or undertreatment, within strata of an organism's population prevalence. MRSA diagnostics provided assumptions for the test sensitivity and specificity (95–99%). Scenarios with low sensitivity and specificity (90%), and best-case and worst-case scenarios normalised to the annual universe of populations of interest, were examined.
With a low prevalence (5%) and high test specificity, the PPV was 84%. Conversely, with 50% prevalence and 95% test specificity the PPV rose to ≥95%. Even when the test's specificity and sensitivity were both 90%, in a high-risk population both PPV and NPV were ∼90%. In the worst-case scenario, 150 000 patients with cSSSI, pneumonia and sepsis annually were at risk for inappropriate treatment, 91% of these at risk for over-treatment. In the best-case scenario, 81% of 18 000 patients at risk for inappropriate coverage were subject to overtreatment.
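The prevalence dependence described above follows from the standard Bayes relationships between prevalence, sensitivity, specificity and predictive values. A minimal sketch (not the authors' actual model, only the textbook formulas it rests on) reproduces the quoted figures:

```python
# Sketch: positive and negative predictive values as a function of
# prevalence, sensitivity and specificity (Bayes' theorem). This is a
# generic illustration, not the study's full simulation model.

def ppv(prevalence, sensitivity, specificity):
    """P(disease | positive test)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

def npv(prevalence, sensitivity, specificity):
    """P(no disease | negative test)."""
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tn / (tn + fn)

# Low prevalence (5%) with a 99%-sensitive, 99%-specific test: PPV ~84%.
print(f"{ppv(0.05, 0.99, 0.99):.0%}")   # -> 84%
# High prevalence (50%) with 95% sensitivity/specificity: PPV reaches 95%.
print(f"{ppv(0.50, 0.95, 0.95):.0%}")   # -> 95%
# Even at 90%/90%, a high-risk population keeps both PPV and NPV near 90%.
print(f"{ppv(0.50, 0.90, 0.90):.0%} {npv(0.50, 0.90, 0.90):.0%}")  # -> 90% 90%
```

The same arithmetic is what makes raising pretest probability (testing higher-risk populations) more effective than marginal gains in test characteristics.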
Although promising for limiting exposure to excessive antimicrobial coverage, RMDs alone will not solve the issue of inappropriate treatment, and particularly overtreatment. Increasing the pretest probability as a strategy to minimise antibiotic abuse results in more accurate patient classification than does developing a test with near-perfect characteristics. The healthcare community must build robust evidence and information technology infrastructure to guide the appropriate use of such testing.
To compare home-based cardiac rehabilitation (CR) with usual care (control group with no rehabilitation) in elderly patients who declined participation in centre-based CR.
Randomised clinical trial with 12 months follow-up and mortality data after 5.5 years (mean follow-up 4½ years).
Rehabilitation unit, Department of Cardiology, Copenhagen, Denmark.
Elderly patients ≥65 years with coronary heart disease.
A physiotherapist made home visits in order to develop an individualised exercise programme that could be performed at home and in the surrounding outdoor area. Risk factor intervention, medical adjustment, and physical and psychological assessments were offered at baseline and after 3, 6 and 12 months.
Main outcome measurements
The primary outcome was 6 min walk test (6MWT). Secondary outcomes were blood pressure, body composition, cholesterol profile, cessation of smoking, health-related quality of life (HRQoL), anxiety and depression.
40 patients participated. The study population was characterised by high age (median 77 years, range 65–92 years) and a high level of comorbidity. Patients receiving home-based CR had a significant increase in the primary outcome, 6MWT, of 33.5 m (95% CI 6.2 to 60.8, p=0.02) at 3 months, whereas the usual care group did not significantly improve; there was, however, no significant difference between the groups. At 12 months of follow-up, there was a decline in 6MWT in both groups: 55.2 m (95% CI 18.7 to 91.7, p<0.01) in the home group and 52.1 m (95% CI −3.0 to 107.1, p=0.06) in the usual care group. There were no significant differences in blood pressure, body composition, cholesterol profile, cessation of smoking or HRQoL after 3, 6 and 12 months of follow-up.
Participation in home-based CR improved exercise capacity among elderly patients with coronary heart disease, but there was no significant difference between the home intervention and the control group. In addition, no significant difference was found in the secondary outcomes. When intervention ceased, the initial increase in exercise capacity was rapidly lost.
Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis.
Design and setting
Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and plan changes in practices.
To identify the higher-priority potential failure modes, as defined by their RPNs, and to plan changes in clinical practice that reduce the risk of patient harm and improve the safety of medication use in children.
In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting common targets for improvement, particularly in making the prescription and preparation of intravenous drugs safer. The introduction of new activities into the revised drug-administration process reduced the high-risk failure modes by 60%.
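In FMEA, an RPN is conventionally the product of severity, occurrence and detectability scores, each rated on a 1 to 10 scale. The sketch below assumes those conventional scales and a hypothetical failure mode; the actual rating scales and scores used by the Padua teams are not given in the abstract.

```python
# Conventional FMEA risk priority number: RPN = severity x occurrence x detectability,
# each scored 1-10. Scales and example scores here are illustrative assumptions.

def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Return the risk priority number for one failure mode."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("each score must be between 1 and 10")
    return severity * occurrence * detectability

# Hypothetical failure mode: a dose-calculation error that is fairly severe,
# occasional, and hard to detect before administration.
print(rpn(severity=6, occurrence=4, detectability=4))  # 96
```

With the study's reported threshold, a failure mode scoring above 48 (as here) would be flagged as higher priority for remedial action.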
FMEA is an effective proactive risk-assessment tool that helps multidisciplinary groups understand a care process, identify errors that may occur, prioritise remedial interventions and, potentially, enhance the safety of drug delivery in children.
Qualitative Research; Pediatrics; Drugs Administration
To develop an internationally validated measure of cancer awareness and beliefs; the awareness and beliefs about cancer (ABC) measure.
Design and setting
Items modified from existing measures were assessed by a working group in six countries (Australia, Canada, Denmark, Norway, Sweden and the UK). Validation studies were completed in the UK, and cross-sectional surveys of the general population were carried out in the six participating countries.
Testing in UK English included cognitive interviewing for face validity (N=10), calculation of content validity indexes (six assessors), and assessment of test–retest reliability (N=97). Conceptual and cultural equivalence of modified (Canadian and Australian) and translated (Danish, Norwegian, Swedish and Canadian French) ABC versions were tested quantitatively for equivalence of meaning (≥4 assessors per country) and in bilingual cognitive interviews (three interviews per translation). Response patterns were assessed in surveys of adults aged 50+ years (N≥2000) in each country.
Psychometric properties were evaluated through tests of validity and reliability, conceptual and cultural equivalence, and systematic item analysis. Test–retest reliability was assessed with weighted κ and intraclass correlations. Aggregate scores were constructed and validated by factor analysis for (1) beliefs about cancer outcomes and (2) beliefs about barriers to symptomatic presentation, and by item summation for (3) awareness of cancer symptoms and (4) awareness of cancer risk factors.
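Weighted κ for ordinal test–retest data can be computed directly from its textbook definition. The stdlib-only sketch below assumes quadratic weights; the specific weighting scheme used in the ABC validation is not stated in the abstract.

```python
# Quadratic-weighted Cohen's kappa for two ratings of the same respondents,
# implemented from the textbook definition: kappa = 1 - observed/expected
# disagreement, with disagreement weight d(i, j) = ((i - j) / (k - 1))**2.
from collections import Counter

def weighted_kappa(x, y, categories):
    """x, y: paired ordinal ratings; categories: ordered list of levels."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(x)

    def d(i, j):
        return ((i - j) / (k - 1)) ** 2

    # Mean observed disagreement across paired ratings.
    observed = sum(d(idx[a], idx[b]) for a, b in zip(x, y)) / n
    # Expected disagreement under independence, from the marginal counts.
    px, py = Counter(idx[a] for a in x), Counter(idx[b] for b in y)
    expected = sum(px[i] * py[j] * d(i, j)
                   for i in range(k) for j in range(k)) / n ** 2
    return 1 - observed / expected
```

Perfect agreement yields κ=1; near-misses on an ordinal scale are penalised less than distant disagreements, which is why weighted κ suits test–retest data on Likert-type items.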
The English ABC had acceptable test–retest reliability and content validity. International assessments of equivalence identified a small number of items where wording needed adjustment. Survey response patterns showed that items performed well in terms of difficulty and discrimination across countries except for awareness of cancer outcomes in Australia. Aggregate scores had consistent factor structures across countries.
The ABC is a reliable and valid international measure of cancer awareness and beliefs. The methods used to validate and harmonise the ABC may serve as a methodological guide in international survey research.
early detection of cancer; cancer early diagnosis; validation studies; cross-cultural comparison; reliability and validity
To determine the level of knowledge concerning Sudden Unexpected Death in the Young (SUDY) among Canadian medical students and recent graduates (≤5 years after graduating).
A cross-sectional study was conducted by distributing a standardised, multiple choice, online questionnaire which assessed basic knowledge of SUDY.
Canadian medical schools and residency training programmes.
614 Canadian medical students (in either their penultimate or final year) and recent graduates (≤5 years after graduating) completed an anonymous online questionnaire.
Primary and secondary outcome measures
The level of knowledge regarding the molecular aetiology, clinical presentation, pharmacological management and modes of inheritance of six of the commonest conditions causing SUDY (hypertrophic cardiomyopathy (HCM), arrhythmogenic right ventricular cardiomyopathy (ARVC), Brugada syndrome, catecholaminergic polymorphic ventricular tachycardia (CPVT), long QT syndrome (LQT) and Wolff-Parkinson-White syndrome (WPW)) was compared between medical students and recent graduates. Questions were divided into basic-knowledge and advanced categories and analysed as a secondary outcome measure.
Of the 614 responses, approximately two-thirds came from recent graduates, who generally scored 10% higher than medical students across all subject categories. Overall, questions regarding HCM were answered best (40% correct), followed by WPW syndrome (32%), CPVT (30%), ARVC (23%), Brugada syndrome (21%) and LQT syndrome (17%). Questions categorised as basic knowledge were answered correctly 30% and 39% of the time by medical students and recent graduates, respectively, and those in the advanced category 20% and 25% of the time.
Survey respondents fared poorly on questions regarding SUDY, which may reflect inadequate medical education about these disorders. Standardised teaching on SUDY needs a stronger place in Canadian medical curricula in order to prevent unnecessary deaths from these syndromes in the future.
Three oral anticoagulants (dabigatran etexilate, rivaroxaban and apixaban) have reported study results for stroke prevention in patients with atrial fibrillation (AF); all demonstrated superiority or non-inferiority compared with warfarin in the RE-LY, ROCKET-AF and ARISTOTLE trials, respectively. This study aimed to assess how representative these trial populations are of the real-world AF population, particularly the population eligible for anticoagulants.
A cross-sectional database analysis.
Dataset derived from the General Practice Research Database (GPRD).
Primary and secondary outcome measures
The proportions of real-world patients with AF who met the inclusion/exclusion criteria for RE-LY, ARISTOTLE and ROCKET-AF were compared. The results were then stratified by risk of stroke using the CHADS2 and CHA2DS2-VASc scores.
83 898 patients with AF were identified in the GPRD. For the population at intermediate or high risk of stroke and eligible for anticoagulant treatment (CHA2DS2-VASc ≥1; n=78 783 (94%)), the proportion eligible for inclusion into RE-LY (dabigatran etexilate) was 68% (95% CI 67.7% to 68.3%; n=53 640), compared with 65% (95% CI 64.7% to 65.3%; n=51 163) eligible for ARISTOTLE (apixaban) and 51% (95% CI 50.7% to 51.4%; n=39 892) eligible for ROCKET-AF (rivaroxaban). Using the CHADS2 method of risk stratification, for the population at intermediate or high risk of stroke and eligible for anticoagulation treatment (CHADS2 ≥1; n=71 493 (85%)), the proportion eligible for inclusion into RE-LY was 74% (95% CI 73.7% to 74.3%; n=52 783), compared with 72% (95% CI 71.7% to 72.3%; n=51 415) for ARISTOTLE and 56% (95% CI 55.6% to 56.4%; n=39 892) for ROCKET-AF.
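The two risk scores used for stratification follow standard published definitions: CHADS2 awards one point each for congestive heart failure, hypertension, age ≥75 years and diabetes, and two for prior stroke or TIA; CHA2DS2-VASc adds vascular disease, an age band at 65–74 years, extra weight for age ≥75, and female sex. A minimal sketch under those standard definitions (the function names and boolean-flag interface are illustrative, not taken from the GPRD study):

```python
# Standard CHADS2 and CHA2DS2-VASc stroke-risk scores for atrial fibrillation.
# Flags are booleans for the presence of each risk factor; age is in years.

def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, stroke_tia: bool) -> int:
    return (int(chf) + int(hypertension) + int(age >= 75)
            + int(diabetes) + 2 * int(stroke_tia))

def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 stroke_tia: bool, vascular_disease: bool, female: bool) -> int:
    # Age scores 2 points at >=75, 1 point at 65-74, otherwise 0.
    age_points = 2 if age >= 75 else (1 if age >= 65 else 0)
    return (int(chf) + int(hypertension) + age_points + int(diabetes)
            + 2 * int(stroke_tia) + int(vascular_disease) + int(female))

# A 77-year-old woman with hypertension and no other risk factors:
print(chads2(False, True, 77, False, False))                      # 2
print(cha2ds2_vasc(False, True, 77, False, False, False, True))   # 4
```

The example shows why CHA2DS2-VASc classifies more patients as anticoagulant-eligible (≥1) than CHADS2, consistent with the 94% versus 85% eligible populations reported above.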
Patients enrolled within RE-LY and ARISTOTLE were more reflective of the ‘real-world’ AF population in the UK, in contrast with patients enrolled within ROCKET-AF, who were a more narrowly defined group at higher risk of stroke. Differences between trials should be taken into account when considering the applicability of findings from randomised clinical trials. However, assessing representativeness is not a substitute for assessing generalisability, that is, how well clinical trial results would translate into effectiveness and safety in everyday routine care.