In the UK, a man’s lifetime risk of being diagnosed with prostate cancer is 1 in 8. We calculated the lifetime risks of both being diagnosed with and dying from prostate cancer by major ethnic group.
Public Health England provided prostate cancer incidence and mortality data for England (2008–2010) by major ethnic group. Ethnicity and mortality data were incomplete, requiring various assumptions and adjustments before lifetime risk (expressed as a percentage, with a range) was calculated using DevCan.
The lifetime risk of being diagnosed with prostate cancer is approximately 1 in 8 (13.3 %, 13.2–15.0 %) for White men, 1 in 4 (29.3 %, 23.5–37.2 %) for Black men, and 1 in 13 (7.9 %, 6.3–10.5 %) for Asian men, whereas that of dying from prostate cancer is approximately 1 in 24 (4.2 %, 4.2–4.7 %) for White men, 1 in 12 (8.7 %, 7.6–10.6 %) for Black men, and 1 in 44 (2.3 %, 1.9–3.0 %) for Asian men.
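The risk figures above are quoted both as percentages and as “1 in N” frequencies. As a rough sketch, the conversion is simply rounding 100 divided by the percentage; note that the paper’s own rounding convention may differ for some figures (for example, 29.3 % is reported as 1 in 4, not 1 in 3), so this helper is illustrative only:

```python
# Convert a lifetime risk percentage to the "1 in N" phrasing used
# above. Simple rounding; the paper's own convention may differ for
# some figures (e.g. 29.3 % is reported as 1 in 4, not 1 in 3).
def one_in_n(percent: float) -> int:
    return round(100.0 / percent)

print(one_in_n(13.3))  # diagnosis, White men -> 8, i.e. "1 in 8"
print(one_in_n(4.2))   # death, White men -> 24, i.e. "1 in 24"
```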
In England, Black men are at twice the risk of being diagnosed with, and dying from, prostate cancer compared to White men. This is an important message to communicate to Black men. White, Black, and Asian men with a prostate cancer diagnosis are all equally likely to die from the disease, independent of their ethnicity. Nonetheless, proportionally more Black men are dying from prostate cancer in England.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0405-5) contains supplementary material, which is available to authorized users.
Asian; Black; Epidemiology; Ethnicity; Lifetime risk; Prostate cancer; White
The RTS,S/AS01 malaria vaccine candidate recently completed Phase III trials in 11 African sites. Recommendations for its deployment will partly depend on predictions of public health impact in endemic countries. Previous predictions of these used only limited information on underlying vaccine properties and have not considered country-specific contextual data.
Each Phase III trial cohort was simulated explicitly using an ensemble of individual-based stochastic models, and many hypothetical vaccine profiles. The true profile was estimated by Bayesian fitting of these models to the site- and time-specific incidence of clinical malaria in both trial arms over 18 months of follow-up. Health impacts of implementation via two vaccine schedules in 43 endemic sub-Saharan African countries, using country-specific prevalence, access to care, immunisation coverage and demography data, were predicted via weighted averaging over many simulations.
The efficacy against infection of three doses of vaccine was initially approximately 65 % (when immunising 6–12-week-old infants) and 80 % (children 5–17 months old), with a 1-year half-life (exponential decay). Either schedule will avert substantial disease, but predicted impact depends strongly on the decay rate of vaccine effects and average transmission intensity.
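The decay model described, exponential decay of efficacy with a 1-year half-life, can be sketched as follows; `efficacy` is an illustrative helper, not code from the study:

```python
# Efficacy after t years, given initial efficacy e0 and exponential
# decay with the stated half-life (1 year). Illustrative sketch only.
def efficacy(e0: float, t_years: float, half_life_years: float = 1.0) -> float:
    return e0 * 0.5 ** (t_years / half_life_years)

print(efficacy(0.80, 1.0))  # 5-17 month schedule after 1 year: prints 0.4
```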
For the first time, Phase III site- and time-specific data were available to estimate both the underlying profile of RTS,S/AS01 and likely country-specific health impacts. Initial efficacy will probably be high, but will decay rapidly. Adding RTS,S to existing control programs, assuming continuation of current levels of malaria exposure and of health system performance, will potentially avert 100–580 malaria deaths and 45,000 to 80,000 clinical episodes per 100,000 fully vaccinated children over an initial 10-year phase.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0408-2) contains supplementary material, which is available to authorized users.
Malaria; Vaccine; Simulation; Public health impact
Considerable emphasis is presently being placed by governments on the use of generic medicines, driven by the potential economic benefits associated with their use. Concurrently, there is increasing discussion in the lay media of perceived doubts regarding the quality and equivalence of generic medicines. The objective of this paper is to report the outcomes of a systematic search for peer-reviewed, published studies that focus on physician, pharmacist and patient/consumer perspectives on generic medicines.
We searched PubMed and Scopus for literature published between January 2003 and November 2014 on the opinions of physicians, pharmacists and patients with respect to generic medicines, and appraised articles within the scope of this review. Search keywords included perception, opinion, attitude and view, along with keywords specific to each cohort.
Following review of titles and abstracts to identify publications relevant to the scope, 16 papers on physician opinions, 11 papers on pharmacist opinions and 31 papers on patient/consumer opinions were included in this review. Quantitative studies (n = 37) were the most common approach adopted by researchers, generally in the form of self-administered questionnaires/surveys. Qualitative methodologies (n = 15) were also reported, albeit in fewer cases. In all three cohorts, opinions of generic medicines have improved, but some mistrust remains, most particularly in the patient group, where there appears to be a strongly held belief that less expensive equals lower quality. Acceptance of generics appears to be higher among consumers with higher levels of education, while patients from lower socioeconomic groups, who generally have lower levels of education, tend to have greater mistrust of generics.
A key factor in improving confidence in generic products is the provision of information and education, particularly in the areas of equivalency, regulation and dispelling myths about generic medicines (such as the belief that they are counterfeits). Further, as patient trust in their physician often overrules their personal mistrust of generic medicines, enhancing the opinions of physicians regarding generics may have particular importance in strategies to promote usage and acceptance of generic medicines in the future.
Generic medicine; Generic drug; Systematic review; Perceptions; Opinions; Stakeholders; Patient; Physician; Pharmacist
The introduction of modern troponin assays has facilitated the diagnosis of acute myocardial infarction through improved sensitivity, albeit with a corresponding loss of specificity. Atrial fibrillation (AF) is associated with elevated levels of troponin. The aim of the present study was to evaluate the diagnostic performance of troponin I in patients with suspected acute coronary syndrome and chronic AF.
Contemporary sensitive troponin I was assayed in a derivation cohort of 90 patients with suspected acute coronary syndrome and chronic AF to establish diagnostic cut-offs. These thresholds were validated in an independent cohort of 314 patients with suspected myocardial infarction and AF upon presentation. Additionally, changes in troponin I concentration within 3 hours were used.
In the derivation cohort, optimized thresholds with respect to a rule-out strategy with high sensitivity and a rule-in strategy with high specificity were established. In the validation cohort, application of the rule-out cut-off led to a negative predictive value of 97 %. The rule-in cut-off was associated with a positive predictive value of 88 % compared with 71 % if using the 99th percentile cut-off. In patients with troponin I levels above the specificity-optimized threshold, additional use of the 3-hour change in absolute/relative concentration resulted in a further improved positive predictive value of 96 %/100 %.
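The predictive values quoted above follow directly from a 2×2 table of test results against adjudicated diagnoses. A sketch with invented counts, chosen only to mirror the reported 88 % and 97 % figures, not taken from the study:

```python
def predictive_values(tp: int, fp: int, tn: int, fn: int):
    """PPV and NPV from a 2x2 table of test result vs. adjudicated MI.
    The counts passed below are hypothetical, for illustration only."""
    ppv = tp / (tp + fp)  # P(MI | above rule-in cut-off)
    npv = tn / (tn + fn)  # P(no MI | below rule-out cut-off)
    return ppv, npv

print(predictive_values(tp=88, fp=12, tn=97, fn=3))  # (0.88, 0.97)
```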
Troponin I concentration and the 3-hour change in its concentration provide valid diagnostic information in patients with suspected myocardial infarction and chronic AF. With regard to AF-associated elevation of troponin levels, application of diagnostic cut-offs other than the 99th percentile might be beneficial.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0410-8) contains supplementary material, which is available to authorized users.
Acute coronary syndrome; Myocardial infarction; Atrial fibrillation; Cardiac troponin
Although global efforts in the past decade have halved the number of deaths due to malaria, there are still an estimated 219 million cases of malaria a year, causing more than half a million deaths. In this forum article, we asked experts working in malaria research and control to discuss the ways in which malaria might eventually be eradicated. Their collective views highlight the challenges and opportunities, and explain how multi-factorial and integrated processes could eventually make malaria eradication a reality.
Malaria; Plasmodium falciparum; Plasmodium vivax; Eradication; Epidemiology; Rapid diagnostics; Drug resistance; Mass drug administration; Vaccines; Vector control; Capacity building
The Alere point-of-care (POC) Pima™ CD4 analyzer allows for decentralized testing and expansion to testing antiretroviral therapy (ART) eligibility. A consortium conducted a pooled multi-data technical performance analysis of the Pima CD4.
Primary data (11,803 paired observations) comprised 22 independent studies conducted between 2009 and 2012 in the Caribbean, Asia, Sub-Saharan Africa, the USA and Europe, using 6 laboratory-based reference technologies. Data were analyzed as categorical (including binary) and numerical (absolute) observations using a bivariate and/or univariate random effects model when appropriate.
At a median reference CD4 of 383 cells/μl, the mean Pima CD4 bias is -23 cells/μl (average bias across all CD4 ranges is 10 % for venous and 15 % for capillary testing). Sensitivity of the Pima CD4 is 93 % (95 % confidence interval [CI] 91.4 % - 94.9 %) at 350 cells/μl and 96 % (CI 95.2 % - 96.9 %) at 500 cells/μl, with no significant difference between venous and capillary testing. Sensitivity fell to 86 % (CI 82 % - 89 %) at 100 cells/μl (for Cryptococcal antigen (CrAg) screening), with a significant difference between venous (88 %, CI: 85 % - 91 %) and capillary (79 %, CI: 73 % - 84 %) testing. Total CD4 misclassification is 2.3 % of cases at 100 cells/μl, 11.0 % at 350 cells/μl and 9.5 % at 500 cells/μl, due to higher false positive rates, which resulted in more patients identified for treatment. This increased by 1.2 %, 2.8 % and 1.8 %, respectively, for capillary testing. There was no difference in Pima CD4 misclassification between the meta-analysis data and a population subset of HIV+ ART-naïve individuals, nor in misclassification among operator cadres. The Pima CD4 was most similar to the Beckman Coulter PanLeucogated CD4, Becton Dickinson FACSCalibur and FACSCount reference technologies, and less similar to the Partec CyFlow.
The Pima CD4 may be recommended using venous-derived specimens at the 100 cells/μl threshold for reflex CrAg screening, and for HIV ART eligibility at the 350 cells/μl and 500 cells/μl thresholds using both capillary- and venous-derived specimens. These meta-analysis findings add to the knowledge of acceptance criteria for the Pima CD4 and future POC tests, but implementation and impact will require full costing analysis.
Pima CD4; Point of care testing; Meta-analysis; CD4 misclassification
In this video Q&A, we talk to Iain Frame and Sarah Cant from Prostate Cancer UK about the current challenges in prostate cancer research and policy and how these are being addressed.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0411-7) contains supplementary material, which is available to authorized users.
Diagnosis; Policy; Prostate cancer; Risk; Treatment
Cardiovascular factors and low education are important risk factors for dementia. We provide contemporary estimates of the proportion of dementia cases that could be prevented if modifiable risk factors were eliminated, i.e., the population attributable risk (PAR). Furthermore, we studied whether the PAR has changed across the last two decades.
We included 7,003 participants of the original cohort (starting in 1990) and 2,953 participants of the extended cohort (starting in 2000) of the Rotterdam Study. Both cohorts were followed for dementia until ten years after baseline. We calculated the PAR of overweight, hypertension, diabetes mellitus, cholesterol, smoking, and education. Additionally, we assessed the PAR of stroke, coronary heart disease, heart failure, and atrial fibrillation. We calculated the PAR for each risk factor separately and the combined PAR taking into account the interaction of risk factors.
During 57,996 person-years, 624 participants of the original cohort developed dementia, and during 26,177 person-years, 145 participants of the extended cohort developed dementia. The combined PAR in the original cohort was 0.23 (95 % CI, 0.05–0.62). The PAR in the extended cohort was slightly higher at 0.30 (95 % CI, 0.06–0.76). The combined PAR including cardiovascular diseases was 0.25 (95 % CI, 0.07–0.62) in the original cohort and 0.33 (95 % CI, 0.07–0.77) in the extended cohort.
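For readers unfamiliar with the PAR, Levin’s formula gives the single-factor version from exposure prevalence and relative risk. The multiplicative combination sketched below assumes independent risk factors, whereas the study’s combined PAR explicitly accounted for interactions; all numbers are invented for illustration:

```python
def levin_par(prevalence: float, rr: float) -> float:
    """Levin's formula: PAR = p(RR - 1) / (1 + p(RR - 1))."""
    x = prevalence * (rr - 1.0)
    return x / (1.0 + x)

def combined_par(pars) -> float:
    """Combine single-factor PARs assuming independence:
    1 - prod(1 - PAR_i). The study instead modelled interactions."""
    remaining = 1.0
    for p in pars:
        remaining *= 1.0 - p
    return 1.0 - remaining

# Invented example: a factor with 50 % prevalence that doubles risk
print(levin_par(0.5, 2.0))  # ~0.33
```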
A substantial proportion of dementia cases could be prevented if modifiable risk factors were eliminated. Although prevention and treatment options for cardiovascular risk factors and diseases have improved, the preventive potential for dementia has not declined over the last two decades.
Dementia; Epidemiology; Population attributable risk; Risk factors
Individual income and poverty are associated with poor health outcomes. The poor face unique challenges related to access, education, financial capacity, environmental effects, and other factors that threaten their health outcomes.
We examined the variation in health outcomes and health behaviors among the poorest quintile in eight countries of Mesoamerica using data from the Salud Mesoamérica 2015 baseline household surveys. We used multivariable logistic regression to measure the association between delivering a child in a health facility and select household and maternal characteristics, including education and measures of wealth.
Health indicators varied greatly between geographic segments. Controlling for other demographic characteristics, women with at least secondary education were more likely to have an in-facility delivery than women who had not attended school (OR: 3.20, 95 % confidence interval [CI]: 2.56-3.99). Similarly, women from households with the highest expenditure were more likely to deliver in a health facility than those from the lowest expenditure households (OR 3.06, 95 % CI: 2.43-3.85). Household assets did not affect these associations. Moreover, we found that commonly-used definitions of poverty do not align with the disparities in health outcomes observed in these communities.
Although poverty measured by expenditure or wealth is associated with poor health outcomes, a composite indicator of health poverty based on coverage is more likely to focus attention on health problems and solutions. Our findings call for the public health community to define poverty by health coverage measures rather than income or wealth. Such a health-poverty metric is more likely to generate attention and mobilize targeted action by the health community than our current definition of poverty.
Maternal and child health; Poverty and health; Health disparities; Central America; Salud Mesoamérica 2015
In vivo imaging of brain amyloid using positron emission tomography (PET) scanning is widely used in research studies of dementia, with three amyloid PET ligands licenced for clinical use. The main clinical use of PET is to help confirm or exclude the likely diagnosis of Alzheimer’s disease in challenging cases, where diagnostic uncertainty remains after current clinical and investigative work-up. Whilst diagnostically valuable in such select cases, much wider clinical adoption, especially for very early disease, will be limited by both cost and the lack of a currently effective disease-modifying treatment that requires such early case identification. The use of amyloid imaging to appropriately stratify subjects for prognostic studies and therapeutic trials should increase the efficiency, and potentially shorten the duration, of such studies, and its use combined with other biomarkers and genetics will likely lead to new ways of defining and classifying the dementias.
Amyloid; Dementia; Imaging; Positron emission tomography
The relationship between age-related frailty and the underlying processes that drive changes in health is currently unclear. Considered individually, most blood biomarkers show only weak relationships with frailty and ageing. Here, we examined whether a biomarker-based frailty index (FI-B) allowed examination of their collective effect in predicting mortality compared with individual biomarkers, a clinical deficits frailty index (FI-CD), and the Fried frailty phenotype.
We analyzed baseline data and up to 7-year mortality in the Newcastle 85+ Study (n = 845; mean age 85.5). The FI-B combined 40 biomarkers of cellular ageing, inflammation, haematology, and immunosenescence. The Kaplan-Meier estimator was used to stratify participants into FI-B risk strata. Stability of the risk estimates for the FI-B was assessed using iterative, random subsampling of the 40 FI-B items. Predictive validity was tested using Cox proportional hazards analysis and discriminative ability by the area under receiver operating characteristic (ROC) curves.
The mean FI-B was 0.35 (SD, 0.08), higher than the mean FI-CD (0.22; SD, 0.12); no participant had an FI-B score <0.12. Higher values of each FI were associated with higher mortality risk. In a sex-adjusted model, each 1 % (0.01) increase in the FI-B increased the hazard of death by 5.4 % (HR, 1.05; CI, 1.04–1.06). The FI-B was more powerful for mortality prediction than any individual biomarker and was robust to biomarker substitution. The ROC analysis showed moderate discriminative ability for 7-year mortality (AUC for FI-CD = 0.71 and AUC for FI-B = 0.66). No individual biomarker’s AUC exceeded 0.61. The AUC for the combined FI-CD/FI-B was 0.75.
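A frailty index is simply the proportion of measured deficits that are present; the sketch below also shows how a per-1 % hazard ratio compounds over a larger FI-B difference. All values are illustrative, not the study’s data:

```python
def frailty_index(deficits_present: int, deficits_measured: int) -> float:
    """Proportion of the measured deficits (here, biomarkers) present."""
    return deficits_present / deficits_measured

fi_b = frailty_index(14, 40)  # e.g. 14 of 40 abnormal biomarkers -> 0.35

# A hazard ratio of ~1.054 per 0.01-unit FI-B increase compounds
# multiplicatively, e.g. over a 0.10-unit difference between two people:
hr_10_percent = 1.054 ** 10  # roughly 1.7-fold higher hazard
```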
Many biological processes are implicated in ageing. The systemic effects of these processes can be elucidated using the frailty index approach, which showed here that subclinical deficits increased the risk of death. In the future, blood biomarkers may indicate the nature of the underlying causal deficits leading to age-related frailty, thereby helping to expose targets for early preventative interventions.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0400-x) contains supplementary material, which is available to authorized users.
Ageing; Biomarkers; Cellular ageing; Deficit accumulation; Frailty; Frailty index; Frailty phenotype; Immunosenescence; Inflammation; Newcastle 85+ study
The use of adult stem cells is limited by the quality and quantity of host stem cells. It has been demonstrated that Wharton’s jelly–derived mesenchymal stem cells (WJMSCs), a primitive stromal population, could integrate into ischemic cardiac tissues and significantly improve heart function. In this randomized, controlled trial, our aim was to assess the safety and efficacy of intracoronary WJMSCs in patients with ST-elevation acute myocardial infarction (AMI).
In a multicenter trial, 116 patients with acute ST-elevation MI were randomly assigned to receive an intracoronary infusion of WJMSCs or placebo into the infarct artery five to seven days after successful reperfusion therapy. The primary safety endpoint, the incidence of adverse events (AEs) within 18 months, was monitored and quantified. The efficacy endpoints, the absolute changes in myocardial viability and perfusion of the infarcted region from baseline to four months and in global left ventricular ejection fraction (LVEF) from baseline to 18 months, were measured using F-18-fluorodeoxyglucose positron emission computed tomography (F-18-FDG-PET), 99mTc-sestamibi single-photon emission computed tomography (99mTc-SPECT), and two-dimensional echocardiography, respectively.
During 18 months of follow-up, AE rates and laboratory tests, including tumor, immune, and hematologic indexes, did not differ between the two groups. The absolute increases in myocardial viability (PET) and perfusion within the infarcted territory (SPECT) at four months were significantly greater in the WJMSC group [6.9 ± 0.6 % (95 % CI, 5.7 to 8.2) and 7.1 ± 0.8 % (95 % CI, 5.4 to 8.8)] than in the placebo group [3.3 ± 0.7 % (95 % CI, 1.8 to 4.7), P < 0.0001, and 3.9 ± 0.6 % (95 % CI, 2.8 to 5.0), P = 0.002]. The absolute increase in LVEF at 18 months was significantly greater in the WJMSC group than in the placebo group [7.8 ± 0.9 (95 % CI, 6.0 to 9.7) vs. 2.8 ± 1.2 (95 % CI, 0.4 to 5.1), P = 0.001]. Concomitantly, the absolute decreases in LV end-systolic and end-diastolic volumes at 18 months were significantly greater in the WJMSC group than in the placebo group (P = 0.0004 and P = 0.004, respectively).
Intracoronary infusion of WJMSCs is safe and effective in patients with AMI, providing clinically relevant therapy within a favorable time window. This study encourages additional clinical trials to determine whether WJMSCs may serve as a novel alternative to bone marrow-derived mesenchymal stem cells (BMSCs) for cardiac stem cell-based therapy.
ClinicalTrials.gov NCT01291329 (02/05/2011).
Myocardial infarction; Mesenchymal stem cells; Wharton’s jelly of umbilical cord
Malaria causes more than 600,000 deaths each year, mainly of children under five years old infected with Plasmodium falciparum, creating an urgent need for an effective anti-malaria vaccine. Limited knowledge of the mechanisms of protective immunity is a barrier to vaccine development. Antibodies play an important role in immunity to malaria, and monocytes are key effectors in antibody-mediated protection, phagocytosing antibody-opsonised infected erythrocytes (IE). Eliciting antibodies that enhance phagocytosis of IE is therefore an important potential component of an effective vaccine, requiring robust assays to determine the ability of elicited antibodies to stimulate phagocytosis in vivo. The mechanisms by which monocytes ingest IE, and the nature of the monocytes which do so, are unknown.
Purified trophozoite-stage P. falciparum IE were stained with ethidium bromide, opsonised with anti-erythrocyte antibodies and incubated with fresh whole blood. Phagocytosis of IE and TNF production by individual monocyte subsets was measured by flow cytometry. Ingestion of IE was confirmed by imaging flow cytometry.
CD14hiCD16+ monocytes phagocytosed antibody-opsonised IE and produced TNF more efficiently than CD14hiCD16- and CD14loCD16+ monocytes. Blocking experiments showed that Fcγ receptor IIIa (CD16) but not Fcγ receptor IIa (CD32a) or Fcγ receptor I (CD64) was necessary for phagocytosis. CD14hiCD16+ monocytes ingested antibody-opsonised IE when peripheral blood mononuclear cells were reconstituted with autologous serum but not heat-inactivated autologous serum. Antibody-opsonised IE were rapidly opsonised with complement component C3 in serum (t1/2 = 2-3 minutes) and phagocytosis of antibody-opsonised IE was inhibited in a dose-dependent manner by an inhibitor of C3 activation, compstatin. Compared to other monocyte subsets, CD14hiCD16+ monocytes expressed the highest levels of complement receptor 4 (CD11c) and activated complement receptor 3 (CD11b) subunits.
We show a special role for CD14hiCD16+ monocytes in phagocytosing opsonised P. falciparum IE and production of TNF. While ingestion was mediated by Fcγ receptor IIIa, this receptor was not sufficient to allow phagocytosis; despite opsonisation with antibody, phagocytosis of IE also required complement opsonisation. Assays which measure the ability of vaccines to elicit a protective antibody response to P. falciparum should consider their ability to promote phagocytosis and fix complement.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0391-7) contains supplementary material, which is available to authorized users.
Malaria; Phagocytosis; Monocyte subsets; Antibodies; Complement; CD16
Up to 50 % of HIV-infected persons in sub-Saharan Africa are lost from care between HIV diagnosis and antiretroviral therapy (ART) initiation. Structural barriers, including cost of transportation to clinic and poor communication systems, are major contributors.
We conducted a prospective, pragmatic, before-and-after clinical trial to evaluate a combination mobile health and transportation reimbursement intervention to improve care at a publicly operated HIV clinic in Uganda. Patients undergoing CD4 count testing were enrolled, and clinicians selected a result threshold that would prompt early return for ART initiation or further care. Participants enrolled in the pre-intervention period (January – August 2012) served as a control group. Participants in the intervention period (September 2012 – November 2013) were randomized to receive daily short message service (SMS) messages for up to seven days in one of three formats: 1) messages reporting an abnormal result directly, 2) personal identification number-protected messages reporting an abnormal result, or 3) messages reading “ABCDEFG” to confidentially convey an abnormal result. Participants returning within seven days of their first message received transportation reimbursements (about US$6). Our primary outcomes of interest were time to return to clinic and time to ART initiation.
There were 45 participants in the pre-intervention period and 138 participants in the intervention period (46, 49, and 43 in the direct, PIN, and coded groups, respectively) with low CD4 count results. Median time to clinic return was 33 days (IQR 11–49) in the pre-intervention period and 6 days (IQR 3–16) in the intervention period (P < 0.001); and median time to ART initiation was 47 days (IQR 11–75) versus 12 days (IQR 5–19), (P < 0.001). In multivariable models, participants in the intervention period had earlier return to clinic (AHR 2.32, 95 %CI 1.53 to 3.51) and earlier time to ART initiation (AHR 2.27, 95 %CI 1.38 to 3.72). All three randomized message formats improved time to return to clinic and time to ART initiation (P < 0.01 for all comparisons versus the pre-intervention period).
A combination of an SMS laboratory result communication system and transportation reimbursements significantly decreased time to clinic return and time to ART initiation after abnormal CD4 test results.
Clinicaltrials.gov NCT01579214, approved 13 April 2012.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0397-1) contains supplementary material, which is available to authorized users.
HIV/AIDS; Sub-Saharan Africa; Clinical trial; Short message service; Financial incentive; Antiretroviral therapy
The peer review process is a cornerstone of biomedical research publication. However, it can fail to ensure the publication of high-quality articles. We aimed to identify and sort, according to their importance, all tasks that are expected of peer reviewers when evaluating a manuscript reporting the results of a randomized controlled trial (RCT), and to determine which of these tasks are clearly requested by editors in their recommendations to peer reviewers.
We identified the tasks expected of peer reviewers from 1) a systematic review of the published literature and 2) recommendations to peer reviewers for 171 journals (i.e., 10 journals with the highest impact factor for 14 different medical areas and all journals indexed in PubMed that published more than 15 RCTs over 3 months regardless of the medical area). Participants who had peer-reviewed at least one report of an RCT had to classify the importance of each task relative to other tasks using a Q-sort technique. Finally, we evaluated editors’ recommendations to authors to determine which tasks were clearly requested by editors in their recommendations to peer reviewers.
The Q-sort survey was completed by 203 participants, 93 (46 %) with clinical expertise, 72 (36 %) with methodological/statistical expertise, 17 (8 %) with expertise in both areas, and 21 (10 %) with other expertise. The task rated most important by participants (evaluating the risk of bias) was clearly requested by only 5 % of editors. In contrast, the task most frequently requested by editors (providing recommendations for publication) was rated in the first tertile of importance by only 21 % of participants.
The most important tasks for peer reviewers were not congruent with the tasks most often requested by journal editors in their guidelines to reviewers.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0395-3) contains supplementary material, which is available to authorized users.
Peer review; Q-sort; Randomized controlled trials; Recommendations to reviewers
It is common practice to use a singleton fetal growth standard to assess twin growth. We aimed to create a twin fetal weight standard that is adjustable for race/ethnicity and other factors.
Over half a million twin births from low-risk pregnancies in the US, from 1995 to 2004, were used to construct a fetal weight standard. We used Hadlock’s fetal growth standard and the proportionality principle to make the standard adjustable for other factors such as race/ethnicity. We validated the standard in different race/ethnicities in the US and against previously published curves from around the world.
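The proportionality principle can be sketched as rescaling a reference median curve by the ratio of a target population’s term weight to the reference term weight. All numbers below are invented for illustration; they are not the study’s curves or Hadlock’s published values:

```python
# Invented reference medians (grams) by gestational week - NOT real data.
reference_median = {28: 1100, 32: 1800, 36: 2600, 38: 3000}

def adjusted_median(week: int, population_term_weight: float,
                    reference_term_weight: float = 3000.0) -> float:
    """Scale the reference curve so its term value matches the target
    population's mean term weight (proportionality-principle sketch)."""
    scale = population_term_weight / reference_term_weight
    return reference_median[week] * scale

print(adjusted_median(32, 2850.0))  # prints 1710.0
```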
The adjustable fetal weight standard showed an excellent match with the observed birthweight data in non-Hispanic White, non-Hispanic Black, Hispanic, and Asian populations from 24 to 38 weeks of gestation. It also fit cross-sectional data from Australia and Norway, and a longitudinal standard from Brazil, very well. However, our model-based 10th and 90th percentiles differed substantially from studies in Japan and the US that used the last menstrual period to estimate gestational age.
The adjustable fetal weight standard for twins is a flexible tool and can be used in different populations.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0401-9) contains supplementary material, which is available to authorized users.
Adjustable; Fetal; Standard; Twins; Weight
Annexin A1 (ANXA1) is a protein implicated in carcinogenesis and metastasis in many tumors. However, little is known about the prognostic value of ANXA1 in breast cancer. The purpose of this study was to evaluate the association between ANXA1 expression, BRCA1/2 germline carriership, specific tumor subtypes, and survival in breast cancer patients.
Clinical-pathological information and follow-up data were collected from nine breast cancer studies from the Breast Cancer Association Consortium (BCAC) (n = 5,752) and from one study of familial breast cancer patients with BRCA1/2 mutations (n = 107). ANXA1 expression was scored based on the percentage of immunohistochemical staining in tumor cells. Survival analyses were performed using a multivariable Cox model.
The frequency of ANXA1 positive tumors was higher in familial breast cancer patients with BRCA1/2 mutations than in BCAC patients, with 48.6 % versus 12.4 %, respectively; P <0.0001. ANXA1 was also highly expressed in BCAC tumors that were poorly differentiated, triple negative, EGFR-CK5/6 positive or had developed in patients at a young age. In the first 5 years of follow-up, patients with ANXA1 positive tumors had worse breast cancer-specific survival (BCSS) than those with ANXA1 negative tumors (HRadj = 1.35; 95 % CI = 1.05–1.73), but the association weakened after 10 years (HRadj = 1.13; 95 % CI = 0.91–1.40). ANXA1 was a significant independent predictor of survival in HER2+ patients (10-year BCSS: HRadj = 1.70; 95 % CI = 1.17–2.45).
ANXA1 is overexpressed in familial breast cancer patients with BRCA1/2 mutations and correlates with poor-prognosis features, namely triple negative and poorly differentiated tumors. ANXA1 may be a candidate biomarker for breast cancer survival prediction in high-risk groups such as HER2+ cases.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0392-6) contains supplementary material, which is available to authorized users.
Breast cancer; Annexin A1; BRCA1 and BRCA2 mutations
When faced with uncertainties about the effects of medical interventions, regulatory agencies, guideline developers, clinicians, and researchers commonly ask for more research, and in particular for more randomized trials. The conduct of additional randomized trials is, however, sometimes not the most efficient way to reduce uncertainty. Instead, approaches such as value of information analysis should be used to prioritize research that will most likely reduce uncertainty and inform decisions.
In situations where additional research for specific interventions needs to be prioritized, we propose the use of quantitative benefit–harm assessments that illustrate how the benefit–harm balance may change as a consequence of additional research. The example of roflumilast for patients with chronic obstructive pulmonary disease shows that additional research on patient preferences (e.g., how important are exacerbations relative to psychiatric harms?) or outcome risks (e.g., what is the incidence of psychiatric outcomes in patients with chronic obstructive pulmonary disease without treatment?) is sometimes more valuable than additional randomized trials.
We propose that quantitative benefit–harm assessments have the potential to explore the impact of additional research and to identify research priorities. Our approach may be seen as another type of value of information analysis and as a useful way to stimulate specific new research that has the potential to change current estimates of the benefit–harm balance and to inform decision making.
Benefit–harm assessment; Chronic obstructive pulmonary disease; Randomized trials; Research priorities
Aspirin is widely used to lessen the risks of cardiovascular events. Some studies suggest that patients with multiple sclerosis have an increased risk for some cardiovascular events, for example, venous thromboembolism and perhaps ischemic strokes, raising the possibility that aspirin could lessen these increased risks in this population or subgroups (patients with limited mobility and/or antiphospholipid antibodies). However, aspirin causes a small increased risk of hemorrhagic stroke, which is a concern as it could potentially worsen a compromised blood-brain barrier. Aspirin has the potential to ameliorate the disease process in multiple sclerosis (for example, by limiting some components of inflammation), but aspirin also has the potential to inhibit mitochondrial complex I activity, which is already reduced in multiple sclerosis. In an experimental setting of a cerebral ischemic lesion, aspirin promoted the proliferation and/or differentiation of oligodendrocyte precursors, raising the possibility that aspirin could facilitate remyelination efforts in multiple sclerosis. Other actions by aspirin may lead to small improvements of some symptoms (for example, lessening fatigue). Here we consider potential benefits and risks of aspirin usage by patients with multiple sclerosis.
Antiphospholipid antibodies; Aspirin; Experimental autoimmune encephalomyelitis; Fatigue; Multiple sclerosis; Salicylate; Stroke; Thrombosis
Down syndrome is the most common chromosomal disorder in humans as well as the most common cause of inherited intellectual disability. A spectrum of physical and functional disability is associated with the syndrome, as well as a predisposition to developing particular malignancies, including testicular cancers. These tumours ordinarily have a high cure rate, even in widely disseminated disease. However, individuals with Down syndrome may have learning difficulties, behavioural problems, and multiple systemic complications that can make standard treatment more risky and necessitate an individualized approach in order to avoid unacceptable harm. There is also a suggestion that these tumours may have a different natural history. Furthermore, people with learning disabilities have often experienced poorer healthcare than the general population. In order to address these inequalities, legislation, professional bodies, and charities provide guidance; ultimately, however, consideration of the person in the context of their own psychosocial issues, comorbidities, and possible treatment strategies is vital in delivering optimal care. We aim to present a review of our own experience of delivering individualized care to this group of patients in order to close the existing health inequality gap.
Chemotherapy; Down syndrome; Radiotherapy; Testicular cancer; Trisomy 21
The Arthroplasty Pain Experience (APEX) studies are two randomised controlled trials in primary total hip (THR) and total knee replacement (TKR) at a large UK orthopaedics centre. APEX investigated the effect of local anaesthetic wound infiltration (LAI), administered before wound closure, in addition to standard analgesia, on pain severity at 12 months. This article reports results of the within-trial economic evaluations.
Cost-effectiveness was assessed from the health and social care payer perspective in relation to quality-adjusted life years (QALYs) and the primary clinical outcome, the WOMAC Pain score at 12-month follow-up. Resource use was collected from hospital records and patient-completed postal questionnaires, and valued using unit cost estimates from the local NHS Trust finance department and national tariffs. Missing data were addressed using multiple imputation by chained equations. Costs and outcomes were compared per trial arm and plotted in cost-effectiveness planes. If neither arm was dominant (i.e., more effective and less expensive than the other), incremental cost-effectiveness ratios were estimated. The economic results were presented as bootstrapped incremental net monetary benefit (INMB) statistics and cost-effectiveness acceptability curves. One-way deterministic sensitivity analyses explored methodological uncertainty.
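At a willingness-to-pay threshold λ, the INMB statistic is λ·ΔQALY − ΔCost, with uncertainty obtained by resampling patients with replacement. A minimal sketch of this calculation, using made-up per-patient costs and QALYs (illustrative values only, not the APEX trial data, which were also multiply imputed before bootstrapping):

```python
import random
import statistics

random.seed(0)

WTP = 20_000  # willingness-to-pay threshold (GBP per QALY)

# Hypothetical per-patient (cost, QALY) pairs for each trial arm.
lai_arm = [(random.gauss(6_500, 900), random.gauss(0.78, 0.08)) for _ in range(200)]
std_arm = [(random.gauss(6_900, 900), random.gauss(0.74, 0.08)) for _ in range(200)]

def inmb(arm_a, arm_b, wtp=WTP):
    """Incremental net monetary benefit of arm_a over arm_b: wtp * dQALY - dCost."""
    d_cost = statistics.mean(c for c, _ in arm_a) - statistics.mean(c for c, _ in arm_b)
    d_qaly = statistics.mean(q for _, q in arm_a) - statistics.mean(q for _, q in arm_b)
    return wtp * d_qaly - d_cost

def bootstrap_inmb(arm_a, arm_b, reps=2_000):
    """Percentile-bootstrap 95 % interval for INMB (resample patients with replacement)."""
    stats = []
    for _ in range(reps):
        res_a = random.choices(arm_a, k=len(arm_a))
        res_b = random.choices(arm_b, k=len(arm_b))
        stats.append(inmb(res_a, res_b))
    stats.sort()
    lo, hi = stats[int(0.025 * reps)], stats[int(0.975 * reps)]
    # Probability cost-effective at this threshold = share of bootstrap INMBs above zero.
    p_ce = sum(s > 0 for s in stats) / reps
    return inmb(arm_a, arm_b), (lo, hi), p_ce

point, ci, p_ce = bootstrap_inmb(lai_arm, std_arm)
print(f"INMB £{point:,.0f} (95 % BCI £{ci[0]:,.0f} to £{ci[1]:,.0f}); P(cost-effective) = {p_ce:.2f}")
```

The probability of being cost-effective at a given threshold is simply the share of bootstrap INMB replicates above zero; evaluating it across a range of thresholds traces out the cost-effectiveness acceptability curve.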
In both the THR and TKR trials, LAI was the dominant treatment: cost-saving and more effective than standard care in relation to both QALYs and WOMAC Pain. Using the £20,000 per QALY threshold, in THR the INMB was £1,125 (95 % BCI, £183 to £2,067) and the probability of being cost-effective was over 98 %. In TKR, the INMB was £264 (95 % BCI, −£710 to £1,238), but there was only a 62 % probability of being cost-effective. When considering an NHS perspective only, LAI was no longer dominant in THR, but remained highly cost-effective, with an INMB of £961 (95 % BCI, £50 to £1,873).
Administering LAI is a cost-effective treatment option in THR and TKR surgery. The evidence is stronger for THR because of the larger QALY gain. In TKR, the QALY gain is smaller and there is more uncertainty around the economic result. The results nonetheless point to LAI being cheaper than standard analgesia, which includes a femoral nerve block.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0389-1) contains supplementary material, which is available to authorized users.
Cost-effectiveness; Cost-utility; Local anaesthetic wound infiltration; Total hip replacement; Total knee replacement; Trial-based economic evaluation
Mothers are at risk of domestic violence (DV) and its harmful consequences postpartum. There is no evidence to date for sustainability of DV screening in primary care settings. We aimed to test whether a theory-informed, maternal and child health (MCH) nurse-designed model increased and sustained DV screening, disclosure, safety planning and referrals compared with usual care.
Cluster randomised controlled trial of a 12-month MCH DV screening and care intervention with 24-month follow-up.
The study was set in community-based MCH nurse teams (91 centres, 163 nurses) in north-west Melbourne, Australia.
Eight eligible teams were recruited. Team randomisation occurred at a public meeting using opaque envelopes. Teams could not be blinded.
The intervention was informed by Normalisation Process Theory; the nurse-designed good practice model incorporated nurse mentors, strengthened relationships with DV services, nurse safety measures, a self-completion maternal health screening checklist at the three- or four-month consultation, and DV clinical guidelines. Usual care involved government-mandated face-to-face DV screening at four weeks postpartum and follow-up as required.
Primary outcomes were MCH team screening, disclosure, safety planning and referral rates from routine government data and a postal survey sent to 10,472 women with babies ≤ 12 months in study areas. Secondary outcomes included DV prevalence (Composite Abuse Scale, CAS) and harm measures (postal survey).
No significant difference was found in routine screening at four months (IG 2,330/6,381 consultations (36.5 %) versus CG 1,792/7,638 consultations (23.5 %); RR = 1.56, CI 0.96–2.52), but data from maternal health checklists (n = 2,771) at three-month IG consultations showed an average screening rate of 63.1 %. Two years post-intervention, IG safety planning rates had increased from three times (RR 2.95, CI 1.11–7.82) to four times those of the CG (RR 4.22, CI 1.64–10.9). Referrals remained low in both intervention groups (IGs) and comparison groups (CGs) (<1 %).
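The unadjusted risk ratio for routine screening can be reproduced from the reported counts. A minimal sketch follows; note that the trial's published interval (CI 0.96–2.52) accounts for clustering by nurse team, so this naive Wald interval is narrower than the reported one:

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of arm A vs arm B with a naive 95 % Wald CI on the log scale.

    Ignores clustering, so the interval is narrower than a cluster-adjusted CI.
    """
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    # SE of log(RR): sqrt(1/a - 1/n_a + 1/b - 1/n_b)
    se_log = math.sqrt((1 - p_a) / events_a + (1 - p_b) / events_b)
    z = 1.96
    return rr, (rr * math.exp(-z * se_log), rr * math.exp(z * se_log))

# Routine four-month screening counts from the trial (consultations, not clusters).
rr, (lo, hi) = risk_ratio(2330, 6381, 1792, 7638)
print(f"RR = {rr:.2f}")  # ≈ 1.56, matching the reported point estimate
```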
2,621/10,472 mothers (25 %) returned surveys. No difference was found between arms in preference or comfort with being asked about DV or feelings about self.
A nurse-designed screening and care model did not increase routine screening or referrals, but achieved significantly increased safety planning over 36 months among postpartum women. Self-completion DV screening was welcomed by nurses and women and contributed to sustainability.
Australian New Zealand Clinical Trials Registry, ACTRN12609000424202, 10/03/2009
Domestic violence; Screening; Maternal and child health nursing; Cluster randomised controlled trial; Primary health care; Safety planning; Sustainability
The presence of axillary nodal metastases has a significant impact on locoregional and systemic treatment decisions. Historically, all node-positive patients underwent complete axillary lymph node dissection; however, this paradigm has changed over the last 10 years. The use of sentinel lymph node dissection has expanded from its initial role as a surgical staging procedure in clinically node-negative patients. Clinically node-negative patients with small volume disease found on sentinel lymph node dissection now commonly avoid more extensive axillary surgery. There is interest in expanding this role to node-positive patients who receive neoadjuvant chemotherapy as a way to restage the axilla in hopes of sparing women who convert to node-negative status from the morbidity of complete nodal clearance. While sentinel lymph node dissection alone may not accomplish this goal, there are novel techniques, such as targeted axillary dissection, that may now allow for reliable nodal staging after chemotherapy.
Axillary lymphadenectomy; Breast cancer; Neoadjuvant chemotherapy; Nodal metastasis; Sentinel lymph node; Targeted axillary dissection
The impact case studies submitted by UK Higher Education Institutions to the Research Excellence Framework (REF) in 2014 provide a rich resource of text describing impact beyond academia and across all disciplines. Using text mining techniques and qualitative assessment, the 6,679 non-redacted case studies submitted were analysed, and the impact described was found to be multidisciplinary, multi-impactful, and multinational. By digging deeper into the data, the health gains from health research, in terms of quality-adjusted life years, were also estimated. Similar analyses are possible using these case studies, but will require the data to be ‘re-purposed’ from their original (assessment) purpose for robust analysis.
Impact; Health gains; QALY; Research Excellence Framework