Following the introduction of work-hour restrictions, residents’ workload has become an important theme in postgraduate training. The efficacy of restrictions on workload, however, remains controversial, as most research has only examined objective workload. The purpose of this study was to explore the less clearly understood component of subjective workload and, in particular, the factors that influenced residents’ subjective workload.
This study was conducted at three community teaching hospitals in Japan. We recruited a convenience sample of 31 junior residents in seven focus groups across the three sites. Audio-recorded and transcribed data were read iteratively and analyzed thematically: themes within the data were identified, analyzed, and reported, and an interpretive synthesis of the topic was developed.
Seven factors influenced residents’ subjective workload: (1) interaction within the professional community, (2) feedback from patients, (3) being in control, (4) professional development, (5) private life, (6) interest and (7) protected free time.
Discussion and conclusion
Our findings indicate that residents who have good interactions with colleagues and patients, are competent enough to control their work, experience professional development through their work, have greater interest in their work, and have fulfilling private lives will have the lowest subjective workload.
Subjective workload; Work-hour restrictions; Professionalism
Since the last glacial period, human activities have had the strongest impact on natural ecosystems, including the alteration of interspecific relationships such as food webs. In this paper, we present a historical record of major alterations of trophic structure by revealing millennium-scale dietary shifts of brown bears (Ursus arctos) on the Hokkaido islands, Japan, using carbon, nitrogen, and sulfur stable isotope analysis. Dietary analysis revealed that salmon consumption by bears in the eastern region of Hokkaido decreased significantly from 19% to 8%. In addition, consumption of terrestrial animals decreased from 56% to 5% in the western region and from 64% to 8% in the eastern region. These dietary shifts are likely to have occurred within approximately the last 100–200 years, which coincides with the beginning of modernisation in this region. Our results suggest that human activities have altered the trophic structure of brown bears on the Hokkaido islands. This alteration includes a major decline in the marine–terrestrial linkage in the eastern region and a loss of the indirect interactions between bears and wolves that potentially enhanced deer predation by brown bears.
Little is known about whether time trends of in‐hospital mortality and costs of care for acute myocardial infarction (AMI) differ by type of AMI (ST‐elevation myocardial infarction [STEMI] vs. non‐ST‐elevation [NSTEMI]) and by the intervention received (percutaneous coronary intervention [PCI], coronary artery bypass grafting [CABG], or no intervention) in the United States.
Methods and Results
We conducted a serial cross‐sectional study of all hospitalizations for AMI among patients aged 30 years or older using the Nationwide Inpatient Sample, 2001–2011 (1 456 154 discharges; a weighted estimate of 7 135 592 discharges). Hospitalizations were stratified by type of AMI and intervention, and the time trends of in‐hospital mortality and hospital costs were examined for each combination of AMI type and intervention, after adjusting for both patient‐ and hospital‐level characteristics. Compared with 2001, adjusted in‐hospital mortality improved significantly for NSTEMI patients in 2011, regardless of the intervention received (PCI odds ratio [OR] 0.68, 95% CI 0.56 to 0.83; CABG OR 0.57, 0.45 to 0.72; without intervention OR 0.61, 0.57 to 0.65). As for STEMI, a decline in adjusted in‐hospital mortality was significant for those who underwent PCI (OR 0.83; 0.73 to 0.94); however, no significant improvement was observed for those who underwent CABG or received no intervention. Hospital costs per hospitalization increased significantly for patients who underwent intervention, but not for those without intervention.
In the United States, the decrease in in‐hospital mortality and the increase in costs differed by the AMI type and the intervention received. These non‐uniform trends may be informative for designing effective health policies to reduce the health and economic burdens of AMI.
acute myocardial infarction; hospital costs; in‐hospital mortality; time trend
In Japan, all trainee physicians must begin clinical practice in a standardized, mandatory junior residency program, which encompasses the first two years of post-graduate medical training (PGY1 – PGY2). Implemented in 2004 to foster primary care skills, the comprehensive rotation program (CRP) requires junior residents to spend 14 months rotating through a comprehensive array of clinical departments including internal medicine, surgery, anesthesiology, obstetrics-gynecology (OBGYN), pediatrics, psychiatry, and rural medicine. In 2010, Japan’s health ministry relaxed this curricular requirement, allowing training programs to offer a limited rotation program (LRP), in which core departments constitute 10 months of training, with electives geared towards residents’ choice of career specialty comprising the remaining 14 months. The effectiveness of primary care skill acquisition during early training warrants evaluation. This study assesses self-reported confidence with clinical competencies, as well as case experience, between residents in CRP versus LRP curricula.
A nation-wide cross-sectional study of all PGY2 physicians in Japan was conducted in March 2011. Primary outcomes were self-reported confidence for 98 clinical competency items and the number of cases experienced for 85 common diseases. We compared confidence scores and case experience between residents in CRP and LRP programs, adjusting for parameters relevant to training.
Among 7506 PGY2 residents, 5052 replied to the survey (67.3%). Of 98 clinical competency items, CRP residents reported higher confidence in 12 items compared to those in an LRP curriculum, 10 of which remained significantly higher after adjustment. CRP trainees reported lower confidence scores in none of the items. Out of 85 diseases, LRP residents reported less experience with 11 diseases. CRP trainees reported lower case experience with one disease, though this did not remain significant on adjusted analysis. Confidence and case experience with OBGYN- and pediatrics-related items were particularly low among LRP trainees.
Residents in the specialty-oriented LRP curriculum reported less confidence and less case experience than peers training in the broader CRP residency curriculum. To foster competence in independent primary care practice, junior residency programs requiring experience in a breadth of core departments should continue to be mandated.
Japanese junior residency education; Clinical competency
Physician-staffed helicopter emergency medical services (HEMS) have been provided in Japan since 2001. However, the effect of HEMS on outcomes for severe trauma patients remains debated, as helicopter services require expensive and limited resources. Our aim was to analyze the association between the use of helicopters with a physician versus ground services and survival among adults with serious traumatic injuries.
This multicenter prospective observational study involved 24,293 patients. All patients were older than 15 years of age, had sustained blunt or penetrating trauma and had an Injury Severity Score (ISS) higher than 15. All of the patient data were recorded between 2004 and 2011 in the Japan Trauma Data Bank, which includes data from 114 major emergency hospitals in Japan. The primary outcome was survival to discharge from hospitals. The intervention was either transport by helicopter with a physician or ground emergency services.
A total of 2,090 patients in the sample were transported by helicopter, and 22,203 were transported by ground. Overall, 546 patients (26.1%) transported by helicopter died compared to 5,765 patients (26.0%) transported by ground emergency services. Patients transported by helicopter had higher ISSs than those transported by ground. In multivariable logistic regression, helicopter transport had an odds ratio (OR) for survival to discharge of 1.277 (95% confidence interval (CI), 1.049 to 1.556) after adjusting for age, sex, mechanism of injury, type of trauma, initial vital signs (including systolic blood pressure, heart rate and respiratory rate), ISS and prehospital treatment (including intubation, airway protection maneuver and intravenous fluid). In the propensity score–matched cohort, helicopter transport was associated with improved odds of survival compared to ground transport (OR, 1.446; 95% CI, 1.220 to 1.714). In conditional logistic regression, after adjusting for prehospital treatment (including intubation, airway protection maneuver and intravenous fluid), similar positive associations were observed (OR, 1.230; 95% CI, 1.017 to 1.488).
Among patients with major trauma in Japan, transport by helicopter with a physician may be associated with improved survival to hospital discharge compared to ground emergency services after controlling for multiple known confounders.
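The propensity score–matched analysis above pairs each helicopter-transported patient with a ground-transported patient of similar estimated probability of helicopter transport. A minimal sketch of greedy 1:1 nearest-neighbour matching with a caliper (patient IDs, scores, and the caliper value are illustrative; the study's exact matching algorithm is not specified in the abstract):

```python
def match_pairs(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the propensity score.

    treated, controls: lists of (patient_id, propensity_score) tuples.
    Returns a list of (treated_id, control_id) pairs; each control is
    used at most once, and matches outside the caliper are dropped.
    """
    pairs = []
    available = dict(controls)  # id -> propensity score
    # Match higher-score treated patients first (a common convention)
    for t_id, t_ps in sorted(treated, key=lambda x: -x[1]):
        best = min(available.items(),
                   key=lambda c: abs(c[1] - t_ps),
                   default=None)
        if best is not None and abs(best[1] - t_ps) <= caliper:
            pairs.append((t_id, best[0]))
            del available[best[0]]  # each control used at most once
    return pairs
```

Odds ratios for survival are then estimated within the matched cohort, e.g. by conditional logistic regression as in the study.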
The purpose of this study was to quantify the target coverage, homogeneity, and robustness of the dose distributions against geometrical uncertainties associated with four whole breast radiotherapy techniques.
The study was based on the planning-computed tomography-datasets of 20 patients who underwent whole breast radiotherapy. A total of four treatment plans (wedge, field-in-field [FIF], hybrid intensity-modulated radiotherapy [IMRT], and full IMRT) were created for each patient. The hybrid IMRT plans comprised two opposed tangential open beams plus two IMRT beams. Setup errors were simulated by moving the beam isocenters by 5 mm in the anterior or posterior direction.
With the original plan, the wedge technique yielded a high volume receiving ≥107% of the prescription dose (V107; 7.5%±4.2%), whereas the other three techniques yielded excellent target coverage and homogeneity. A 5-mm anterior displacement caused a large and significant increase in the V107 (+5.2%±4.1%, p<0.01) with the FIF plan, but not with the hybrid IMRT (+0.4%±1.2%, p=0.11) or full IMRT (+0.7%±1.8%, p=0.10) plan. A 5-mm posterior displacement caused a large decrease in the V95 with the hybrid IMRT (-2.5%±3.7%, p<0.01) and full IMRT (-4.3%±5.1%, p<0.01) plans, but not with the FIF plan (+0.1%±0.7%, p=0.74). The decrease in V95 was significantly smaller with the hybrid IMRT plan than with the full IMRT plan (p<0.01).
The FIF, hybrid IMRT, and full IMRT plans offered excellent target coverage and homogeneity. Hybrid IMRT provided better robustness against geometrical uncertainties than full IMRT, whereas FIF provided comparable robustness to that of hybrid IMRT.
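The dose–volume metrics above (V107, V95) are the percentage of the target volume receiving at least a given fraction of the prescription dose. A minimal sketch of how such a metric can be computed from per-voxel doses (toy numbers, not the study data; equal voxel volumes are assumed for simplicity):

```python
def v_x(voxel_doses, prescription_dose, threshold_pct):
    """Percent of voxels receiving >= threshold_pct% of the prescription dose.

    voxel_doses: iterable of absorbed doses (Gy), one entry per voxel
    (equal voxel volumes assumed, a simplification).
    """
    cutoff = prescription_dose * threshold_pct / 100.0
    hot = sum(1 for d in voxel_doses if d >= cutoff)
    return 100.0 * hot / len(voxel_doses)

# Toy example with a 50 Gy prescription
doses = [50.0, 53.0, 54.0, 47.0]
v107 = v_x(doses, 50.0, 107)  # volume fraction at >= 53.5 Gy
v95 = v_x(doses, 50.0, 95)    # volume fraction at >= 47.5 Gy
```

Robustness to a simulated 5-mm isocenter shift is then assessed by recomputing these metrics on the shifted plan and comparing against the original.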
Breast neoplasms; Intensity-modulated radiotherapy
The new diagnostic threshold of hemoglobin A1c was established based on evidence from cross-sectional studies, and no longitudinal study supports its validity. To examine whether hemoglobin A1c of 6.5% or higher defines a threshold for elevated risk of incident retinopathy, we analyzed longitudinal data of 19,897 Japanese adults who underwent a health checkup in 2006 and were followed up 3 years later. We used logistic regression models and restricted cubic spline models to examine the relationship between baseline hemoglobin A1c levels and the prevalence and the 3-year incidence of retinopathy. The restricted cubic spline model indicated a possible threshold for the risk of incident retinopathy at hemoglobin A1c levels of 6.0–7.0%. Logistic regression analysis found that individuals with hemoglobin A1c levels of 6.5–6.9% were at significantly higher risk of developing retinopathy at 3 years compared with those with hemoglobin A1c levels of 5.0–5.4% (adjusted odds ratio, 2.35 [95% CI 1.08–5.11]). Those with hemoglobin A1c levels between 5.5 and 6.4% exhibited no evidence of elevated risks. We did not observe a threshold in the analysis of prevalent retinopathy. Our longitudinal results support the validity of the new hemoglobin A1c threshold of 6.5% or higher for diagnosing diabetes.
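A restricted cubic spline lets the log-odds of incident retinopathy vary smoothly with A1c while remaining linear beyond the boundary knots, which is how a possible threshold region can be visualized. A minimal sketch of the standard truncated-power basis (Harrell's form; the knot locations below are illustrative, not the study's):

```python
def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis terms at x.

    Returns the k-2 nonlinear basis terms for k knots; together with x
    itself these span a cubic spline that is linear beyond the boundary
    knots, so tail behaviour stays stable.
    """
    def pos3(u):
        # truncated cubic: (u)+^3
        return u ** 3 if u > 0 else 0.0

    t_last, t_prev = knots[-1], knots[-2]
    denom = t_last - t_prev
    terms = []
    for t_j in knots[:-2]:
        terms.append(pos3(x - t_j)
                     - pos3(x - t_prev) * (t_last - t_j) / denom
                     + pos3(x - t_last) * (t_prev - t_j) / denom)
    return terms

# Illustrative A1c knots (%): the fitted model would be
# logit(p) = b0 + b1*x + sum(c_j * rcs_basis(x, knots)[j])
knots = [5.0, 5.5, 6.0, 6.5, 7.0]
```

By construction each term is zero below the first knot and exactly linear above the last knot, which can be checked with second differences.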
PX-478 is a potent small-molecule inhibitor of HIF-1α. In preclinical studies, it had antitumor activity against various solid tumors in subcutaneous xenografts but had no measurable activity against a non-small cell lung cancer (NSCLC) xenograft. To determine the effectiveness of PX-478 against lung tumors, we investigated HIF-1α expression in several lung cancer cell lines, both in vitro and in vivo, and treated orthotopic mouse models of human lung cancer with PX-478.
Cells from two human lung adenocarcinoma models (PC14-PE6 and NCI-H441) or two human small cell lung cancer (SCLC) models (NCI-H187 and NCI-N417) were injected into the left lungs of nude mice; 16 to 18 days after injection, mice were randomized to daily oral treatment with PX-478 or vehicle for 5 days.
In the PC14-PE6 NSCLC model, treatment with 20 mg/kg PX-478 significantly reduced the median primary lung tumor volume by 87% (p = 0.005) compared with the vehicle-treated group. PX-478 treatment also markedly reduced mediastinal metastasis and prolonged survival. Similar results were obtained in a second NSCLC model. In SCLC models, PX-478 was even more effective. In the NCI-H187 model, the median primary lung tumor volume was reduced by 99% (p = 0.0001). The median survival duration was increased by 132%. In the NCI-N417 model, the median primary lung tumor volume was reduced by 97% (p = 0.008).
We demonstrated that PX-478, an HIF-1α inhibitor, had significant antitumor activity against two orthotopic models of lung adenocarcinoma and two models of SCLC. These results support the inclusion of lung cancer patients in phase I clinical trials of PX-478.
Hypoxia; HIF-1α; PX-478; Orthotopic model; Lung cancer
Small-cell colon carcinoma is a very rare disease among colon neoplasms; it is difficult to achieve long-term survival due to its aggressive tumor behavior. Here we report the long-term survival of a patient with advanced small-cell colon carcinoma achieved by a combination of surgery and continuous chemotherapy.
A 67-year-old Japanese man underwent abdominal computed tomography in our institution for follow-up after gastrectomy, and abnormal thickening of the sigmoid colon wall was revealed. An endoscopy demonstrated a 20-mm Borrmann type 2 lesion with central ulceration located 20 cm from the anal verge. A sigmoidectomy was performed. Histologically, the tumor invaded deeply and extended beyond the serosa, and was diagnosed as small-cell carcinoma. Cisplatin plus irinotecan was administered as adjuvant chemotherapy. Nine months after surgery, a follow-up computed tomography showed an enlarged lymph node behind the inferior vena cava and a 15 × 8 mm nodule located at the ventral side of the cecum. Because this was considered progressive disease, cisplatin plus irinotecan therapy was performed again using the same regimen. After nine cycles of cisplatin plus irinotecan therapy, a follow-up gastric endoscopy demonstrated external tumor invasion into the duodenal wall. Carboplatin plus etoposide therapy was selected as a third-line regimen. After six cycles of carboplatin plus etoposide therapy, the recurrence sites remained stable, and the survival time reached approximately 30 months after the initial surgery.
We report the long-term survival of a patient with advanced small-cell colon carcinoma. In the future, the accumulation and analysis of rare cases with longer survival will contribute to clarifying neuroendocrine carcinoma biology and help to improve the prognosis.
Chemotherapy; Small-cell carcinoma
Ras/Raf/MEK/ERK signaling is critical for tumor cell proliferation and survival. Selumetinib is a potent, selective, and orally available MEK1/2 inhibitor. In the current study, we evaluated the therapeutic efficacy of selumetinib alone or with cediranib, an orally available potent inhibitor of all three VEGFR tyrosine kinases, in murine orthotopic NSCLC models.
NCI-H441 or NCI-H460 KRAS-mutant human NSCLC cells were injected into the lungs of mice. Mice were randomly assigned to treatment with selumetinib, cediranib, paclitaxel, selumetinib plus cediranib, or control. When controls became moribund, all animals were sacrificed and assessed for lung tumor burden and locoregional metastasis. Lung tumors and adjacent normal tissues were subjected to immunohistochemical analyses.
Selumetinib inhibited lung tumor growth and, particularly at higher dose, reduced locoregional metastasis, as did cediranib. Combining selumetinib and cediranib markedly enhanced their antitumor effects, with near complete suppression of metastasis. Immunohistochemistry of tumor tissues revealed that selumetinib alone or with cediranib reduced ERK phosphorylation, angiogenesis, and tumor cell proliferation and increased apoptosis. The antiangiogenic and apoptotic effects were substantially enhanced when the agents were combined. Selumetinib also inhibited lung tumor VEGF production and VEGFR signaling.
In the current study, we evaluated therapy directed against MEK combined with antiangiogenic therapy in distinct orthotopic NSCLC models. MEK inhibition resulted in potent antiangiogenic effects with decreased VEGF expression and signaling. Combining selumetinib with cediranib enhanced their antitumor and antiangiogenic effects. We conclude that combining selumetinib and cediranib represents a promising strategy for the treatment of NSCLC.
angiogenesis; selumetinib; cediranib; lung cancer; VEGF; MEK
Symptoms of adverse reactions to contrast agents used in computed tomography are diverse, ranging from mild to serious. The goal of this study was to create a scoring rule to predict adverse reactions to contrast agents used in computed tomography.
This was a retrospective cohort study of all adult patients who underwent contrast-enhanced CT over a 7-year period. The subjects were randomly divided into either a derivation or a validation group. Baseline data and clinically relevant factors were collected from the electronic chart. The primary outcome was any acute adverse reaction to contrast media observed during the 24 hours after administration. All potential candidate predictors were included in a forward stepwise logistic regression model. Prediction scores were assigned based on the β coefficients. A receiver operating characteristic (ROC) curve was drawn, and the area under the curve (AUC) and the incidence of acute adverse reactions at each cutoff point were obtained. The same process was performed in the validation group.
36,472 patients underwent enhanced CT imaging: 20,000 patients in the derivation group and 16,472 in the validation group. A total of 409 (2.0%, 95% CI: 1.9-2.3) and 347 (2.1%, 95% CI: 1.9-2.3) acute adverse reactions were seen in the derivation and validation groups, respectively. Logistic regression analysis revealed that a prior adverse reaction to contrast agents, urticaria, a history of allergy to drugs other than contrast agents, contrast agent concentration >70%, age <50 years, and total contrast agent dose >65 g were significant predictors of an acute adverse reaction. The AUC was 0.70 (95% CI: 0.67-0.73) in the derivation group and 0.67 (95% CI: 0.64-0.70) in the validation group.
We suggest a prediction model consisting of six predictors for acute adverse reactions to contrast agents used in CT.
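A point-based rule like the one above is typically built by rounding each predictor's logistic β coefficient to an integer weight, summing the points present for a patient, and summarizing discrimination with the AUC. A minimal sketch (predictor names, β values, and the scaling unit are illustrative, not the published scores):

```python
def assign_points(betas, unit=0.5):
    """Convert beta coefficients to integer points (one point per `unit` of beta)."""
    return {name: round(b / unit) for name, b in betas.items()}

def risk_score(patient, points):
    """Sum the points for the predictors present in this patient."""
    return sum(points[name] for name, present in patient.items() if present)

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: P(score_pos > score_neg), ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical betas, not taken from the study
points = assign_points({"prior_reaction": 1.5, "urticaria": 0.5})
score = risk_score({"prior_reaction": True, "urticaria": False}, points)
```

The AUC of the resulting integer scores in the derivation and validation groups then quantifies how much discrimination survives the rounding.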
Gastric cancer remains one of the most significant malignant diseases, and esophago-gastro-duodenoscopy (EGD) is one of the screening methods for gastric cancer. This study was conducted to identify the optimal interval for gastric cancer screening using EGD in healthy adults.
A retrospective cohort study was conducted on 3,723 healthy participants without a known diagnosis of gastric cancer at baseline from January 2005 to December 2010. Participants underwent annual health screenings, including EGD, at the Center for Preventive Medicine at St Luke’s International Hospital, a community teaching hospital in Japan. Participants with cytological abnormalities underwent further examination. A generalized estimating equation (GEE) was used to analyze the longitudinal data. We set a cumulative incidence of gastric cancer of 0.5% as the cutoff point for determining the screening interval.
The mean age (SD) of the participants was 55 (11) years, and 1,879 (50.5%) were male. During the study period, gastric cancer was detected in 35 participants; however, the incidence varied by age. In the age groups <40, 40–49, 50–59, 60–69 and ≥70 years old, the 5-year cumulative incidences (95%CI) of gastric cancer were 0% (0-0%), 0.3% (0.1-1.0%), 1.0% (0.5-1.8%), 1.4% (0.8-2.4%) and 1.9% (0.8-3.8%), respectively. The odds ratios of the incidence of gastric cancer per year, which were evaluated using GEE models for the age groups 40–49, 50–59, 60–69 and ≥70 years old, were 1.51 (95%CI: 0.91-2.49), 1.94 (95%CI: 1.31-2.86), 1.59 (95%CI: 1.23-2.06) and 1.46 (95%CI: 1.06-2.02), respectively.
Screening for gastric cancer using EGD may be appropriate annually for healthy people over 70 years old, every two or three years for people 60–69 years old, and every four years for people 50–59 years old. People younger than 50 years old may only need repeat screenings every five years or more.
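The interval recommendations follow from the 0.5% cumulative-incidence cutoff: for each age group, the screening interval is the longest period over which cumulative incidence stays below the cutoff. A minimal sketch under the simplifying assumption of a constant annual risk (the study instead used GEE models on the observed longitudinal data):

```python
def screening_interval(annual_risk, cutoff=0.005, max_years=10):
    """Longest interval (years) for which cumulative incidence stays below cutoff.

    Assumes a constant annual risk, so cumulative incidence after n years
    is 1 - (1 - annual_risk)**n. Returns at least 1 (annual screening).
    """
    for years in range(1, max_years + 1):
        cumulative = 1.0 - (1.0 - annual_risk) ** years
        if cumulative >= cutoff:
            return max(1, years - 1)  # last interval still under the cutoff
    return max_years
```

For example, an annual risk of 0.1% keeps the 5-year cumulative incidence just under 0.5%, so screening every 5 years would suffice under this simplification.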
Internal hernia within the falciform ligament is exceedingly rare. A literature search revealed only 14 cases of internal herniation of the small bowel through a congenital defect of the falciform ligament, most of which were found intra-operatively.
A 77-year-old Japanese woman presented to our emergency department with sudden hematemesis, occurring at least four to five times over a 12-hour period. No ulcer or gastrointestinal bleeding was detected on gastroendoscopy. A 40-mm mass in the inferior lobe of the right lung was found on a chest X-ray, and our patient’s symptoms were therefore initially ascribed to aspirated blood from lung tumor-associated hemoptysis. However, our patient continued to show signs of severe abdominal pain and decreased urine output despite aggressive hydration, leading her examining physicians to search for a possibly severe, occult abdominal pathology. On emergent helical computed tomography imaging, we found an acute strangulated internal hernia within the falciform ligament, permitting rapid surgical intervention.
Our findings on computed tomography imaging assisted with the pre-operative diagnosis and enabled rapid surgical intervention. Early diagnosis may help prevent significant strangulation and unnecessary resection.
Accidental falls among inpatients are a substantial cause of hospital injury. A number of successful experimental studies on fall prevention have shown the importance and efficacy of multifactorial intervention, though success rates vary. However, the importance of staff compliance with these effective, but often time-consuming, multifactorial interventions has not been fully investigated in a routine clinical setting. The purpose of this observational study was to describe the effectiveness of a multidisciplinary quality improvement (QI) activity for accidental fall prevention, with particular focus on staff compliance in a non-experimental clinical setting.
This observational study was conducted from July 2004 through December 2010 at St. Luke’s International Hospital in Tokyo, Japan. The QI activity for in-patient falls prevention consisted of: 1) the fall risk assessment tool, 2) an intervention protocol to prevent in-patient falls, 3) specific environmental safety interventions, 4) staff education, and 5) multidisciplinary healthcare staff compliance monitoring and feedback mechanisms.
The overall fall rate was 2.13 falls per 1,000 patient-days (350/164,331) in 2004 versus 1.53 falls per 1,000 patient-days (263/172,325) in 2010, representing a significant decrease (p = 0.039). Compliance with use of the fall risk assessment tool at admission was 91.5% (3,998/4,368) in the first 6 months of 2007, increasing to 97.6% (10,564/10,828) in 2010. The staff compliance rate for implementing an appropriate intervention plan was 85.9% in 2007, increasing to 95.3% in 2010.
In our study, we observed a substantial decrease in patient fall rates and an increase in staff compliance with a newly implemented fall prevention program. A systematized QI approach that closely involves, encourages, and educates healthcare staff at multiple levels is effective.
Accidental falls; Fall prevention; QI activities; High compliance rate; Inpatients
Following the recent revision by the US Preventive Services Task Force of its mammography (MMG) recommendations for women in their 40s, there is an urgent need to collect data on the benefits and harms of MMG screening in Japan. In this paper, we study the actual status and effectiveness of opportunistic breast cancer screening by MMG for women in their 40s.
From January to December 2008, the total number of opportunistic breast cancer screenings by MMG at our institute was 12,823. Of them, 398 (3.1%) who were diagnosed as category 3 or higher on MMG required further exams. The data were compared between two groups (women in their 40s and women aged 50 and older). Recall rate, detection rate of breast cancers, and implementation rate of further exams were evaluated.
Recall rate was 4.0% (166/4138) for women in their 40s and 2.4% (166/6949) for women aged 50 and older. Detection rate of breast cancers was higher in women in their 40s (0.56%) than in women aged 50 and older (0.26%). The non-cancer rate among women receiving invasive examination was higher in women in their 40s (0.76%) than in women aged 50 and older (0.42%) (p = 0.02). The number of false positives required to detect one true cancer was smaller in women in their 40s (4.5) than in women aged 50 and older (5.3).
The results from our single institute revealed that opportunistic breast cancer screening by MMG for women in their 40s shows a higher net benefit than screening for women aged 50 and older.
Screening mammography; Opportunistic; Benefits and harm
The signal transducer and activator of transcription 3 (STAT3) is considered to be an attractive therapeutic target for oncology drug development. We identified an N-[2-(1,3,4-oxadiazolyl)]-4-quinolinecarboxamide derivative, STX-0119, as a novel STAT3 dimerization inhibitor by a virtual screen using a customized version of the DOCK4 program with the crystal structure of STAT3. In addition, we used in vitro cell-based assays such as the luciferase reporter gene assay and the fluorescence resonance energy transfer-based STAT3 dimerization assay. STX-0119 selectively abrogated the DNA binding activity of STAT3 and suppressed the expression of STAT3-regulated oncoproteins such as c-myc and survivin in cancer cells. In contrast, a truncated inactive analogue, STX-0872, did not exhibit these activities. Oral administration of STX-0119 effectively abrogated the growth of human lymphoma cells in an SCC-3 subcutaneous xenograft model without visible toxicity. Structure−activity relationships of STX-0119 derivatives were investigated using the docking model of the STAT3-SH2 domain/STX-0119.
STAT3; dimerization; inhibitor; virtual screening; protein−protein interaction; antitumor
To evaluate the optimal interval for rechecking A1C levels below the diagnostic threshold of 6.5% for healthy adults.
RESEARCH DESIGN AND METHODS
This was a retrospective cohort study. Participants were 16,313 apparently healthy Japanese adults not taking glucose-lowering medications at baseline. Annual A1C measurements from 2005 to 2008 at the Center for Preventive Medicine of a community teaching hospital in Japan were used to estimate the cumulative incidence of diabetes.
Mean age (±SD) of participants was 49.7 ± 12.3 years, and 53% were male. Mean A1C at baseline was 5.4 ± 0.5%. At 3 years, for those with A1C at baseline of <5.0%, 5.0–5.4%, 5.5–5.9%, and 6.0–6.4%, cumulative incidence (95% CI) was 0.05% (0.001–0.3), 0.05% (0.01–0.11), 1.2% (0.9–1.6), and 20% (18–23), respectively.
In those with an A1C <6.0%, rescreening at intervals shorter than 3 years identifies few individuals (approximately 1% or fewer) with an A1C ≥6.5%.
Introduction
Discriminating acute lung injury (ALI) or acute respiratory distress syndrome (ARDS) from cardiogenic pulmonary edema (CPE) using the plasma level of brain natriuretic peptide (BNP) alone remains controversial. The aim of this study was to determine the diagnostic utility of combined measurement of BNP and C-reactive protein (CRP) in critically ill patients with pulmonary edema.
This was a cross-sectional study. BNP and CRP data from 147 patients who presented to the emergency department due to acute respiratory failure with bilateral pulmonary infiltrates were analyzed.
There were 53 patients with ALI/ARDS, 71 with CPE, and 23 with mixed edema. Median BNP and CRP levels were 202 (interquartile range 95-439) pg/mL and 119 (62-165) mg/L in ALI/ARDS, and 691 (416-1,194) pg/mL (p < 0.001) and 8 (2-42) mg/L (p < 0.001) in CPE. BNP or CRP alone offered good discriminatory performance (C-statistics 0.831 and 0.887), but the combination offered greater discriminatory performance [C-statistic 0.931 (p < 0.001 versus BNP; p = 0.030 versus CRP)]. In multiple logistic regression, BNP and CRP were independent predictors of the diagnosis after adjusting for other variables.
Measurement of CRP, as well as BNP, is useful for distinguishing ALI/ARDS from CPE. Furthermore, the combination of BNP and CRP provides higher accuracy for the diagnosis.
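Combining the two markers, as in the study's multiple logistic regression, amounts to fitting P(ALI/ARDS) = sigmoid(b0 + b1·BNP + b2·CRP) and using the fitted probability as a single discriminator. A minimal sketch of such a two-predictor logistic fit by plain gradient descent (toy scaled data; the study used standard multivariable regression and adjusted for additional covariates):

```python
import math

def fit_logistic(X, y, lr=0.5, n_iter=2000):
    """Logistic regression by batch gradient descent.

    X: list of feature rows, y: list of 0/1 labels.
    Returns (weights, bias).
    """
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(n_iter):
        gw, gb = [0.0] * p, 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi  # prediction minus label
            for j in range(p):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(x, w, b):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy rows of (scaled BNP, scaled CRP); 1 = ALI/ARDS (low BNP, high CRP),
# 0 = CPE (high BNP, low CRP) -- illustrative values only
X = [[0.2, 1.2], [0.4, 1.0], [1.0, 0.1], [1.2, 0.0]]
y = [1, 1, 0, 0]
w, b = fit_logistic(X, y)
```

The combined probability can then be thresholded or fed into a C-statistic calculation, which is how the combination's 0.931 discrimination was compared against each marker alone.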
The mu event-related desynchronization (ERD) is thought to reflect motor preparation and to appear during motor imagery. The aim of this study was to examine the modulation of mu ERD with transcranial direct current stimulation (tDCS).
Six healthy subjects were asked to imagine their right hand grasping something after receiving a visual cue. Electroencephalograms (EEGs) were recorded near the left M1. ERD of the mu rhythm (mu ERD) by right hand motor imagery was measured. tDCS (10 min, 1 mA) was used to modulate the cortical excitability of M1. Anodal, cathodal, and sham tDCS were tested in each subject with a randomized sequence on different days. Each condition was separated from the preceding one by more than 1 week in the same subject. Before and after tDCS, mu ERD was assessed. The motor thresholds (MT) of the left M1 were also measured with transcranial magnetic stimulation.
Mu ERD significantly increased after anodal stimulation, whereas it significantly decreased after cathodal stimulation. There was a significant correlation between mu ERD and MT.
Opposing effects on mu ERD based on the orientation of the stimulation suggest that mu ERD is affected by cortical excitability.
The Ministry of Health, Labour and Welfare of Japan has been promoting participation in scholarly activities for physicians during residency training. However, there is debate regarding whether this is worthwhile for residents.
To evaluate residents’ opinions of engaging in scholarly activities and identify factors associated with overall satisfaction with their training program.
Cross-sectional national survey.
1,124 second-year residents in teaching hospitals in Japan in 2007.
Collected data included demographics, teaching hospital characteristics and resources, residents’ research experiences, including type of activities, barriers to performing scholarly activities, residents’ opinions of scholarly requirements, and resident satisfaction with their residency program.
Of 1,500 residents surveyed, 1,124 responded, for a response rate of 74.9%. Our data showed that 60.2% of Japanese residents engaged in some type of scholarly activity. Barriers included “no resident time,” “no mentor,” and “no resident interest.” Sixty-three percent of residents thought that research should be a residency requirement. In multivariate logistic analysis, residents’ overall satisfaction with their residency program was significantly associated with participation in research activity (odds ratio (OR), 1.5; 95% confidence interval (CI), 1.1–2.1); male gender (OR, 1.5; 95% CI: 1.1–2.2); satisfaction with residency compensation (OR, 3.8; 95% CI, 2.6–5.0); and satisfaction with the residency curriculum (OR, 19.5; 95% CI, 13.7–27.7).
The majority of residents surveyed thought that research activity was worthwhile. Residents’ participation in research activity was associated with higher levels of satisfaction with residency training. Implementing measures to overcome existing barriers may have educational benefits for residents.
residency; clinical research; job satisfaction; medical education; Japan
We investigated the views of newly graduating physicians on their preparedness for postgraduate clinical training, and evaluated the relationship of preparedness with the educational environment and the pass rate on the National Medical Licensure Examination (NMLE).
Data were obtained from 2429 PGY-1 physicians-in-training (response rate, 36%) using a mailed cross-sectional survey. The Dundee Ready Education Environment Measure (DREEM) inventory was used to assess the learning environment at 80 Japanese medical schools. Preparedness was assessed based on 6 clinical areas related to the Association of American Medical Colleges Graduation Questionnaire.
Only 17% of the physicians-in-training felt prepared in the area of general clinical skills, 29% in basic knowledge of diagnosis and management of common conditions, 48% in communication skills, 19% in skills associated with evidence-based medicine, 54% in professionalism, and 37% in basic skills required for a physical examination. There were substantial differences among the medical schools in the perceived preparedness of their graduates. Significant positive correlations were found between preparedness in all clinical areas and a better educational environment (all p < 0.01), but there were no significant associations between the pass rate on the NMLE and perceived preparedness in any clinical area, nor between the pass rate and the educational environment (all p > 0.05).
Different educational environments among universities may be partly responsible for the differences in perceived preparedness of medical students for postgraduate clinical training. This study also highlights the poor correlation between self-assessed preparedness for practice and the NMLE.
The cerebellum is one of the regions that, when damaged, contribute to urinary dysfunction in humans. A 43-year-old woman had an acute onset of encephalitis at age 35 that led to fever, generalized convulsions, and coma. Six months after disease onset, she regained consciousness and developed generalized myoclonus, cerebellar ataxia, and overactive bladder, e.g., urinary urgency, daytime urinary frequency, and urinary incontinence. Eight years after disease onset, MRI revealed cerebellar atrophy, SPECT showed cerebellar hypoperfusion, and a urodynamic study demonstrated detrusor overactivity. Selective inflammation of the cerebellum appears to have produced the cerebellar ataxia and overactive bladder in our case.
Cerebellitis; Detrusor overactivity; Overactive bladder; Autonomic dysfunction
Health locus of control influences health-related behaviour, but its association with healthcare use is unclear.
To investigate the association between individuals' health locus of control and the use of conventional and alternative health care.
Design of study
Prospective cohort study.
A nationally representative random sample of community-dwelling adult households in Japan.
Health locus of control, symptom-related visits to physicians, and the use of dietary and physical complementary and alternative medicine (CAM) were measured. Dietary CAM included supplements, such as herbs and vitamins. Physical CAM included manipulations, such as acupuncture and acupressure.
Of the 2453 adult participants studied, 2103 (86%; 95% CI [confidence interval] = 84 to 88%) developed at least one symptom during the 31-day study period. Of these symptomatic adults, 639 (30%; 95% CI = 28 to 32%) visited physicians, 480 (23%; 95% CI = 21 to 25%) used dietary CAM, and 156 (7%; 95% CI = 6 to 9%) used physical CAM. The likelihood of visiting a physician was not related significantly to individuals' health locus of control. Increased use of dietary CAM was weakly associated with control by spiritual powers (P = 0.028), internal control (P = 0.013), and less control by professionals (P = 0.020). Increased use of physical CAM was significantly associated with control by spiritual powers (P = 0.009), indicating a belief that supernatural forces control individuals' health status.
The likelihood of visiting a physician is not affected by individuals' health locus of control. Control by spiritual powers is associated with increased CAM use. Internal control is weakly associated with greater use of dietary CAM; professional control is weakly associated with less use of dietary CAM.
beliefs; complementary therapies; health diaries; spiritual powers
AIM: To investigate the incidence of gastrointestinal symptoms and the nature of consequent utilization of health care services in a Japanese population.
METHODS: Using self-report, we conducted a prospective cohort study of a nationally representative sample of the Japanese population over a one-month period to determine the incidence of gastrointestinal symptoms of all kinds and the resultant health care utilization. Information on both visits to physicians and the use of complementary and alternative medicine therapies was collected.
RESULTS: From a total of 3568 in the recruitment sample, 3477 participants completed a health diary (response rate 97%). The data of 112 participants with baseline active gastrointestinal diseases were excluded from the analysis, leaving 3365 participants in the study. The incidence of gastrointestinal symptoms was 25%, and the mean number of symptomatic episodes was 0.66 per month. Abdominal pain, diarrhea, nausea, constipation and dyspepsia were the most frequent symptoms. Female gender, younger age, and low baseline quality of life were risk factors for developing these symptoms. The participants were more likely to treat themselves, using dietary or other complementary and alternative medicines, than to visit physicians, except in the case of vomiting.
CONCLUSION: Gastrointestinal symptoms are common in the Japanese population, with an incidence of 25%. Abdominal pain, diarrhea, nausea, constipation and dyspepsia are the most frequent symptoms. Risk factors for developing these symptoms include female gender, younger age, and low baseline quality of life.
Gastrointestinal diseases; Abdominal Pain; Diarrhea; Nausea; Constipation; Dyspepsia
The government-led "evidence-based guidelines for cataract treatment" labelled pirenoxine and glutathione eye drops, which have been regarded as the standard care for cataracts in Japan, as lacking evidence of effectiveness, causing great upset among ophthalmologists and professional ophthalmology societies. This study investigated the reasons why such "scientific evidence of treatment effectiveness" is not easily accepted by physicians, and thus, why they do not change their clinical practices to reflect such evidence.
We conducted a qualitative study based on grounded theory to explore physicians' awareness of "scientific evidence" and evidence-supported treatment in relation to pirenoxine and glutathione eye drops, and to identify current barriers to the implementation of evidence-based policies in clinical practice. Interviews were conducted with 35 ophthalmologists and 3 general practitioners on their prescribing behaviours, perceptions of eye drop effectiveness, attitudes toward the eye drop guideline recommendations, and their perceptions of "scientific evidence."
Although few physicians believed that the eye drops are remarkably effective, the majority of participants reported that they prescribed eye drops to patients who asked for them, and that such patients accounted for a considerable proportion of those with cataracts. Physicians seldom attempted to explain to patients the limitations of their effectiveness or to encourage them to stop taking the eye drops. Physicians also acknowledged the benefits of prescribing such drugs, which ultimately outweighed any uncertainty about their effectiveness. These benefits included economic incentives and a desire to be appreciated by patients. Changes in clinical practice were considered to bring little benefit to physicians or patients. Government approval, the rarity of side effects, and the low cost of the drops also encouraged prescription.
Physicians occasionally provide treatment without expecting remarkable therapeutic effectiveness, as exemplified by the use of eye drops. This finding highlights that scientific evidence alone cannot easily change physicians' clinical practices, unless evidence-based practices are accepted by the general public and supported by health policy.