For over 60 years, the Karnofsky Performance Status (KPS) has proven a valuable tool for measuring and comparing the functional status of individual patients. Conditions for patients have changed in recent decades, and the KPS has accordingly undergone several adjustments since its initial development.
The most important works on the KPS focus on issues including reliability, validity and health-related quality of life, as well as the question of what quantity the KPS actually measures. Because the KPS is increasingly used as a prognostic factor in patient assessment, questions about whether and how it relates to survival are relevant.
In this paper, we propose an algorithm that uses a minimum of two and a maximum of three questions to facilitate an adequate and efficient evaluation of the KPS.
This review honors the original intention of the scale's originator and gives an overview of adaptations made in recent years. The proposed algorithm suggests specific updates with the goal of ensuring continued adequacy and expediency in determining the KPS.
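The abstract does not give the algorithm's actual questions; purely as a hypothetical sketch, a two-to-three-question triage into Karnofsky's three original broad bands could look like the following (the question wording, order and band cut-offs here are illustrative assumptions, not the paper's algorithm):

```python
def kps_band(normal_activity: bool, self_caring: bool, bedbound: bool = False) -> str:
    """Hypothetical triage into the three broad KPS bands.

    The bands follow Karnofsky's original groupings; the questions and
    their order are illustrative assumptions only.
    """
    if normal_activity:      # Q1: able to carry on normal activity?
        return "80-100"      # no special care needed
    if self_caring:          # Q2: able to care for most personal needs?
        return "50-70"       # unable to work, but lives at home
    # Q3 is only needed to refine the lowest band
    return "0-20" if bedbound else "30-40"
```

The appeal of such a scheme is that most patients are classified after two questions; only the most impaired band requires a third.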
Karnofsky performance status; Quality of life; Disability evaluation; Algorithms; Neoplasms; Review
Electronic health records (EHRs) provide enormous potential for health research but also present data governance challenges. Ensuring de-identification is a pre-requisite for use of EHR data without prior consent. The South London and Maudsley NHS Trust (SLaM), one of the largest secondary mental healthcare providers in Europe, has developed, from its EHRs, a de-identified psychiatric case register, the Clinical Record Interactive Search (CRIS), for secondary research.
We describe the development, implementation and evaluation of a bespoke de-identification algorithm used to create the register. The algorithm builds dictionaries from patient identifiers (PIs) entered into dedicated source fields and then identifies, matches and masks them (with ZZZZZ) wherever they appear in medical texts. We expected this approach to be effective, given the high coverage of PIs in the dedicated fields and the effectiveness of the masking combined with elements of a security model. We conducted two separate performance tests: i) testing the performance of the algorithm in masking individual true PIs entered in dedicated fields and then found in text (using 500 patient notes), and ii) comparing the performance of the CRIS pattern matching algorithm with a machine learning algorithm, the MITRE Identification Scrubber Toolkit (MIST), using 70 patient notes (50 to train, 20 to test). We also report any incidences of potential breaches, defined as occurrences of 3 or more true or apparent PIs in the same patient’s notes (and in an additional set of longitudinal notes for 50 patients), and we consider the possibility of inferring information despite de-identification.
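As a minimal sketch of the dictionary-based masking step described above (the real CRIS algorithm is more elaborate, e.g. it must also handle misspellings; the identifiers and text below are invented):

```python
import re

def build_masker(identifiers):
    """Return a function that masks any listed identifier with ZZZZZ.

    Longer identifiers are tried first so that a full name is masked
    before its surname alone would match.
    """
    escaped = sorted((re.escape(pi) for pi in identifiers),
                     key=len, reverse=True)
    pattern = re.compile(r"\b(" + "|".join(escaped) + r")\b", re.IGNORECASE)
    return lambda text: pattern.sub("ZZZZZ", text)

# Invented example identifiers, as would come from the dedicated PI fields:
mask = build_masker(["John Smith", "Smith", "07700 900123"])
masked = mask("Seen John Smith today; contact 07700 900123.")
# masked == "Seen ZZZZZ today; contact ZZZZZ."
```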
True PIs were masked with 98.8% precision and 97.6% recall. As anticipated, potential PIs did appear, owing to misspellings entered within the EHRs. We found one potential breach. In a separate performance test with a different set of notes, CRIS yielded 100% precision and 88.5% recall, while MIST yielded 95.1% and 78.1%, respectively. We discuss how we address the realistic possibility – albeit of low probability – of potential breaches through implementation of the security model.
CRIS is a de-identified psychiatric database sourced from EHRs, which protects patient anonymity and maximises data available for research. CRIS demonstrates the advantage of combining an effective de-identification algorithm with a carefully designed security model. The paper advances much needed discussion of EHR de-identification – particularly in relation to criteria to assess de-identification, and considering the contexts of de-identified research databases when assessing the risk of breaches of confidential patient information.
De-identification; Anonymisation; Electronic health records; Psychiatric case register; Medical health records security; Medical information database security
Identification and tracking of important communicable diseases are pivotal to our understanding of the geographical distribution of disease and the emergence and spread of novel and resistant infections, and are of particular importance for public health policy planning. Moreover, understanding current clinical practice norms is essential to audit clinical care, identify areas of concern, and develop interventions to improve care quality.
However, there are several barriers to obtaining these research data. For example, current disease surveillance mechanisms make it difficult for the busy doctor to know which diseases to notify, to whom and how, and are also time consuming. Consequently, many cases go un-notified. In addition, assessments of current clinical practice are typically limited to small retrospective audits in individual hospitals.
Therefore, we developed a free smartphone application to try to increase the identification of major infectious diseases and other acute medical presentations and improve our understanding of clinical practice.
Within the first month there were over 1000 downloads and over 600 specific disease notifications, coming from a broad range of specialities, grades and from all across the globe, including some resource poor settings.
Notifications have already provided important information, such as new cases of TB meningitis, resistant HIV and rabies, and important clinical information, such as where patients with myocardial infarction are and are not receiving potentially life-saving therapy.
The database generated can also answer new, dynamic and targeted questions. When a new guideline is released, for example for a new pandemic infection, we can track, in real time, the global usage of the guideline and whether its recommendations are being followed. In addition, this allows identification of where cases with key markers of severe disease are occurring. This is a potential resource for guideline-producing bodies, clinical governance and public health institutions, and also for patient recruitment into ongoing studies.
Further parallel studies are needed to assess the clinical and epidemiological utility of novel disease surveillance applications, such as this, with direct comparisons made to data collected through routine surveillance routes.
Nevertheless, current disease surveillance mechanisms do not always comprehensively and accurately reflect disease distribution for many conditions. Smartphone applications, such as ClickClinica, are a novel approach with the potential to generate real-time disease surveillance data that may augment current methods.
Epidemiology; Audit; Clinical practice; Quality improvement; iPhone; Smart phone; Application; Guidelines
Electronic Patient Medication Record (ePMR) systems have important safety features embedded to alert users about potential clinical hazards and errors. To date, there is no synthesis of evidence about the effectiveness of these safety features and alerts at the point of pharmacy order entry. This review aims to systematically explore the literature and synthesise published evidence about the effectiveness of safety features and alerts in ePMR systems at the point of pharmacy order entry, in primary and secondary care.
We searched MEDLINE, EMBASE, Inspec, International Pharmaceutical Abstracts, PsycINFO, CINAHL (earliest entry to March 2012) and the reference lists of articles. Two reviewers examined the titles and abstracts, and used a hierarchical template to identify comparative design studies evaluating the effectiveness of safety features and alerts at the point of pharmacy order entry. The two reviewers independently assessed the quality of the included studies using the Cochrane Collaboration’s risk of bias tool.
Three randomised trials and two before-after studies met our criteria. Four studies involved integrated care facilities and one was hospital-based. The studies were all from the United States (US). The five studies demonstrated statistically significant reductions in medication errors in patients with renal insufficiency, pregnant women dispensed US Food and Drug Administration (FDA) risk category D (evidence of fetal risk but therapeutic benefits can outweigh the risk) or X (evidence suggests that risk to the fetus outweighs therapeutic benefits) medication, first dispensing of inappropriate medications in patients aged 65 and above, co-dispensing of interacting drugs, and adverse drug events related to hyperkalaemia.
This systematic review shows that the safety features of ePMR systems are effective in alerting users to potential clinical hazards and errors during pharmacy order entry. There are, however, problems such as false alerts and inconsistencies in alert management. More studies are needed from other countries and pharmacy practice settings to assess the effectiveness of electronic safety features and alerts in preventing error and reducing harm to patients.
Electronic patient medication record system; Safety feature; Safety alert; Safety warning; Pharmacy order entry system; Decision support; Pharmacy computer system; Medicine supply; Drug alert
Handheld computers for data collection (HCDC) and management have become increasingly common in health research. However, current knowledge about the use of HCDC in health research in China is very limited. In this study, we administered a survey to a hard-to-reach population in China using HCDC and assessed the acceptability and adoption of HCDC in China.
Handheld computers operating Windows Mobile and Questionnaire Development Studio (QDS) software (Nova Research Company) were used for this survey. Questions on tobacco use and susceptibility were drawn from the Global Adult Tobacco Survey (GATS) and other validated instruments, and these were programmed in Chinese characters by local staff. We conducted a half-day training session for survey supervisors and a three-day training session for 20 interviewers and 9 supervisors. After the training, all trainees completed a self-assessment of their skill level using HCDC. The main study was implemented in fall 2010 in 10 sites, with data managed centrally in Beijing. Study interviewers completed a post-survey evaluation questionnaire on the acceptability and utility of HCDC in survey research.
Twenty-nine trainees completed post-training surveys, and 20 interviewers completed post-data collection questionnaires. After training, more than 90% felt confident about their ability to collect survey data using HCDC, to transfer study data from a handheld computer to a laptop, and to encrypt the survey data file. After data collection, 80% of the interviewers thought data collection and management were easy and 60% of staff felt confident they could solve problems they might encounter. Overall, after data collection, nearly 70% of interviewers reported that they would prefer to use handheld computers for future surveys. More than half (55%) felt the HCDC was a particularly useful data collection tool for studies conducted in China.
We successfully conducted a health-related survey using HCDC. Using handheld computers for data collection was a feasible, acceptable, and preferred method by Chinese interviewers. Despite minor technical issues that occurred during data collection, HCDC is a promising methodology to be used in survey-based research in China.
Surveys; Electronic data collection; Handheld computers; China
Computerized Provider Order Entry (CPOE) can improve patient safety, quality and efficiency, but hospitals face a host of barriers to adopting CPOE, ranging from resistance among physicians to the cost of the systems. In response to the incentives for meaningful use of health information technology and other market forces, hospitals in the United States are increasingly moving toward the adoption of CPOE. The purpose of this study was to characterize the experiences of hospitals that have successfully implemented CPOE.
We used a qualitative approach to observe clinical activities and capture the experiences of physicians, nurses, pharmacists and administrators at five community hospitals in Massachusetts (USA) that adopted CPOE in the past few years. We conducted formal, structured observations of care processes in diverse inpatient settings within each of the hospitals and completed in-depth, semi-structured interviews with clinicians and staff by telephone. After transcribing the audiorecorded interviews, we analyzed the content of the transcripts iteratively, guided by principles of the Immersion and Crystallization analytic approach. Our objective was to identify attitudes, behaviors and experiences that would constitute useful lessons for other hospitals embarking on CPOE implementation.
Analysis of observations and interviews resulted in findings about the CPOE implementation process in five domains: governance, preparation, support, perceptions and consequences. Successful institutions implemented clear organizational decision-making mechanisms that involved clinicians (governance). They anticipated the need for education and training of a wide range of users (preparation). These hospitals deployed ample human resources for live, in-person training and support during implementation. Successful implementation hinged on the ability of clinical leaders to address and manage perceptions and the fear of change. Implementation proceeded smoothly when institutions identified and anticipated the consequences of the change.
The lessons learned in the five domains identified in this study may be useful for other community hospitals embarking on CPOE adoption.
Quality of care; Clinical decision support; Meaningful use; Transformation
Chronic low back pain is a common chronic condition whose treatment success can be improved by active involvement of patients. Patient involvement can be fostered by web-based applications combining health information with decision support or behaviour change support. These so-called Interactive Health Communication Applications (IHCAs) can reach great numbers of patients at low financial cost and provide information and support at the time, place and learning speed patients prefer. However, high attrition often seems to decrease the effects of web-based interventions. Tailoring content and tone of IHCAs to the individual patient's needs might improve usage and therefore effectiveness. This study aims to evaluate a tailored IHCA for people with chronic low back pain combining health information with decision support and behaviour change support.
The tailored IHCA will be tested regarding effectiveness and usage against a standard website with identical content in a single-blinded randomized trial with a parallel design. The IHCA contains information on chronic low back pain and its treatment options including health behaviour change recommendations. In the intervention group the content is delivered in dialogue form, tailored to relevant patient characteristics (health literacy, coping style). In the control group there is no tailoring; a standard web page is used for presenting the content. Participants are unaware of group assignment. Eligibility criteria are age ≥ 18 years, self-reported chronic low back pain, and Internet access. To detect the expected small effect (Cohen’s d = 0.2), the sample aims to include 414 patients, with assessments at baseline, directly after the first on-page visit, and at 3-month follow-up using online self-report questionnaires. It is expected that the tailored IHCA has larger effects on knowledge and patient empowerment (primary outcomes) compared to a standard website. Secondary outcomes are website usage, preparation for decision making, and decisional conflict.
IHCAs can be a suitable way to promote knowledge about chronic low back pain and self-management competencies. Results of the study can increase the knowledge on how to develop IHCAs which are more useful and effective for people suffering from chronic low back pain.
International Clinical Trials Registry DRKS00003322
Chronic low back pain; Randomized controlled trial; Study protocol; Patient information; Web
High override rates for drug-drug interaction (DDI) alerts in electronic health records (EHRs) result in the potentially dangerous consequence of providers ignoring clinically significant alerts. Lack of uniformity of criteria for determining the severity or validity of these interactions often results in discrepancies in how these are evaluated. The purpose of this study was to identify a set of criteria for assessing DDIs that should be used for the generation of clinical decision support (CDS) alerts in EHRs.
We conducted a 20-year systematic literature review of MEDLINE and EMBASE to identify characteristics of high-priority DDIs. These criteria were validated by an expert panel consisting of medication knowledge base vendors, EHR vendors, in-house knowledge base developers from academic medical centers, and both federal and private agencies involved in the regulation of medication use.
Forty-four articles met the inclusion criteria for assessing characteristics of high-priority DDIs. The panel considered five criteria to be most important when assessing an interaction: severity, probability, clinical implications of the interaction, patient characteristics, and the evidence supporting the interaction. In addition, the panel identified barriers and considerations for utilizing these criteria in the medication knowledge bases used by EHRs.
A multi-dimensional approach is needed to understand the importance of an interaction for inclusion in medication knowledge bases for the purpose of CDS alerting. The criteria identified in this study can serve as a first step towards a uniform approach to assessing which interactions are critical and warrant interruption of a provider’s workflow.
Clinical decision support; Drug-drug interaction; Medication-related decision support system; Electronic health record; Alerts
Within the field of record linkage, numerous data cleaning and standardisation techniques are employed to ensure the highest quality of links. While these facilities are common in record linkage software packages and are regularly deployed across record linkage units, little work has been published demonstrating the impact of data cleaning on linkage quality.
A range of cleaning techniques was applied to both a synthetically generated dataset and a large administrative dataset previously linked to a high standard. The effect of these changes on linkage quality was measured using the pairwise F-measure.
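The pairwise F-measure used here is the harmonic mean of precision and recall computed over record pairs; a minimal sketch (the toy pair sets below are invented):

```python
def pairwise_f_measure(true_pairs, predicted_pairs):
    """F-measure over record pairs: harmonic mean of precision and recall."""
    tp = len(true_pairs & predicted_pairs)
    precision = tp / len(predicted_pairs) if predicted_pairs else 0.0
    recall = tp / len(true_pairs) if true_pairs else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: three true links; the linker found two of them plus one error.
truth = {("a1", "b1"), ("a2", "b2"), ("a3", "b3")}
found = {("a1", "b1"), ("a2", "b2"), ("a4", "b4")}
f = pairwise_f_measure(truth, found)   # precision = recall = 2/3, so F = 2/3
```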
Data cleaning made little difference to overall linkage quality, with heavy cleaning leading to a decrease in quality. Further examination showed that these decreases arose because cleaning techniques typically reduce the variability in the data: although correct record pairs became more likely to match, incorrect pairs also became more likely to match, and the additional incorrect matches outweighed the additional correct ones, reducing quality overall.
Data cleaning techniques generally have minimal effect on linkage quality, and heavy cleaning can actively reduce it; care should therefore be taken during the data cleaning process.
Data cleaning; Data quality; Medical record linkage
The behaviour of doctors and their responses to warnings can inform the effective design of Clinical Decision Support Systems. We used data from a university hospital electronic prescribing and laboratory reporting system with hierarchical warnings and alerts to explore junior doctors’ behaviour. The objective of this trial was to establish whether a Junior Doctor Dashboard providing feedback on prescription warning information and laboratory alert acceptance rates was effective in changing junior doctors’ behaviour.
A mixed methods approach was employed which included a parallel group randomised controlled trial, and individual and focus group interviews. Junior doctors below the specialty trainee level 3 grade were recruited and randomised to two groups. Every doctor (N = 42) in the intervention group was e-mailed a link to a personal dashboard every week for 4 months. Nineteen participated in interviews. The 44 control doctors did not receive any automated feedback. The outcome measures were the difference in responses to prescribing warnings (of two severities) and laboratory alerting (of two severities) between the months before and the months during the intervention, analysed as the difference in performance between the intervention and the control groups.
No significant differences were observed in the rates of generating prescription warnings, or in the acceptance of laboratory alarms. However, responses to laboratory alerts differed between the pre-intervention and intervention periods. For doctors of Foundation Year 1 grade, this improvement was significantly (p = 0.002) greater in the group with access to the dashboard (53.6% ignored pre-intervention compared to 29.2% post-intervention) than in the control group (47.9% ignored pre-intervention compared to 47.0% post-intervention). Qualitative interview data indicated that while junior doctors were positive about the electronic prescribing functions, they were discriminating in the way they responded to other alerts and warnings, given that from their perspective these were not always immediately clinically relevant or within the scope of their responsibility.
We have only been able to provide weak evidence that a clinical dashboard providing individualized feedback data has the potential to improve safety behaviour and only in one of several domains. The construction of metrics used in clinical dashboards must take account of actual work processes.
Patient safety; Clinical decision support; Junior doctors
The validity of studies describing clinicians’ judgements based on their responses to paper cases is questionable, because the commonly used paper case simulations only partly reflect real clinical environments. In this study we test whether paper case simulations evoke similar risk assessment judgements to the more realistic simulated patients used in high fidelity physical simulations.
Ninety-seven nurses (34 experienced nurses and 63 student nurses) made dichotomous assessments of the risk of acute deterioration on the same 25 simulated scenarios in both paper case and physical simulation settings. Scenarios were generated from real patient cases. Measures of judgement ‘ecology’ were derived from the same case records. The relationships between nurses’ judgements, actual patient outcomes (i.e. ecological criteria), and patient characteristics were described using the methodology of judgement analysis. Logistic regression models were constructed to calculate Lens Model Equation parameters, which were then compared between the modelled paper-case and physical-simulation judgements.
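The Lens Model Equation parameters estimated here combine in the standard decomposition of achievement, ra = G·Re·Rs + C·√(1−Re²)·√(1−Rs²), where G is modelled knowledge, Re the ecological predictability of the task, Rs the judge's cognitive control (consistency), and C unmodelled knowledge. A minimal sketch with invented parameter values:

```python
import math

def lens_model_achievement(G, Re, Rs, C):
    """Standard Lens Model Equation for achievement ra."""
    return G * Re * Rs + C * math.sqrt(1 - Re**2) * math.sqrt(1 - Rs**2)

# Invented illustrative values: high knowledge, moderately predictable task,
# imperfect consistency, no unmodelled knowledge.
ra = lens_model_achievement(G=0.8, Re=0.9, Rs=0.7, C=0.0)   # 0.504
```

The equation makes explicit why achievement can fall even when knowledge (G) is unchanged: lower consistency (Rs) or a less predictable environment (Re) caps ra.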
Participants showed significantly lower achievement (ra) when judging physical simulations than when judging paper cases. They used less modelable knowledge (G) with physical simulations than with paper cases, while retaining similar cognitive control and consistency on repeated patients. Respiration rate, the most important cue for predicting patient risk in the ecological model, was weighted most heavily by participants.
To the extent that accuracy in judgement analysis studies is a function of task representativeness, improving task representativeness via high fidelity physical simulations resulted in lower judgement performance in risk assessments amongst nurses when compared to paper case simulations. Lens Model statistics could prove useful when comparing different options for the design of simulations used in clinical judgement analysis. The approach outlined may be of value to those designing and evaluating clinical simulations as part of education and training strategies aimed at improving clinical judgement and reasoning.
Written case simulation; Physical simulation; Representative design; Clinical judgement analysis; Risk assessment; Lens model equation; Logistic regression; Clinical vignettes
A large proportion of patients with knee and/or hip osteoarthritis (OA) do not meet the recommended levels of physical activity (PA). Therefore, we developed a web-based intervention that provides a tailored PA program for patients with knee and/or hip OA, entitled Join2move. The intervention incorporates core principles of the behaviour graded activity theory (BGA). The aim of this study was to investigate the preliminary effectiveness, feasibility and acceptability of Join2move in patients with knee and/or hip OA.
A non-randomized pilot study was performed among patients with knee and/or hip OA. Primary outcomes were PA (SQUASH Questionnaire), physical function (HOOS and KOOS questionnaires) and self-perceived effect (7-point Likert scale). Baseline, 6 and 12 week follow-up data were collected via online questionnaires. To assess feasibility and acceptability, program usage (modules completed) and user satisfaction (SUS questionnaire) were measured as secondary outcomes. Participants from the pilot study were invited to be interviewed. The interviews focused on users’ experiences with Join2move. Besides the pilot study we performed two usability tests to determine the feasibility and acceptability of Join2move. In the first usability test, software experts evaluated the website from a list of usability concepts. In the second test, users were asked to verbalize thoughts during the execution of multiple tasks.
Twenty patients with knee and/or hip OA, aged between 50 and 80 years, participated in the pilot study. After six weeks, pain scores increased from 5.3 to 6.6 (p=0.04); after 12 weeks this difference had disappeared (p=0.5). Overall, users were enthusiastic about Join2move. In particular, performing exercise at one's own pace without time or travel restrictions was cited as convenient. However, some minor flaws were observed: users perceived some difficulties in completing the entire introduction module and rated the inability to edit and undo actions as annoying.
This paper outlines the preliminary effectiveness, feasibility and acceptability of a web-based PA intervention. Preliminary results from the pilot study revealed that PA scores increased, although differences were not statistically significant. Interviews and usability tests suggest that the intervention is feasible and acceptable in promoting PA in patients with knee and/or hip OA. The intervention was easy to use and the satisfaction with the program was high.
The Netherlands National Trial Register. Trial number: NTR2483
Osteoarthritis; Physical activity; Web-based intervention; Development; Pilot study; Usability study
Falls among the elderly are a major public health concern, so a modeling technique that better estimates fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for predicting falls among community-dwelling elderly people.
Using a retrospective data set, a two-step LCA modeling approach was employed. First, we sought the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medications and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin adjusted likelihood ratio test and the bootstrap likelihood ratio test were used for model comparisons.
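Of the fit indices listed, the BIC penalises log-likelihood by model complexity; for an LCA with k classes and d binary indicators, the free parameter count is (k−1) class weights plus k·d item probabilities. A minimal sketch (the log-likelihood value below is invented):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion; lower values indicate a better model."""
    return -2 * log_likelihood + n_params * math.log(n_obs)

def lca_n_params(k, d):
    """Free parameters in a k-class LCA with d binary indicators."""
    return (k - 1) + k * d

# Invented comparison: a 6-class model beats a 5-class one only if its
# log-likelihood gain outweighs the penalty for the extra parameters.
bic_6 = bic(log_likelihood=-1200.0, n_params=lca_n_params(6, 7), n_obs=500)
```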
A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership.
In conclusion, the LCA method was effective at finding relevant subgroups within a heterogeneous population at risk of falling. This study demonstrates that LCA offers researchers a valuable tool for modeling medical data.
Computer simulation studies of the emergency department (ED) are often patient driven and consider the physician as a human resource whose primary activity is interacting directly with the patient. In many EDs, physicians supervise delegates such as residents, physician assistants and nurse practitioners each with different skill sets and levels of independence. The purpose of this study is to present an alternative approach where physicians and their delegates in the ED are modeled as interacting pseudo-agents in a discrete event simulation (DES) and to compare it with the traditional approach ignoring such interactions.
The new approach models a hierarchy of heterogeneous interacting pseudo-agents in a DES, where pseudo-agents are entities with embedded decision logic. The pseudo-agents represent a physician and delegate, where the physician plays a senior role to the delegate (i.e. treats high acuity patients and acts as a consult for the delegate). A simple model without the complexity of the ED is first created in order to validate the building blocks (programming) used to create the pseudo-agents and their interaction (i.e. consultation). Following validation, the new approach is implemented in an ED model using data from an Ontario hospital. Outputs from this model are compared with outputs from the ED model without the interacting pseudo-agents. They are compared based on physician and delegate utilization, patient waiting time for treatment, and average length of stay. Additionally, we conduct sensitivity analyses on key parameters in the model.
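As a minimal sketch of the modelling idea, the physician-delegate interaction can be represented in a toy discrete-event loop in which a consult blocks both providers (this is not the authors' model; the arrival rate, acuity mix and service times are all invented):

```python
import random

def simulate(n=500, interaction=True, seed=42):
    """Toy ED model: high-acuity patients go to the physician; low-acuity
    patients go to the delegate, who (when interaction is on) then needs
    a consult that occupies both providers. All parameters are invented."""
    random.seed(seed)
    t = phys_free = deleg_free = 0.0
    phys_busy = deleg_busy = 0.0
    waits = []
    for _ in range(n):
        t += random.expovariate(1 / 6.0)          # arrivals roughly every 6 min
        if random.random() < 0.3:                 # high-acuity patient
            start = max(t, phys_free)
            svc = random.expovariate(1 / 20.0)
            phys_free = start + svc
            phys_busy += svc
        else:                                     # low-acuity patient
            start = max(t, deleg_free)
            svc = random.expovariate(1 / 15.0)
            deleg_free = start + svc
            deleg_busy += svc
            if interaction:                       # consult blocks both providers
                c0 = max(deleg_free, phys_free)
                consult = random.expovariate(1 / 3.0)
                deleg_free = phys_free = c0 + consult
                phys_busy += consult
                deleg_busy += consult
        waits.append(start - t)
    horizon = max(phys_free, deleg_free)
    return {"phys_util": phys_busy / horizon,
            "deleg_util": deleg_busy / horizon,
            "mean_wait": sum(waits) / len(waits)}
```

Running the same model with `interaction=False` removes the consult coupling, mirroring the traditional approach the study compares against.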
In the hospital ED model, comparing the approaches with and without interaction showed physician utilization increasing from 23% to 41% and delegate utilization increasing from 56% to 71%. Results show statistically significant differences in mean times for low acuity patients between the models. Interaction time between physician and delegate results in increased ED length of stay and longer waits for beds.
This example shows the importance of accurately modeling physician relationships and the roles in which they treat patients. Neglecting these relationships could lead to inefficient resource allocation due to inaccurate estimates of physician and delegate time spent on patient related activities and length of stay.
Primary care doctors in NHSScotland have routinely used electronic medical records within their practices for many years. The Scottish Health Executive eHealth strategy (2008-2011) has recently brought radical changes to the primary care computing landscape in Scotland: an information system (GPASS), which was provided free of charge by NHSScotland to the majority of GP practices, has now been replaced by systems provided by two approved commercial providers. The transition to new electronic medical records had to be completed nationally across all health boards by March 2012.
We carried out 25 in-depth semi-structured interviews with primary care doctors to elucidate GPs’ perspectives on their practice information systems and collect more general information on management processes in the patient surgical pathway in NHSScotland. We undertook a thematic analysis of interviewees’ responses, using Normalisation Process Theory as the underpinning conceptual framework.
The majority of GPs interviewed considered electronic medical records to be an integral and essential element of their work during the consultation, playing a key role in facilitating integrated care and continuity of care for patients and in making clinical information more accessible. However, GPs expressed a number of reservations about various system functionalities, for example in relation to usability, system navigation and information visualisation.
Our study highlights that while electronic information systems are perceived as having important benefits, there remains substantial scope to improve GPs’ interaction and overall satisfaction with these systems. Iterative user-centred improvements combined with additional training in the use of technology would promote an increased understanding, familiarity and command of the range of functionalities of electronic medical records among primary care doctors.
Medical informatics; Medical informatics applications; Information systems
The openEHR project and the closely related ISO 13606 standard have defined structures supporting the content of Electronic Health Records (EHRs). However, there is not yet any finalized openEHR specification of a service interface to aid application developers in creating, accessing, and storing the EHR content.
The aim of this paper is to explore how the Representational State Transfer (REST) architectural style can be used as a basis for a platform-independent, HTTP-based openEHR service interface. Associated benefits and tradeoffs of such a design are also explored.
The main contribution is the formalization of the openEHR storage, retrieval, and version-handling semantics and related services into an implementable HTTP-based service interface. The modular design makes it possible to prototype, test, replicate, distribute, cache, and load-balance the system using ordinary web technology. Other contributions are approaches to query and retrieval of the EHR content that takes caching, logging, and distribution into account. Triggering on EHR change events is also explored.
A final contribution is an open source openEHR implementation using the above-mentioned approaches to create LiU EEE, an educational EHR environment intended to help newcomers and developers experiment with and learn about the archetype-based EHR approach and enable rapid prototyping.
Using REST successfully addressed many architectural concerns, but an additional messaging component was needed to address some architectural aspects. Many of our approaches are likely of value to other archetype-based EHR implementations and may contribute to associated service model specifications.
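The storage and version-handling semantics described above can be sketched without any web machinery. The following is an illustrative in-memory model only, not the official openEHR service specification; the REST paths in the comments are assumptions showing how such a store could map onto HTTP verbs:

```python
# Illustrative sketch (not the official openEHR spec): an in-memory model
# of versioned composition storage that a REST interface could map onto
# HTTP verbs, e.g. (hypothetical paths):
#   POST /ehr/{ehr_id}/composition        -> create (version 1)
#   PUT  /ehr/{ehr_id}/composition/{uid}  -> update (new version)
#   GET  .../composition/{uid}            -> latest version
#   GET  .../composition/{uid}?version=N  -> specific version

class VersionedStore:
    def __init__(self):
        self._versions = {}  # uid -> list of composition snapshots

    def create(self, uid, composition):
        self._versions[uid] = [composition]
        return 1  # first version number

    def update(self, uid, composition):
        # Versions are append-only: earlier versions are never overwritten.
        self._versions[uid].append(composition)
        return len(self._versions[uid])  # new version number

    def get(self, uid, version=None):
        history = self._versions[uid]
        return history[-1] if version is None else history[version - 1]

store = VersionedStore()
store.create("bp-123", {"systolic": 120})
store.update("bp-123", {"systolic": 135})
print(store.get("bp-123"))             # latest version
print(store.get("bp-123", version=1))  # earlier version still retrievable
```

Because each version is immutable once written, GET responses for a specific version are trivially cacheable, which is one of the REST benefits the paper explores.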
Tuberculosis (TB) is a serious public health issue in developing countries. Early prediction of TB epidemics is very important for their control and intervention. We aimed to develop an appropriate model for predicting TB epidemics and analyze their seasonality in China.
Data of monthly TB incidence cases from January 2005 to December 2011 were obtained from the Ministry of Health, China. A seasonal autoregressive integrated moving average (SARIMA) model and a hybrid model which combined the SARIMA model and a generalized regression neural network model were used to fit the data from 2005 to 2010. Simulation performance parameters of mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to compare the goodness-of-fit between these two models. TB incidence data from 2011 were used to validate the chosen model.
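The three goodness-of-fit metrics are standard and easy to state explicitly. The sketch below uses toy values, not the study's incidence data:

```python
# Sketch of the goodness-of-fit metrics used to compare the two models
# (illustrative values, not the study's data).

def mse(actual, predicted):
    """Mean square error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error (as a fraction)."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

actual    = [100_000, 110_000, 90_000]   # hypothetical monthly TB cases
predicted = [ 97_000, 112_000, 93_000]

print(mse(actual, predicted), mae(actual, predicted),
      round(mape(actual, predicted), 4))
```

Lower values on all three metrics indicate a better fit, which is how the hybrid model's advantage over SARIMA is quantified in the Results.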
Although both models could reasonably forecast the incidence of TB, the hybrid model demonstrated better goodness-of-fit than the SARIMA model. For the hybrid model, the MSE, MAE and MAPE were 38969150, 3406.593 and 0.030, respectively. For the SARIMA model, the corresponding figures were 161835310, 8781.971 and 0.076, respectively. The seasonal trend of TB incidence is predicted to have lower monthly incidence in January and February and higher incidence from March to June.
The hybrid model forecast TB incidence better than the SARIMA model. There is an obvious seasonal trend of TB incidence in China that differs from that in other countries.
Hybrid model; Incidence; Prediction; Seasonality; Tuberculosis
Clinical Decision Support Systems (CDSSs) can support guideline adherence in heart failure (HF) patients. However, the use of CDSSs is limited and barriers in working with CDSSs have been described as a major obstacle. It is unknown whether barriers to CDSSs are present and differ between HF nurses and cardiologists. Therefore, the aims of this study are: 1. Explore the type and number of barriers perceived by HF nurses and cardiologists to using a CDSS in the treatment of HF patients. 2. Explore possible differences in perceived barriers between the two groups. 3. Assess the relevance and influence of knowledge management (KM) on Responsibility/Trust (R&T) and Barriers/Threats (B&T).
A questionnaire was developed covering three constructs: B&T, R&T, and KM. For the analyses, descriptive techniques, 2-tailed Pearson correlation tests, and multiple regression analyses were performed.
The response rate of the 220 questionnaires was 74%. Barriers were found for cardiologists and HF nurses in all the constructs. Sixty-five percent did not want to be dependent on a CDSS. Nevertheless, 36% of HF nurses and 50% of cardiologists stated that a CDSS can optimize HF medication. No relationships were observed between the constructs and age, gender, years of work experience, general computer experience, or email/internet experience. In the group of HF nurses, a positive correlation (r = .33, P < .01) between years of using the internet and R&T was found. In both groups, KM was associated with the constructs B&T (B = .55, P < .01) and R&T (B = .50, P < .01).
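The correlation statistic reported above is a standard Pearson r; a minimal sketch, using toy data rather than the study's questionnaire responses, shows how it is computed:

```python
# Sketch of the 2-tailed Pearson correlation used in the analysis
# (toy data, not the study's questionnaire responses).
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

years_internet = [2, 5, 8, 10, 12]          # hypothetical predictor
rt_score       = [2.1, 2.4, 3.0, 3.1, 3.6]  # hypothetical R&T scores
print(round(pearson_r(years_internet, rt_score), 2))
```

An r near +1 would indicate that longer internet use goes with higher R&T scores, the direction of the association reported for HF nurses.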
Both cardiologists and HF nurses perceived barriers to working with a CDSS in all of the examined constructs. KM had a strong positive correlation with perceived barriers, indicating that increasing knowledge about CDSSs may reduce these barriers.
Many medical organizations have invested heavily in electronic health record (EHR) and health information exchange (HIE) information systems (IS) to improve medical decision-making and increase efficiency. Despite the potential interoperability advantages of such IS, physicians do not always immediately consult electronic health information, and this decision may result in decreased quality of care as well as unnecessary costs. This study sought to reveal the effect of EHR IS use on physicians' admission decisions. We hypothesized that using an EHR IS would result in more accurate and informed admission decisions, manifested through reductions in single-day admissions and in readmissions within seven days.
This study used a track log-file analysis of a database containing 281,750 emergency department (ED) referrals in seven main hospitals in Israel. Log-files were generated by the system and provide an objective and unbiased measure of system usage, thus allowing us to evaluate the contribution of an EHR IS, as well as an HIE network, to decision-makers (physicians). This was done by investigating whether EHR IS use leads to improved medical outcomes in EDs, which are known for their tight time constraints and overcrowding. The impact of the EHR IS and HIE network was evaluated by comparing decisions on patients classified by five main differential diagnoses (DDs), made with or without viewing the patients' medical history via the EHR IS.
The results indicate a negative relationship between viewing medical history via EHR systems and the number of possibly redundant admissions. Among the DDs, the information viewed was most impactful for gastroenteritis, abdominal pain, and urinary tract infection in reducing readmissions within seven days, and for gastroenteritis, abdominal pain, and chest pain in reducing the single-day admission rate. Both indices are key quality measures in the health system. In addition, we found that interoperability (using external information provided online by health suppliers) contributed more to this reduction than local files, which are available only in the specific hospital. Thus, using external information produced larger odds ratios (of the β coefficients) for reduced redundant admissions: compared with viewing local information, viewing external information on patients was associated with reductions of 27.2% in readmissions within seven days and 13% in single-day admissions.
Viewing medical history via an EHR IS and using an HIE network led to a reduction in the number of seven-day readmissions and single-day admissions for all patients. Using external medical history may imply a more thorough patient examination that can help eliminate unnecessary admissions. Nevertheless, in most instances physicians did not view medical history at all, probably due to the limited resources available, combined with the stress of rapid turnover in ED units.
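The percentage reductions above come from logistic-regression β coefficients, which translate into odds ratios via OR = exp(β). A short sketch, with a hypothetical β rather than the study's estimates, makes the conversion concrete:

```python
# Sketch: converting a logistic-regression coefficient (beta) into an
# odds ratio, as used to compare external vs. local information viewing.
# The beta value below is hypothetical, not the study's estimate.
import math

def odds_ratio(beta):
    return math.exp(beta)

beta_external = -0.318   # hypothetical: effect of viewing external history
print(f"OR = {odds_ratio(beta_external):.3f}")
# An OR below 1 corresponds to reduced odds of the outcome; here,
# exp(-0.318) is roughly 0.73, i.e. about a 27% reduction in the odds
# of readmission relative to the comparison group.
```

A negative β therefore always yields an odds ratio below 1, which is why the "negative associations" in the Results correspond to fewer redundant admissions.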
Medical decision analysis; Electronic health record; Health information exchange; Medical informatics; Interoperability; Health maintenance organization; IS efficiency
Overtreatment of catheter-associated bacteriuria is a quality and safety problem, despite the availability of evidence-based guidelines. Little is known about how guidelines-based knowledge is integrated into clinicians’ mental models for diagnosing catheter-associated urinary tract infection (CA-UTI). The objectives of this research were to better understand clinicians’ mental models for CA-UTI, and to develop and validate an algorithm to improve diagnostic accuracy for CA-UTI.
We conducted two phases of this research project. In phase one, 10 clinicians assessed and diagnosed four patient cases of catheter-associated bacteriuria (n = 40 total cases). We assessed the clinical cues used when diagnosing these cases to determine if the mental models were IDSA guideline compliant. In phase two, we developed a diagnostic algorithm derived from the IDSA guidelines. IDSA guideline authors and non-expert clinicians evaluated the algorithm for content and face validity. In order to determine if diagnostic accuracy improved using the algorithm, we had experts and non-experts diagnose 71 cases of bacteriuria.
Only 21 (53%) of the diagnoses made by clinicians without the algorithm were guidelines-concordant, with fair inter-rater reliability between clinicians (Fleiss' kappa = 0.35, 95% confidence interval (CI) 0.21 to 0.50). Evidence suggests that clinicians' mental models are inappropriately constructed, in that clinicians endorsed guidelines-discordant cues as influential in their decision-making: pyuria, systemic leukocytosis, organism type and number, weakness, and elderly or frail patient. Using the algorithm, inter-rater reliability between the expert and each non-expert was substantial (Cohen's kappa = 0.72, 95% CI 0.52 to 0.93 between the expert and non-expert #1, and 0.80, 95% CI 0.61 to 0.99 between the expert and non-expert #2).
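The agreement statistic reported for the expert/non-expert pairs is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with toy labels (not the study's case diagnoses) shows the computation:

```python
# Sketch of the Cohen's kappa agreement statistic used to compare the
# expert's and each non-expert's diagnoses (toy labels, not study data).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under chance, from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

expert     = ["CA-UTI", "none", "CA-UTI", "none", "none", "CA-UTI"]
non_expert = ["CA-UTI", "none", "none",   "none", "none", "CA-UTI"]
print(round(cohens_kappa(expert, non_expert), 2))
```

Values in the 0.61-0.80 range are conventionally read as "substantial" agreement, which is the interpretation the abstract applies to the post-algorithm kappas.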
Diagnostic errors occur when clinicians’ mental models for catheter-associated bacteriuria include cues that are guidelines-discordant for CA-UTI. The understanding we gained of clinicians’ mental models, especially diagnostic errors, and the algorithm developed to address these errors will inform interventions to improve the accuracy and reliability of CA-UTI diagnoses.
Catheter-associated bacteriuria; Urinary tract infections; Evidence based guidelines; Diagnostic errors
We describe and evaluate the development and use of a Clinical Decision Support (CDS) intervention, an alert, in response to an identified medical error: overuse of a diagnostic laboratory test in a Computerized Physician Order Entry (CPOE) system. CPOE with embedded CDS has been shown to improve quality of care and reduce medical errors. CPOE can also improve resource utilization through more appropriate use of laboratory tests and diagnostic studies. Observational studies are necessary in order to understand how these technologies can be successfully employed by healthcare providers.
The error was identified by the Test Utilization Committee (TUC) in September 2008 when they noticed critical care patients were being tested daily, and sometimes twice daily, for B-Type Natriuretic Peptide (BNP). Repeat and/or serial BNP testing is inappropriate for guiding the management of heart failure and may be clinically misleading. The CDS intervention consists of an expert rule that searches the system for a BNP lab value on the patient. If there is a value and the value is within the current hospital stay, an advisory is displayed to the ordering clinician. In order to isolate the impact of this intervention on unnecessary BNP testing, we applied multiple regression analysis to a sample of 41,306 patient admissions with at least one BNP test at LVHN between January 2008 and September 2011.
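The expert rule's core logic is simple to sketch. The data model below (a list of dated results, a single admission date) is an assumption for illustration, not the vendor system's actual API:

```python
# Sketch of the expert rule's logic (hypothetical data model): if a prior
# BNP result already exists within the current hospital stay, display an
# advisory to the ordering clinician instead of silently accepting a
# repeat order.
from datetime import date

def bnp_advisory_needed(prior_bnp_results, admission_date):
    """Return True if any earlier BNP value falls within this stay."""
    return any(result_date >= admission_date
               for result_date, _value in prior_bnp_results)

prior = [(date(2011, 3, 2), 540.0)]  # hypothetical earlier BNP result
print(bnp_advisory_needed(prior, date(2011, 3, 1)))  # same stay -> advise
print(bnp_advisory_needed(prior, date(2011, 3, 5)))  # new stay -> no alert
```

Restricting the check to the current stay is what keeps the alert targeted at serial retesting rather than firing on every patient with any BNP history, limiting alert fatigue.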
Our regression results suggest the CDS intervention reduced BNP orders by 21% relative to the mean. The financial impact of the rule was also significant. Multiplying by the direct supply cost of $28.04 per test, the intervention saved approximately $92,000 per year.
The use of alerts has great positive potential to improve care, but should be used judiciously and in the appropriate environment. While these savings may not be generalizable to other interventions, the experience at LVHN suggests that appropriately designed and carefully implemented CDS interventions can have a substantial impact on the efficiency of care provision.
Clinical decision support; Computerized physician order entry; Medical error; BNP testing; Alert fatigue; Sociotechnical; Unintended consequences; Appropriate testing
With an ever-growing ageing population, dementia is fast becoming the chronic disease of the 21st century. Elderly people affected with dementia progressively lose their autonomy as they encounter problems in their Activities of Daily Living (ADLs). Hence, they need supervision and assistance from their family members or professional caregivers, which can often lead to underestimated psychological and financial stress for all parties. The use of Ambient Assistive Living (AAL) technologies aims to empower people with dementia and relieve the burden of their caregivers.
The aim of this paper is to present the approach we have adopted to develop and deploy a system for ambient assistive living in an operating nursing home, and evaluate its performance and usability in real conditions. Based on this approach, we emphasise the importance of deployments in real-world settings as opposed to prototype testing in laboratories.
We chose to conduct this work in close partnership with end-users (dementia patients) and specialists in dementia care (professional caregivers). Our trial was conducted during a period of 14 months within three rooms in a nursing home in Singapore, and with the participation of eight dementia patients and two caregivers. A technical ambient assistive living solution, consisting of a set of sensors and devices controlled by a software platform, was deployed in the collaborating nursing home. The trial was preceded by a pre-deployment period to organise several observation sessions with dementia patients and focus group discussions with professional caregivers. A process of gathering ground truth and system log data was also planned prior to the trial, and a system performance evaluation was conducted during the deployment period with the help of caregivers. Ethical approval was obtained prior to real-life deployment of our solution.
Patients’ observations and discussions allowed us to gather a set of requirements that a system for elders with mild-dementia should fulfil. In fact, our deployment has exposed more concrete requirements and problems that need to be addressed, and which cannot be identified in laboratory testing. Issues that were neither forecasted during the design phase nor during the laboratory testing surfaced during deployment, thus affecting the effectiveness of the proposed solution. Results of the system performance evaluation show the evolution of system precision and uptime over the deployment phases, while data analysis demonstrates the ability to provide early detection of the degradation of patients’ conditions. A qualitative feedback was collected from caregivers and doctors and a set of lessons learned emerged from this deployment experience.
Lessons learned from this study were very useful for our research and can serve as inspiration for developers and providers of assistive living services. They confirmed the importance of real-world deployment in evaluating assistive solutions, especially with the involvement of professional caregivers. They also asserted the need for larger deployments, which would make it possible to survey the social and health impact of assistive solutions, even though such deployments are time- and manpower-consuming during their first phases.
Ambient assistive living; Dementia assistance; Real life deployment; Dynamic and adaptable systems; Context aware services
Most previous Protein Protein Interaction (PPI) studies evaluated their algorithms' performance based on "per-instance" precision and recall, in which the instances of an interaction relation were evaluated independently. However, we argue that this standard evaluation method should be revisited. In a large corpus, the same relation can be described in various different forms and, in practice, correctly identifying not all but a small subset of them would often suffice to detect the given interaction.
In this regard, we propose a more pragmatic "per-relation" basis performance evaluation method instead of the conventional per-instance basis method. In the per-relation basis method, only a subset of a relation's instances needs to be correctly identified to make the relation positive. In this work, we also introduce a new high-precision rule-based PPI extraction algorithm. While virtually all current PPI extraction studies focus on improving F-score, aiming to balance the performance on both precision and recall, in many realistic scenarios involving large corpora, one can benefit more from a high-precision algorithm than a high-recall counterpart.
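The contrast between the two evaluation bases can be made concrete with a toy example (the relation and instance identifiers below are illustrative, not drawn from any corpus):

```python
# Sketch contrasting per-instance and per-relation recall (toy data).
# Each gold item is (relation_id, instance_id); `predicted` marks which
# instances the extractor labelled positive.
gold = {("A-B", 1), ("A-B", 2), ("A-B", 3), ("C-D", 1)}
predicted = {("A-B", 1), ("C-D", 1)}   # found only 1 of 3 A-B mentions

# Per-instance recall: every mention counts separately.
inst_recall = len(gold & predicted) / len(gold)

# Per-relation recall: a relation counts as found if ANY of its
# instances was correctly identified.
gold_rels = {r for r, _ in gold}
found_rels = {r for r, _ in (gold & predicted)}
rel_recall = len(found_rels) / len(gold_rels)

print(f"per-instance recall: {inst_recall:.2f}")   # 0.50
print(f"per-relation recall: {rel_recall:.2f}")    # 1.00
```

The same predictions score 0.50 per-instance but 1.00 per-relation: both interactions were detected even though most mentions of A-B were missed, which is exactly the pragmatic argument the paper makes.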
We show that our algorithm not only achieves better per-relation performance than previous solutions but also serves as a good complement to the existing PPI extraction tools. Our algorithm improves the performance of the existing tools through simple pipelining.
The significance of this research lies in bringing a new perspective to the performance evaluation of PPI extraction studies, which we believe is more important in practice than existing evaluation criteria. Given this new evaluation perspective, we also showed the importance of a high-precision extraction tool and validated the efficacy of our rule-based system as a high-precision tool candidate.
Detecting protein complexes is one of the essential and fundamental tasks in understanding various biological functions and processes. Therefore, accurate identification of protein complexes is indispensable.
For more accurate detection of protein complexes, we propose an algorithm that detects dense protein sub-networks whose proteins share closely located bottleneck proteins. The proposed algorithm is capable of finding protein complexes that may overlap with each other.
We applied our algorithm to several PPI (Protein-Protein Interaction) networks of Saccharomyces cerevisiae and Homo sapiens, and validated our results using public databases of protein complexes. Prediction accuracy improved over our previous work, which also used bottleneck information of the PPI network, but remained limited when detecting small-sized protein complexes.
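Validation against reference complex databases typically relies on an overlap score between a predicted and a reference complex. The sketch below uses the common neighborhood-affinity formulation with a 0.25 threshold; that threshold is a convention in the complex-detection literature and an assumption here, as the abstract does not state which matching criterion was used:

```python
# Sketch of matching a predicted complex against a reference complex
# using the neighborhood-affinity overlap score (toy protein IDs).
# The 0.25 threshold is a literature convention, assumed for illustration.

def overlap_score(pred, ref):
    shared = len(pred & ref)
    return shared * shared / (len(pred) * len(ref))

def matches(pred, ref, threshold=0.25):
    return overlap_score(pred, ref) >= threshold

predicted = {"P1", "P2", "P3", "P4"}   # hypothetical protein IDs
reference = {"P2", "P3", "P4", "P5"}
print(round(overlap_score(predicted, reference), 3),
      matches(predicted, reference))
```

Because the score divides by both set sizes, small complexes are penalized heavily by a single missed or extra protein, which is one plausible reason detection of small-sized complexes is harder.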
Our algorithm resulted in overlapping protein complexes with significantly improved F1 score over existing algorithms. This result comes from high recall due to effective network search, as well as high precision due to proper use of bottleneck information during the network search.
Named entity recognition (NER) is an important task in clinical natural language processing (NLP) research. Machine learning (ML) based NER methods have shown good performance in recognizing entities in clinical text. Algorithms and features are two important factors that largely affect the performance of ML-based NER systems. Conditional Random Fields (CRFs), a sequential labelling algorithm, and Support Vector Machines (SVMs), which are based on large margin theory, are two typical machine learning algorithms that have been widely applied to clinical NER tasks. For features, syntactic and semantic information of context words has often been used in clinical NER systems. However, Structural Support Vector Machines (SSVMs), an algorithm that combines the advantages of both CRFs and SVMs, and word representation features, which capture word-level back-off information from large unlabelled corpora via unsupervised algorithms, have not been extensively investigated for clinical text processing. Therefore, the primary goal of this study is to evaluate the use of SSVMs and word representation features in clinical NER tasks.
In this study, we developed SSVMs-based NER systems to recognize clinical entities in hospital discharge summaries, using the data set from the concept extraction task in the 2010 i2b2 NLP challenge. We compared the performance of CRFs and SSVMs-based NER classifiers with the same feature sets. Furthermore, we extracted two different types of word representation features (clustering-based representation features and distributional representation features) and integrated them with the SSVMs-based clinical NER system. We then report the performance of SSVMs-based NER systems with different types of word representation features.
Results and discussion
Using the same training (N = 27,837) and test (N = 45,009) sets from the challenge, our evaluation showed that the SSVMs-based NER systems achieved better performance than the CRFs-based systems for clinical entity recognition when the same features were used. Both types of word representation features (clustering-based and distributional representations) improved the performance of the ML-based NER systems. By combining the two types of word representation features with SSVMs, our system achieved a highest F-measure of 85.82%, outperforming the best system reported in the challenge by 0.6%. Our results show that SSVMs are a promising algorithm for clinical NLP research, and both types of unsupervised word representation features are beneficial to clinical NER tasks.
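The F-measure reported above is conventionally computed at the entity level from sequence-labelled output. The sketch below assumes BIO tagging (a common scheme for such tasks, not something the abstract specifies) and uses a toy sentence rather than i2b2 data:

```python
# Sketch of entity-level precision/recall/F-measure for NER, assuming
# BIO-tagged output (toy sentence, not i2b2 data).

def extract_entities(tags):
    """Return a set of (start, end, type) spans from a BIO tag sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes last entity
        if tag.startswith("B-") or tag == "O":
            if start is not None:            # close the open entity
                entities.append((start, i, etype))
                start, etype = None, None
            if tag.startswith("B-"):         # open a new entity
                start, etype = i, tag[2:]
        # "I-" tags simply continue the current entity
    return set(entities)

gold = extract_entities(["B-problem", "I-problem", "O", "B-test"])
pred = extract_entities(["B-problem", "I-problem", "O", "O"])

tp = len(gold & pred)                 # exact span-and-type matches
precision = tp / len(pred)
recall = tp / len(gold)
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```

Counting only exact span-and-type matches as true positives is the strict criterion under which small F-measure gains, such as the 0.6% improvement here, are meaningful.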