Results 1–25 of 36
1.  The pharmacokinetics of oxypurinol in people with gout 
AIMS
Our aim was to identify and quantify the sources of variability in oxypurinol pharmacokinetics and explore relationships with plasma urate concentrations.
METHODS
Non-linear mixed effects modelling was applied to concentration–time data from 155 gouty patients with demographic, medical history and renal transporter genotype information.
RESULTS
A one-compartment pharmacokinetic model with first-order absorption best described the oxypurinol concentration–time data. Renal function and concomitant medicines (diuretics and probenecid), but not transporter genotype, significantly influenced oxypurinol pharmacokinetics and reduced the between-subject variability in the apparent clearance of oxypurinol (CL/Fm) from 65% to 29%. CL/Fm for patients with normal, mild, moderate and severe renal impairment was 1.8, 0.6, 0.3 and 0.18 l h⁻¹, respectively. Model predictions showed a relationship between plasma oxypurinol and urate concentrations, and predicted failure to reach target oxypurinol concentrations under suggested allopurinol dosing guidelines.
CONCLUSIONS
This first established pharmacokinetic model provides a tool to achieve target oxypurinol plasma concentrations, thereby optimizing the effectiveness and safety of allopurinol therapy in gouty patients with various degrees of renal impairment.
doi:10.1111/j.1365-2125.2012.04207.x
PMCID: PMC3477349  PMID: 22300439
allopurinol; gout; oxypurinol; pharmacogenetics; pharmacokinetics; urate
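The one-compartment model with first-order absorption described in the abstract above can be sketched as follows. Only the CL/Fm values (1.8–0.18 l h⁻¹) come from the abstract; the volume of distribution, absorption rate constant and dose used in the example are illustrative assumptions, not study estimates.

```python
import math

def oxypurinol_concentration(t_h, dose_mg, cl_f, v_f, ka):
    """Plasma concentration (mg/L) at time t_h (hours) after a single oral
    dose, for a one-compartment model with first-order absorption.

    cl_f: apparent clearance (L/h; the abstract reports 1.8, 0.6, 0.3 and
          0.18 L/h for normal through severe renal impairment).
    v_f:  apparent volume of distribution (L) -- illustrative value only.
    ka:   first-order absorption rate constant (1/h) -- illustrative only.
    """
    ke = cl_f / v_f  # first-order elimination rate constant
    return (dose_mg * ka / (v_f * (ka - ke))) * (
        math.exp(-ke * t_h) - math.exp(-ka * t_h)
    )

# Hypothetical example: 300 mg dose, normal renal function (CL/F = 1.8 L/h),
# assumed V/F = 50 L and ka = 1.0 /h.
conc_4h = oxypurinol_concentration(4.0, 300.0, 1.8, 50.0, 1.0)
```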
2.  Efficacy and safety of paracetamol for spinal pain and osteoarthritis: systematic review and meta-analysis of randomised placebo controlled trials 
Objective To investigate the efficacy and safety of paracetamol (acetaminophen) in the management of spinal pain and osteoarthritis of the hip or knee.
Design Systematic review and meta-analysis.
Data sources Medline, Embase, AMED, CINAHL, Web of Science, LILACS, International Pharmaceutical Abstracts, and Cochrane Central Register of Controlled Trials from inception to December 2014.
Eligibility criteria for selecting studies Randomised controlled trials comparing the efficacy and safety of paracetamol with placebo for spinal pain (neck or low back pain) and osteoarthritis of the hip or knee.
Data extraction Two independent reviewers extracted data on pain, disability, and quality of life. Secondary outcomes were adverse effects, patient adherence, and use of rescue medication. Pain and disability scores were converted to a scale of 0 (no pain or disability) to 100 (worst possible pain or disability). We calculated weighted mean differences or risk ratios and 95% confidence intervals using a random effects model. The Cochrane Collaboration’s tool was used for assessing risk of bias, and the GRADE approach was used to evaluate the quality of evidence and summarise conclusions.
Results 12 reports (13 randomised trials) were included. There was “high quality” evidence that paracetamol is ineffective for reducing pain intensity (weighted mean difference −0.5, 95% confidence interval −2.9 to 1.9) and disability (0.4, −1.7 to 2.5) or improving quality of life (0.4, −0.9 to 1.7) in the short term in people with low back pain. For hip or knee osteoarthritis there was “high quality” evidence that paracetamol provides a significant, although not clinically important, effect on pain (−3.7, −5.5 to −1.9) and disability (−2.9, −4.9 to −0.9) in the short term. The number of patients reporting any adverse event (risk ratio 1.0, 95% confidence interval 0.9 to 1.1), any serious adverse event (1.2, 0.7 to 2.1), or withdrawn from the study because of adverse events (1.2, 0.9 to 1.5) was similar in the paracetamol and placebo groups. Patient adherence to treatment (1.0, 0.9 to 1.1) and use of rescue medication (0.7, 0.4 to 1.3) was also similar between groups. “High quality” evidence showed that patients taking paracetamol are nearly four times more likely to have abnormal results on liver function tests (3.8, 1.9 to 7.4), but the clinical importance of this effect is uncertain.
Conclusions Paracetamol is ineffective in the treatment of low back pain and provides minimal short term benefit for people with osteoarthritis. These results support the reconsideration of recommendations to use paracetamol for patients with low back pain and osteoarthritis of the hip or knee in clinical practice guidelines.
Systematic review registration PROSPERO registration number CRD42013006367.
doi:10.1136/bmj.h1225
PMCID: PMC4381278  PMID: 25828856
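The random-effects pooling named in the abstract above (weighted mean differences with 95% confidence intervals) is commonly done with the DerSimonian–Laird estimator; a minimal sketch is below. The numbers in the test are arbitrary illustrations, not trial data, and the published review may have used a different estimator.

```python
import math

def random_effects_pooled(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and 95% CI.

    effects:   per-study effect sizes (e.g. mean differences).
    variances: per-study sampling variances (squared standard errors).
    Returns (pooled_estimate, (ci_lower, ci_upper)).
    """
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```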
3.  What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system 
Objectives
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff.
Design
Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’.
Setting
Two major academic teaching hospitals in Sydney, Australia.
Main Outcome Measures
Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports.
Results
A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit.
Conclusions
Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation.
doi:10.1093/intqhc/mzu098
PMCID: PMC4340271  PMID: 25583702
medication error; incident reporting; safety; electronic prescribing; medication administration errors
4.  Understanding the dose–response relationship of allopurinol: predicting the optimal dosage 
Aims
The aim of the study was to identify and quantify factors that control the plasma concentrations of urate during allopurinol treatment and to predict optimal doses of allopurinol.
Methods
Plasma concentrations of urate and creatinine (112 samples, 46 patients) before and during treatment with various doses of allopurinol (50–600 mg daily) were monitored. Non-linear and multiple linear regression equations were used to examine the relationships between allopurinol dose (D), creatinine clearance (CLcr) and plasma concentrations of urate before (UP) and during treatment with allopurinol (UT).
Results
Plasma concentrations of urate achieved during allopurinol therapy were dependent on the daily dose of allopurinol and the plasma concentration of urate pre-treatment. The non-linear equation UT = (1 − D/(ID50 + D)) × (UP − UR) + UR fitted the data well (r² = 0.74, P < 0.0001). The parameters and their best-fit values were: daily dose of allopurinol reducing the inhibitable plasma urate by 50% (ID50 = 226 mg, 95% CI 167, 303 mg), and apparent resistant plasma urate (UR = 0.20 mmol l⁻¹, 95% CI 0.14, 0.25 mmol l⁻¹). Incorporation of CLcr did not significantly improve the fit (P = 0.09).
Conclusions
A high baseline plasma urate concentration requires a high dose of allopurinol to reduce plasma urate below recommended concentrations. This dose is dependent on only the pre-treatment plasma urate concentration and is not influenced by CLcr.
doi:10.1111/bcp.12126
PMCID: PMC3845316  PMID: 23590252
allopurinol; creatinine clearance; gout; urate; uric acid
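The dose–response equation reported in the abstract above can be sketched directly. The best-fit parameters (ID50 = 226 mg/day, UR = 0.20 mmol/L) come from the abstract; the patient values in the example are hypothetical.

```python
def predicted_urate(dose_mg, urate_pre, id50_mg=226.0, urate_resistant=0.20):
    """Predicted plasma urate (mmol/L) during allopurinol treatment.

    Implements UT = (1 - D/(ID50 + D)) * (UP - UR) + UR from the abstract,
    where D is the daily allopurinol dose, UP the pre-treatment urate, ID50
    the daily dose halving the inhibitable urate, and UR the apparent
    resistant (non-suppressible) plasma urate.
    """
    inhibition = dose_mg / (id50_mg + dose_mg)
    return (1.0 - inhibition) * (urate_pre - urate_resistant) + urate_resistant

# Hypothetical patient: pre-treatment urate 0.55 mmol/L on 300 mg/day.
urate_on_treatment = predicted_urate(300.0, 0.55)  # about 0.35 mmol/L
```

Consistent with the abstract's conclusion, a higher baseline urate (UP) raises the predicted on-treatment urate at any given dose, so a higher dose is needed to reach target.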
5.  Multiple episodes of aspirin overdose in an individual patient: a case report 
Introduction
Aspirin overdose, though now infrequently encountered, nevertheless continues to contribute to significant morbidity and mortality. The patient described in this case report intentionally ingested overdoses of aspirin on repeated occasions. The case provided an unusual and possibly one-of-a-kind opportunity to focus on the variability in the time course of plasma salicylate concentrations with current treatment modalities of aspirin overdose in an individual patient.
Case presentation
A 75-year-old Caucasian man who weighed 45 kg and had an extensive history of various drug overdoses and stage 3 chronic kidney disease presented to a tertiary university hospital on three occasions within 2 months after successive overdoses of aspirin. During his third admission, he overdosed with aspirin, while on the ward recovering from the previous aspirin overdose. The overdoses were categorized as “potentially lethal” on two occasions and as “serious” in the other two, based on the alleged dose of aspirin ingested (over 500 mg/kg in the first two overdoses, and 320 mg/kg and 498 mg/kg in the other two, respectively). However, as assessed by the observed salicylate concentrations, the ingestions would more appropriately have been categorized as being of “moderate” severity for the first and second overdose and “mild” severity for each of the others. This categorization was more consistent with the clinical severity of his admissions. A single dose of activated charcoal was administered only after the second overdose. On each occasion, he was given intravenous fluid with the aim of achieving euvolemia. Urinary alkalization was not attempted during the first admission, which was associated with the longest apparent elimination half-life of salicylate (30 hours). A plasma potassium concentration of approximately 4 mmol/L appeared to be needed for adequate urinary alkalization.
Conclusion
In a patient with impaired renal function, intravenous fluid and urinary alkalization are the mainstays of treatment of aspirin overdose. Correction of hypokalemia is recommended. Repeated doses of charcoal may be a worthwhile intervention when there is no risk of aspiration. Our experience in this case also revealed considerable unexplained variation in management despite the availability of guidelines. It is, therefore, important to monitor the implementation of available guidelines.
doi:10.1186/1752-1947-8-374
PMCID: PMC4275751  PMID: 25406385
Aspirin; Euvolemia; Overdose; Potassium; Salicylate; Toxicity; Urinary alkalization
6.  What an open source clinical trial community can learn from hackers 
Science Translational Medicine  2012;4(132):132cm5.
Summary
Open sharing of clinical trial data has been proposed as a way to address the gap between the production of clinical evidence and the decision-making of physicians. Since a similar gap has already been addressed in the software industry by the open source software movement, we examine how the social and technical principles of the movement can be used to guide the growth of an open source clinical trial community.
doi:10.1126/scitranslmed.3003682
PMCID: PMC4059195  PMID: 22553248
7.  Long-Term Patterns of Online Evidence Retrieval Use in General Practice: A 12-Month Study 
Background
Provision of online evidence at the point of care is one strategy that could provide clinicians with easy access to up-to-date evidence in clinical settings in order to support evidence-based decision making.
Objective
The aim was to determine long-term use of an online evidence system in routine clinical practice.
Methods
This was a prospective cohort study. 59 clinicians who had a computer with Internet access in their consulting room participated in a 12-month trial of Quick Clinical, an online evidence system specifically designed around the needs of general practitioners (GPs). Patterns of use were determined by examination of computer logs and survey analysis.
Results
On average, 9.9 searches were conducted by each GP in the first 2 months of the study. After this, usage dropped to 4.4 searches per GP in the third month and then levelled off to between 0.4 and 2.6 searches per GP per month. The majority of searches (79.2%, 2013/2543) were conducted during practice hours (between 9 am and 5 pm) and on weekdays (90.7%, 2315/2543). The most frequent searches related to diagnosis (33.6%, 821/2291) and treatment (34.5%, 844/2291).
Conclusion
GPs will use an online evidence retrieval system in routine practice; however, usage rates drop significantly after initial introduction of the system. Long-term studies are required to determine the extent to which GPs will integrate the use of such technologies into their everyday clinical practice and how this will affect the satisfaction and health outcomes of their patients.
doi:10.2196/jmir.974
PMCID: PMC2483842  PMID: 18353750
Clinical informatics; information retrieval; evidence-based medicine; family practice; evaluation studies; Internet
8.  Shared decision-making: the perspectives of young adults with type 1 diabetes mellitus 
Background
Shared decision-making (SDM) is at the core of patient-centered care. We examined whether young adults with type 1 diabetes perceived the clinician groups they consulted as practicing SDM.
Methods
In a web-based survey, 150 Australians aged 18–35 years and with type 1 diabetes rated seven aspects of SDM in their interactions with endocrinologists, diabetes educators, dieticians, and general practitioners. Additionally, 33 participants in seven focus groups discussed these aspects of SDM.
Results
Of the 150 respondents, 90% consulted endocrinologists, 60% diabetes educators, 33% dieticians, and 37% general practitioners. The majority of participants rated all professions as oriented toward all aspects of SDM, but there were professional differences. These ranged from 94.4% to 82.2% for “My clinician enquires about how I manage my diabetes”; 93.4% to 82.2% for “My clinician listens to my opinion about my diabetes management”; 89.9% to 74.1% for “My clinician is supportive of my diabetes management”; 93.2% to 66.1% for “My clinician suggests ways in which I can improve my self-management”; 96.6% to 85.7% for “The advice of my clinician can be understood”; 98.9% to 82.2% for “The advice of my clinician can be trusted”; and 86.5% to 67.9% for “The advice of my clinician is consistent with other members of the diabetes team”. Diabetes educators received the highest ratings on all aspects of SDM. The mean weighted average of agreement to SDM for all consultations was 84.3%. Focus group participants reported actively seeking clinicians who practiced SDM. A lack of SDM was frequently cited as a reason for discontinuing consultation. The dominant three themes in focus group discussions were whether clinicians acknowledged patients’ expertise, encouraged patients’ autonomy, and provided advice that patients could utilize to improve self-management.
Conclusion
The majority of clinicians engaged in SDM. Young adults with type 1 diabetes prefer such clinicians. They may fail to take up recommended health services when clinicians do not practice this component of patient-centered care. Such findings have implications for patient safety, improved health outcomes, and enhanced health service delivery.
doi:10.2147/PPA.S57707
PMCID: PMC3979791  PMID: 24729690
shared decision-making; patient perspective; patient-centered care; patient autonomy; type 1 diabetes; young adults; health service delivery; glycemic control
9.  Diabetes Education: the Experiences of Young Adults with Type 1 Diabetes 
Diabetes Therapy  2014;5(1):299-321.
Introduction
Clinician-led diabetes education is a fundamental component of care to assist people with Type 1 diabetes (T1D) self-manage their disease. Recent initiatives to incorporate a more patient-centered approach to diabetes education have included recommendations to make such education more individualized. Yet there is a dearth of research that identifies patients’ perceptions of clinician-led diabetes education. We aimed to describe the experience of diabetes education from the perspective of young adults with T1D.
Methods
We designed a self-reported survey for Australian adults, aged 18–35 years, with T1D. Participants (n = 150) were recruited by advertisements through diabetes consumer-organizations. Respondents were asked to rate aspects of clinician-led diabetes education and identify sources of self-education. To expand on the results of the survey we interviewed 33 respondents in focus groups.
Results
Survey: The majority of respondents (56.0%) were satisfied with the amount of continuing clinician-led diabetes education; 96.7% sought further self-education; 73.3% sourced more diabetes education themselves than that provided by their clinicians; 80.7% referred to diabetes organization websites for further education; and 30.0% used online chat-rooms and blogs for education. Focus groups: The three key themes that emerged from the interview data were deficiencies related to the pedagogy of diabetes education; knowledge deficiencies arising from the gap between theoretical diabetes education and practical reality; and the need for and problems associated with autonomous and peer-led diabetes education.
Conclusion
Our findings indicate that there are opportunities to improve clinician-led diabetes education and patient outcomes by enhancing autonomous health-literacy skills and by incorporating peer-led diabetes education and support alongside clinician-led education. The results provide evidence for the potential value of patient engagement in quality improvement and health-service redesign.
Electronic supplementary material
The online version of this article (doi:10.1007/s13300-014-0056-0) contains supplementary material, which is available to authorized users.
doi:10.1007/s13300-014-0056-0
PMCID: PMC4065294  PMID: 24519150
Diabetes education; Endocrinology; Patient-centered care; Patient education; Patient perspective; Qualitative research; Type 1 diabetes; Young adults
10.  Failure to utilize functions of an electronic prescribing system and the subsequent generation of ‘technically preventable’ computerized alerts 
Objectives
To determine the frequency with which computerized alerts occur and the proportion triggered as a result of prescribers not utilizing e-prescribing system functions.
Methods
An audit of electronic inpatient medication charts at a teaching hospital in Sydney, Australia, was conducted to identify alerts fired, to categorize the system functions used by prescribers, and to assess if use of short-cut system functions could have prevented the alerts.
Results
Of the 2209 active orders reviewed, 600 (27.2%) triggered at least one alert. Therapeutic duplication alerts were the most frequent (n=572). One third of these (20.2% of all alerts) were ‘technically preventable’ and would not have fired if prescribers had used a short-cut system function to prescribe. Under-utilized system functions included the option to ‘MODIFY’ existing orders and use of the ‘AND’ function for concurrent orders. Pregnancy alerts, set for women aged between 12 and 55 years, were triggered for 43% of drugs ordered for this group.
Conclusion
Developers of decision support systems should test the extent to which technically preventable alerts may arise when prescribers fail to use system functions as designed. Designs which aim to improve the efficiency of the prescribing process but which do not align with the cognitive processes of users may fail to achieve this desired outcome and produce unexpected consequences such as triggering unnecessary alerts and user frustration. Ongoing user training to support effective use of e-prescribing system functions and modifications to the mechanisms underlying alert generation are needed to ensure that prescribers are presented with fewer but more meaningful alerts.
doi:10.1136/amiajnl-2011-000730
PMCID: PMC3534451  PMID: 22735616
Computerized alerts; preventable alerts; electronic prescribing; human error; decision making; decision support; medicines; adverse events; evaluation; medication safety
12.  PACE – the first placebo controlled trial of paracetamol for acute low back pain: statistical analysis plan 
Trials  2013;14:248.
Background
Paracetamol (acetaminophen) is recommended in most clinical practice guidelines as the first choice of treatment for low back pain; however, there is limited evidence to support this recommendation. The PACE trial is the first placebo controlled trial of paracetamol for acute low back pain. This article describes the statistical analysis plan.
Results
PACE is a randomized double dummy placebo controlled trial that investigates and compares the effect of paracetamol taken in two regimens for the treatment of low back pain. The protocol has been published. The analysis plan was completed blind to study group and finalized prior to initiation of analyses. All data collected as part of the trial were reviewed, without stratification by group, and classified by baseline characteristics, process of care and trial outcomes. Trial outcomes were classified as primary and secondary outcomes. Appropriate descriptive statistics and statistical testing of between-group differences, where relevant, have been planned and described.
Conclusions
A standard analysis plan was developed for the results of the PACE study. This plan comprehensively describes the data captured and pre-determined statistical tests of relevant outcome measures. The plan demonstrates transparent and verifiable use of the data collected. This a priori plan will be followed to ensure rigorous standards of data analysis are strictly adhered to.
Trial registration
Australian New Zealand Clinical Trials Registry ACTRN12609000966291
doi:10.1186/1745-6215-14-248
PMCID: PMC3750911  PMID: 23937999
Acetaminophen; Back pain; Paracetamol; Statistical analysis plan; Randomised controlled trial
13.  PRECISE - pregabalin in addition to usual care for sciatica: study protocol for a randomised controlled trial 
Trials  2013;14:213.
Background
Sciatica is a type of neuropathic pain that is characterised by pain radiating into the leg. It is often accompanied by low back pain and neurological deficits in the lower limb. While this condition may cause significant suffering for the individual, the lack of evidence supporting effective treatments for sciatica makes clinical management difficult. Our objectives are to determine the efficacy of pregabalin on reducing leg pain intensity and its cost-effectiveness in patients with sciatica.
Methods/Design
PRECISE is a prospectively registered, double-blind, randomised placebo-controlled trial of pregabalin compared to placebo, in addition to usual care. Inclusion criteria include moderate to severe leg pain below the knee with evidence of nerve root/spinal nerve involvement. Participants will be randomised to receive either pregabalin with usual care (n = 102) or placebo with usual care (n = 102) for 8 weeks. The medicine dosage will be titrated up to the participant’s optimal dose, to a maximum 600 mg per day. Follow up consultations will monitor individual progress, tolerability and adverse events. Usual care, if deemed appropriate by the study doctor, may include a referral for physical or manual therapy and/or prescription of analgesic medication. Participants, doctors and researchers collecting participant data will be blinded to treatment allocation. Participants will be assessed at baseline and at weeks 2, 4, 8, 12, 26 and 52. The primary outcome will determine the efficacy of pregabalin in reducing leg pain intensity. Secondary outcomes will include back pain intensity, disability and quality of life. Data analysis will be blinded and by intention-to-treat. A parallel economic evaluation will be conducted from health sector and societal perspectives.
Discussion
This study will establish the efficacy of pregabalin in reducing leg pain intensity in patients with sciatica and provide important information regarding the effect of pregabalin treatment on disability and quality of life. The impact of this research may allow the future development of a cost-effective conservative treatment strategy for patients with sciatica.
Trial registration
ClinicalTrials.gov, ACTRN 12613000530729
doi:10.1186/1745-6215-14-213
PMCID: PMC3711833  PMID: 23845078
Sciatica; Pregabalin; Neuropathic pain; Randomised control trial
14.  The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals 
Objectives
To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented.
Methods
Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated.
Results
42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)).
Conclusions
System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk.
doi:10.1136/amiajnl-2013-001745
PMCID: PMC3822121  PMID: 23721982
CPOE; Prescribing errors; Unintended consequences; Information technology; Clinical information systems
16.  The influence of computerized decision support on prescribing during ward-rounds: are the decision-makers targeted? 
Objective
To assess whether a low level of decision support within a hospital computerized provider order entry system has an observable influence on the medication ordering process on ward-rounds and to assess prescribers' views of the decision support features.
Methods
14 specialty teams (46 doctors) were shadowed by the investigator while on their ward-rounds and 16 prescribers from these teams were interviewed.
Results
Senior doctors were highly influential in prescribing decisions during ward-rounds but rarely used the computerized provider order entry system. Junior doctors entered the majority of medication orders into the system, nearly always ignored computerized alerts and never raised their occurrence with other doctors on ward-rounds. Interviews with doctors revealed that some decision support features were valued but most were not perceived to be useful.
Discussion and conclusion
The computerized alerts failed to target the doctors who were making the prescribing decisions on ward-rounds. Senior doctors were the decision makers, yet the junior doctors who used the system received the alerts. As a result, the alert information was generally ignored and not incorporated into the decision-making processes on ward-rounds. The greatest value of decision support in this setting may be in non-ward-round situations where senior doctors are less influential. Identifying how prescribing systems are used during different clinical activities can guide the design of decision support that effectively supports users in different situations. If confirmed, the findings reported here present a specific focus and user group for designers of medication decision support.
doi:10.1136/amiajnl-2011-000135
PMCID: PMC3197993  PMID: 21676939
Decision support; computerized alerts; prescribing; CPOE; qualitative research; observation; interviews; human error
17.  Fractional clearance of urate: validation of measurement in spot-urine samples in healthy subjects and gouty patients 
Arthritis Research & Therapy  2012;14(4):R189.
Introduction
Hyperuricemia is the greatest risk factor for gout and is caused by an overproduction and/or inefficient renal clearance of urate. The fractional renal clearance of urate (FCU, renal clearance of urate/renal clearance of creatinine) has been proposed as a tool to identify subjects who manifest inefficient clearance of urate. The aim of the present studies was to validate the measurement of FCU by using spot-urine samples as a reliable indicator of the efficiency of the kidney to remove urate and to explore its distribution in healthy subjects and gouty patients.
Methods
Timed (spot, 2-hour, 4-hour, 6-hour, 12-hour, and 24-hour) urine collections were used to derive FCU in 12 healthy subjects. FCUs from spot-urine samples were then determined in 13 healthy subjects twice a day, repeated on 3 nonconsecutive days. The effect of allopurinol, probenecid, and the combination on FCU was explored in 11 healthy subjects. FCU was determined in 36 patients with gout being treated with allopurinol. The distribution of FCU was examined in 118 healthy subjects and compared with that from the 36 patients with gout.
Results
No substantive or statistically significant differences were observed between the FCUs derived from spot and 24-hour urine collections. Coefficients of variation (CVs) were both 28%. No significant variation in the spot FCU was obtained either within or between days, with mean intrasubject CV of 16.4%. FCU increased with probenecid (P < 0.05), whereas allopurinol did not change the FCU in healthy or gouty subjects. FCUs of patients with gout were lower than the FCUs of healthy subjects (4.8% versus 6.9%; P < 0.0001).
Conclusions
The present studies indicate that the spot-FCU is a convenient, valid, and reliable indicator of the efficiency of the kidney in removing urate from the blood and thus from tissues. Spot-FCU determinations may provide useful correlates in studies investigating molecular mechanisms underpinning the observed range of efficiencies of the kidneys in clearing urate from the blood.
Trial Registration
ACTRN12611000743965
doi:10.1186/ar4020
PMCID: PMC3580585  PMID: 22901830
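The FCU defined in the abstract above (renal clearance of urate divided by renal clearance of creatinine) reduces to a ratio of concentrations, because urine flow rate cancels out of both clearances; that cancellation is what makes a spot-urine sample sufficient. A minimal sketch, with hypothetical concentration values in the example:

```python
def fractional_clearance_urate(urine_urate, plasma_urate,
                               urine_creatinine, plasma_creatinine):
    """Fractional clearance of urate (FCU) as a percentage.

    CL_urate / CL_creatinine = (U_urate * V / P_urate) / (U_creat * V / P_creat);
    the urine flow rate V cancels, leaving a pure concentration ratio.
    Urate concentrations must share one unit, creatinine another.
    """
    clearance_ratio = (urine_urate / plasma_urate) / (urine_creatinine / plasma_creatinine)
    return 100.0 * clearance_ratio

# Hypothetical spot sample: FCU below the healthy mean of 6.9% reported
# in the abstract would be consistent with inefficient renal urate clearance.
fcu = fractional_clearance_urate(2.0, 0.40, 10.0, 0.08)
```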
18.  Nation-scale adoption of new medicines by doctors: an application of the Bass diffusion model 
Background
The adoption of new medicines is influenced by a complex set of social processes that have been widely examined in terms of individual prescribers’ information-seeking and decision-making behaviour. However, quantitative, population-wide analyses of how long it takes for new healthcare practices to become part of mainstream practice are rare.
Methods
We applied a Bass diffusion model to monthly prescription volumes of 103 often-prescribed drugs in Australia (monthly time series data totalling 803 million prescriptions between 1992 and 2010), to determine the distribution of adoption rates. Our aim was to test the utility of applying the Bass diffusion model to national-scale prescribing volumes.
Results
The Bass diffusion model was fitted to the adoption of a broad cross-section of drugs using national monthly prescription volumes from Australia (median R2 = 0.97, interquartile range 0.95 to 0.99). The median time to adoption was 8.2 years (IQR 4.9 to 12.1). The model distinguished two classes of prescribing patterns – those where adoption appeared to be driven mostly by external forces (19 drugs) and those driven mostly by social contagion (84 drugs). Those driven more prominently by internal forces were found to have shorter adoption times (p = 0.02 in a non-parametric analysis of variance by ranks).
Conclusion
The Bass diffusion model may be used to retrospectively represent the patterns of adoption exhibited in prescription volumes in Australia, and distinguishes between adoption driven primarily by external forces such as regulation, or internal forces such as social contagion. The eight-year delay between the introduction of a new medicine and the adoption of the prescribing practice suggests the presence of system inertia in Australian prescribing practices.
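The Bass model referred to above describes cumulative adoption via an external (innovation) coefficient p and an internal (imitation, or social-contagion) coefficient q. The study's fitting procedure is not reproduced here; this is a sketch of the standard Bass cumulative-adoption curve with hypothetical parameter values:

```python
import math

def bass_cumulative(t, p, q, m):
    """Cumulative adoptions at time t under the Bass diffusion model.

    p: coefficient of innovation (external influence, e.g. regulation)
    q: coefficient of imitation (internal influence / social contagion)
    m: market potential (eventual total adoption volume)

    F(t) = m * (1 - exp(-(p + q) * t)) / (1 + (q / p) * exp(-(p + q) * t))
    """
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)


# Hypothetical parameters: q >> p gives the contagion-dominated S-curve
# the study associates with most (84 of 103) drugs; t in months.
adopted = [bass_cumulative(t, p=0.01, q=0.5, m=1000.0) for t in range(121)]
```

Fitting p, q and m to a monthly prescription series (for example by non-linear least squares) is what would yield adoption times and the external- versus internal-force classification reported above.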
doi:10.1186/1472-6963-12-248
PMCID: PMC3441328  PMID: 22876867
Adoption; Diffusion of innovation; Decision-making; Prescribing behaviour; Australia; Evidence-based practice
19.  Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study 
PLoS Medicine  2012;9(1):e1001164.
In a before-and-after study, Johanna Westbrook and colleagues evaluate the change in prescribing error rates after the introduction of two commercial electronic prescribing systems in two Australian hospitals.
Background
Considerable investments are being made in commercial electronic prescribing systems (e-prescribing) in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error.
Methods and Results
We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated. Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (respectively reductions of 66.1% [95% CI 53.9%–78.3%]; 57.5% [33.8%–81.2%]; and 60.5% [48.5%–72.4%]). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23–7.28) to 2.12 (95% CI 1.71–2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30–3.93) to 1.46 (95% CI 1.20–1.73; p<0.0001). This decrease was driven by a large reduction in unclear, illegal, and incomplete orders. The Hospital A control wards experienced no significant change (respectively −12.8% [95% CI −41.1% to 15.5%]; −11.3% [−40.1% to 17.5%]; −20.1% [−52.2% to 12.4%]). There was limited change in clinical error rates, but serious errors decreased by 44% (0.25 per admission to 0.14; p = 0.0002) across the intervention wards compared to the control wards (17% reduction; 0.30–0.25; p = 0.40). 
Both hospitals experienced system-related errors (0.73 and 0.51 per admission), which accounted for 35% of postsystem errors in the intervention wards; each system was associated with different types of system-related errors.
Conclusions
Implementation of these commercial e-prescribing systems resulted in statistically significant reductions in prescribing error rates. Reductions in clinical errors were limited in the absence of substantial decision support, but a statistically significant decline in serious errors was observed. System-related errors require close attention as they are frequent, but are potentially remediable by system redesign and user training. Limitations included a lack of control wards at Hospital B and an inability to randomize wards to the intervention.
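The headline figures are simple relative rate reductions: Hospital A's fall from 6.25 to 2.12 errors per admission is a reduction of about 66%, and the fall in serious errors from 0.25 to 0.14 per admission is the 44% quoted. A trivial check (the function name is ours, not the study's):

```python
def relative_reduction_pct(before, after):
    """Relative reduction (%) in a rate from baseline to post-intervention."""
    return (before - after) / before * 100.0


hospital_a = relative_reduction_pct(6.25, 2.12)  # about 66%
serious = relative_reduction_pct(0.25, 0.14)     # the quoted 44%
```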
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medication errors—for example, prescribing the wrong drug or giving a drug by the wrong route—frequently occur in health care settings and are responsible for thousands of deaths every year. Until recently, medicines were prescribed and dispensed using systems based on hand-written scripts. In hospitals, for example, physicians wrote orders for medications directly onto a medication chart, which was then used by the nursing staff to give drugs to their patients. However, drugs are now increasingly being prescribed using electronic prescribing (e-prescribing) systems. With these systems, prescribers use a computer and order medications for their patients with the help of a drug information database and menu items, free text boxes, and prewritten orders for specific conditions (so-called passive decision support). The system reviews the patient's medication and known allergy list and alerts the physician to any potential problems, including drug interactions (active decision support). Then after the physician has responded to these alerts, the order is transmitted electronically to the pharmacy and/or the nursing staff who administer the prescription.
Why Was This Study Done?
By avoiding the need for physicians to write out prescriptions and by providing active and passive decision support, e-prescribing has the potential to reduce medication errors. But, even though many countries are investing in expensive commercial e-prescribing systems, few studies have evaluated the effects of these systems on prescribing error rates. Moreover, little is known about the interactions between system design and errors despite fears that e-prescribing might introduce new errors. In this study, the researchers analyze prescribing error rates in hospital in-patients before and after the implementation of two commercial e-prescribing systems.
What Did the Researchers Do and Find?
The researchers examined medication charts for procedural errors (unclear, incomplete, or illegal orders) and for clinical errors (for example, wrong drug or dose) at two Australian hospitals before and after the introduction of commercial e-prescribing systems. At Hospital A, the Cerner Millennium e-prescribing system was introduced on one ward; three other wards acted as controls. At Hospital B, the researchers compared the error rates on two wards before and after the introduction of the iSoft MedChart e-prescribing system. The introduction of an e-prescribing system was associated with a substantial reduction in error rates in the three intervention wards; error rates on the control wards did not change significantly during the study. At Hospital A, medication errors declined from 6.25 to 2.12 per admission after the introduction of e-prescribing whereas at Hospital B, they declined from 3.62 to 1.46 per admission. This reduction in error rates was mainly driven by a reduction in procedural error rates and there was only a limited change in overall clinical error rates. Notably, however, the rate of serious errors decreased across the intervention wards from 0.25 to 0.14 per admission (a 44% reduction), whereas the serious error rate only decreased by 17% in the control wards during the study. Finally, system-related errors (for example, selection of an inappropriate drug located on a drop-down menu next to a likely drug selection) accounted for 35% of errors in the intervention wards after the implementation of e-prescribing.
What Do These Findings Mean?
These findings show that the implementation of these two e-prescribing systems markedly reduced hospital in-patient prescribing error rates, mainly by reducing the number of incomplete, illegal, or unclear medication orders. The limited decision support built into both the e-prescribing systems used here may explain the limited reduction in clinical error rates but, importantly, both e-prescribing systems reduced serious medication errors. Finally, the high rate of system-related errors recorded in this study is worrying but is potentially remediable by system redesign and user training. Because this was a “real-world” study, it was not possible to choose the intervention wards randomly. Moreover, there was no control ward at Hospital B, and the wards included in the study had very different specialties. These and other aspects of the study design may limit the generalizability of these findings, which need to be confirmed and extended in additional studies. Even so, these findings provide persuasive evidence of the current and potential ability of commercial e-prescribing systems to reduce prescribing errors in hospital in-patients provided these systems are continually monitored and refined to improve their performance.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001164.
ClinfoWiki has pages on medication errors and on electronic prescribing (note: the Clinical Informatics Wiki is a free online resource that anyone can add to or edit)
Electronic prescribing in hospitals challenges and lessons learned describes the implementation of e-prescribing in UK hospitals; more information about e-prescribing in the UK is available on the NHS Connecting for Health Website
The Clinicians Guide to e-Prescribing provides up-to-date information about e-prescribing in the USA
Information about e-prescribing in Australia is also available
Information about electronic health records in Australia
doi:10.1371/journal.pmed.1001164
PMCID: PMC3269428  PMID: 22303286
20.  CareTrack Australia: assessing the appropriateness of adult healthcare: protocol for a retrospective medical record review 
BMJ Open  2012;2(1):e000665.
Introduction
In recent years, in keeping with international best practice, clinical guidelines for common conditions have been developed, endorsed and disseminated by peak national and professional bodies. Yet evidence suggests that considerable gaps remain between the care regarded as appropriate by such guidelines and the care patients receive. With an ageing population and increasing treatment options and expectations, healthcare is likely to become unaffordable unless more appropriate care is provided. This paper describes a study protocol that seeks to determine the percentage of healthcare encounters in which patients receive appropriate care for 22 common clinical conditions, and the reasons why variations exist from the perspectives of both patients and providers.
Methods/design
A random stratified sample of at least 1000 eligible participants will be recruited from a representative cross section of the adult Australian population. Participants' medical records from the years 2009 and 2010 will be audited to assess the appropriateness of the care received for 22 common clinical conditions by determining the percentage of healthcare encounters at which the care provided was concordant with a set of 522 indicators of care, developed for these conditions by a panel of 43 disease experts. The knowledge, attitudes and beliefs of participants and healthcare providers will be examined through interviews and questionnaires to understand the factors influencing variations in care.
Ethics and dissemination
Primary ethics approvals were sought and obtained from the Hunter New England Local Health Network. The authors will submit the results of the study to a relevant journal and will present the findings orally to researchers, clinicians and policymakers.
Article summary
Article focus
What is the percentage of healthcare encounters at which Australians receive appropriate care?
What influences variations in care from the perspectives of patients and healthcare providers?
Key messages
A protocol for a population-based study of appropriate care of 1000 patients using medical record review.
Strengths and limitations of this study
Obtaining a snapshot of care with a consistent method across 522 indicators for 22 common conditions is a strength; however, statistical power is limited for diagnostic indicators, because each presents only once per patient.
The potential attrition rate of healthcare providers and telephone recruitment of participants may introduce selection biases.
doi:10.1136/bmjopen-2011-000665
PMCID: PMC3263440  PMID: 22262806
21.  Errors and electronic prescribing: a controlled laboratory study to examine task complexity and interruption effects 
Objective
To examine the effect of interruptions and task complexity on error rates when prescribing with computerized provider order entry (CPOE) systems, and to categorize the types of prescribing errors.
Design
Two within-subject factors: task complexity (complex vs simple) and interruption (interruption vs no interruption). Thirty-two hospital doctors used a CPOE system in a computer laboratory to complete four prescribing tasks, half of which were interrupted using a counterbalanced design.
Measurements
Types of prescribing errors, error rate, resumption lag, and task completion time.
Results
Errors measured in creating and updating electronic medication charts included failure to enter allergy information; selection of incorrect medication, dose, route, formulation, or frequency of administration from lists and drop-down menus presented by the CPOE system; incorrect entry or omission in entering administration times, start date, and free-text qualifiers; and omissions in prescribing and ceasing medications. When errors occurred, the error rates across the four prescribing tasks ranged from 0.5% (1 incorrect medication selected out of 192 error opportunities) to 16% (5 failures to enter allergy information out of 32 error opportunities). No impact of interruptions on prescribing error rates or task completion times was detected in our experiment. However, complex tasks took significantly longer to complete (F(1, 27)=137.9; p<0.001) and, when execution was interrupted, they required almost three times longer to resume than simple tasks (resumption lag complex=9.6 seconds, SD=5.6; resumption lag simple=3.4 seconds, SD=1.7; t(28)=6.186; p<0.001).
Conclusion
Most electronic prescribing errors found in this study could be described as slips in using the CPOE system to create and update electronic medication charts. Cues available within the user interface may have aided resumption of interrupted tasks making CPOE systems robust to some interruption effects. Further experiments are required to rule out any effect interruption might have on CPOE error rates.
doi:10.1136/jamia.2009.001719
PMCID: PMC2995669  PMID: 20819867
22.  A proposal for identifying the low renal uric acid clearance phenotype 
Investigation of the genetic basis of hyperuricaemia is a subject of intense interest. However, clinical studies commonly include hyperuricaemic patients without distinguishing between 'over-producers' and 'under-excretors' of urate. The statistical power of studies of genetic polymorphisms of genes encoding renal urate transporters is diluted if 'over-producers' of uric acid are included. We propose that lower than normal fractional renal clearance of urate is a better inclusion criterion for these studies. We also propose that a single daytime spot urine sample for calculation of fractional renal clearance of urate should be preferred to calculation from 24-hour urine collections.
doi:10.1186/ar3191
PMCID: PMC3046522  PMID: 21162713
23.  Pharmacokinetic and pharmacodynamic interactions of echinacea and policosanol with warfarin in healthy subjects 
AIMS
This study investigated the pharmacokinetic and pharmacodynamic interactions of echinacea and policosanol with warfarin in healthy subjects.
METHODS
This was an open-label, randomized, three-treatment, cross-over, clinical trial in healthy male subjects (n= 12) of known CYP2C9 and VKORC1 genotype who received a single oral dose of warfarin alone or after 2 weeks of pre-treatment with each herbal medicine at recommended doses. Pharmacodynamic (INR, platelet activity) and pharmacokinetic (warfarin enantiomer concentrations) end points were evaluated.
RESULTS
The apparent clearance of (S)-warfarin (90% CI of ratio; 1.01, 1.18) was significantly higher during concomitant treatment with echinacea but this did not lead to a clinically significant change in INR (90% CI of AUC of INR; 0.91, 1.31). Policosanol did not significantly affect warfarin enantiomer pharmacokinetics or warfarin response. Neither echinacea nor policosanol had a significant effect on platelet aggregation after 2 weeks of pre-treatment with the respective herbal medicines.
CONCLUSION
Echinacea significantly reduced plasma concentrations of (S)-warfarin. However, neither echinacea nor policosanol significantly affected warfarin pharmacodynamics, platelet aggregation or baseline clotting status in healthy subjects.
doi:10.1111/j.1365-2125.2010.03620.x
PMCID: PMC2856051  PMID: 20573086
echinacea; herb-drug interaction; pharmacodynamic; pharmacokinetic; policosanol; warfarin
24.  PACE - The first placebo controlled trial of paracetamol for acute low back pain: design of a randomised controlled trial 
Background
Clinical practice guidelines recommend that the initial treatment of acute low back pain (LBP) should consist of advice to stay active and regular simple analgesics such as paracetamol 4 g daily. Despite this recommendation in all international LBP guidelines, there are no placebo controlled trials assessing the efficacy of paracetamol for LBP at any dose or dose regimen. This study aims to determine whether 4 g of paracetamol daily (in divided doses) results in a more rapid recovery from acute LBP than placebo. A secondary aim is to determine if ingesting paracetamol in a time-contingent manner is more effective than paracetamol taken as required (PRN) for recovery from acute LBP.
Methods/Design
The study is a randomised double dummy placebo controlled trial. A total of 1650 care-seeking people with significant acute LBP will be recruited. All participants will receive advice to stay active and will be randomised to 1 of 3 treatment groups: time-contingent paracetamol dose regimen (plus placebo PRN paracetamol), PRN paracetamol (plus placebo time-contingent paracetamol) or a double placebo study arm. The primary outcome will be time (days) to recovery from pain recorded in a daily pain diary. Other outcomes will be pain intensity, disability, function, global perceived effect and sleep quality, captured at baseline and at weeks 1, 2, 4 and 12 by an assessor blind to treatment allocation. An economic analysis will be conducted to determine the cost-effectiveness of treatment from the health sector and societal perspectives.
Discussion
The successful completion of the trial will provide the first high-quality evidence on the effectiveness of paracetamol, a guideline-endorsed treatment for acute LBP.
Trial registration
ACTRN12609000966291.
doi:10.1186/1471-2474-11-169
PMCID: PMC2918542  PMID: 20650012
25.  A double-blind, placebo-controlled study of the short term effects of a spring water supplemented with magnesium bicarbonate on acid/base balance, bone metabolism and cardiovascular risk factors in postmenopausal women 
BMC Research Notes  2010;3:180.
Background
A number of health benefits including improvements in acid/base balance, bone metabolism, and cardiovascular risk factors have been attributed to the intake of magnesium rich alkaline mineral water. This study was designed to investigate the effects of the regular consumption of magnesium bicarbonate supplemented spring water on pH, biochemical parameters of bone metabolism, lipid profile and blood pressure in postmenopausal women.
Findings
In this double-blind, placebo-controlled, parallel-group, study, 67 postmenopausal women were randomised to receive between 1500 mL and 1800 mL daily of magnesium bicarbonate supplemented spring water (650 mg/L bicarbonate, 120 mg/L magnesium, pH 8.3-8.5) (supplemented water group) or spring water without supplements (control water group) over 84 days. Over this period, biomarkers of bone turnover (serum parathyroid hormone (PTH), 1,25-dihydroxyvitamin D, osteocalcin, urinary telopeptides and hydroxyproline), serum lipids (total cholesterol, HDL-cholesterol, LDL-cholesterol and triglycerides), and venous and urinary pH were measured, together with standard biochemistry, haematology and urine examinations.
Serum magnesium concentrations and urinary pH in subjects consuming the magnesium bicarbonate supplemented water increased significantly at Day 84 compared to subjects consuming the spring water control (magnesium - p = 0.03; pH - p = 0.018). Consumption of the control spring water led to a trend toward increased PTH concentrations, whereas PTH concentrations remained stable with intake of the supplemented water. However, there were no significant effects of magnesium bicarbonate supplementation on biomarkers of bone mineral metabolism (n-telopeptides, hydroxyproline, osteocalcin and 1,25-dihydroxyvitamin D), serum lipids or blood pressure in postmenopausal women from Day 0 to Day 84.
Conclusions
Short term regular ingestion of magnesium bicarbonate supplemented water provides a source of orally available magnesium. Long term clinical studies are required to investigate any health benefits.
Trial registration
ACTRN12609000863235
doi:10.1186/1756-0500-3-180
PMCID: PMC2908636  PMID: 20579398
