Results 1-10 (10)
 

1.  The shortened disabilities of the arm, shoulder and hand questionnaire (QuickDASH): validity and reliability based on responses within the full-length DASH 
Background
The 30-item disabilities of the arm, shoulder and hand (DASH) questionnaire is increasingly used in clinical research involving upper extremity musculoskeletal disorders. From the original DASH a shorter version, the 11-item QuickDASH, has been developed. Little is known about the discriminant ability of score changes for the QuickDASH compared to the DASH. The aim of this study was to assess the performance of the QuickDASH and its cross-sectional and longitudinal validity and reliability.
Methods
The study was based on extracting QuickDASH item responses from the responses to the full-length DASH questionnaire completed by 105 patients with a variety of upper extremity disorders before surgery and at follow-up 6 to 21 months after surgery. The DASH and QuickDASH scores were compared for the whole population and for different diagnostic groups. For longitudinal construct validity the effect size and standardized response mean were calculated. Analyses with ROC curves were performed to compare the ability of the DASH and QuickDASH to discriminate among patients classified according to the magnitude of self-rated improvement. Cross-sectional and test-retest reliability was assessed.
Results
The mean DASH score was 34 (SD 22) and the mean QuickDASH score was 39 (SD 24) at baseline. For the different diagnostic groups the mean and median QuickDASH scores were higher than the corresponding DASH scores. For the whole population, the mean difference between the QuickDASH and DASH scores was 4.2 (95% CI 3.2–5.3) at baseline, 2.6 (1.7–3.4) at follow-up, and 1.7 (0.6–2.8) for change scores.
The overall effect size and standardized response mean measured with the DASH and the QuickDASH were similar. In the ROC analysis of change scores, comparing patients who rated their arm status as somewhat or much better with those who rated it as unchanged, the difference in the area under the ROC curve between the DASH and QuickDASH was 0.01 (95% CI -0.05–0.07), indicating similar discriminant ability.
Cross-sectional and test-retest reliability of the DASH and QuickDASH were similar.
Conclusion
The results indicate that the QuickDASH can be used instead of the DASH with similar precision in upper extremity disorders.
doi:10.1186/1471-2474-7-44
PMCID: PMC1513569  PMID: 16709254
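Both instruments compared above are scored the same way: the mean of the answered 1–5 item responses, rescaled to 0–100. A minimal sketch — the missing-item rule shown is the commonly cited ≤10%-of-items convention, an assumption not stated in the abstract:

```python
def dash_score(responses, max_missing=3):
    """Score a DASH-style scale: items answered 1 (no difficulty)
    to 5 (unable); result rescaled to 0-100 (100 = worst disability).

    max_missing: assumed ~10%-of-items rule (3 of 30 for the DASH,
    1 of 11 for the QuickDASH); returns None if exceeded.
    """
    answered = [r for r in responses if r is not None]
    if len(responses) - len(answered) > max_missing:
        return None  # too many missing items to score
    mean_item = sum(answered) / len(answered)
    return (mean_item - 1) * 25

print(dash_score([1] * 30))                          # 0.0 (no disability)
print(dash_score([5] * 30))                          # 100.0 (worst)
print(dash_score([3] * 10 + [None], max_missing=1))  # 50.0 (QuickDASH-style)
```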
2.  The disabilities of the arm, shoulder and hand (DASH) outcome questionnaire: longitudinal construct validity and measuring self-rated health change after surgery 
Background
The disabilities of the arm, shoulder and hand (DASH) questionnaire is a self-administered region-specific outcome instrument developed as a measure of self-rated upper-extremity disability and symptoms. The DASH consists mainly of a 30-item disability/symptom scale, scored 0 (no disability) to 100. The main purpose of this study was to assess the longitudinal construct validity of the DASH among patients undergoing surgery. The second purpose was to quantify self-rated treatment effectiveness after surgery.
Methods
The longitudinal construct validity of the DASH was evaluated in 109 patients having surgical treatment for a variety of upper-extremity conditions, by assessing preoperative-to-postoperative (6–21 months) change in DASH score and calculating the effect size and standardized response mean. The magnitude of score change was also analyzed in relation to patients' responses to an item regarding self-perceived change in the status of the arm after surgery. Performance of the DASH as a measure of treatment effectiveness was assessed after surgery for subacromial impingement and carpal tunnel syndrome by calculating the effect size and standardized response mean.
Results
Among the 109 patients, the mean (SD) DASH score preoperatively was 35 (22) and postoperatively 24 (23) and the mean score change was 15 (13). The effect size was 0.7 and the standardized response mean 1.2.
The mean change (95% confidence interval) in DASH score for the patients reporting the status of the arm as "much better" or "much worse" after surgery was 19 (15–23) and for those reporting it as "somewhat better" or "somewhat worse" was 10 (7–14) (p = 0.01). In measuring effectiveness of arthroscopic acromioplasty the effect size was 0.9 and standardized response mean 0.5; for carpal tunnel surgery the effect size was 0.7 and standardized response mean 1.0.
Conclusion
The DASH can detect and differentiate small and large changes of disability over time after surgery in patients with upper-extremity musculoskeletal disorders. A 10-point difference in mean DASH score may be considered as a minimal important change. The DASH can show treatment effectiveness after surgery for subacromial impingement and carpal tunnel syndrome. The effect size and standardized response mean may yield substantially differing results.
doi:10.1186/1471-2474-4-11
PMCID: PMC165599  PMID: 12809562
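The effect size and standardized response mean reported above are simple ratios: mean score change divided by the baseline SD and by the SD of the change scores, respectively. A sketch using the abstract's own figures (mean change 15, baseline SD 22, change SD 13):

```python
def effect_size(mean_change, sd_baseline):
    # Effect size: mean score change / SD of the baseline scores
    return mean_change / sd_baseline

def standardized_response_mean(mean_change, sd_change):
    # SRM: mean score change / SD of the change scores
    return mean_change / sd_change

# Figures from the abstract: mean change 15, baseline SD 22, change SD 13
print(round(effect_size(15, 22), 1))                 # 0.7, as reported
print(round(standardized_response_mean(15, 13), 1))  # 1.2, as reported
```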
3.  Intervention randomized controlled trials involving wrist and shoulder arthroscopy: a systematic review 
Background
Although arthroscopy of upper extremity joints was initially a diagnostic tool, it is increasingly used for therapeutic interventions. Randomized controlled trials (RCTs) are considered the gold standard for assessing treatment efficacy. We aimed to review the literature for intervention RCTs involving wrist and shoulder arthroscopy.
Methods
We performed a systematic review for RCTs in which at least one arm was an intervention performed through wrist arthroscopy or shoulder arthroscopy. PubMed and Cochrane Library databases were searched up to December 2012. Two researchers reviewed each article and recorded the condition treated, randomization method, number of randomized participants, time of randomization, outcome measures, blinding, and description of dropouts and withdrawals. We used the modified Jadad scale, which considers the randomization method, blinding, and dropouts/withdrawals, scored from 0 (lowest quality) to 5 (highest quality). The scores for the wrist and shoulder RCTs were compared with the Mann–Whitney test.
Results
The first references to both wrist and shoulder arthroscopy appeared in the late 1970s. The search found 4 wrist arthroscopy intervention RCTs (Kienböck’s disease, dorsal wrist ganglia, volar wrist ganglia, and distal radius fracture; first 3 compared arthroscopic with open surgery). The median number of participants was 45. The search found 50 shoulder arthroscopy intervention RCTs (rotator cuff tears 22, instability 14, impingement 9, and other conditions 5). Of these, 31 compared different arthroscopic treatments, 12 compared arthroscopic with open treatment, and 7 compared arthroscopic with nonoperative treatment. The median number of participants was 60. The median modified Jadad score for the wrist RCTs was 0.5 (range 0–1) and for the shoulder RCTs 3.0 (range 0–5) (p = 0.012).
Conclusion
Despite the increasing use of wrist arthroscopy in the treatment of various wrist disorders, the efficacy of arthroscopically performed wrist interventions has been studied in only 4 randomized studies, compared with 50 randomized studies of significantly higher quality assessing interventions performed through shoulder arthroscopy.
doi:10.1186/1471-2474-15-252
PMCID: PMC4123827  PMID: 25059881
Arthroscopy; Wrist; Shoulder; Randomized trials; Jadad scale; Intervention RCT; Systematic review
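The modified Jadad scale used in the review scores randomization, blinding, and dropouts/withdrawals from 0 to 5. The sketch below implements the classic Jadad items as an approximation; the review's exact modification is not detailed in the abstract, so the scoring rules here are assumptions:

```python
def jadad_score(randomized, rand_method_appropriate, rand_method_inappropriate,
                double_blind, blind_method_appropriate, blind_method_inappropriate,
                dropouts_described):
    """Classic Jadad quality score, 0 (lowest) to 5 (highest).

    One point each for randomization, double-blinding, and a description
    of dropouts/withdrawals; +1/-1 for an appropriate/inappropriate
    randomization or blinding method. The review's 'modified' scale
    weighs the same three domains but may differ in detail.
    """
    score = 0
    if randomized:
        score += 1
        if rand_method_appropriate:
            score += 1
        elif rand_method_inappropriate:
            score -= 1
    if double_blind:
        score += 1
        if blind_method_appropriate:
            score += 1
        elif blind_method_inappropriate:
            score -= 1
    if dropouts_described:
        score += 1
    return max(score, 0)

print(jadad_score(True, True, False, True, True, False, True))  # 5 (best)
```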
4.  Bleeding and first-year mortality following hip fracture surgery and preoperative use of low-dose acetylsalicylic acid: an observational cohort study 
Background
Hip fracture is associated with high mortality. Cardiovascular disease and other comorbidities requiring long-term anticoagulant medication are common in these mostly elderly patients. The objective of our observational cohort study of patients undergoing surgery for hip fracture was to study the association between preoperative use of low-dose acetylsalicylic acid (LdAA) and intraoperative blood loss, blood transfusion and first-year all-cause mortality.
Methods
An observational cohort study was conducted on patients with hip fracture (cervical requiring hemiarthroplasty, or pertrochanteric or subtrochanteric requiring internal fixation) participating in a randomized trial that found a compression bandage ineffective in reducing postoperative bleeding. The participants were 255 patients (≥50 years), of whom 118 (46%) were using LdAA (defined as ≤320 mg daily) preoperatively. Bleeding variables were measured in patients with and without LdAA treatment at the time of fracture, and the blood transfusions given were compared using logistic regression. The association between first-year mortality and preoperative use of LdAA was analyzed with Cox regression, adjusting for age, sex, type of fracture, baseline renal dysfunction, and baseline cardiovascular and/or cerebrovascular disease.
Results
Blood transfusions were given postoperatively to 74 (62.7%) LdAA-treated and 76 (54%) non-treated patients; the adjusted odds ratio was 1.8 (95% CI 1.04 to 3.3). First-year mortality was significantly higher in LdAA-treated patients; the adjusted hazard ratio (HR) was 2.35 (95% CI 1.23 to 4.49). The mortality was also higher with baseline cardiovascular and/or cerebrovascular disease, adjusted HR 2.78 (95% CI 1.31 to 5.88). Patients treated with LdAA preoperatively were significantly more likely to suffer thromboembolic events (5.7% vs. 0.7%, P = 0.03).
Conclusions
In patients with hip fracture (cervical treated with hemiarthroplasty or pertrochanteric or subtrochanteric treated with internal fixation) preoperative use of low-dose acetylsalicylic acid was associated with significantly increased need for postoperative blood transfusions and significantly higher all-cause mortality during one year after surgery.
doi:10.1186/1471-2474-12-254
PMCID: PMC3220640  PMID: 22059476
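From the transfusion counts reported above (74 of 118 LdAA-treated patients and 76 of the remaining 137 non-treated patients), a crude odds ratio with a Woolf (log-based) confidence interval can be sketched. Note that the abstract's odds ratio of 1.8 is adjusted for covariates, so the crude value below differs:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with a Woolf (log) 95% CI.
    a, b: exposed with/without the outcome; c, d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts taken at face value from the abstract; the crude OR (~1.35)
# differs from the reported adjusted OR of 1.8.
or_, lo, hi = odds_ratio_ci(74, 118 - 74, 76, 137 - 76)
print(round(or_, 2))  # 1.35
```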
5.  Relationship between distal radius fracture malunion and arm-related disability: A prospective population-based cohort study with 1-year follow-up 
Background
Distal radius fracture is a common injury and may result in substantial dysfunction and pain. The purpose was to investigate the relationship between distal radius fracture malunion and arm-related disability.
Methods
The prospective population-based cohort study included 143 consecutive patients above 18 years with an acute distal radius fracture treated with closed reduction and either a cast (55 patients) or external and/or percutaneous pin fixation (88 patients). The patients were evaluated with the disabilities of the arm, shoulder and hand (DASH) questionnaire at baseline (concerning disabilities before fracture) and one year after fracture. The 1-year follow-up included the SF-12 health status questionnaire and clinical and radiographic examinations. Patients were classified into three hypothesized severity categories based on fracture malunion: no malunion; malunion involving either dorsal tilt (>10 degrees) or ulnar variance (≥1 mm); and combined malunion involving both dorsal tilt and ulnar variance. Multivariate regression analyses were performed to determine the relationship between the 1-year DASH score and malunion, and the relative risk (RR) of a DASH score ≥15 and the number needed to harm (NNH) were calculated.
Results
The mean DASH score at one year after fracture increased significantly, by at least 10 points, with each malunion severity category. The RR for persistent disability was 2.5 if the fracture healed with malunion involving either dorsal tilt or ulnar variance and 3.7 if the fracture healed with combined malunion. The NNH was 2.5 (95% CI 1.8-5.4). Malunion had a statistically significant relationship with worse SF-12 score (physical health) and grip strength.
Conclusion
Malunion after distal radius fracture was associated with higher arm-related disability regardless of age.
doi:10.1186/1471-2474-12-9
PMCID: PMC3032765  PMID: 21232088
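The relative risk and number needed to harm above follow from absolute risks, which the abstract does not report. The sketch below uses illustrative risks chosen only so that the reported summary values (RR ≈ 2.5, NNH = 2.5) are reproduced; they are assumptions, not figures from the study:

```python
def relative_risk(risk_exposed, risk_unexposed):
    # RR: risk of the outcome in the exposed / risk in the unexposed
    return risk_exposed / risk_unexposed

def number_needed_to_harm(risk_exposed, risk_unexposed):
    # NNH: 1 / absolute risk increase
    return 1.0 / (risk_exposed - risk_unexposed)

# Illustrative risks only (the abstract reports no absolute risks),
# chosen so that RR ~2.5 and NNH = 2.5 match the reported summaries.
print(round(relative_risk(0.667, 0.267), 1))          # 2.5
print(round(number_needed_to_harm(0.667, 0.267), 1))  # 2.5
```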
6.  Carpal tunnel syndrome and the use of computer mouse and keyboard: A systematic review 
Background
This review examines evidence for an association between computer work and carpal tunnel syndrome (CTS).
Methods
A systematic review of studies of computer work and CTS was performed. Supplementary, longitudinal studies of low force, repetitive work and CTS, and studies of possible pathophysiological mechanisms were evaluated.
Results
Eight epidemiological studies of the association between computer work and CTS were identified. All eight studies had one or more limitations, including imprecise exposure and outcome assessment, low statistical power, or potentially serious biases. In three of the studies an exposure-response association was observed, but because of possible misclassification no firm conclusions could be drawn. Three of the studies found risks below 1. Three longitudinal studies of repetitive, low-force non-computer work were also reviewed, but they did not add evidence for an association. Measurements of carpal tunnel pressure (CTP) under conditions typically observed among computer users showed pressure values below levels considered harmful. During actual mouse use, however, one study showed an increase of CTP to potentially harmful levels. The long-term effects of prolonged or repeatedly increased pressures at these levels are not known.
Conclusion
There is insufficient epidemiological evidence that computer work causes CTS.
doi:10.1186/1471-2474-9-134
PMCID: PMC2569035  PMID: 18838001
7.  Local steroid injection for moderately severe idiopathic carpal tunnel syndrome: Protocol of a randomized double-blind placebo-controlled trial (NCT 00806871) 
Background
Patients with idiopathic carpal tunnel syndrome (CTS) are commonly treated with steroid injection into or proximal to the carpal tunnel. However, evidence for its efficacy beyond one month has not been established in randomized placebo-controlled trials. The primary aim of this randomized trial is to assess the efficacy of steroid injection into the carpal tunnel in relieving symptoms of CTS in patients whose symptoms are severe enough to warrant surgical treatment but who have not previously been treated with steroid injection.
Methods/Design
The study is a randomized double-blind placebo-controlled trial. Patients referred to one orthopedic department because of CTS are screened. Eligibility criteria are age 18 to 70 years, clinical diagnosis of primary idiopathic CTS and abnormal nerve conduction tests or clinical diagnosis made independently by two orthopedic surgeons, failed treatment with wrist splinting, symptom severity of such magnitude that the patient is willing to undergo surgery, no severe sensory loss or thenar muscle atrophy, and no previous steroid injection for CTS. A total of 120 patients will be randomized to injection of 80 mg methylprednisolone, 40 mg methylprednisolone, or normal saline, each also containing 10 mg lidocaine. Evaluation at baseline and at 5, 10, 24 and 52 weeks after injection includes validated questionnaires (CTS symptom severity scale, QuickDASH and SF-6D), adverse events, physical examination by a blinded assessor, and nerve conduction tests. The primary outcome measures are change in the CTS symptom severity score at 10 weeks and the rate of surgery at 52 weeks. The secondary outcome measures are the score change in the CTS symptom severity scale at 52 weeks, time to surgery, and change in QuickDASH and SF-6D scores and patient satisfaction at 10 and 52 weeks. The primary analysis will be carried out using mixed model analysis of repeated measures.
Discussion
This paper describes the rationale and design of a double-blind, randomized placebo-controlled trial that aims to determine the efficacy of two different doses of steroid injected into the carpal tunnel in patients with moderately severe idiopathic CTS.
Trial registration
Clinicaltrials.gov identifier NCT00806871
doi:10.1186/1471-2474-11-76
PMCID: PMC2868793  PMID: 20409331
8.  Preventing knee injuries in adolescent female football players – design of a cluster randomized controlled trial [NCT00894595] 
Background
Knee injuries in football are common regardless of age, gender or playing level, but adolescent females seem to have the highest risk. The consequences after severe knee injury, for example anterior cruciate ligament (ACL) injury, are well-known, but less is known about knee injury prevention. We have designed a cluster randomized controlled trial (RCT) to evaluate the effect of a warm-up program aimed at preventing acute knee injury in adolescent female football.
Methods
In this cluster randomized trial 516 teams (309 clusters) in eight regional football districts in Sweden with female players aged 13–17 years were randomized into an intervention group (260 teams) or a control group (256 teams). The teams in the intervention group were instructed to do a structured warm-up program at two training sessions per week throughout the 2009 competitive season (April to October) and those in the control group were informed to train and play as usual. Sixty-eight sports physical therapists are assigned to the clubs to assist both groups in data collection and to examine the players' acute knee injuries during the study period. Three different forms are used in the trial: (1) baseline player data form collected at the start of the trial, (2) computer-based registration form collected every month, on which one of the coaches/team leaders documents individual player exposure, and (3) injury report form on which the study therapists report acute knee injuries resulting in time loss from training or match play. The primary outcome is the incidence of ACL injury and the secondary outcomes are the incidence of any acute knee injury (except contusion) and incidence of severe knee injury (defined as injury resulting in absence of more than 4 weeks). Outcome measures are assessed after the end of the 2009 season.
Discussion
Prevention of knee injury is beneficial for players, clubs, insurance companies, and society. If the warm-up program is proven to be effective in reducing the incidence of knee injury, it can have a major impact by reducing the future knee injury burden in female football as well as the negative long-term disabilities associated with knee injury.
Trial registration
NCT00894595
doi:10.1186/1471-2474-10-75
PMCID: PMC2711921  PMID: 19545453
9.  Incidence and characteristics of distal radius fractures in a southern Swedish region 
Background
The incidence of distal radius fracture has increased substantially during the last 50 years according to several studies that estimated the overall incidence in various general populations. The incidence of fracture classified according to severity has not been well documented. The aim of this population-based study was to estimate the overall and type-specific incidence rates of distal radius fracture in a representative population in southern Sweden.
Methods
During 2001, all persons older than 18 years with acute distal radius fracture in the southern Swedish region of Northeastern Scania were prospectively recorded. A radiologist classified the fractures according to the AO system and measured volar tilt and ulnar variance. A fracture with volar tilt outside a range of -5° to 20° and/or ulnar variance of 2 mm or greater was defined as displaced.
Results
335 persons with acute distal radius fracture were recorded during the 1-year period. The overall incidence rate was 26 (95% confidence interval 23–29) per 10,000 person-years. Among women the incidence rate increased rapidly from the age of 50 and reached a peak of 119 per 10,000 person-years in women 80 years and older. The incidence rate among women 50 to 79 years old (56 per 10,000 person-years) was lower than that reported in previous studies of similar populations. Among men the incidence rate was low until the age of 80 years, when it increased to 28 per 10,000 person-years. Fractures classified as AO type A comprised about 80% of the fractures in women and 64% in men. Almost two-thirds of all fractures were displaced, and among men and women 80 years and older more than 80% of the fractures were displaced.
Conclusion
The incidence rate of distal radius fracture in women 50 to 79 years old was lower than previously reported, which may indicate declining incidence in this group. In both sexes, the incidence was highest in the age group of 80 years and older. With a growing number of elderly in the general population, the impact of distal radius fracture in the future may be considerable.
doi:10.1186/1471-2474-8-48
PMCID: PMC1904215  PMID: 17540030
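The overall rate above (26 per 10,000 person-years, 95% CI 23–29, from 335 fractures) is consistent with a normal-approximation confidence interval for a Poisson count. The person-years figure below is back-calculated from the reported rate, not stated in the abstract:

```python
import math

def incidence_rate_ci(events, person_years, per=10_000, z=1.96):
    """Incidence rate per `per` person-years with a normal-approximation
    95% CI: rate * (1 +/- z / sqrt(events))."""
    rate = events / person_years * per
    half = z / math.sqrt(events)
    return rate, rate * (1 - half), rate * (1 + half)

# ~128,800 person-years is back-calculated from the reported rate
# (335 fractures at 26 per 10,000 person-years); it is an assumption.
rate, lo, hi = incidence_rate_ci(335, 128_800)
print(round(rate), round(lo), round(hi))  # 26 23 29, matching the abstract
```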
10.  Diagnostic properties of nerve conduction tests in population-based carpal tunnel syndrome 
Background
Numerous nerve conduction tests are used for the electrodiagnosis of carpal tunnel syndrome (CTS), with a wide range of sensitivity and specificity reported for each test in clinical studies. The tests have not been assessed in population-based studies. Such information would be important when using electrodiagnosis in epidemiologic research. The purpose of this study was to compare the diagnostic accuracy of various nerve conduction tests in population-based CTS and determine the properties of the most accurate test.
Methods
In a population-based study a questionnaire was mailed to a random sample of 3,000 persons. Of 2,466 responders, 262 symptomatic (numbness/tingling in the radial fingers) and 125 randomly selected asymptomatic responders underwent clinical and electrophysiologic examinations. A standardized hand diagram was administered to the symptomatic persons. At the clinical examination, the examining surgeon identified 94 symptomatic persons as having clinically certain CTS. Nerve conduction tests were then performed on the symptomatic and the asymptomatic persons by blinded examiners. Analysis with receiver operating characteristic (ROC) curves was used to compare the diagnostic accuracy of the nerve conduction tests in distinguishing the persons with clinically certain CTS from the asymptomatic persons.
Results
No difference was shown in the diagnostic accuracy of median nerve distal motor latency, digit-wrist sensory latency, wrist-palm sensory conduction velocity, and wrist-palm/forearm sensory conduction velocity ratio (area under curve, 0.75–0.76). Median-ulnar digit-wrist sensory latency difference had a significantly higher diagnostic accuracy (area under curve, 0.80). Using the optimal cutoff value of 0.8 ms for abnormal sensory latency difference shown on the ROC curve, the sensitivity was 70%, specificity 82%, positive predictive value 19%, and negative predictive value 98%. Based on the clinical diagnosis among the symptomatic persons, the hand diagram (classified as classic/probable or possible/unlikely CTS) had high sensitivity but poor specificity.
Conclusions
Using the clinical diagnosis of CTS as the criterion standard, nerve conduction tests had moderate sensitivity and specificity and a low positive predictive value in population-based CTS. Measurement of median-ulnar sensory latency difference had the highest diagnostic accuracy. The performance of nerve conduction tests in population-based CTS does not necessarily apply to their performance in clinical settings.
doi:10.1186/1471-2474-4-9
PMCID: PMC156649  PMID: 12734018
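The predictive values above depend on the prevalence of clinically certain CTS in the sample. Applying Bayes' rule with a back-calculated prevalence of about 5.7% (an assumption chosen so the reported figures fit; the abstract does not state the prevalence directly) reproduces the reported PPV and NPV:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    p = prevalence
    ppv = sensitivity * p / (sensitivity * p + (1 - specificity) * (1 - p))
    npv = specificity * (1 - p) / (specificity * (1 - p) + (1 - sensitivity) * p)
    return ppv, npv

# Prevalence ~5.7% is back-calculated, not stated in the abstract.
ppv, npv = predictive_values(0.70, 0.82, 0.057)
print(round(ppv, 2), round(npv, 2))  # 0.19 0.98, as reported
```

The low PPV at this prevalence illustrates the abstract's point: a test with moderate sensitivity and specificity performs differently in a population sample than in a clinic, where prevalence is higher.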
