Introduction: Claudication is a common complaint of elderly patients. Lumbar spinal stenosis (LSS) and peripheral arterial disease (PAD) are the 2 main etiologies, producing neurogenic and vascular claudication, respectively. Physicians initially diagnose claudication based on a “typical” symptom profile. The reliability of this symptom profile to accurately diagnose LSS or PAD as a cause of claudication is unknown, leading to the potentially unnecessary use of expensive and overly sensitive imaging modalities. Furthermore, clinicians rely on this symptom profile when directing treatment for patients with concurrent imaging positive for LSS and PAD. This study evaluates the reliability of various symptom attributes, which classically have been used to characterize and differentiate the two. Methods: Patients presenting at a tertiary care centre’s vascular or spine clinics with a primary complaint of claudication were enrolled in the study. A diagnosis of either LSS or PAD was confirmed by imaging for each patient. Patients answered 14 questions characterizing their symptoms. Sensitivity, specificity, and positive and negative likelihood ratios (PLR, NLR) were determined for each symptom attribute. Results: The most sensitive symptom attribute to rule out LSS was “triggering of pain with standing alone” (0.96). Four symptom attributes demonstrated a high PLR and 3 had a low NLR for diagnosing neurogenic claudication (PLR = 3.08, 2.51, 2.14, 2.9; NLR = 0.06, 0.29, 0.15). In vascular patients, calf symptoms and alleviation of pain with simply standing had a high PLR and NLR (PLR = 3.08 and 4.85; NLR = 0.31 and 0.36). Conclusion: Only 4 of 14 “classic” symptom attributes were highly sensitive for ruling out LSS and should be considered by primary care physicians before pursuing expensive diagnostic imaging. Six symptom attributes should be relied upon to differentiate between LSS and PAD.
Numbness, pain triggered with standing alone, location in the buttock and thigh and relief following sitting are symptom attributes that reliably characterize neurogenic claudication.
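The likelihood ratios reported above follow directly from sensitivity and specificity. As a minimal illustration (the 2×2 counts below are hypothetical, chosen only to show the arithmetic, not taken from the study's data), they can be computed as:

```python
# Illustrative diagnostic statistics from a hypothetical 2x2 table
# (tp/fn = diseased patients with/without the symptom attribute;
#  fp/tn = non-diseased patients with/without it).

def diagnostic_stats(tp, fn, fp, tn):
    """Return sensitivity, specificity, PLR and NLR for a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    plr = sensitivity / (1 - specificity)   # positive likelihood ratio
    nlr = (1 - sensitivity) / specificity   # negative likelihood ratio
    return sensitivity, specificity, plr, nlr

sens, spec, plr, nlr = diagnostic_stats(tp=48, fn=2, fp=35, tn=15)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PLR={plr:.2f} NLR={nlr:.2f}")
```

A highly sensitive attribute (here 0.96) yields a small NLR, which is what makes it useful for ruling out disease when absent.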
Objective: To evaluate fellowship trainee and supervisor perceptions of the relative importance of cognitive and procedural competencies in spine fellowship training. Methods: A questionnaire was administered (online and on paper) to fellow trainees and supervisors across Canada, and data were collected over a 3-month period. The questionnaire consisted of 40 multiple choice items grouped into 13 broad cognitive skills categories, as well as 29 technical/procedural items. This questionnaire was reviewed for content by spinal surgery experts (Canadian Spine Society Education Committee). Data were analyzed using qualitative and descriptive statistics (e.g., mean scores, standard deviations, t tests). Results: The response rate was 91%, with 15 of 17 fellow trainees and 47 of 51 supervisors completing the survey. Twelve of the 13 cognitive skill categories were rated as being important to acquire by the end of the fellowship. Trainees were not comfortable performing and requested additional training in 8 of 29 spine surgery technical skill items. Specifically, additional training was believed to be required for intradural procedures (e.g., syringomyelia, intradural neoplasms) and other less common, technically demanding procedures (e.g., transoral odontoidectomy, anterior thoracic discectomy). Significant differences (p < 0.05) existed in the perceptions of the importance of specific cognitive and technical skills based on previous residency training (orthopedic or neurosurgical). No such differences were found when comparing the responses of fellow trainees and their supervisors. Conclusion: This study demonstrates that fellowship trainees and supervisors have similar perceptions about the relative importance of specific cognitive and procedural competencies required to successfully complete spine fellowship training.
Furthermore, background specialty training (orthopedic or neurosurgical) influences the perceptions of both fellow trainees and supervisors regarding the importance of cognitive and technical skills deemed necessary for successful training.
Introduction: Surgeons are often reimbursed as faculty for continuing medical education (CME) events. Industry often sponsors professional society meetings such as the American Academy of Orthopaedic Surgeons and other CME events. The nature and extent of surgeon–industry conflict of interest (COI) has become a source of considerable interest. Doctors, companies and government leaders have attempted to regulate potential COIs without the opinion of the general public. The objectives of this study were to assess the opinions of North Americans on COI issues regarding CME events and to analyze population subgroups that may form similar opinions. Methods: A web-based survey was administered to sample opinions on reimbursement, disclosure and funding sources for educational events. The validity of hypothetically similar questions was calculated, and subgroup analysis was performed for respondent age, sex, education, insurance and patient status. Results: In all, 501 of 542 surveys had complete data. Our sample population included more women and was older and more educated than a representative cross section of the American population. Content validity of the questions was high. Most opinions did not differ among subgroups unless weighted to represent the American population census data. Over 90% of respondents felt that industry funding for surgeon tuition and travel for either industry-sponsored or professional society CME meetings would not affect their quality of care. Respondents were also generally in favour of educational conferences for surgeons regardless of funding type. Disclosure of surgeon–industry relations appeared to be important to respondents. Conclusion: The development of evidence-based treatment recommendations requires the inclusion of patient choice and opinion as central components of the process.
The vast majority of respondents in our study do not feel that the quality of their care would diminish because of industry funding of educational events, surgeon tuition and/or travel expenses.
Objective: The management of thoracolumbar burst fractures depends on the clinical presentation of neurologic deficit and radiographic features of fracture severity suggestive of instability. When patients are neurologically intact and have mild deformity on computed tomography, conservative therapy may be used; this typically involves bracing over months to permit fracture stabilization. We investigated the utility of bracing versus no bracing among stable thoracolumbar burst fractures. Methods: Patients with stable, single-level thoracolumbar burst fractures between T12 and L2, with no neurologic deficit or lower extremity injury, were entered into this study. Randomization was computer generated in sealed envelopes. We examined patient data on presentation and at 6 months follow-up, including radiographic outcomes of kyphosis and loss of anterior vertebral height and clinical self-reported pain and disability outcomes. Continuous variables were analyzed by 2-factor analysis of variance (time, treatment) at the 0.05 level of significance. Results: Seventeen patients were randomized: 9 were treated with bracing and 8 without bracing. There were no differences between patients treated with or without bracing regarding the level of injury (p = 0.18) and initial spine geometry, including the extent of fragment retropulsion (p = 0.97), anterior loss of height (p = 0.56) or Cobb angle (p = 0.26). Anterior height loss progressed by a mean of an additional 17% (SD 4%) in both groups (p = 0.96). The degree of kyphotic progression was also no different by treatment (bracing: mean 6° [SD 2°], no bracing: 8° [SD 2°]; p = 0.59). Improvements in self-reported pain and disability were observed in both treatment groups, to a similar extent regardless of treatment (p = 0.40). Conclusion: Patients with stable thoracolumbar burst fractures treated with or without bracing had similar outcomes at 6 months.
Radiographic outcomes of fracture geometry and clinical outcomes of pain and disability scores were no different by treatment type. These patients may benefit from conservative therapy simply involving sequential imaging without brace immobilization, although a larger patient series is required.
Background: There is much debate and uncertainty about which patients with a spinal cord injury (SCI) would benefit from surgical decompression and the optimal timing for such surgery. We sought to evaluate the rates of surgery and the relation between the timing of decompressive surgery and early outcomes (neurologic status and length of stay) in a prospective cohort of patients. Methods: Patients with an SCI categorized as American Spinal Injury Association (ASIA) level A–D and radiological evidence of cord compression were entered into this multicentre study at 6 spine centres in Ontario. Patients were stratified into “early” (< 24 h) or “delayed” (> 24 h) groups based on the time to decompression. The ASIA neurologic outcome scores (at baseline admission, after surgery and at discharge from the acute care facility) and length of stay were evaluated. Results: To date, 118 patients (mean age 44.4 [SD 17.27] yr; 79.66% male, 20.34% female) have been enrolled in the study, and 104 (88%) have undergone surgery. Surgery resulted in improvement of ASIA scores across patients with different levels of spinal cord impairment. The early group was younger (mean 41.22 [SD 16.31] yr) than the late group (mean 45.90 [SD 17.14] yr). At acute discharge, 22.58% of patients in the early group had a grade improvement of 1 or greater in their ASIA score compared with 15.69% in the delayed group (p = 0.1680). Of patients with an injury scored ASIA-A and early surgery, 22.22% had a grade improvement of 1 or greater from admission to acute discharge in the ASIA score compared with 6.25% in the delayed group (p = 0.1760). Length of stay was significantly shorter in the early (mean 16.17 [SD 12.25] d) versus the delayed (mean 24.82 [SD 22.82] d) or no surgery group (mean 26.31 [SD 47.37] d) (p < 0.05). Conclusion: Surgical intervention is common in SCI patients, and it is associated with reduced length of stay.
Initial results of this study of a prospective Canadian cohort of patients in the Ontario Spinal Cord Injury Registry suggest that early surgical intervention in SCI may be associated with improved neurologic recovery. These data will require validation in a large cohort with longer-term follow-up.
Purpose: Cervical spine fractures in the elderly carry a high rate of mortality. The purpose of this study was to determine what factors affect the mortality rate in this patient population. Methods: We identified 37 patients aged 60 and older (average age 75 yr) who had cervical spine (C-spine) fractures with spinal cord injuries and who were followed at two level 1 trauma centres in Ontario. Data were collected prospectively and analyzed regarding patient level and degree of injury, preinjury comorbidities, treatment and cause of death. The results were subjected to statistical analysis to identify factors affecting mortality. Results: There was a mortality rate of 38% in this population. The leading cause of death was respiratory failure. Previous comorbidities and age did not seem to affect mortality. The mortality rate of patients with complete cord injuries was 2.6 times higher than those with incomplete cord injuries (p < 0.022). Patients with a score of A or B on the American Spinal Injury Association impairment scale were 2.1 times more likely to die of their injuries than those with a score of C or D. Previous history of spinal stenosis seemed to protect patients from having a complete cord injury, and patients with a dislocation were 3 times more likely to have a complete cord injury (p < 0.04), compared with patients with fractures and no dislocation. Operative versus nonoperative treatment did not affect mortality in this series of patients. In addition, neurologic recovery appeared to be uncommon in this age population. Conclusion: Having a complete cord injury significantly increases mortality in the elderly population with C-spine fractures. In addition, the degree of cord injury has considerable effect on outcome, and the level of injury may play a role in mortality. Although each case of C-spine injury is different, this study should guide physicians and patients in assessing prognosis after such injuries.
Purpose: Destructive vertebral lesions are a common source of morbidity among patients with cancer. We report the results of the first multisite randomized trial among 134 cancer patients with vertebral compression fractures (VCFs) to assess the efficacy and safety of balloon kyphoplasty. Methods: Adult patients diagnosed with a variety of cancers and fewer than 3 painful VCFs (4 on a visual analogue scale [VAS] for pain) were randomly assigned to immediate kyphoplasty (n = 70) or nonsurgical supportive care (n = 64). Results: The primary outcome measures were the Roland Morris Disability Questionnaire (RMDQ) and VAS back pain scores. Single VCFs were identified in 43% of patients, and 29% of patients had 2 or 3 fractures. We evaluated 59 kyphoplasty and 56 nonsurgical patients in the efficacy analysis. Baseline RMDQ scores improved by 8.3 points in the kyphoplasty group and did not change in the control group (p < 0.01). Pain scores showed a similar change, with the kyphoplasty group improving by 3.6 (p < 0.01) and the control group not improving at 1 week (p < 0.01). At 1 month, patients who underwent kyphoplasty had on average a –4.1 improvement in pain, and those treated nonsurgically had no improvement (p < 0.0001). There was no significant difference in the number of patients with serious adverse events between the kyphoplasty (16) and nonsurgical (10) groups at 1 month. None of the serious adverse events in the kyphoplasty group were related to the devices used. There was a non–Q wave myocardial infarction that resolved in the kyphoplasty group. Conclusion: Our planned interim analyses of results from this ongoing randomized study show that patients with cancer-related VCFs treated with immediate balloon kyphoplasty show a marked reduction in back disability and pain at 1 month compared with nonsurgical treatment.
Since the minimal clinically significant changes in the RMDQ and pain scales are 2.5 and 2.0 points, kyphoplasty in patients with cancer achieved both statistically and clinically significant improvements over control patients without an increase in adverse events.
Purpose: Our aims were, first, to determine the incidence of morbidity (major and minor, medical and surgical) and mortality in adults undergoing complex spinal surgery, both trauma and elective, in a quaternary referral centre, and second, to examine the influence of the introduction of a prospective perioperative morbidity abstraction tool on the recording of these data. Methods: Data on all patients who had surgery over a 12-month period were prospectively collected using the perioperative morbidity abstraction tool as proposed by the Spine Trauma Study Group. Prior to the introduction of this system, and using the hospital inpatient database, our documented perioperative morbidity rate was 23%. Diagnosis, operative data, hospital data, major and minor complications both medical and surgical, and deaths were recorded. Results: All patients discharged from the unit had complete data available for analysis; 942 patients with an age range of 16 to 90 years (mean 54, mode 38 yr) were identified. There were 552 male and 390 female patients; 58.5% of patients had undergone elective surgery; 30% of patients had injuries graded American Spinal Injury Association (ASIA) level A or worse on admission. The average length of stay was 13.5 days (range 1–221 d). In total, 822 (87%) patients had at least 1 documented complication. There were 14 deaths during the study period. The rate of intraoperative surgical complication was 10.5% (4.5% incidental durotomy, 1.9% hardware malposition requiring revision and 2.2% blood loss > 2 L). The incidence of postoperative complication was 73.5% (wound complications 13.5%, delirium 8%, pneumonia 7%, neuropathic pain 5%, dysphagia 4.5% and neurologic deterioration 3%). Conclusion: Major spinal surgery in adults is associated with a high incidence of intra- and postoperative complications. Without strict adherence to a prospective data collection system, the true complexity of this surgery may be greatly underestimated.
Introduction: The American College of Chest Physicians recommends only the use of postoperative pharmaceutical deep vein thrombosis (DVT) prophylaxis in spine surgery. The effects of preoperative prophylaxis, specifically with respect to DVT, pulmonary embolism (PE) and spinal epidural hematoma (SEH) rates, are unknown. Methods: We conducted a 5-year retrospective review of all elective spine operations at a major institution. Patients were separated into 2 groups based on the presence or absence of preoperative DVT prophylaxis. Rates of DVT, PE and SEH were compared between the 2 groups. The study was adequately powered to detect a halving of the DVT rate. Results: Of the 3870 elective spinal operations completed between 2004 and 2008, 37% of patients received preoperative DVT prophylaxis as either 5000 units of unfractionated heparin or 40 mg of enoxaparin. Preoperative DVT prophylaxis administration was not found to have a significant effect on the rates of DVT or PE (relative risk [RR] 0.91, 95% CI 0.37–2.23). There was no apparent significant effect on the rate of SEH, although the overall incidence of SEH was very low at 0.4% (RR 1.33, 95% CI 0.50–3.56, p = 0.61). Presentation of SEH usually occurred within 3 days of surgery, although there were several cases that presented up to 20 days later. Conclusion: Administration of preoperative DVT prophylaxis does not influence the rate of clinically relevant postoperative DVT or PE and probably does not affect the rate of SEH. Given the cost and human resource burden of its administration, the use of preoperative DVT prophylaxis is not recommended. In light of the continued drive for early discharge, physicians need to be cognizant that late presentation of SEH is possible and that a vigilance mechanism is required. Patients need to be informed of the signs and symptoms of SEH and counselled to present immediately should they arise.
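Relative risks such as those reported above can be reproduced from raw event counts with a log-method confidence interval. A minimal sketch (the event counts below are hypothetical, chosen only to illustrate the arithmetic, not taken from the study):

```python
import math

def relative_risk(a, n1, b, n2):
    """RR of group 1 vs group 2 with a 95% CI (log method).
    a, b = event counts; n1, n2 = group sizes."""
    rr = (a / n1) / (b / n2)
    # standard error of log(RR)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical: 7 events among 1432 prophylaxis patients vs
# 13 events among 2438 patients without prophylaxis.
rr, lo, hi = relative_risk(a=7, n1=1432, b=13, n2=2438)
```

With small event counts the interval is wide and straddles 1, which is why an RR of 0.91 can still be non-significant.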
Introduction: The objective of minimal access spine surgery is to reduce damage to surrounding tissues while accomplishing the same goals as conventional surgery. Whether the risk of complications is affected by minimal access techniques is unknown. The purpose of this review was to attempt to answer the following 2 clinical questions. First, does minimal access tubular assisted spine surgery (MASS) decrease the rate of complications in posterior thoracolumbar decompression and/or fusion surgery compared with traditional open techniques? Second, what strategies to reduce the risk of complications in MASS have been shown to be effective? Methods: We undertook a systematic review of the English-language literature published between 1990 and July 2009. Electronic databases and reference lists of key articles were searched to identify published studies that compared the rate of complications after MASS with a control group that underwent open surgery. Two independent reviewers assessed the strength of the literature using GRADE criteria to assess quality, quantity and consistency of the results. Results: From the 361 articles identified, 13 met a priori criteria and were included for review. The single large randomized study showed less favourable results for MASS discectomy but no significant difference in complication rates. The quality of the other studies, particularly for fusion surgery, was low. Overall, the rates of reoperation, dural tear, cerebrospinal fluid leak, nerve injury and infection occurred in similar proportions in the MASS and open surgery groups. Blood loss was reduced with MASS fusion; however, the quality of those studies was very low. Operation time and hospital length of stay were variable across studies. Conclusion: First, compared with open techniques, MASS did not decrease the rate of complications for posterior lumbar spinal decompression or fusion.
Second, there was no evidence to assess the effectiveness of strategies to reduce the risk of MASS-related complications.
Introduction: The utility of minimally invasive spine fusion (MIS) remains controversial. The primary objective of this study was to compare patient-reported outcomes following MIS or open fusion for spondylolisthesis. Methods: A retrospective, multicentre cohort study was performed. One-level instrumented fusions for low-grade (I–II) spondylolisthesis from 6 centres using either a posterior MIS technique (4 centres: transforaminal lumbar interbody fusion [TLIF], n = 80) or open technique (4 centres: TLIF n = 42, posterolateral fusion [PSF] n = 69) with a baseline and a minimum of 2 years follow-up using the Oswestry Disability Index (ODI) were included. Linear regression modelling was used to compare the change in ODI scores as well as the number of individuals who reached the minimum clinically important difference (MCID) and the substantial clinical benefit (SCB). Results: Both groups demonstrated significant improvement in ODI (MIS: 47.0% to 20.9%; open: 50.0% to 31.5%). Significantly more patients in the MIS compared with the open group (p < 0.01) reached MCID (81% v. 64%) and SCB (69% v. 49%). Adjusting for age, sex and baseline ODI score, linear regression modelling demonstrated that MIS fusion (p = 0.002) and baseline ODI score (p < 0.0001) were significant predictors of an improved outcome. The MIS fusion patients were almost 3 times more likely to reach MCID (p = 0.008, OR 2.9, 95% CI 1.3–6.4) and SCB (p = 0.004, OR 2.8, 95% CI 1.4–5.7). Multivariable comparisons between MIS and PSF or MIS and open TLIF demonstrated similar findings for MCID and SCB. Comparison between open TLIF and PSF showed no difference (p = 0.89 for MCID and p = 0.43 for SCB). Conclusion: In this multicentred cohort study, the posterior MIS technique independently demonstrated superior outcomes at 2 years postoperatively compared with open posterior fusion for spondylolisthesis.
Objective: The reported experience with tubular assisted minimally invasive cervical lamino-foraminotomy (LF) techniques, without use of endoscopes, for the treatment of radicular pain is lacking. Tubular assisted techniques have been considered to offer significant benefit over open procedures in terms of minimizing tissue damage, operative time, blood loss, analgesic requirements and length of hospital stay. We hypothesized that minimally invasive cervical LF using microscopes would significantly reduce postoperative analgesic requirements and length of hospital stay over the traditional open LF, with no difference in complication rates. Methods: We conducted a retrospective review of 107 patients who underwent a cervical LF for radicular pain between 1999 and 2009. Patient demographics, intraoperative parameters, length of hospitalization, postoperative analgesic use, complications and short-term neurologic outcome were compared between groups. Statistical analysis was performed using χ2 and Student paired t tests. Results: Between 1999 and 2009, a total of 107 patients underwent a cervical foraminotomy. An open approach was used in 65 patients, whereas 42 underwent minimally invasive tubular assisted microscopic LF. Operative time, complications and short-term neurologic outcome were comparable between groups. Significant differences, favouring tubular assisted LF, were observed in operative blood loss, postoperative analgesic use and length of hospital stay (p < 0.05). Conclusion: Tubular assisted microscopic cervical LF for the treatment of cervical radiculopathy significantly reduces blood loss, postoperative analgesic use and length of hospital stay compared with the standard open approach. Operative time and complication rates were comparable with both techniques.
Purpose: The aim of this study was to examine how the neutral zone (NZ) and range of motion (ROM) are altered by different patterns of single-level unilateral facet and posterior ligament complex (PLC) injury in a human cervical spine for independently simulated flexion–extension, lateral bend and axial rotation. Methods: A custom-designed spinal motion simulator loaded 8 multisegment (C2–C5) cadaveric spines to produce independent flexion–extension, lateral bend and axial rotation. Three-dimensional (3-D) kinematics were captured for each motion performed. Data were collected in the intact state, followed by stages of a sequential injury pattern at the C3–C4 level from PLC disruption, facet capsular disruption and progressive facet resection. Repeated-measures analyses of variance and post-hoc Newman–Keuls tests were used for analysis (α = 0.05). Results: The NZ and ROM increased with injury. Significant increases were observed in the NZ following facet capsular disruption with axial rotation (p = 0.032), but not following any injury pattern with flexion–extension or lateral bend (p > 0.05). Sectioning of the PLC increased ROM for flexion–extension (p = 0.001); however, only in the case of a full facet fracture was the ROM larger with axial rotation (p = 0.04) and lateral bend (p = 0.012). Conclusion: It appears that disruption of the posterior osteoligamentous complex increases cervical spine instability. Results showed a spectrum of instability that is both motion and injury dependent; however, without a complete facet fracture, stability of the spine was largely unaffected for most motions. The little change seen in the NZ for both flexion–extension and lateral bend, despite an increase in ROM, could suggest that the undisrupted anterior structures must play a significant role in stabilizing this injury pattern and that the NZ measured is largely that of the anterior column.
Future studies should investigate the role of the anterior column in this injury pattern.
Introduction: Cervical spondylotic myelopathy (CSM) is the most common cause of spinal cord impairment. However, the long-term benefit of surgical treatment remains uncertain. To address this question, we undertook a multicentre, prospective study to examine long-term outcomes in patients who had surgery for CSM. Methods: A total of 280 patients were enrolled at 12 sites across North America and were stratified on the basis of their modified Japanese Orthopaedic Association (mJOA) scores into mild (> 15), moderate (12–15) and severe (< 12) cases. To date, 160 patients have 2-year follow-up data available (40% female; mean age 56 [SD 12] yr). Outcome assessments included the mJOA, Neck Disability Index (NDI), Nurick score and SF-36. Results: All outcome parameters improved (p < 0.001) postoperatively in patients with mild, moderate and severe CSM. Improvements plateaued at 12 months after surgery and were maintained at 2 years. The mJOA scores improved on average by 3.00 points (95% CI 2.54–3.47); NDI scores improved from 37.75 to 27.10, with an average improvement of 10.64 (95% CI 7.44–13.84); the Nurick scores improved from 4.05 (SD 0.99) to 2.43 (SD 1.49), with an average improvement of 1.62 (95% CI 1.38–1.86); the SF-36 physical component scores improved from 38.30 (SD 11.25) to 42.82 (SD 10.08); and the SF-36 mental component scores improved from 41.70 (SD 10.84) to 46.07 (SD 11.37), with an average improvement of 4.53 (95% CI 2.94–6.12). Conclusion: This large prospective clinical study shows that surgical treatment for mild, moderate and severe CSM results in objective improvements in generic and disease-specific health outcomes which are maintained at 2-year follow-up.
Introduction: Magnetic resonance imaging (MRI) and computed tomography (CT) are commonly used for the diagnosis and assessment of lumbar spinal stenosis. The available literature has not identified which modality is superior. We compared the reliability and accuracy of CT and MRI in the assessment of lumbar spinal stenosis. Methods: We performed a prospective review of CT and MRI scans of 54 patients who were referred for surgical consultation. One orthopedic spine fellow and one neuroradiologist reviewed the CT and MRI scans. A qualitative and quantitative analysis was performed. Intraobserver and interobserver reliability were determined using the kappa coefficient. The patients’ official reports were correlated with the analyses performed by the 2 reviewers. Oswestry Disability Index and SF-36 data were correlated with the qualitative and quantitative assessment of stenosis on CT and MRI using the Pearson r coefficient. Results: Substantial interobserver agreement was achieved between surgeon and neuroradiologist (κ = 0.74) as well as between surgeon and reporting radiologist reading MRI scans (κ = 0.64). Moderate agreement was found between neuroradiologist and reporting radiologist (κ = 0.57). Almost perfect intraobserver reliability for MRI was achieved by the 2 expert reviewers (κ = 0.91 for surgeon and κ = 0.92 for neuroradiologist). Moderate interobserver agreement (κ = 0.58) was found between surgeon and neuroradiologist reading CT scans. Fair agreement was found between neuroradiologist and reporting radiologist (κ = 0.30) and between surgeon and reporting radiologist (κ = 0.32). Substantial intraobserver agreement was found for the surgeon (κ = 0.77), whereas the neuroradiologist achieved almost perfect agreement (κ = 0.96). No correlation was found between qualitative or quantitative analysis and functional status.
Conclusion: This study suggests that MRI is a more reliable tool than CT for assessing degenerative lumbar spinal stenosis, but neither modality correlates with functional status.
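The agreement statistics above are Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch for two raters (the ratings below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same cases."""
    n = len(rater_a)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["stenosis", "stenosis", "normal", "stenosis", "normal", "normal"]
b = ["stenosis", "normal", "normal", "stenosis", "normal", "stenosis"]
kappa = cohens_kappa(a, b)
```

On the conventional scale used in the abstract, kappa below 0.40 is fair, 0.41–0.60 moderate, 0.61–0.80 substantial and above 0.80 almost perfect agreement.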
Introduction: To enable surgical triage and manage long wait-lists, many spine surgeons will not consider seeing a patient unless a magnetic resonance imaging (MRI) scan of the spine has been completed. However, the high prevalence of anatomic abnormalities detected on MRI may in fact only serve to increase surgical referrals. The purpose of this study was to assess health services utilization following an MRI scan ordered by a primary care physician (GP). Methods: We linked chart-abstracted data for 655 lumbar MRI scans ordered by GPs to health administrative databases using an encrypted unique patient identifier. The cohort was derived from a larger provincial audit of CT and MRI use conducted by the Institute for Clinical Evaluative Sciences (ICES), which reviewed 3921 outpatient MRI spine scans performed in 2005 at 20 randomly selected Ontario hospitals (stratified by teaching status [teaching v. community] and geographic location [northern v. southern Ontario]). Health services utilization information was obtained using claims data from the Ontario Health Insurance Plan database. Results: The mean age of patients was 49.7 years, with 50.1% being female. The incidence of specialist referral during a maximum 3 years follow-up was 33.6% for orthopedic surgeons, 19.7% for neurologists, 15.6% for neurosurgeons, 11.6% for physiatrists and 3.1% for rheumatologists. Imaging subsequent to the index MRI included spine radiographs (23.8%), CT spine (11.5%) and another spine MRI (28.7%). Nerve conduction studies were ordered in 20.31% of patients; 14% of patients received nerve blocks; and 6.7% of patients received spine surgery. Conclusion: Our findings support the notion that many (50%) patients who have a lumbar MRI scan ordered by their GP are subsequently referred for surgical consultation. 
Given the very low rate of surgery and the current limited access to spine surgeons, these results suggest that reduction of wait times for spine surgeon consultation will require improved clinical and imaging assessment of patients with spinal disorders in the primary care setting.
Purpose: We aimed to determine the correlation between the radiologic and clinical diagnoses, as well as the impact of increased use of magnetic resonance imaging (MRI) and computed tomography (CT) on clinical outcomes, by repeating a study conducted in 1996. Methods: We conducted a 7-week prospective study that included patients referred for back pain to spine surgeons at a single healthcare centre. Patients were included if they had not previously been seen by a surgeon for their back problems and if their back pain was related to the thoracic or lumbar spine. Demographic data, imaging findings, diagnoses determined by the surgeons and visit outcomes were collected. Results: Of 160 patients, 8 (5%) were no-shows and excluded from further analysis owing to incomplete data. The most common investigation ordered before the clinic visit was MRI (n = 111, 73%), followed by CT (n = 62, 41%). In contrast, in 1996, plain radiographs were most common (n = 92, 68%), followed by CT (n = 50, 37%), then MRI (n = 15, 11%). However, the number of patients determined to be surgical candidates remained relatively unchanged, from 19% (n = 27) in 1996 to 16% (n = 25). The most common MRI findings were degenerative disc disease (n = 78, 63%), herniated disc (n = 69, 56%) and spinal stenosis (n = 31, 25%). However, degenerative disc disease was the clinical diagnosis in only 27% (n = 41). Other common diagnoses included chronic pain (n = 33, 22%), spinal stenosis (n = 27, 18%) and disc herniation (n = 25, 16%). Conclusion: The clinical diagnosis of back pain had poor correlation with radiologic abnormalities. Despite increased MRI and CT scans, the number of patients deemed to be surgical candidates has not changed.
Objective: To determine whether intrathecal morphine (ITM) and patient-controlled analgesia (PCA) provide postoperative analgesia without respiratory depression that is superior to PCA alone following lumbar spinal surgery. Methods: Patients were randomized into 2 groups to receive either ITM or intrathecal placebo (ITP). At the conclusion of elective lumbar spine surgery, a pharmacy-prepared solution of either 3.5 µg/kg to a maximum of 350 µg of preservative-free morphine (ITM group) or placebo consisting of an equivalent volume of preservative-free isotonic saline (ITP group) was given. Postoperatively, all patients were given a PCA pump and observed for the first 24 hours in a step-down unit with continuous oxygen saturation monitoring. Results: The ITM group had 18 patients and the ITP group had 14 patients. The average PCA use over the first postoperative day was 40.8 mg and 48.0 mg in the ITM and ITP groups, respectively. There were 2 patients in the ITM group who did not require use of PCA. The average pain scores at 4, 8, 12 and 24 hours were 1.7, 1.6, 2.3 and 3.1, respectively, in the ITM group and 2.8, 2.8, 2.8 and 3.0 in the ITP group. Neither group had any patients who experienced episodes of respiratory depression (rate < 9 breaths/min). Both groups had similar average rates of pruritus, nausea and need for antiemetics. Conclusion: ITM provides superior postoperative analgesia without respiratory depression versus PCA alone following lumbar spinal surgery.
Introduction: Perceived pain and self-rated disability relate to surgical choice. Current definitions of disability include multiple components: impairments in body structure and function (i.e., pain), activity and participation limitations and social or environmental limitations. Characterization of disability from subjective and objective measures, and how they relate to surgical choice, is poorly understood. Accelerometers allow for detailed assessment of physical activity (PA), but there are few studies that relate PA to measures of disability. Methods: A cohort of patients with neurogenic claudication and spinal stenosis were offered lumbar decompressive surgery. Self-reported pain and disability (visual analogue scale, SF-36, Oswestry Disability Index [ODI], Roland Morris Disability Questionnaire [RMDQ]) and physical activity (GT3X accelerometer) were measured. Two groups were examined based upon surgical choice (18 yes, 12 no). Independent t tests and discriminant analysis were used. Results: Perceived pain was higher for patients choosing surgery; the surgical group had back pain that was 2.2 (out of 10) higher than the nonsurgical group, leg pain was 0.6 greater and bodily pain was 6.4 greater (out of 100), as was the RMDQ (1.3), but not the ODI total score (p < 0.05). However, subscales from the ODI and RMDQ relating to body function impairments (pain) were different (p < 0.05). For objectively measured physical activity, the total daily energy expended (calories), intensity (metabolic equivalent of task) and duration (minutes) of activity were not different between groups; however, patients choosing surgery had significantly fewer bouts of longer duration (> 30 min). The inclusion of activity intensity and bout length (derived from accelerometry) increased the ability to predict surgical choice beyond that of pain alone; no aggregate, subjective measures of disability or activity were chosen during discriminant analysis.
Conclusion: Disability measures are largely pain-focused, and the addition of objectively assessed activity levels provides added predictive ability, whereas overall disability measures do not. It may be that pain and fear of incapacity are the key factors in surgical selection.
Introduction: In patients with lumbar spinal stenosis (LSS), neurogenic claudication results in gait impairment that can be assessed by clinical walking tests. The extent to which this impairment impacts on community activity and participation is not known. Methods: Thirty-three patients with LSS (axial imaging, neurogenic claudication history) wore a hip-mounted accelerometer (Biotrainer Pro, 1-minute epochs) for 7 days. In addition, each patient completed the Roland Morris Disability Questionnaire, Oswestry Disability Index and Disabilities of the Arm, Shoulder and Hand questionnaire. Accelerometer data were analyzed to derive numerous activity measures, including daily energy expenditure, duration of activity at activity thresholds and activity bout characteristics. Results: The patients (mean age 66.8 [SD 8.4] yr, mean body mass index 28.4 [SD 4.9]) were compliant with accelerometer use (average wear 6.7 days with 13 h/d worn). Participants averaged 125.9 meaningful activity minutes per day (> 1.5 metabolic equivalent of task [MET]) with a total energy expenditure of 256 kcal/day (3.3 kcal/kg/d). Physical activity durations diminished rapidly with intensity (< 2 MET, 295 min/d; 2–3 MET, 42 min/d; 3–4 MET, 6.7 min/d; 4–5 MET, 2.1 min/d). The proportion of activity bouts declined precipitously with duration above a 1.5-MET threshold (1 min duration, 51% of bouts; 1–5 min, 42%; 5–10 min, 5%; > 10 min, 2%). Only 10.3 (SD 12) minutes per day were accumulated above the moderate intensity threshold. Accordingly, only 1 participant met the age-matched physical activity guidelines; however, this matches the level of compliance in the normal population. The participants accrued 646 minutes/day of very low-level activity (83% of the wear time). Discussion: This is the first study to characterize physical activity patterns of people with LSS.
The activity bout durations and intensities are consistent with neurogenic claudication; however, the patterns of activity reveal opportunities for substantial increases in physical activity through presurgical and postsurgical exercise interventions.
Objective: The primary purpose of this national study was to compare the relative improvement in quality of life after surgical intervention for focal lumbar spinal stenosis (FLSS) with that after surgery for hip and knee osteoarthritis. Methods: A Canadian multicentre (6 academic centres) retrospective cohort study was performed. Patients who had undergone elective primary 1–2 level spinal decompression (n = 389) with (n = 224) or without (n = 179 with degenerative spondylolisthesis) instrumented fusion for spinal stenosis with a minimum of 2 years' follow-up were included. They were compared, using multivariable regression modeling, with a cohort of patients who underwent primary total hip (n = 178) and knee (n = 235) arthroplasty for osteoarthritis (n = 413) at one centre. The primary outcome measure was the change in preoperative to postoperative SF-36 physical component score (PCS). Results: The mean age in years and percent female for the spine, hip and knee groups were 63.3/58.5%, 66.0/46.9% and 65.8/64.3%, respectively. All 3 groups experienced significant improvement in PCS (p < 0.001). Unadjusted change in PCS was superior for hips; however, there was no difference in knee compared with spine outcomes. Univariate predictors (p < 0.01) of greater PCS change included younger age, higher baseline mental component score, lower baseline PCS, fusion, spondylolisthesis and geographic site. Using the Scheffe test for multiple comparisons, no one specific site was better than the others. The difference between sites was not clinically significant. Multivariate analysis revealed that hip replacement resulted in superior change in PCS, whereas spine outcomes were equivalent to knee outcomes. A similar finding was noted regarding the number of patients reaching minimal clinical important difference and substantial clinical benefit. Conclusion: Significant improvement in health-related quality of life following surgical treatment of FLSS is consistently achieved across the country.
The overall change in PCS for FLSS is equivalent to knee arthroplasty but inferior to hip replacement.
Purpose: Surgical treatment of spinal deformity is complex and subject to complications. The presence of 2 fellowship-trained spine surgeons may provide numerous advantages to patient outcomes. We aimed to compare the clinical and radiological results in patients who were surgically treated by 1 solo surgeon (group 1) versus 2 collaborative surgeons (group 2). Methods: We conducted a retrospective review of a prospective databank and identified 65 patients who were less than 30 years old at the time of surgery (69% female, mean age 15.7 yr) and who underwent corrective surgery for spinal deformity. We divided these patients into 2 groups for analysis: group 1 (n = 38), 1 surgeon; group 2 (n = 27), 2 surgeons. Hospital and office charts were reviewed for surgical data (estimated blood loss, operating room time, complications, other) and postoperative data (length of stay, complications, other). Pre- and postoperative standing 36-inch radiographs were analyzed for changes in Cobb angle, translation at the apex, relation between the central sacral vertical line and C7 plumb and instrumentation complications. The last instrumented vertebra and shoulder symmetry were recorded. Patient and surgeon satisfaction were noted. Calculations were carried out to determine similarity between the 2 groups (age, sex, body mass index, percent adolescent idiopathic scoliosis, comorbidities). Comparisons of estimated blood loss and need for transfusion, complications and length of stay were observed in the 2 groups. Percent changes in deformity parameters were compared in each group. The need for return to the operating room in both groups was determined. Results: In comparing 1 surgeon versus 2 surgeons, we did not note decreases in operative time, estimated blood loss, length of stay or overall patient satisfaction (each p > 0.05).
However, within these parameters, we achieved a greater density of complex instrumentation, which provided greater Cobb angle and rotational correction and a smaller final Cobb angle, with an improved angle of the lowest instrumented vertebra within specific curve types (each p < 0.05). Surgeon-perceived improvement in collaborative decision-making and intraoperative problem management led to greater satisfaction regarding outcomes. Neurological complications (group 1 = 2, group 2 = 0) and reoperation (group 1 = 2, group 2 = 1) were noted. Conclusion: Collaborative surgical management offers quantifiable advantages when surgically treating spinal deformity, in addition to numerous equally important but poorly quantifiable benefits that are realized by both the patient and surgical team.
Background: Long segment lumbar fusions are employed in spinal deformities and may lead to junctional failures. Selecting the appropriate upper instrumented vertebra (UIV) in these cases is often difficult. The purpose of this study was to determine the factors associated with junctional failures. Methods: We retrospectively reviewed the cases of 28 consecutive patients who had long segment fusions of the thoracolumbar spine from 2001 to 2008. All patients had a minimum of 4 levels fused, the lower instrumented vertebra (LIV) was L5 or S1, and the UIV was T10 or below. No patient had a previous fusion proximal to the instrumentation. Demographic and radiographic analysis was performed, including the UIV angle, defined as the angle of the UIV with the horizontal. The patients were divided into 3 groups. Group 1 contained 8 patients who sustained UIV fractures; group 2 contained 14 patients with no fixation- or fusion-related complications; and group 3 contained 6 patients with other complications associated with their fixation (i.e., pull-out, loosening, adjacent segment failure). Statistical analysis was performed using the Kruskal–Wallis test. Results: The average body mass index values (and ranges) for each group were very similar: 32.7 (21–41.5), 30.6 (22.2–46.1) and 30.0 (20–30.2) for groups 1–3, respectively. Group 1 had, on average, 5.8 (4–7) levels of fusion; group 2 had 6.1 (4–8) and group 3 had 4.8 (4–8). The average intraoperative UIV angle was 17.3° (3°–24°) for group 1, 9.7° (0°–21°) for group 2 and 4.6° (–2° to 10°) for group 3, and the immediate postoperative UIV angle was 14.6° (4°–26°), 4.81° (–5° to 16°) and 3.6° (–2° to 10°), respectively. The average intraoperative UIV angle was significantly higher (p = 0.05) for group 1 compared with group 2. Conclusion: UIV fractures were the most common type of junctional failure in our series. A high UIV angle on intraoperative lateral radiograph was associated with UIV fractures.
Our study may help surgeons make intraoperative decisions on the UIV to avoid levels with a high sagittal angle.
Introduction: Decompression and fusion is generally considered to be the treatment of choice for degenerative spondylolisthesis (DS). The purpose of this study was to compare the outcomes of anatomy-preserving decompression alone with those of decompression and fusion for DS. Methods: A Canadian multicentre (6 academic centres from the east to west coast) retrospective cohort study was performed. Patients who had undergone elective primary 1–2 level spinal decompression (D) alone (n = 42, 57% = 1 level, from one centre, leg-dominant pain, up to 25% listhesis, nonmobile [< 5 mm] on standing compared with supine or flexion–extension radiographs) or decompression and instrumented fusion (DF) (n = 133, 64% = 1 level, from 6 centres) for DS with a minimum of 2 years' follow-up were included. The primary outcome measures were the changes in preoperative to postoperative SF-36 physical component score (PCS) and mental component score (MCS). Results: The average age was 67.5 and 62.5 years for the D and DF groups, respectively (p = 0.007). The mean time from surgery was 29.7 and 29.2 months for the D and DF groups, respectively (p = 0.8). Average change in PCS was 11.5 versus 11.4 (p > 0.9) and MCS was 7.2 versus 4.3 (p = 0.2) for the D and DF groups, respectively. The number of patients reaching minimal clinical important difference (MCID, > 4.9 change in PCS) and substantial clinical benefit (SCB, > 6.2) for decompression alone was 67% and 64%, respectively. For the decompression and fusion group, this was 71% (MCID) and 64% (SCB). Conclusion: In highly selected patients with DS from one centre, the outcome of decompression alone is comparable to decompression and fusion at multiple centres. Anatomy-preserving decompression alone should be considered a viable option for patients with stable degenerative spondylolisthesis.
Objective: The utility and cost of minimally invasive spine fusion (MIS) remain controversial. The objective of this study was to compare the perioperative morbidity and cost utility of 1- or 2-level primary decompression and fusions for low-grade (I–II) degenerative or isthmic spondylolisthesis using MIS versus conventional open fusion techniques. Methods: A retrospective cohort study was performed using prospective data from 79 consecutive patients (n = 37 MIS, 1 surgeon and n = 41 open, 3 surgeons) between 2005 and 2008 at a single Canadian institution. All 4 surgeons had at least 5 years of experience with the fusion techniques studied. Independent review was performed. In-hospital detailed microcosting data and change in health utility index (HUI) at 1 year were used. Results: The groups were comparable in age, sex, preoperative hemoglobin, ASA, Charlson comorbidity index, body mass index (BMI) and levels fused. All MIS patients had an interbody cage(s) compared with only 14 in the open group. Blood loss (206 v. 798 mL), transfusions (0% v. 17%) and length of stay (5.9 v. 8.6 d) were significantly lower (p < 0.01) in the MIS group. There were also fewer complications in the MIS group (4 v. 12). Both techniques resulted in significant improvement in Oswestry Disability Index score, back and leg pain (p < 0.001). The average cost of an open fusion was 1.28 times greater than the cost of an MIS fusion (p = 0.001). The change in HUI (SF-6D) was 0.11 for MIS and 0.08 for open (p < 0.0001). There was a significant positive correlation between the length of stay and cost of surgery (open p = 0.001, MIS p = 0.01). Patient age, BMI and instrumentation did not have a significant influence on the cost. Conclusion: This matched cohort study demonstrates the cost-effectiveness (lower direct cost and greater improvement in HUI) of MIS compared with open fusion for spondylolisthesis at 1 year after surgery.
Introduction: Interleukin-1β (IL-1β) and Fas ligand, which activates the transmembrane death receptor Fas (CD95), are implicated in the pathogenesis of degenerative disc disease (DDD). Resistance to DDD, such as is the case with nonchondrodystrophic (NCD) canines, may be due to factors secreted by notochordal cells within the NCD canine nucleus pulposus (NP) that mitigate the deleterious effects of IL-1β and Fas. Here for the first time we show that notochordal-cell-conditioned medium (NCCM) confers protection of NP cells from IL-1β- and IL-1β+Fas ligand–mediated cell death and degeneration. Methods: We developed NCCM from 4 days of serum-free culture of notochordal cells under hypoxic (3.5% O2) conditions and used this medium to culture bovine NP cells cotreated with IL-1β or IL-1β+Fas ligand for 48 hours, and we compared our results to cells treated with basal medium with or without IL-1β or IL-1β+Fas ligand. We performed fluorescence-activated cell sorting analysis to evaluate for apoptosis and total cell death and performed real-time reverse transcription polymerase chain reaction examination of the important extracellular matrix genes aggrecan and collagen 1 and 2, the mediators of matrix degradation ADAMTS-4 and MMP3, the matrix protection molecules TIMP-1 and -2 as well as the CD44 receptor. Results: Notochordal-cell-conditioned medium robustly protects NP cells from both total cell death and apoptosis and significantly protects NP cells from the degradative effects of IL-1β or IL-1β+Fas ligand. Genes encoding the proteoglycan- and collagen-degrading enzymes ADAMTS-4 and MMP3 are significantly downregulated compared with basal conditions, whereas all extracellular matrix genes (collagen 1 and 2, aggrecan, CD44) are upregulated, as are the matrix-protective factors TIMP-1 and TIMP-2.
Conclusion: Notochordal cell-secreted factors significantly protect NP cells from cell death and upregulate the expression of genes that encode for anabolic activity and matrix protection of the intervertebral disc NP. Harnessing the restorative/regenerative powers of notochordal cells will lead to novel cellular and molecular strategies in the treatment of DDD.
Objective: To examine subsidence of the Charité III total disc arthroplasty (TDA) and its influence on clinical outcomes. Methods: We conducted a prospective study of the Charité III TDA performed for discogenic low back pain at either L4–5 or L5–S1. Assessment included preoperative and postoperative validated patient outcome measures and various radiographic parameters. Subsidence was measured as the difference in position of the implant measured at last follow-up compared with the original position. Results: Fifty-nine of the potential 64 patients were available for the study (92%). A statistically significant improvement was demonstrated between the mean preoperative and all postoperative intervals for the Oswestry Disability Index (ODI), visual analogue scale (VAS) and SF-36 (p < 0.0001). The mean sagittal rotation was 6.5°, and the mean intervertebral translation was 1.1 mm. Subsidence was present in 44 of 53 (83%) patients. The mean subsidence was 1.7 (range 0.0–4.8) mm. There was no statistically significant effect of subsidence on clinical outcomes (ODI, VAS and SF-36) or motion at the affected intervertebral level. Postoperative disc height, age and time since implantation were determined to not be significant risk factors for subsidence. Increased body mass index showed a weakly positive correlation with degree of subsidence. Posterior vertebral body uncoverage greater than 2 mm was identified as a risk factor for subsidence with a calculated odds ratio of 5.7 for subsidence greater than 1 mm. Conclusion: Subsidence of the Charité III implant is a prevalent occurrence in TDA and is correlated with positioning of the implant away from the posterior vertebral body end plate. Prosthesis design should afford modularity to accommodate differences in adjacent vertebral body end plates to avoid posterior vertebral body uncoverage.
At an average of 5 years' follow-up, subsidence appears to have little effect on patient satisfaction or range of motion, although this may be too short a follow-up period.
Purpose: DIAM (Medtronic) is a silicone interspinous spacer that provides stability and preserves motion for lumbar disc herniation; it also provides relief of neurogenic claudication. Arthrodesis is commonly used as an adjunct in these patients, particularly with central or recurrent disc herniations and in cases of stenosis that require extensive decompression. Fusion is associated with significant morbidity and a long hospital stay, whereas DIAM placement is a day-surgery procedure. The purpose of this study was to evaluate the use of DIAM interspinous spacers as an alternative to arthrodesis in individuals who might be considered for fusion. Methods: Overall, 45 patients with disc herniation or stenosis were considered for decompression and fusion but were offered alternative motion-sparing treatment with DIAM between 2006 and 2009. Preoperative Oswestry Disability Index scores, visual analogue scale scores and clinical and radiological data were collected and compared with data from subsequent follow-up visits at 6 weeks and 3, 6, 12 and 24 months postoperatively. Minimal follow-up was 1 year, with an average of 1.5 years. Complications, including the need for reoperation, were recorded. Inclusion criteria were patients with single-level pathology, either disc herniation (central or recurrent) and/or stenosis with or without degenerative spondylolisthesis. Exclusion criteria were multilevel disease, deformity and end-stage disc degeneration. Results: In total, 42 of 45 patients demonstrated significant improvement at final follow-up (average 1.5 yr). Three patients required reoperation and were considered failures: 2 converted to fusion and 1 had an infection. A total of 42 of 45 patients had improvement in their leg pain scores, whereas back pain scores improved in 32. Oswestry Disability Index scores improved in 42 patients. Other than recurrent disc herniation necessitating revision to an arthrodesis in 2 patients, radiological follow-up was satisfactory.
Conclusion: DIAM as an interspinous spacer is a useful alternative to arthrodesis and can be performed as a day-surgery procedure, with a shortened recovery time and relatively low risk of complications or reoperation. Prospective randomized trials are needed to further evaluate the use of interspinous spacers as an alternative to fusion.
Purpose: Factors influencing patient willingness to undergo elective surgery are poorly understood. Methods: We prospectively evaluated patient concerns before surgical consultation for elective spinal, hip, knee, shoulder/elbow (S/E) or foot/ankle (F/A) conditions. Patients were surveyed for demographic data and SF-36 quality of life (QOL) scores and asked to report their greatest concern about considering surgery for their condition, as well as their willingness to undergo surgery if it was offered to them by their treating surgeon. In our prospective cohort of 743 patients, 364 (51%) were male and 293 (39%) were evaluated for a spine condition, 74 (10%) hip, 192 (26%) knee, 69 (9%) S/E and 115 (16%) F/A. Mean QOL scores were similar for patients across specialties. Results: The top 3 greatest concerns for undergoing elective musculoskeletal surgery were potential complications (20%), effectiveness (15%) and recovery time (15%). When categorized by specialty, concern of surgical complications was the most prevalent among spine (23%) and F/A patients (30%). However, patients were most commonly unsure of risks associated with their respective subspecialty surgery (spine 56%, hip 53%, knee 4%, S/E 48% and F/A 33%). The majority of hip patients (89%) perceived a high success rate for hip surgery, whereas 65% of spine patients were unsure of the success of spine surgery. Conclusion: Patient willingness to undergo surgery was greatest for hip (84%), knee (78%) and S/E (82%) surgery and least for spine (68%) and F/A surgery (74%). Although patient willingness to consider surgery is clearly a multifactorial decision, patient perception of surgical risk or success before surgical consultation is a significant factor.
Introduction: In Canada, the wait time to see a spine surgeon is often longer than the wait for spine surgery. An effective triage strategy is essential to ensure that patients who require surgery are seen earlier and appropriate nonsurgical therapies can be initiated or enhanced for the rest. Based on the success of hip and knee care pathways, the province of Saskatchewan has developed a spine care pathway to improve the delivery of timely, appropriate, evidence-based therapy for patients with back- or leg-dominant pain. Although other groups have developed care pathways in the past, this is the first government-initiated pathway to deliver comprehensive, systematic spine care on a provincial basis. Methods: Over 2 years, meetings were held between Saskatchewan spine surgeons and a variety of other stakeholders (e.g., primary care physicians, chiropractors, health region officials, Sask Health) to develop a comprehensive provincial spine care pathway. The pathway was based on a systematic review of the assessment and treatment of spinal disorders. The final pathway was subjected to an external review. Results: The Saskatchewan Spine Pathway (SSP) has 3 major components: (1) education of primary care physicians to identify and treat major patterns of back pain and leg pain; (2) SSP clinics, staffed by specialized primary care providers, that assess and triage patients to receive further nonsurgical therapy, specialized imaging and/or referral to a spine surgeon; and (3) performance assessment by monitoring wait times, cost-effectiveness and patient-reported outcomes. Currently, courses for primary care physicians are being developed, and “implementation teams” for the 2 major health regions (Saskatoon and Regina) are being assembled to develop the SSP clinics. Conclusion: Timely, evidence-based spine care is difficult to deliver on a provincial basis. The implementation and success of this pathway may form the basis of similar programs throughout Canada.
Introduction: Screening patients for appropriate treatment is a key component of an effective hospital-based spine service. To date, a standardized and validated method for carrying out this process has not been established. In particular, studies examining who should staff these screening services, or the safety and reliability of their assessments, have not been reported. The goal of this study was to determine the interexaminer reliability of patient screening assessments by chiropractors and spine physicians. Methods: We conducted a prospective observational cohort study of 50 consecutive patients with acute lower back pain of less than 16 weeks’ duration (Quebec Task Force on Spinal Disorder class I, II) who were referred to a quaternary care hospital spine program. The interexaminer agreement for 10 physical examination procedures and 5 red flag conditions was calculated using the Cohen kappa value. Patients were assessed by 1 of 3 spine physicians and 1 of 3 chiropractors for normal or abnormal deep tendon reflexes, nerve root tension signs, lower extremity sensory/motor deficit, muscle atrophy, the Schober test and depth of lordosis. Any history suggestive of cauda equina, fracture, infection, spinal malignancy or progressive neurologic deficit was recorded. The results were compared, where applicable, with previously published kappa values for lower back examination procedures. Results: In total, 4 of the 50 patients had 1 or more red flag conditions with an interobserver reliability of 0.96; 8 of 10 physical examination procedures had a kappa value of greater than 0.9; the kappa for positive sensory deficit was 0.66 and for positive femoral nerve stretch test was 0.47. Conclusion: In this pilot study, initial patient screening assessments carried out by chiropractors and spine physicians had high interobserver reliability in 8 of the 10 examination procedures tested and were superior to previously reported multidisciplinary interobserver kappa values.
Introduction: Uncertainty around back pain management results in large volumes of patients with back-related complaints being referred to orthopedic surgeons for direction. The vast majority of these referrals are nonsurgical, leading to unacceptable wait times across Canada. This backlog delays not only those who are disabled with problems requiring surgical treatment but also those who only require direction to appropriate conservative care. Physiotherapists with advanced training in orthopedics possess skills in musculoskeletal interview and exam; orthopedic residents, on the other hand, must acquire spine-specific skills in interview and exam, interpretation of radiographic exams, surgical decision-making as well as surgical techniques in a 2- to 3-month residency rotation. We asked whether an experienced physiotherapist could become proficient in triaging surgically appropriate patients after a 2- to 3-month residency. Methods: Following a 3-month clinical residency, an experienced physiotherapist and a spine surgeon independently interviewed, physically examined and reviewed diagnostic imaging of 31 patients. It was then independently decided whether the patients were candidates for surgical treatment, required conservative management or whether further investigations were necessary to make the final determination. The level of agreement was calculated using chance-corrected agreement or kappa values. Operational definitions were reviewed, and a second group of 29 patients was assessed. Results: The initial kappa score was 0.68 (considered good clinical agreement), and the final kappa score was 0.84 (considered virtually interchangeable). Conclusion: A 3-month period can prepare an experienced orthopedic physiotherapist to triage a waiting list for surgical candidates. The therapist can add value through being better prepared to direct conservative options.
Expediting triage will facilitate the right person getting to the right intervention within a reasonable time frame. Addressing the backlog of referrals will also help identify the magnitude of surgical need.
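Both triage studies above quantify examiner agreement with chance-corrected kappa values. As a minimal illustration of that statistic (the ratings below are made up for the example, not study data):

```python
# Cohen's kappa: observed agreement corrected for the agreement expected
# by chance alone. Ratings are hypothetical (1 = abnormal, 0 = normal).
def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of cases where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal frequency for that category.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
b = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(round(cohen_kappa(a, b), 2))  # 9/10 raw agreement corrects to 0.8
```

A kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, which is why the 0.84 figure above is interpreted as near-interchangeable examiners.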
Purpose: The objective of the study was to determine the information needs of microdiscectomy patients through multiple stakeholder interviews and to develop a preliminary preoperative education tool. Methods: The following cohorts underwent focus interviews: preoperative microdiscectomy patients, postoperative microdiscectomy patients, spine surgeons, spine fellows, orthopedic surgery residents and anesthesiologists. Interviews were conducted by a moderator. Focus groups were audiotaped and transcribed for verbatim analysis using a descriptive qualitative approach. Two analysts independently reviewed each transcription. Each analyst developed a list of themes and, through comparison, established consensus on a final list of themes and subthemes. Results: Themes emerging from the data were organized according to content areas of information needs, factors influencing how physicians delivered information and patient preferences for information delivery. Physician participants identified the following information as being important to patient education: diagnosis, surgical goal, outcome, complications, recovery, postoperative restrictions and surgical date. Factors influencing the information shared by residents and fellows included the patient’s age, presence of family members, socioeconomic status, procedure complexity and patient comprehension. Working within large clinics and procedure complexity were identified as time constraints that restricted the amount of information shared. The use of audio/visual aids, an interactive website with frequently asked questions, review of imaging with patients and consistency of information were deemed necessary for optimal preoperative education. Perceived information needs of patients according to surgeons were relief of symptoms, complications, postoperative restrictions, incision size, date of operation, recovery time and length of hospital stay.
Conclusion: Our study has generated the main components of a preliminary educational tool for spine patients undergoing microdiscectomy. Future research will test the reliability and validity of our novel tool and investigate its effectiveness to improve functional outcomes, reduce anxiety and enhance patient satisfaction.
Objective: Radicular symptoms following disc herniation result from both mechanical compression and biochemical irritation of apposed neural tissue. This study tested local anticytokine treatment of established gait and behavioural changes in an animal model of disc herniation–induced radiculopathy. The selected agent was soluble tumour necrosis factor receptor II (sTNF RII), a decoy receptor that sequesters tumour necrosis factor away from the cell surface. We also evaluated the efficacy of this treatment coadministered with a depot carrier, in situ–forming chitosan, which offers the potential of injectable delivery. Methods: Surgical exposure of the L5 dorsal root ganglion (DRG) was through a hemilaminectomy and medial facetectomy. Autologous nucleus pulposus (NP) was harvested from a tail intervertebral disc. Control animals (n = 6) underwent exposure only, and experimental animals received NP placement onto the DRG with no treatment (n = 6), local delivery of sTNF RII (n = 6), local placement of chitosan (n = 6) or combined delivery of sTNF RII with chitosan (n = 6). Animals were evaluated at 1 week for mechanical allodynia by Von Frey testing, for stance symmetry by incapacitance meter measurement and for gait symmetry by digitized video analysis. Results: Animals subjected to NP stimulation exhibited substantial mechanical allodynia compared with controls, with the 50% withdrawal threshold dropping from 15 g preoperatively to 3 g postoperatively. This heightened sensitivity had the functional consequence of stance asymmetry in the injured group, which preferentially loaded the contralateral hindlimb. Both mechanical allodynia and stance asymmetry were reversed upon treatment with sTNF RII alone or in combination with chitosan. Such effects were not observed with the drug carrier alone. Conclusion: Animals subjected to noncompressive NP herniation exhibited mechanical allodynia and altered stance symmetry.
Application of local immunomodulator was observed to reverse such effects, either alone or administered with a sustained-release carrier, further implicating the role of proinflammatory cytokines in this phenotype.
Purpose: The use of triggered electromyography (EMG) to localize motor nerves during surgical procedures is becoming increasingly common. Driven in part by a move toward minimally invasive procedures and in part by presumed legal protection offered by a monitoring system, EMG is being used with the assumption that a relation exists between the size of the stimulating current and the distance of the stimulus probe from a motor nerve. Methods: We examined the motor response to electrical stimulus at various levels of current to determine what relation exists between current level and the response of muscle to stimulus. Using a lateral approach to the spine, the lumbar plexus and contributing nerve roots were dissected in live, anesthetized adult sheep. A stimulating probe was used at various fixed current levels, and the motor response was measured using mechanical (mechanomyography, MMG) sensors. Eighteen nerves in 6 adult sheep were tested at measured distances. Results: Our findings show a relation between current amplitude and measured MMG response, which follows an S-shaped curve in the low range of current (1–6 mA). Working with levels of current between 1 and 6 mA, a relation can be established between current and distance, allowing the surgeon to determine precisely how far a nerve is from the stimulus probe. Muscle response saturated at currents above 6 mA. Stimulating above that level caused no change in mechanical response. Conclusion: Since electrical resistance is highly variable depending on the conducting tissue, triggered EMG monitoring systems must use currents ranging from 30 to 200 mA. Electromyography may be unreliable in determining the distance to nerve when stimulating above 6 mA since muscle response is maximized. Mechanomyography, operating at low current levels, is a more accurate way of determining the distance from a triggered electrical source to nerves.
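The S-shaped current–response relation described above can be sketched with a logistic model; the midpoint and slope below are hypothetical illustration values, not parameters fitted to the sheep data:

```python
import math

# Hypothetical logistic (S-shaped) model of normalized muscle response
# vs. stimulus current, illustrating saturation above ~6 mA.
# midpoint and slope are assumed values, not fitted to the study.
def mmg_response(current_ma, midpoint=3.5, slope=1.5):
    return 1.0 / (1.0 + math.exp(-slope * (current_ma - midpoint)))

# Below ~6 mA the response still varies with current (and so carries
# distance information); above it, the curve is flat.
for i in (1, 3, 6, 10, 30):
    print(f"{i} mA -> {mmg_response(i):.3f}")
```

The flat region above roughly 6 mA is the saturation that makes supra-threshold stimulation uninformative about nerve distance.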
Introduction: There is little evidence to guide treatment of patients with spinal surgical site infection (SSI) who require irrigation and débridement (I&D). The purpose of this study is to externally validate a previously constructed predictive model and design a classification system to decide which patients will need single versus multiple I&D. Methods: A consecutive series of 34 patients from a tertiary spine centre (collected from 2006 to 2008) who required I&D for spinal SSI were studied. Data were obtained from a prospectively collected outcomes database. A previously constructed predictive model, based on a separate series of 128 patients, had identified the 6 most reliable predictors of the need for multiple I&D: spine location, medical comorbidities, microbiology of the SSI, presence of distant site infection (urinary tract infection or bacteremia), presence of instrumentation and bone graft type. External validation was performed by applying the model to the present series to produce predicted probabilities. Receiver operating characteristic curves were constructed, and areas under the curve (AUC) were calculated. Results: Validation analysis yielded an AUC of 0.70 (95% CI 0.51–0.89). By setting a probability cutoff of 0.24, the negative predictive value (NPV) for multiple I&D was 0.71 and the positive predictive value (PPV) was 0.85. A cutoff of 0.60 probability yielded a PPV of 0.98 and an NPV of 0.30. Conclusion: External validation showed good model performance. Excellent PPV could be obtained at higher cutoff points, and good NPV was obtained with lower probability cutoff points. This study forms the basis for an evidence-based classification system, the Postoperative Infection Treatment Score for the Spine (PITSS), which stratifies patients with SSI, based on specific spine, patient, infection and surgical factors, as being at low, indeterminate or high risk for multiple I&D.
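The PPV/NPV trade-off across the two reported cutoffs follows directly from how the model's predicted probabilities are dichotomized; a sketch with hypothetical model outputs (not the study's data):

```python
# PPV/NPV at a probability cutoff: patients whose predicted probability
# meets the cutoff are flagged as likely to need multiple I&D.
# Probabilities and outcomes below are hypothetical, not study data.
def ppv_npv(probs, outcomes, cutoff):
    pos = [y for p, y in zip(probs, outcomes) if p >= cutoff]
    neg = [y for p, y in zip(probs, outcomes) if p < cutoff]
    ppv = sum(pos) / len(pos)                 # flagged, truly multiple I&D
    npv = sum(1 - y for y in neg) / len(neg)  # not flagged, truly single I&D
    return ppv, npv

probs    = [0.10, 0.20, 0.30, 0.50, 0.70, 0.90, 0.15, 0.65, 0.80, 0.40]
outcomes = [0,    0,    0,    1,    1,    1,    0,    0,    1,    0]

# Raising the cutoff trades NPV for PPV, the pattern the abstract reports.
print(ppv_npv(probs, outcomes, 0.24))
print(ppv_npv(probs, outcomes, 0.60))
```

As in the abstract, raising the cutoff buys PPV at the cost of NPV, which is what makes a two-threshold, three-zone (low/indeterminate/high) classification attractive.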
Objective: We sought to evaluate demographics, presentation, treatment and outcomes of spinal infection in intravenous drug users. Methods: We reviewed a prospectively maintained database. Results: Over 5 years, 51 intravenous drug users (IVDUs) were treated for primary pyogenic infection of the spine: 34 were male, mean age was 43 (range 25–57) years, 23 had HIV, 43 had hepatitis C and 13 had hepatitis B. All were using cocaine, and 44 were using more than 3 recreational drugs. Thirty patients had axial pain, with a mean duration of 51 (range 3–120) days. Thirty-one patients were graded American Spinal Injury Association (ASIA) D or worse, with 8 graded ASIA A. The mean motor score was 58.6. The mean duration of neurologic symptoms was 7 (range 1–60) days. Twenty-six patients were receiving intravenous antibiotics for spinal infection. The mean temperature was 37.4°C (range 35.9°C–39.9°C, with 19 > 37.5°C), erythrocyte sedimentation rate was 60.8 (range 6–140, with 43 > 20), C-reactive protein was 87.75 (range 1.5–253, with 46 > 20) and white cell count was 10.2 (range 3.7–30.4, with 14 > 11). Thirty-three patients had positive blood cultures (19 methicillin-sensitive [MSSA] and 9 methicillin-resistant Staphylococcus aureus [MRSA]). Forty-four patients were treated surgically: 32 cervical spine, 9 thoracic and 3 lumbar. Twenty-two surgeries involved a posterior approach alone, 13 anterior only and 9 combined. The mean operative time was 263 (range 62–742) minutes. Thirteen patients required tracheostomy, 7 required early revision for hardware failure and 2 had surgical wound infections. The mean duration of antibiotic treatment was 49 (range 28–116) days. Twenty-six patients had single-agent therapy; 17 had MSSA and 17 MRSA. At discharge, 28 patients showed neurologic improvement (mean 20 ASIA points, range 1–55), 11 showed deterioration (mean 13, range 1–50) and 5 were unchanged. There were no in-hospital deaths.
At 2 years after the index admission, 13 patients were dead and none were attending the unit for follow-up. Conclusion: Primary pyogenic spinal infection in IVDUs typically presents with sepsis and acute cervical quadriplegia. Surgical management must be prompt and aggressive, with significant neurologic improvement expected in the majority of patients.
Introduction: There is currently good evidence to support the premise that the best chance for surgical cure in primary tumours of the spine is en bloc resection with disease-free margins; however, the early morbidity of these procedures raises the question of whether they are justified. The Spine Oncology Study Group (SOSG) asked, based on the rate of achievement of disease-free margins, morbidity and mortality (i.e., early efficacy of surgery), whether the Enneking principles of staging and en bloc resection of primary tumours should be applied to the spine. Methods: We undertook a formal systematic review with a search of the MEDLINE, Embase and Cochrane Database of Systematic Reviews databases. Included reports described patients with low-grade malignant spine tumours, the methods of staging and surgical resection, and their complications. Two blinded, independent reviewers used a standardized study selection worksheet. The results of the literature review were assessed by a multidisciplinary group of SOSG experts to arrive at a final grade of recommendation using the GRADE approach. Results: A total of 89 articles were identified, with 11 selected after excluding small case series and studies that included other pathologies (e.g., metastatic disease). Surgical and oncologic staging accurately predicted the attainment of wide or marginal margins in 88% of cases treated by en bloc resection. Surgical complication rates ranged from 13% to 56%, and mortality ranged from 0% to 7.7%. Conclusion: En bloc resection of primary spine tumours with disease-free margins is achievable if proper oncologic and surgical staging determines that it is feasible. The adverse event profile of these surgeries is high (even at experienced centres). Therefore, these surgeries should be performed by experienced, multidisciplinary teams (strong recommendation, low-quality evidence).
Introduction: Standardized indications and treatment for tumour-related instability are hampered by the lack of a valid and reliable classification system. The Spine Oncology Study Group (SOSG) developed the Spinal Instability Neoplastic Score (SINS), which determines an instability score and treatment recommendation by quantifying several clinical and radiologic factors. The system has been revised to simplify scoring and include consideration of bone quality and nonmechanical back pain. The objective of this study was to determine the validity and reliability of the revised SINS scoring system. Methods: Relevant clinical and radiographic data from 30 deidentified cases of spinal tumour were assessed by 24 SOSG members. Consensus opinion of SOSG members was used to categorize each case as stable, potentially unstable or unstable. On 2 separate occasions, each rater scored each case using the SINS. Each numerical score was converted to a 3-category data field, with 0–4 as stable, 5–9 as potentially unstable and 10 or greater as unstable. Interrater and intrarater reliability for overall scores were assessed by intraclass correlation (ICC). Validity and reliability of final SINS category were assessed with a kappa statistic. Results: Overall kappa for agreement between SINS category (stable, potentially unstable, unstable) and the reference standard (validity) was 0.759 (95% CI 0.714–0.803), which indicates substantial agreement. The interrater and intrarater reliability for overall score was ICC 0.846 (95% CI 0.773–0.911) and ICC 0.886 (95% CI 0.868–0.902), respectively. This represents moderate agreement. Interrater and intrarater reliability for final SINS category were κ = 0.608 (95% CI 0.579–0.637) and κ = 0.726 (95% CI 0.677–0.775). These represent moderate and substantial agreement, respectively. Conclusion: The Spinal Instability Neoplastic Score appears to be a reliable and valid classification system for spine instability due to tumour.
Real-world application of SINS will need to be evaluated in a prospective fashion.
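The conversion of numeric SINS totals into the 3-category field used for the kappa analysis is simple binning; a sketch, reading the abstract's upper bin as scores of 10 or greater:

```python
# Convert a numeric SINS total into the 3-category field used in the
# reliability analysis; bins follow the abstract (0-4 / 5-9 / >=10).
def sins_category(score):
    if score <= 4:
        return "stable"
    if score <= 9:
        return "potentially unstable"
    return "unstable"

for s in (3, 7, 14):
    print(s, sins_category(s))
```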
Purpose: The wait for surgical treatment of scoliosis is long in some countries. Long wait times may have serious consequences if the deformity increases during the wait period. This study was undertaken to determine the surgeon’s perspective on the type and magnitude of surgery required, with specific emphasis on peri- and postoperative measures, for patients with scoliosis with long wait list times (> 6 mo). Methods: Radiographs from 11 patients who had a Cobb angle greater than 50° and had waited 6 or more months for scoliosis surgery were selected from the scoliosis database. All patients had anteroposterior (AP), AP bending and lateral radiographs taken when the primary curve magnitude was 50° and at the time of preoperative planning. The radiographs and a surgical planning questionnaire were sent to 3 surgeons. The surgeons were unaware that these sets contained films of the same patients at 2 different times. Results: The mean curve progression in the 11 patients was 25° over the time on the wait list, from an average of 50° to 75°. The type of surgery the surgeon would likely perform changed from posterior instrumentation and fusion with a screw construct in all patients to anterior release and posterior instrumentation and fusion with a screw construct in 8 of the 11 patients, in at least 1 surgeon’s opinion. The mean estimated operative time increased by 2 hours. The mean estimated length of stay at the hospital increased by 1 day, and the estimated level of difficulty of surgery increased from 3/10 to 5/10. Conclusion: Waits of longer than 6 months for scoliosis surgery lead to the need for a second anterior procedure that probably would not have been necessary had the operation been performed earlier. This, in turn, increases the risks and costs.
Purpose: The use of triggered responses to locate nerves during surgical procedures is increasing rapidly. Current systems use electromyography (EMG) to measure response of muscle to stimulus. A ramping electrical signal is used to determine the minimal level at which a measurable electrical response can be elicited. A presumption is made that nerves are not located nearby if the stimulating current is above a certain threshold. Inaccuracies are inherent in this system because of the variable sensitivity of EMG sensors, variable electrical resistance in conducting tissues, electrical noise and stimulus artifacts that interfere with EMG signals. Mechanomyography (MMG) systems measure mechanical rather than electrical response of muscle to electrical stimulus and have been used widely in laboratory settings. They have been shown to eliminate all of these variables. Methods: We compared the effectiveness of an MMG system to 2 commercially available EMG systems in localizing motor nerves in surgical fields during spinal procedures. We measured the EMG and MMG responses to a ramping electrical current. Positive and negative responses were recorded along with response times. About 6000 tests were then done on 79 nerves in 15 live, anesthetized adult sheep. Results: One of the EMG systems failed to accurately locate nerves in more than 80% of nerves tested, and further use of that system was aborted. Positive agreement (100%) was found between the other EMG system and the MMG system. The MMG system had a faster response, indicating a higher sensitivity (detection of nerves at a lower threshold). This difference was statistically significant. Conclusion: We concluded that MMG is a highly reliable method of detecting the presence of motor nerves in surgical fields. It is more sensitive than triggered EMG in locating nerves.
Introduction: Various surgical methods have been described in the correction of scoliotic curves, including anterior release, pedicle screw fixation and traction. The purpose of this study was to compare the clinical results and complications associated with 4 different surgical procedures in treating large adolescent idiopathic scoliosis (AIS) curves. Methods: A retrospective chart and radiographic review of 71 consecutive AIS patients with curves greater than 75° treated at 2 Canadian centres was performed. Overall, 20 patients had posterior fusion with traction, 20 with combined anterior/posterior fusion, 19 with posterior fusion alone and 12 with thoracoscopic anterior release and posterior fusion. Following research ethics board approval, clinical outcome, radiographic correction and perioperative complications were analyzed. All patients underwent pedicle screw–based posterior constructs. Results: Patients were comparable in age and weight. Values are reported as means (and ranges). Preoperative Cobb angles measured 91.5° (77°–118°) in the traction, 86.2° (75°–108°) in the combined, 80° (75°–91°) in the posterior alone and 89.0° (82°–98°) in the thoracoscopic group. Operative time and hospital stay were longest in the combined approach at 15.4 (7.4–23.4) hours and 35.3 (7–101) days, respectively, compared with 4.8 (3.4–8.5) hours and 6.4 (5–13) days with traction, 6.0 (4.0–7.5) hours and 7 (6–8) days with posterior only and 7.7 (5.3–9.8) hours and 8.4 (7–11) days with thoracoscopic release. Transfusions were required in 83% of the anterior/posterior group, 65% of the thoracoscopic group, 42% of the posterior only group and 20% of the traction group. Curve correction was not significantly different among the groups (64.8% traction, 67.4% combined, 67.5% posterior only and 74.2% thoracoscopic release). The combined group had the highest rate of perioperative complications.
Conclusion: Anterior release increased operative time, hospital stay, transfusion rate and perioperative complications with little impact on the overall correction. Comparable corrections can be achieved with posterior pedicle screw–based constructs with or without the use of intraoperative skull–skeletal traction. We question the value of anterior release surgery in the treatment of large AIS curves.
Background: Nonsteroidal anti-inflammatory drugs (NSAIDs) are powerful analgesics, frequently used for postoperative pain control. However, concerns regarding the potential deleterious effects of NSAIDs on bone healing have compelled many physicians to avoid NSAIDs in patients with fractures, osteotomies and fusions. This meta-analysis assesses the available human studies involving NSAID exposure and bone healing. Objective: We sought to systematically review and analyze the best clinical evidence regarding the effects of NSAIDs on bone healing. Methods: We performed a literature search for studies of fracture, osteotomy or fusion patients with NSAID exposure, and nonunion as an outcome. Data on study design, patient characteristics and risk estimates were extracted. Pooled effect estimates were calculated. Results: Seven spine fusion and 4 long-bone fracture studies were included. A significant association between lower-quality studies and higher reported odds ratios (ORs) was identified. When only higher-quality studies were considered, 7 spine fusion studies were analyzed, and no statistically significant association between NSAID exposure and nonunion was identified (OR 2.2, 95% CI 0.8–6.3). No statistically significant association was found in our subanalysis of patients exposed to high-dose intravenous or intramuscular (IV/IM) ketorolac (OR 2.0, 95% CI 0.4–11.1), low-dose IV/IM ketorolac (OR 1.2, 95% CI 0.3–4.5) or standard oral NSAIDs (OR 7.1, 95% CI 0.1–520). In our subanalysis of the 4 most clinically relevant studies of adult spine fusion patients with well-defined perioperative NSAID exposure, no statistically significant association was found between NSAID exposure and risk of nonunion (OR 0.8, 95% CI 0.4–1.4). Conclusion: Studies on NSAID exposure and long-bone healing were of lesser quality than the spine fusion studies. Within the spine literature, we could not demonstrate any increased risk of nonunion with NSAID exposure.
Randomized controlled trials (and meta-analyses of such trials) on the impact of standard NSAID and COX-2 inhibitor exposure in spine and long-bone fracture, fusion and osteotomy populations are warranted to confirm or refute the findings of this meta-analysis of observational studies.
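The pooled ORs quoted above come from standard meta-analytic pooling; a minimal fixed-effect (inverse-variance) sketch on the log-OR scale, with hypothetical per-study estimates rather than the reviewed studies' data:

```python
import math

# Fixed-effect (inverse-variance) pooling of odds ratios on the log
# scale. Per-study (OR, CI_low, CI_high) tuples below are hypothetical.
def pooled_or(studies):
    weights, log_ors = [], []
    for or_, lo, hi in studies:
        # Recover the SE of log(OR) from the width of the 95% CI.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weights.append(1.0 / se ** 2)
        log_ors.append(math.log(or_))
    pooled_log = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
    return math.exp(pooled_log)

studies = [(1.8, 0.9, 3.6), (2.5, 1.0, 6.2), (1.2, 0.5, 2.9)]  # (OR, lo, hi)
print(round(pooled_or(studies), 2))
```

Random-effects pooling, which a meta-analysis of heterogeneous observational studies may use instead, adds a between-study variance term to each weight.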
Objective: To ascertain the incidence of radiculitis and radiographic complications associated with the use of bone morphogenetic protein (BMP) in minimally invasive (MIS) transforaminal lumbar interbody fusion (TLIF). Methods: We retrospectively reviewed prospectively collected data of 121 consecutive 1- and 2-level MIS-TLIFs performed by 1 surgeon at a single institution. A small kit (4.2 mg BMP) per disc level was used in the BMP group (n = 82), and local autologous bone was used in the control group (n = 39). An independent observer retrospectively reviewed the patients’ charts, radiographs and 6-month postoperative computed tomography (CT) scans. Results: There was no significant difference in the mean age, body mass index, ASA use and baseline outcomes between groups. At the 6-week follow-up, new symptoms suggestive of radiculitis were found in 5 (6.1%) versus 2 (5.1%) patients in the BMP and control groups, respectively (p = 0.65). At 6 months, this had reduced to 2 patients and none, respectively. Evidence on CT of continuous interbody bony bridging was seen in 78.4% versus 82.8% of the BMP and control groups, respectively (p = 0.8). The rest of the patients had doubtful intersegmental bony bridging. No nonunions were seen. Peridiscal osteolysis, cage subsidence and heterotopic ossification (HO) were seen in 24% versus 8% (p = 0.07), 67% versus 82% (p = 0.11) and 53% versus 25% (p = 0.008) of the BMP group versus the control group, respectively. There was no correlation between HO and radiculitis. There was a significant improvement in outcome scores at the 2-year follow-up in both groups (p < 0.0001). There was no significant difference between the 2 groups in back and leg pain at 1.5, 3, 6, 12 and 24 months and Oswestry Disability Index/SF-36 scores at 6, 12 and 24 months (p > 0.2).
Conclusion: The use of low-dose BMP in MIS-TLIF does not seem to result in an increased incidence of radiculitis, but it is associated with a greater incidence of potentially significant radiographic findings of HO and osteolysis.
Introduction: Odontoid fractures are the most common cervical spine injuries in the elderly. Although octogenarians are the fastest growing age group, limited data are available about the natural history after they sustain odontoid fractures. Published mortality rates vary greatly but are high enough to elicit comparisons to post–hip fracture mortality. It has also been suggested that halo-vest immobilization independently predicts mortality. Methods: All traumatic odontoid fractures (type II or III) seen at our institution between 1996 and 2008 were identified, and only patients who were older than 80 years of age were selected. A retrospective chart review was performed for injury characteristics, comorbidities, hospitalization details, treatment regimen and documented complications. Patients were stratified using the Charlson comorbidity index. The primary outcome was mortality at 1 year and was identified using a provincial database. Results: A total of 72 patients were identified, with a median age of 86 (range 80–102) years. Patient treatment regimens included rigid neck collar, halo vest orthosis, surgery or a combination thereof. Thirty-one percent of the cohort (22 patients) was treated by halo vest immobilization. The overall 1-year mortality rate was 15% (n = 11), with only 1 halo vest patient dying during this period. The majority of deaths (9/11) occurred in the first 2 weeks following the injury. Conclusion: The mortality rate in the octogenarian population sustaining an odontoid fracture is high and approaches the 1-year hip fracture mortality rate. The use of a halo vest was not associated with increased mortality in our study. Optimal treatment regimens and strategies to minimize morbidity necessitate further study.
Introduction: The optimal timing of decompressive surgery for acute spinal cord injury (SCI) is controversial. This study aims to characterize expert opinion regarding the optimal timing of surgical decompression of the injured spinal cord. Methods: A 20-question survey was sent to orthopedic and neurosurgical spine surgeons around the world. Response frequencies were compiled for respondent demographics and preference for timing of surgical decompression in 6 distinct clinical scenarios. We used χ2 statistics to compare response frequencies based on specialty and fellowship training. Results: A total of 971 spine surgeons responded to the survey. In almost every clinical scenario, the majority of respondents (> 80%) preferred to decompress the spinal cord within 24 hours. A complete cervical SCI would preferably be decompressed within 6 hours by 46.2% of respondents, but 72.9% would operate within 6 hours for an incomplete SCI in an otherwise identical clinical scenario. Conclusion: Early decompression (within 24 h) should be considered as part of the therapeutic management of any patient with SCI, particularly those with cervical SCI. Very early decompression (within 12 h) should be considered for patients with an incomplete cervical SCI (with the possible exception of central cord syndrome).
Introduction: To determine curve type, the Lenke classification for adolescent idiopathic scoliosis (AIS) uses strict cut-off values on radiological measurements such as Cobb angles, which are known to have significant interobserver variability. There is a documented variability in surgical treatment of AIS, yet the influence of curve types on that variability has not yet been studied. Objective: To use an automated method to classify AIS patients using radiological measurements. Our working hypothesis is that Kohonen self-organizing maps (SOM) can avoid limitations seen with classification using strict criteria. They can also highlight treatment variability depending on curve types. Methods: Preoperative Cobb angles from 1801 surgically treated AIS cases were entered into a neural network to generate an SOM onto which Lenke classes and fusion levels were transposed. Geometric validation of the map using 3-dimensional reconstruction was done, and kappa statistics were used to evaluate treatment variability. Results: Kohonen SOMs classify scoliotic spines with a distribution gradient for each of the parameters entered. Regionalization of Lenke curve types is respected when transposed onto that map. There are nodes forming transition zones between the epicentres of 2 curve types. The types of fusions were only homogeneous in single thoraco-lumbar curves. Overall, 71 3-dimensional reconstructions of scoliotic spines were mapped on the Kohonen map, showing conservation of geometrical neighbouring. Conclusion: Self-organizing maps can efficiently classify AIS while respecting neighbouring of similar scoliotic spines. There is variability in surgical treatment of AIS with all curve types, with the exception of some single thoraco-lumbar/lumbar curves, which are consistently treated with a fusion limited to the thoraco-lumbar segment.
Such classifications will allow clinicians to better query large databases and look up similar cases while eliminating the limitations imposed by classifications using rigid criteria. Ultimately, this could allow optimization of patient surgical treatment by comparing similar cases that are treated differently.
Background: With limited resources and increasing health care costs, there are increased barriers to completing spinal surgery in a timely fashion. The Medtronic Canadian Advisory Committee surveyed spinal surgeons in Canada about these barriers. Two surveys were completed. Methods: All surgeons who performed deformity surgery were surveyed. In all, 37 of 79 responded to the first survey and 39 to the second. The data were collected and collated by an independent company. Results: Seventy-eight percent had 1 or 1.5 days of surgery per week and 80% had one of those days dedicated to deformity cases; 61% stated that less than 10% of their cases were cancelled, but 31% reported 10%–25% and 8% reported 25%–50%. Nursing shortages, lack of beds and pressures to do other cases were common reasons. Sixty-three percent use neuromonitoring, and it was a barrier 8% of the time. The second survey showed that bed shortages caused 28% of the cancellations and nursing shortages caused 35%. Of responders, 84% said there was competition for intensive care unit (ICU) beds; 18% sent routine posterior instrumentation patients to the ICU. Conclusion: A significant number of spinal deformity cases are being cancelled, and as resources are cut owing to economic pressures, an even greater percentage of these will be cancelled owing to even greater competition for limited resources. This surgery is seen as low priority and cannot compete with cardiac surgery and transplantation for ICU resources. Nursing shortages are one of the biggest barriers to completing spinal deformity surgery in a timely manner. These shortages affect the ICU, operating room and ward, effectively blocking beds. Canadian spine surgeons need to advocate for increased resources to manage these patients.
Introduction: There is very little evidence to guide the treatment of patients with spinal surgical site infection (SSI) who require irrigation and débridement (I&D) with respect to need for single or multiple I&Ds. The purpose of this study was to build a predictive model that stratifies patients with spinal SSI to determine which patients will need single versus multiple I&D. Methods: A consecutive series of 128 patients from a tertiary spine centre (collected from 1999 to 2005) who required I&D for spinal SSI were studied based on data from a prospectively collected outcomes database. Over 30 variables were identified by extensive literature review as possible risk factors for SSI and were tested as possible predictors of risk for multiple I&D. Logistic regression was conducted to assess each variable’s predictability by a “bootstrap” statistical method. Logistic regression was applied using outcome of I&D, single or multiple, as the “response.” Results: Twenty-four of 128 patients required multiple I&D. Primary spine diagnosis was represented by about 25% trauma, 25% deformity, 25% degenerative and 25% oncology/inflammatory/other. Six predictors — spine location, medical comorbidities, microbiology of the SSI, presence of distant site infection (i.e., urinary tract infection [UTI] or bacteremia), presence of instrumentation and bone graft type — proved to be the most reliable predictors of need for multiple I&D. Internal validation of the predictive model yielded an area under the curve of 0.84. Conclusion: Infection factors played an important role in the need for multiple I&D. A positive methicillin-resistant Staphylococcus aureus culture and the presence of a distant site infection, such as bacteremia with or without a UTI or pneumonia, were strong predictors of the need for multiple I&D. Presence of instrumentation, location of surgery in the posterior lumbar spine and use of nonautograft bone also predicted multiple I&D.
Diabetes also proved to be the most significant medical comorbidity for multiple I&D.
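The bootstrap predictor screen and internal validation described in this abstract can be sketched in a few lines. The code below is a minimal illustration on synthetic data: the predictor count, effect sizes and sample are assumptions, not the study's dataset, and the simple gradient-ascent fit stands in for whatever software the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study data: 128 patients, 6 binary candidate
# predictors (instrumentation, distant site infection, ...). Predictor
# names and effect sizes are illustrative assumptions only.
X = rng.integers(0, 2, size=(128, 6)).astype(float)
true_beta = np.array([1.2, 0.9, 0.8, 0.7, 0.0, 0.0])
p_true = 1 / (1 + np.exp(-(X @ true_beta - 1.5)))
y = (rng.random(128) < p_true).astype(float)

def fit_logistic(X, y, iters=500, lr=0.1):
    """Plain gradient-ascent logistic regression (intercept + slopes)."""
    Xb = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        pred = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - pred) / len(y)
    return w

# Bootstrap screen: refit on resampled patients and record how often each
# predictor keeps a positive coefficient ("survives" the resample).
n_boot = 200
keep = np.zeros(X.shape[1])
for _ in range(n_boot):
    idx = rng.integers(0, len(y), len(y))
    keep += fit_logistic(X[idx], y[idx])[1:] > 0
stability = keep / n_boot

# Internal validation: apparent AUC of the full-data model, computed from
# pairwise rank comparisons (ties counted as half).
scores = np.column_stack([np.ones(len(y)), X]) @ fit_logistic(X, y)
pos, neg = scores[y == 1], scores[y == 0]
auc = ((pos[:, None] > neg[None, :]).mean()
       + 0.5 * (pos[:, None] == neg[None, :]).mean())
```

An apparent (resubstitution) AUC such as this is optimistic; the abstract's reported 0.84 presumably came from a bootstrap-corrected internal validation rather than the in-sample estimate shown here.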
Introduction: There is a common misconception that early return to work harms the back, particularly for patients with neurologic deficits. This study compares the rehabilitation outcomes of 2 distinct groups of patients with low back pain: those with objective neurologic findings (irritation or conduction tests, n = 158) and those with normal neurologic findings (n = 3103). Methods: This was a prospective observational cohort study of acute and chronic low back pain cases (n = 3261) treated nonoperatively at 49 spine care rehabilitation clinics in Ontario, Canada, and 5 clinics in New Zealand between January 2007 and August 2009. Results: The mean age of the cohort was 40.3 (SD 11.7, range 18–74) years, with 58.1% men. There were no baseline statistical differences between groups for symptom duration, numerical pain rating, sex or work status. At entry, the group with objective findings used significantly more medication, had lower perceived function and was, on average, 4 years older. At the end of treatment, the objective findings group averaged more time in treatment (15.2 d v. 12.5 d, p < 0.013) and still had significantly fewer patients who were medication free (29.6% v. 40.7%, p < 0.019) or symptom free (19.0% v. 32.0%, p = 0.001) compared with those with normal neurologic findings. However, at 3-month follow-up, there were no statistically significant differences in perceived function (% change 24.7 v. 25.4) or return-to-work rates (79.1% v. 83.0%) between groups. Conclusion: Despite slower treatment response, higher medication use and less pain reduction in those with objective findings, at 3 months, the 2 groups had comparable functional improvements and durable return-to-work rates. If the neurologic deficit does not directly interfere with job demands, return to work can be an achievable goal even for those with objective neurologic findings.
Introduction: Metabolic syndrome (MetS) has been shown to be a risk factor for chronic diseases such as cardiovascular disease, including myocardial infarction and stroke, and dementia. Moreover, the risk factors that make up MetS (central obesity, diabetes, hypertension and dyslipidemia) have also been demonstrated to have independent relations with degenerative joint disease. The relation between spinal osteoarthritis (OA) and MetS has not been studied previously. The purpose of this study was to determine whether the prevalence of severe spinal OA increases with the number of MetS risk factors. Methods: We retrospectively reviewed data from a single-surgeon, high-volume spine surgery practice between 2002 and 2007. Demographic data, including the components of MetS risk factors, were collected. Prevalent severe OA was defined as cervical or lumbar stenosis causing neurologically based symptoms, and degenerative spondylolisthesis as compared with early OA in those with lumbar and cervical spondylosis causing axial pain only. Logistic regression modelling was used to determine the odds (adjusted for age and sex) of having severe spine OA with an increasing number of MetS risk factors. Results: In our cohort of 1502 patients, there were 839 (55.9%) patients diagnosed with severe spinal OA and 663 (44.1%) patients with early OA. The overall prevalence of MetS was 30 of 1502 (2.0%), 26 of 839 (3.1%) in the severe OA group and 4 of 663 (0.6%) in the early OA group (p = 0.001). Logistic regression modelling demonstrated that those with all 4 MetS risk factors had almost 4 times greater odds of having severe spine OA as compared with those with no MetS risk factors (OR 3.9, 95% CI 1.4–11.6, p = 0.01). Conclusion: The components of MetS are more prevalent in patients with severe spinal OA causing neurologic symptoms compared with those with spondylosis causing axial pain. This finding warrants further longitudinal investigation into this association.
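The age- and sex-adjusted logistic model used in this abstract can be sketched as follows. The simulation below is a hypothetical illustration: the effect sizes, covariate distributions and the Newton-Raphson/Wald approach are assumptions for the sketch, not the study's actual data or software.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulation: each patient has a count of MetS risk factors
# (0-4) plus age and sex covariates; severe OA is the binary outcome.
n = 1502
risk = rng.integers(0, 5, n).astype(float)
age = rng.normal(60.0, 10.0, n)
sex = rng.integers(0, 2, n).astype(float)
lin = 0.35 * risk + 0.02 * (age - 60.0) + 0.1 * sex - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(float)

# Newton-Raphson fit of the adjusted logistic model, then a Wald 95% CI
# for the risk-factor coefficient, reported on the odds-ratio scale.
X = np.column_stack([np.ones(n), risk, age, sex])
w = np.zeros(4)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ w))
    H = X.T @ (X * (p * (1 - p))[:, None])  # Fisher information
    w += np.linalg.solve(H, X.T @ (y - p))
se = np.sqrt(np.diag(np.linalg.inv(H)))
or_per_factor = np.exp(w[1])  # odds ratio per additional risk factor
ci = np.exp(w[1] + np.array([-1.96, 1.96]) * se[1])
```

Exponentiating the fitted coefficient and its Wald interval is the standard way an abstract arrives at a statement like "OR 3.9, 95% CI 1.4–11.6" for a categorical exposure.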
Background: The implantation of metal cages for anterior column reconstruction in established spinal infections is highly controversial. Traditionally, autologous bone graft has been the gold standard. There are, however, a number of limitations in terms of the associated donor site morbidity, inadequate availability in lengthy defects and the potential for graft slippage and fracture. Purpose: To assess the feasibility of using titanium cages in the reconstruction of anterior column defects in tuberculous and pyogenic spondylodiscitis. Methods: Between January 1999 and December 2007, 31 patients diagnosed with spondylodiscitis were treated with single-stage anterior débridement, titanium cage placement and adjuvant posterior instrumentation. Two patients died and 3 had inadequate follow-up. The average age of the remaining 26 patients was 45.5 years. The dorsolumbar spine was predominantly involved. Nineteen (73%) patients had tuberculous spondylodiscitis and 7 (27%) had pyogenic infection. Neurological assessment was done using the Frankel score. Clinical outcome was graded by use of the MacNab criteria. At follow-up, patients were assessed for healing of disease, bony fusion, neurologic recovery and regression of pain. Results: Average follow-up was 32.5 months. Fusion was seen in 25 (96%) patients. Wound-related complications were seen in 2 (7.6%) patients; however, none had residual infection at final follow-up. Average loss of correction was 3°. Neurological recovery was seen in 10 (83.3%) patients. Good to excellent outcome was seen in 23 (88.4%) patients. Conclusion: Titanium cages with supplemental posterior instrumentation may be recommended as an alternative for defect grafting in spondylodiscitis. Their use permits immediate loading of the spine and does not hinder disease healing.
Background: Descriptive classifications for fractures of the thoracic and lumbar spine have been used as the gold standard for several decades. Unfortunately, these systems lack reliability and have little prognostic or predictive value. The recent development of the Thoracolumbar Injury Severity Score (TLISS) and the Thoracolumbar Injury Classification and Severity Score (TLICS) was intended to improve the reliability of fracture classification and to aid in guiding treatment and predicting outcomes. Objective: We sought to determine the inter- and intraobserver reliability of TLICS in a group of orthopedic and neurosurgical surgeons and trainees. Methods: Fifty-four cases of thoracic and lumbar spine fractures from the patient database of a level 1 trauma centre were presented to 5 spine surgeons, 2 spine fellows and 2 senior residents. Each participant was asked to rate the fracture using 2 classification systems (Denis and TLICS) on 2 separate occasions. Subgroup analysis included each of the 3 domains within TLICS, as well as comparisons of orthopedic and neurosurgical specialists, and trainees compared with consultant surgeons. Reliability was assessed using the Cronbach α coefficient. Results: Overall reliability (Cronbach α) was 0.611 for all TLICS evaluations and was not significantly different between the first and second evaluations of the cases. Statistically significant differences were found in the TLICS subgroup analysis in assessment of the posterior ligamentous complex and the fracture morphology. Interestingly, there was a significant difference in the classification by the Denis method between reviews. No difference was found in the scoring between trainees (fellows and senior residents) and consultant surgeons, but orthopedic surgeons scored cases higher than their neurosurgical colleagues in TLICS. Conclusion: We present an independent assessment of the TLICS in a group of spine surgeons and trainees.
Our study confirms that the TLICS is reliable but also confirms that the posterior ligamentous complex domain remains problematic.
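The Cronbach α coefficient used above to quantify agreement among raters has a compact closed form: α = k/(k−1) · (1 − Σ rater variances / variance of case totals). A minimal sketch, with entirely hypothetical ratings standing in for the study's 54 cases and 9 raters:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_cases, n_raters) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    rater_var = scores.var(axis=0, ddof=1).sum()  # sum of per-rater variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of case totals
    return k / (k - 1) * (1 - rater_var / total_var)

# Hypothetical toy data: 6 fracture cases, TLICS-style total scores from
# 3 raters. Close agreement across raters yields an alpha near 1.
ratings = [[4, 4, 5],
           [7, 6, 7],
           [2, 2, 2],
           [5, 4, 4],
           [8, 8, 7],
           [3, 3, 4]]
alpha = cronbach_alpha(ratings)
```

Values near 1 indicate strong internal consistency among raters; the study's 0.611 would correspond to only moderate agreement on this scale.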
Introduction: At the first Canadian Spine Society meeting in Banff, Alberta, a technique for controlling cervical osteotomies in ankylosing spondylitis kyphotic deformities of the cervical spine was presented in 2 patients. We present and discuss refinement of the technique and equipment over the past 10 years and its application in the treatment of 17 patients with cervicothoracic kyphosis. Methods: Patients with late stage ankylosing spondylitis who were referred for spine surgery for surgical correction of this deformity were enrolled to have osteotomies using a technique that involves: 1) general anesthetic with spinal cord monitoring (SSEP and MEP); 2) sitting position on a regular Midmark operating table; 3) a posterior cervical osteotomy as described by Urist and Simmons; 4) gradual, precise correction of the deformity using distraction devices attached to a halo ring and vest; and 5) internal fixation of the spine. Results: Seventeen patients with ankylosing spondylitis underwent this surgical correction. Average chin–brow angle before surgery was 47° (range 31°–70°) and after surgery was 7° (range 0°–20°) for an average correction of 41°. Nine patients also had a coronal tilt of the head averaging 14° (range 5°–35°), which was also corrected to 4° (range 0°–12°) for an average correction of 10° with the technique. Conclusion: This technique for correction of spinal deformity offers several advantages over other described methods. Positioning in a sitting position is relatively easy and allows for clinical assessment of head and neck position throughout the procedure. General anesthetic with cord monitoring is better tolerated by the patients than doing an awake procedure. The distraction bars maintain constant stability of the spine throughout the procedure and permit very accurate correction of deformity, even in a multiplanar fashion.
Purpose: Several clinical, imaging and therapeutic factors affecting recovery following spinal cord injury (SCI) have been described. A systematic review of the topic is still lacking. Our primary aim was to systematically review clinical factors that may predict neurologic and functional recovery following blunt traumatic SCI in adults. Such work would help to guide clinical care and direct future research. Methods: Both MEDLINE and Embase (to April 2008) were searched using index terms for various forms of SCI, paraplegia or quadriplegia/tetraplegia and functional and neurologic recovery. Our search was limited to published articles that were in English and included human participants. Article selection included class I and II evidence, blunt traumatic SCI, injury level above L1/2, baseline assessment within 72 hours of injury, use of the American Spinal Injury Association (ASIA) scoring system for clinical assessment, and functional and neurologic outcome. We located a total of 1526 and 1912 citations from MEDLINE and Embase, respectively. Two surgeons reviewed titles, abstracts and full text articles for each database. Ten articles were identified; only 1 of them was level 1 evidence. Age and sex were identified as 2 patient-related predictors of recovery following SCI. Results: Whereas motor and functional recovery decreased with increasing age for complete SCI, there was no correlation in cases of incomplete SCI. Therefore, treatment should not be restructured based on age in incomplete SCI. Among the injury-related predictors, severity of SCI was the most significant. Complete injuries correlated with increased mortality and worse neurologic and functional outcomes. Other predictors included SCI level, energy transmitted by the injury and baseline electrophysiological testing.
Background: The osteogenic effects of bone morphogenetic proteins (BMPs) on mesenchymal stem cells (MSCs) are less profound in humans compared with rodents. The mechanism for this phenomenon is unclear. This study evaluated the effects of macrophages on proliferation and BMP-2–induced osteogenic differentiation of human MSCs. Methods: We isolated MSCs from human bone marrow. Human THP-1 monocytes (an acute monocytic leukemia cell line) were induced into macrophages by phorbol myristate acetate. The conditioned media (CM) from monocytes and macrophages were collected separately. After being treated with CM from monocytes or macrophages for 5 and 7 days, the proliferation rate of human MSCs was determined by the WST-8 assay. A group without CM served as a control. Pretreated human MSCs were then induced toward osteogenic differentiation by osteoinductive medium supplemented with 0.1 μg/mL BMP-2. Expression levels of osteogenic markers were determined by real-time quantitative polymerase chain reaction. Alkaline phosphatase (ALP) activity and mineral deposition were assessed by p-NPP colorimetric kinetic assay and calcium assay, respectively. Results: The number of MSCs was significantly decreased in the group with macrophage CM at both 5 and 7 days (both p < 0.001) as compared with the control group but not in the group with monocyte CM. Expression levels of ALP and bone sialoprotein 2 in the macrophage CM group were significantly lower than those in the control group (p = 0.003 and p < 0.001, respectively). Alkaline phosphatase activity was also significantly lower in the group with macrophage CM than the control group (p < 0.001). Although the expression levels of osteocalcin and RUNX2 as well as calcium deposition in the macrophage CM group were reduced, they did not reach statistical significance. Conclusion: Macrophages suppressed the proliferation of MSCs and inhibited BMP-2–induced osteogenic differentiation of human MSCs.
In addition to known BMP antagonists, macrophages might be another important factor in suppressing the osteogenic effect of BMP-2 on human MSCs.
From observations of 1000 consecutive evaluations of patients with cervical injury from motor vehicle collisions, assessments were conducted for possible injuries at the cranio–cervical region (or fetter) and their correlation with mechanism of injury, symptoms, findings and radiological changes, including “paradoxical flexion and the ‘V’-sign,” basion–axial, basion–dens and dens–atlas intervals, computed tomography findings and magnetic resonance imaging changes. At the caudal fetter of the cervical spine, “shoulder” symptoms were correlated with symptoms, mechanism of injury, postural features and scoliosis relating to leg length discrepancy, physical findings and imaging of the region. The results may shed new light on common post–vehicular crash symptoms that previously may have appeared to lack objective evidence.
Purpose: The optimal dose effect of recombinant human bone morphogenetic protein 7 (rhBMP7) on osteogenic differentiation of human bone marrow mesenchymal stem cells (hBMMSCs) remains unclear. In this study, we explored the optimal dosage of rhBMP7. Methods: We isolated hBMMSCs by gradient centrifugation and cultured them in mesenchymal stem cell growth medium until reaching 60%–80% confluence and then cultured them in osteogenic differentiation medium. Four groups were designed and studied with a gradient concentration of rhBMP7 (0 [control], 0.01 [O+0.01], 0.1 [O+0.1] and 1.0 μg/mL [O+1.0]). Alkaline phosphatase (ALP) activity, calcium deposition and mRNA expression of 3 osteogenic marker genes were examined at 2.5 and 5 weeks for each group to assess osteogenesis. Results: Alkaline phosphatase activity at 2.5 weeks was significantly increased in groups with 0.1 and 1.0 μg/mL rhBMP7 as compared with the control group (p = 0.002), and this increase coincided with the increased expression of the mRNA level of ALP. The calcium deposition in the O+1.0 group at 2.5 weeks was also significantly higher than in all of the other groups in a dose-dependent manner (all p ≤ 0.006), and the pattern remained unchanged at 5 weeks (all p < 0.001). The expression levels of osteocalcin and osteopontin in the O+1.0 group were significantly higher than in the control and O+0.01 groups at 2.5 and 5 weeks (all p < 0.05). The calcium concentrations at 5 weeks in the O+0.1 and O+1.0 groups were significantly higher than at 2.5 weeks (p = 0.001 and p < 0.001, respectively). Conclusion: rhBMP7 can stimulate osteogenesis in a dose- and time-dependent manner, and the optimal dosage of rhBMP7 for osteogenesis of hBMMSCs identified in our study was 1.0 μg/mL.
Background: Surgical decompression is the accepted intervention for cervical spondylotic myelopathy (CSM), but who benefits and when to intervene remain difficult to predict. Although standardized, objective means of functional assessment do not exist, gait analysis shows good correlation with validated subjective scoring tools. Gait accelerometry with the Walkabout Portable Gait Monitor (WPGM) is validated and widely used at our institution. Applied in the clinical environment, it is user friendly and, unlike standard timed walking or stepping tests, provides overall assessments of the power expended in gait in each of the 3 cardinal directions as well as side-to-side comparisons of symmetry. Purpose: We assessed the gaits of CSM patients using the WPGM before and after decompressive surgery; this may allow more reliable prediction of outcomes, permitting surgeons to counsel patients with greater precision. Methods: This prospective clinical gait study (approved by the institutional ethics review board) has recruited 15 patients with a goal of 50. Patients are administered validated CSM-specific questionnaires and the SF-36 and asked to walk a 30-m corridor wearing the WPGM preoperatively, at 6 weeks and at 3, 6 and 12 months postoperatively. Gait data were analyzed against each patient's own baseline and against healthy controls using means, standard deviations, t tests and analysis of variance. Results: Preliminary findings at 6 months show a statistically significant (p < 0.05) improvement in preoperatively pathologic vertical/forward and horizontal/vertical power ratios beginning at 6 weeks postoperatively. Vertical and forward asymmetries neared statistical significance preoperatively, worsened at 6 weeks postoperatively and improved to normal by 6 months. Velocity and step length trended toward improvement after 6 weeks. Correlations with clinical questionnaire data are pending.
Conclusion: The results suggest that the gait of patients with CSM, as compared with controls, shows altered 3-dimensional power expenditure and asymmetries that normalize by 6 months after decompression.
Background: We assess 3000 new spine patients annually, of whom about 900 (30%) go on to surgery. The primary objective of this abstract is to describe the development and maintenance of a prospective data collection and storage system for all patients assessed for surgery. Database development: In 2005, options for data collection and storage methods were discussed with our local health region and the Office of Medical Bioethics. We decided to manage the collection and storage of data internally. A privacy impact assessment was submitted to the provincial privacy commissioner’s office. A pilot study was undertaken using TeleForm, a survey design and data collection software package, to test the software and study protocols. A TeleForm system was purchased and used to create all questionnaires. Data collection: Data collection began in late 2007. Patients have been recruited from practices involving 10 orthopedic surgeons and neurosurgeons. A patient-completed “consent” and “baseline questionnaire” gather medical history, demographic, occupational and generic-/disease-specific quality of life data. For all surgical patients, 3 additional questionnaires are completed. The surgeon or designate completes these 3 forms: an “initial clinical assessment,” indicating clinical diagnosis, pathology and comorbid states; a “surgical summary,” detailing the procedure performed; and an “early adverse events form,” to record within-hospital adverse events. Follow-up at 3 months postsurgery and annually thereafter includes any changes in comorbid health status, occupational role as well as quality of life measures. Protocols are in place to ensure forms are returned complete. Once collected, data are scanned, verified and stored with appropriate backup. Discussion: It has taken a number of years to get from conception to useful data collection. The current model is robust and modular.
Currently, we collect long-term data on elective surgical patients, with a goal of expanding to include spine trauma and nonsurgical patients.
Objective: We report 8 cases of severely misplaced pedicle screws in patients with adolescent idiopathic scoliosis (AIS) after posterior surgery. Methods: We reviewed 8 cases of patients with medially misplaced pedicle screws after posterior surgery for AIS. A pedicle screw was considered severely misplaced if the spinal canal intrusion was greater than the pedicle screw diameter. The percentage of spinal canal intrusion was measured from computed tomography scans. Results: In 2 patients, pedicle screw misplacement was recognized during surgery, and all implants were removed. They both had motor deficits from which one patient recovered completely. Two patients had early postoperative postural headache that disappeared after the misplaced screw was removed. Four patients had an uneventful early postoperative course. One of these developed Brown–Sequard syndrome 2 years after surgery and underwent complete implant removal. Another patient had onset of paresthesia 3 years after surgery, and complete implant removal was performed uneventfully. One patient with intact neurologic status had uneventful implant removal 9 years after surgery, and the last patient refused implant removal and remains neurologically intact 3 years after surgery. Conclusion: Improper pedicle screw placement can lead to neurologic complications that appear early or late (after 2 yr). Late neurologic complications were associated with screw loosening in 2 cases. The neurologic status can remain normal for a certain period of time even with a 61% thoracic canal intrusion. No major complication occurred with implant removal. We recommend removal of severely misplaced pedicle screws because of the risk of early or late neurologic complications, especially when there is evidence of screw loosening.
Purpose: Adolescent idiopathic scoliosis is a common condition in which the spine is rotated into a 3-dimensional deformity. This deformity can affect the lungs and chest wall, which can adversely affect pulmonary function. Surgery generally consists of a posterior spinal fusion and instrumentation to prevent further progression of the spinal deformity. During surgery, patients remain in the prone position for several hours with their body weight supported by pads under the chest, pelvis and lower extremities. As a significant portion of the patient’s body weight is on the chest, pulmonary status may be further compromised. Two different chest pad designs, 1-piece and 2-piece, are commonly used. Understanding whether one design may provide better force distribution and limit the pressure applied to the patient could improve pulmonary function and reduce the risk of pressure ulcers. Methods: Thirteen female patients (10–18 yr) with right thoracic idiopathic scoliosis ( 45°) were evaluated. A Jackson Spinal Surgery and Imaging Table (Orthopaedic Systems Inc.) was used. Pressure transducer mats (Vista Medical) were overlaid on the chest and hip supports. The average and maximum pressure and resulting force were found for each pad. Results: The 1- and 2-pad systems had average chest pressures of 27.88 (SD 5.17) kPa and 21.29 (SD 4.77) kPa (p = 0.01) and maximum chest pressures of 81.54 (SD 12.64) kPa and 56.44 (SD 14.21) kPa (p = 0.00), respectively. The 1- and 2-pad systems had average pelvis pressures of 19.03 (SD 4.01) kPa and 22.09 (SD 4.74) kPa (p = 0.04) and maximum pelvis pressures of 102.9 (SD 39.6) kPa and 136.9 (SD 60.3) kPa (p = 0.03), respectively. Conclusion: Patients experienced significantly higher average and maximum chest pressures with the 1-pad chest support system, which corresponded to significantly lower average and maximum pressures at the pelvis supports.
Background: We have previously demonstrated that bioengineered synthetic hydrogel channels can facilitate axon regeneration and functional recovery after spinal cord injury (SCI). Unfortunately, these synthetic channels underwent calcification, and with prolonged implantation, functional recovery was compromised. To circumvent this adverse reaction, we have developed biodegradable synthetic channels composed of the polymer poly(lactide-co-glycolide) (PLGA) and used superparamagnetic iron oxide (SPIO) nanoparticles to label the channel to allow noninvasive assessment of the channel over time. Methods: We assessed biocompatibility in a Sprague–Dawley rat spinal cord transection model and used magnetic resonance imaging (MRI) to visualize the channel noninvasively and serially over time. PLGA channels were constructed by dip coating PLGA onto a glass rod. The channels were then foamed with supercritical carbon dioxide. The SPIO nanoparticles were incorporated directly into the channel during the dip coating. Adult female rats then underwent complete spinal cord transection at the eighth thoracic level. The transected stumps were then inserted into either end of the SPIO channel. Controls underwent transection and implantation of a channel that did not contain SPIO. The animals underwent a survival period of 12 weeks and were assessed for mortality, morbidity and functional recovery using the Basso, Beattie and Bresnahan (BBB) locomotor scale and weekly MRI. Results: The SPIO nanoparticles provided contrast for the biodegradable PLGA channels and allowed visualization of the channel by MRI. There was no increase in morbidity or mortality, and there was no statistically significant reduction in BBB locomotor scores with the incorporation of the nanoparticles.
Conclusion: Nanoparticle incorporation into biodegradable channels for SCI can allow noninvasive assessment of the location of biomaterial and its effects on the local host tissue through the use of MRI. The use of nanoparticles can help expedite the evaluation of biodegradable biomaterials for spinal cord injury repair.
Purpose: The average orthopedic surgery wait time is 34.1 weeks in New Brunswick (Esmail 2008); thus, there is a need to find alternative treatments for pathologies such as lumbar disc herniations (LDH). The literature has demonstrated that selective nerve root injections (SNRI) are able to alleviate radicular symptoms caused by LDH (Riew 2006) and may be beneficial as an alternative to surgery. Therefore, it is necessary to determine whether SNRIs provide significant symptom resolution, obviating the need for surgery, or whether their success is transient and merely prolongs the time to surgery. We sought to evaluate the success of SNRI in patients with LDH. Methods: A retrospective chart review was used to identify 91 patients diagnosed with LDH who received SNRIs between September 2006 and July 2008. Patients referred for SNRI were those who had exhausted nonoperative measures and who wished to proceed with surgery. Patient demographics, diagnosis, Workplace Health, Safety and Compensation Commission of Newfoundland and Labrador (WHSCC) status, spinal levels and medication injected, treatment/outcome and time from referral to treatment were evaluated. SNRIs were deemed successful if they were found to obviate the need for surgery. Results: Time from referral to injection was on average 123 (SD 88) days. In total, 51 patients (22 female, 29 male) aged 45.8 (SD 10.2) years reported positive results and avoided surgery following SNRI, whereas 40 patients (16 female, 24 male) aged 43.1 (SD 12.0) years proceeded to surgery within 190 (SD 125) days after injection. Fifteen patients received multiple injections and, of these, 9 did not require surgical intervention. Age, sex and level/side of pathology did not influence the treatment outcome (p > 0.05), whereas WHSCC status and injecting physician appeared to influence outcome, although not significantly.
Conclusion: SNRIs are an important treatment tool, preventing the need for surgery in 56% of patients with LDH and easing the surgical burden on a taxed health care system.
Introduction: Cervical spondylosis is a very common degenerative condition, and surgical management is restricted to a select group of patients. Surgical indications are based on the correlation of clinical presentation with the identification of a specific lesion on imaging. Magnetic resonance imaging (MRI) studies have surpassed other modalities and help guide surgeons’ decision-making on approach, levels and type of fusion. Appreciating the cervical alignment, however, remains important and may be poorly represented by MRI alone. The Cobb angle, a surrogate for alignment, is typically calculated based on an upright radiograph. Our hypothesis is that there is sufficient difference between the MRI scans and radiographs of patients to have an impact on management. Methods: We undertook a retrospective review of 100 patients seen at 2 university centres (Université Laval and University of Toronto). The Cobb angle was used to judge alignment between C2 and C6 on both MRI scans and upright radiographs. Repeated measurements were taken to determine intraobserver differences; no significant intraobserver differences were seen. Results: Results were examined to identify whether differences existed between Cobb angles for the 2 imaging modalities. Differences were noted between the 2 groups. Scans acquired using MRI appear to overestimate the degree of curvature as measured by the Cobb angle. On average, there was a difference of 6.8° between radiograph and MRI scan measurements. Conclusion: Our findings emphasize the need for the inclusion of upright radiographs in the work-up of surgical patients. The radiograph may influence not only the approach but also the number of levels. This study will lead to further analysis of surgical cases and construct failure.