Interest in and research into disease-related biomarkers have greatly accelerated over the last 10 years. The potential clinical benefits of disease-specific biomarkers include more rapid and accurate disease diagnosis, and a potential reduction in the size and duration of clinical drug trials, which would speed up drug development. The application of biomarkers in the clinical arena of motor neuron disease should determine both whether a drug hits its proposed target and whether the drug alters the course of disease. This article will highlight the progress made in discovering suitable biomarker candidates from a variety of sources, including imaging, neurophysiology and proteomics. For biomarkers to have clinical utility, specific criteria must be satisfied. While there has been tremendous effort to discover biomarkers, very few have been translated to the clinic. The bottlenecks in the biomarker pipeline will be highlighted, as well as lessons that can be learned from other disciplines, such as oncology.
Amyotrophic lateral sclerosis (ALS) is a rapidly progressive and fatal neurodegenerative disease that affects 2.7–7.4 people per 100,000 in western countries. The disease duration is typically 2–5 years, but it can be much longer. Unfortunately, riluzole is the only licensed disease-modifying drug and it extends lifespan by only a few months. Many other clinical drug trials have failed to demonstrate any benefit. This has come at a huge cost to the pharmaceutical industry and, more importantly, has resulted in the continued suffering of those with the disease [2,3].
In recent years, improved strategies have been sought by pharmaceutical companies and academia to enhance the efficiency and speed of drug development. The US FDA critical path initiative and the European Medicines Agency (EMA) have identified that biomarkers have a crucial role in such a strategy. A biomarker is defined as any characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes or pharmacologic responses to a therapeutic intervention. Both the FDA and EMA recognize the increasingly important role of biomarkers in the drug-development process. Recently, the EMA and FDA concluded the first joint qualification process for a biomarker, resulting in the acceptance of a seven-protein panel for drug-induced renal toxicity.
Biomarker discovery efforts utilize numerous platforms including genomics, proteomics, metabolomics, imaging modalities and neurophysiology, and often search for the diagnostic utility of biomarkers. Diagnostic biomarkers would permit diagnosis at an earlier stage of disease (potentially preclinical), as well as being informative with regard to underlying causative pathological pathways. However, diagnostic biomarkers may not change during disease progression and, therefore, may be informative regarding the initiation of disease but have little merit for monitoring disease progression. For example, a specific protein or metabolite may exhibit alterations during disease initiation and remain at the same altered level throughout the course of disease. Therefore, one must remember that a diagnostic biomarker may not be useful as a surrogate marker of disease progression or a surrogate end point. However, it is possible that diagnostic biomarkers function within a biochemical pathway targeted by a specific drug therapy. In this case, the diagnostic biomarker may also be useful to monitor the ability of the drug to hit its target in the nervous system. Overall, it is likely that biomarkers of disease initiation and those of disease progression may be entirely different panels of biomarkers.
For use in clinical trials, a surrogate end point would be of great value. A surrogate end point is defined as a biomarker that is intended to substitute for a known clinical end point, such as survival in the case of motor neuron disease. A surrogate end point is expected to predict benefit (or harm, or lack of benefit) based on epidemiologic, therapeutic, pathophysiologic or other scientific evidence. Such biomarkers are also frequently used to monitor disease progression in response to therapy.
Clinical end points, such as survival, have routinely been used; however, this can be impractical owing to the relatively long time periods required for the end points to be achieved. Surrogate end points that can be reliably substituted for clinical end points will improve the efficiency of clinical trials. For example, serum cholesterol level served as a biomarker for the evaluation of 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors to diminish the risk of coronary artery disease. However, there are limitations that must be taken into consideration, as will be discussed.
The successful use of protein-based biomarkers as surrogate end points within clinical trials for ALS has yet to be accomplished, although a recent Phase I clinical trial of memantine utilized cytoskeletal proteins as biomarkers of drug effect. Additional clinical utilities are evident for ALS-specific biomarkers. For clinical research studies and trials, ALS diagnosis is made using the Airlie House modification of the El Escorial criteria with the Awaji electromyographic algorithm, which defines clinical and electromyographic data as equivalent in formulating the diagnosis. Clinically, evidence of upper and lower motor neuron involvement in three different regions along the neuroaxis, and an absence of sensory changes that cannot otherwise be explained, is required together with evidence of clinical progression [7-9]. These criteria are mainly used for research purposes and are not yet mandatory in the clinical setting. Even without the use of such criteria, diagnosis of motor neuron disease typically takes 12–14 months from symptom onset. Introduction of drugs at this stage of disease progression may hinder their efficacy and effectively limit the success of ALS clinical trials. Therefore, biomarkers that can rapidly diagnose ALS at an early stage would be of immense value in resolving this fundamental issue. As we move into an era of personalized medicine, we must also recognize that ALS is a heterogeneous neurodegenerative condition, and biomarkers may help stratify patients into populations such as fast or slow progressors. Identification of patient subpopulations may inform clinical trial inclusion criteria that optimize clinical trial design and enhance the potential success of the trial. Therapeutic drug levels can vary widely between patients, so biomarkers may be used to assess safety, efficacy and exposure–effect relationships.
Surrogate biomarkers can be developed to monitor drug effects or disease progression. Surrogate markers that measure a drug's interaction with its intended target (e.g., via receptor occupancy) would greatly assist in successfully moving a drug through clinical development. In addition, surrogate biomarkers of disease progression would allow the objective measurement of secondary end points for drug efficacy rather than relying solely on survival as the outcome measure. The current, most widely used measure to monitor disease progression is the revised ALS functional rating scale (ALS-FRS). This quality-of-life scale has been validated as a good measure of clinical disease progression. However, its sensitivity at the early stages remains low, and overt clinical scales provide no insight into the biological mechanisms of disease or disease progression. Other clinical measures typically used as outcomes include indices of limb strength (i.e., manual muscle testing and maximal voluntary isometric contraction) and respiratory function (i.e., forced vital capacity). Therefore, many investigators have searched for biochemical biomarkers of ALS disease progression to provide insight into the mechanisms of disease.
Many reviews have reported biomarkers for diagnostic purposes but this article will focus on the other aspects, highlighted previously, where biomarkers can play a role in clinical trials. We will not provide an exhaustive list of biomarkers but, rather, have sought examples to demonstrate the principles. For a more complete list, we would recommend recent reviews on this topic [11,12]. The challenges facing this field along with lessons to be gained from other areas such as oncology will also be discussed.
Figure 1 lists various technologic platforms used to discover and validate biomarkers for motor neuron disease. Each of these approaches will be discussed in this section.
Advantages of imaging modalities include the noninvasive and radiation-free mode of examination. However, the often rapid disease progression, together with the fact that ALS patients typically have difficulty lying flat owing to bulbar and respiratory problems, makes this approach difficult for longitudinal studies. Imaging is also currently restricted to focusing on neuronal integrity and loss of motor neurons or corticospinal tracts; it does not capture many of the biological causes of ALS at the subcellular level. While correlating imaging results with clinical data remains challenging, identification of an imaging modality that detects changes before overt clinical symptom change will be important if imaging is to be useful for monitoring disease progression or patient responses in clinical trials. Further improvement in the resolution of this technology is likely to see a greater role for these imaging methods in longitudinal studies.
Waragai and colleagues reported that the high T2 signal became less marked with disease progression in one ALS patient, possibly reflecting accumulating corticospinal tract gliosis during disease progression. Peretti-Viton and colleagues demonstrated that primary lateral sclerosis patients had decreasing T2 hyperintensity over time.
Proton magnetic resonance spectroscopy was introduced to detect subtle neurodegenerative changes not apparent on a conventional MRI scan (an overview of this technique is provided by Prost). Proton magnetic resonance spectroscopy of the brain detects three major metabolites: N-acetyl aspartate (NAA), choline-containing compounds (Cho) and creatine/phosphocreatine (Cr). A multivoxel approach allows cleaner results to be obtained compared with the previous single-voxel method. NAA is predominantly present in the motor neurons in the brain and, therefore, changes in this metabolite are likely to reflect a loss or dysfunction of neurons. NAA:Cho or NAA:Cr ratios are often used to minimize the interindividual differences that occur when measuring absolute concentrations; Cr is thought to represent gliosis, whereas Cho is thought to be a marker associated with membrane phospholipids. Wang and colleagues reported a moderate correlation between NAA:Cr ratios and disease severity. The NAA:Cr ratio could also be used to predict severity. However, Pohl and colleagues argue that NAA:Cho is better at quantifying upper motor neuron (UMN) degeneration than NAA:Cr, as Cr levels fall in ALS, reducing the sensitivity of the latter measure. Pohl and colleagues followed 16 patients longitudinally for 1 year and demonstrated a significant decrease in the NAA:Cho ratio but a nonsignificant decrease in NAA:Cr. Importantly, there was considerable interindividual variability in the onset and slope of NAA:Cho deterioration. A moderate correlation was seen between the decline of these metabolic alterations and clinical disease progression. Correlation with the ALS-FRS is hindered by the fact that this rating scale does not distinguish between UMN and lower motor neuron dysfunction.
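The sensitivity argument above is simple arithmetic: if Cr falls alongside NAA, the NAA:Cr ratio understates the true NAA decline, whereas NAA:Cho (with stable Cho) tracks it directly. A minimal sketch illustrates this; the metabolite values are hypothetical, not taken from any of the cited studies:

```python
def metabolite_ratios(naa, cho, cr):
    """Return the NAA:Cho and NAA:Cr ratios used in proton MRS studies."""
    return naa / cho, naa / cr

# Hypothetical institutional-unit values: NAA falls by 20% between scans,
# Cho is stable, and Cr also falls by 10%, shrinking the denominator.
naa_cho_0, naa_cr_0 = metabolite_ratios(naa=10.0, cho=2.0, cr=8.0)  # baseline
naa_cho_1, naa_cr_1 = metabolite_ratios(naa=8.0, cho=2.0, cr=7.2)   # follow-up

drop_cho = 1 - naa_cho_1 / naa_cho_0  # full 20% NAA decline is visible
drop_cr = 1 - naa_cr_1 / naa_cr_0     # ~11%: decline partly masked by falling Cr
```

With these numbers the NAA:Cho ratio registers the entire 20% NAA loss, while NAA:Cr registers only about half of it, which is the essence of Pohl and colleagues' argument.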
Nevertheless, Suhy and colleagues reported longitudinal decreases in NAA, Cr and Cho levels in the ALS motor cortex but not in the non-motor areas of the brain, suggesting that Cr and Cho levels may rise in early stages of the disease but subsequently decrease over time. A recent study by Mitsumoto et al. also demonstrated reduced levels of NAA and NAA:Cr in ALS patients when compared with control subjects, but little difference between ALS and disease mimics such as primary lateral sclerosis and progressive muscular atrophy. Over time, the rate of change in NAA and NAA:Cr in ALS patients was not significant when compared with baseline, whereas clinical measures and neurophysiologic measures (see later) significantly decreased during the same time frame. These results suggest that imaging biomarkers may change slowly over time and may be less sensitive measures of disease progression than neurophysiologic or biochemical biomarkers.
Magnetic resonance spectroscopy has been used as a surrogate marker in clinical drug trials. For example, an increased NAA:Cr ratio was detected in response to riluzole treatment, suggesting an improvement in UMN integrity. In another study, Kalra and colleagues used magnetic resonance spectroscopy to monitor the effect of gabapentin on perirolandic cortical neuronal integrity but did not find any significant drug effect.
Diffusion tensor imaging (DTI) is a useful tool for measuring UMN dysfunction in vivo; an overview of the technique is provided by Rovaris and colleagues. The key measurements are fractional anisotropy and mean diffusivity. Diffusion of water molecules is normally anisotropic in white-matter tracts, since axonal membranes and myelin sheaths provide barriers to the motion of water molecules. Fractional anisotropy is useful in assessing myelination. Mean diffusivity is a measure of the magnitude of diffusion and, therefore, can reflect axonal degeneration. However, a region-of-interest approach is often used, in which contamination from noncorticospinal tracts is difficult to avoid. Ellis and colleagues demonstrated a correlation between fractional anisotropy and measures of disease severity, speed of disease progression and UMN involvement. Mean diffusivity correlated with disease duration. Taken together, this suggests fractional anisotropy may be affected early in disease, whereas elevation in mean diffusivity reflects more chronic changes with loss of neurons. Recently, Stanton and colleagues used DTI to demonstrate that fractional anisotropy correlated with clinical measures of severity and UMN involvement. This correlation was further confirmed by Nickerson and colleagues, who demonstrated that fractional anisotropy was significantly decreased after 9 months in two ALS cases compared with age-matched normal controls.
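Both DTI measures are scalar summaries of the diffusion tensor's three eigenvalues, and the standard formulas can be sketched as follows. This is a generic illustration of the definitions, not the processing pipeline of any cited study, and the eigenvalues used are illustrative:

```python
import math

def mean_diffusivity(l1, l2, l3):
    """Mean diffusivity (MD): the average of the tensor eigenvalues."""
    return (l1 + l2 + l3) / 3.0

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy (FA): 0 for fully isotropic diffusion,
    approaching 1 when diffusion is confined to a single axis, as along
    an intact, myelinated white-matter tract."""
    md = mean_diffusivity(l1, l2, l3)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den) if den else 0.0

# Isotropic diffusion (e.g., in CSF): FA = 0.
fa_iso = fractional_anisotropy(1.0, 1.0, 1.0)
# Strongly directional diffusion (illustrative healthy tract): FA near 0.8.
fa_tract = fractional_anisotropy(1.7, 0.3, 0.3)
```

Axonal degeneration removes diffusion barriers, so the eigenvalues become more similar (FA falls) while their average rises (MD increases), which matches the early-FA/late-MD pattern described by Ellis and colleagues.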
Bartels and colleagues investigated the frequency and functional relevance of corpus callosum degeneration in ALS. DTI was used to measure fractional anisotropy in the corpus callosum area containing the crossed motor fibers and in the pyramidal tracts. Patients performed a contralateral co-movement test that reflected corpus callosum degeneration. Fractional anisotropy was found to correlate with the contralateral co-movement test and the ALS-FRS as a measure of disease progression. Therefore, the contralateral co-movement tests together with the DTI result provide a quantitative measurement independent of ALS-FRS that could be used in future clinical trials. Iwata and colleagues demonstrated that fractional anisotropy reflected functional abnormality of the intracranial corticospinal tracts in ALS but with no correlation with ALS-FRS. DTI has additional technical limitations that can be overcome by diffusion-spectrum imaging. Diffusion-spectrum imaging can provide more accurate measurements than DTI by improving sensitivity and specificity to detect white-matter degradation in ALS. Hagmann and colleagues have utilized this method to map the structural core of the human cerebral cortex.
Voxel-based morphometry is a fully automated, operator-independent, whole-brain image-analysis technique that allows a voxel-wise comparison of segmented gray- and white-matter images between two groups of subjects. An overview of the technique is provided by Ashburner and Friston. Grosskreutz and colleagues demonstrated atrophy in the motor cortex, extending to primary sensory areas, in mildly affected ALS patients, but this did not correlate with clinical variables. Regional atrophy of the right dorsolateral prefrontal cortex did correlate with ALS-FRS. This was also the case in cognitively unimpaired patients and may reflect a structural change before clinical signs appear. Longitudinal studies using voxel-based morphometry are required to further evaluate its utility in clinical drug trials.
Arterial-spin labeling MRI is a relatively new imaging modality that can measure brain perfusion using magnetically labeled endogenous arterial-blood water; an overview of this technique is provided by Detre and colleagues. Rashid and colleagues used continuous arterial-spin labeling MRI in patients diagnosed with multiple sclerosis. They found evidence of reduced perfusion in the gray matter, reflecting decreased metabolism secondary to neuronal and axonal loss. Increased perfusion was seen in the white matter, reflecting increased metabolic activity suggestive of increased cellularity and inflammation. Longitudinal studies are required to assess its potential as a surrogate marker for ALS.
Functional MRI allows us to visualize the brain in its dynamic state. There is an increased brain response during the performance of motor tasks in ALS compared with controls, suggesting a compensatory response to overcome neuronal and functional loss. Lulé and colleagues aimed to study the longitudinal effect of progressive motor neuron degeneration on cortical representation of motor imagery and function in ALS. ALS patients exhibited a stronger response in the premotor areas and primary motor areas in the early stages of disease. After 6 months, an additional response was seen in the precentral gyrus and the frontoparietal areas. Correlations of ALS-FRS total score and subscores with decreased perfusion have been found using SPECT, and differences between sporadic and D90A superoxide dismutase (SOD1) cases were found when correlating ALS-FRS with 11C-flumazenil PET.
Magnetoencephalography (MEG; electroencephalography combined with regional activity localized using MRI) has been used by Boyajian and colleagues. MEG demonstrated slow-wave dipole sources in all ALS cases studied, which, by comparison, were absent in all controls. Slow-wave dipole density in the right cingulate gyrus was positively associated with upper-extremity functional disability. Slow-wave dipole density in the parietal lobes appeared to correlate with disease duration. Although MEG has the advantage of not requiring the patient to lie flat (the subject sits during the procedure), it still requires MRI to accurately map the MEG signal to brain structures. An overview of the techniques for functional MRI and MEG is provided by Babiloni and colleagues.
Muscle wasting and fasciculations are clinical features of ALS. It is, therefore, reasonable to seek biomarkers using neurophysiological measures. UMN markers include measuring the development of cortical hyperexcitability using transcranial magnetic stimulation. Specifically, there is a reduction in short-interval intracortical inhibition (SICI) associated with an increase in intracortical facilitation and cortical stimulus-response gradient, indicative of cortical hyperexcitability. Vucic and colleagues used transcranial magnetic stimulation to longitudinally monitor cortical hyperexcitability in three presymptomatic patients who were carriers of the SOD1 mutation. A reduction in SICI was found to occur before the onset of symptoms. Further validation studies using a much larger subject cohort are required to verify these initial findings. Early alterations in SICI within this subject population (carriers of SOD1 mutations) could be a useful diagnostic measure and permit early therapeutic intervention. However, the authors note that SICI is found to be reduced in other neurological and psychiatric disorders; thus, SICI reduction may reflect a compensatory response related to brain injury or 'bystander' phenomena. Therefore, the specificity of SICI reductions may be low and, thus, not a useful early diagnostic for the general population.
Motor-unit number estimation (MUNE) has emerged as an important area of research as a surrogate marker that reflects lower motor neuron loss. MUNE provides a numerical value reflecting the number of functioning motor units in the muscle. The major limitations of MUNE include the active denervation of ALS, the absence of a gold standard and operator dependence, making it difficult to estimate numbers. Shefner and colleagues reported that rigorous training can lead to reliable testing and data monitoring. In addition, a reduction in MUNE by 23% over 6 months was demonstrated during a multicenter clinical trial for creatine. Aggarwal and Nicholson carried out a longitudinal study of 19 patients who were SOD1 mutation carriers with no neurological symptoms or signs, and followed them for up to 3 years. Motor neuron loss was detected in two of the 19 SOD1 mutation carriers before the onset of clinical symptoms. A recent longitudinal study of 73 ALS patients indicated that MUNE may be useful to stratify ALS patients as rapid or slow progressors. The ability to stratify ALS patients into different progression rates would be useful for the design of future clinical trials.
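Conceptually, MUNE divides the whole-muscle electrical response by the average contribution of a single motor unit. The arithmetic behind the commonly used incremental-stimulation variant can be sketched as follows; the values are hypothetical and this is a conceptual illustration, not a clinical implementation:

```python
def mune(max_cmap_uv, smup_amplitudes_uv):
    """Motor-unit number estimate: maximal compound muscle action potential
    (CMAP) amplitude divided by the mean single motor-unit potential (SMUP)
    amplitude, with both expressed in the same units (microvolts here)."""
    mean_smup = sum(smup_amplitudes_uv) / len(smup_amplitudes_uv)
    return max_cmap_uv / mean_smup

# Hypothetical study: 10 mV (10,000 uV) maximal CMAP, and a sample of
# SMUPs averaging 50 uV, giving an estimate of 200 motor units.
estimate = mune(10_000.0, [40.0, 50.0, 60.0])  # -> 200.0
```

This also shows why the estimate is sensitive to reinnervation: as surviving units sprout and the mean SMUP amplitude grows, the denominator rises and the estimate falls even when the maximal CMAP is partly preserved.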
A neurophysiological index (NI), derived from the compound muscle action potential amplitude, distal motor latency and F-wave frequency, has been demonstrated to be sensitive and reproducible. Interestingly, it is able to differentiate between rapid and slow disease progression. The advantage of these measurements is that they can be made in typical neurophysiological laboratories. De Carvalho and Swash demonstrated that NI and MUNE show a larger change than clinical measures, including ALS-FRS, when assessing ALS patients longitudinally at 6-month intervals over 1 year. In fact, changes in NI and MUNE were highly correlated, but the former has the advantages of objectivity and simplicity, and does not require special methods. Unlike MUNE, NI is sensitive to reinnervation and F-wave excitability.
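As usually reported, the index combines the three routine measurements multiplicatively, so each component of lower motor neuron dysfunction pushes it in the same direction. A sketch of that arithmetic, with illustrative values (the exact formula should be taken from the primary reports rather than this sketch):

```python
def neurophysiological_index(cmap_mv, distal_latency_ms, f_wave_freq_pct):
    """Neurophysiological index as commonly described:
    (CMAP amplitude / distal motor latency) x F-wave frequency (%) / 100."""
    return (cmap_mv / distal_latency_ms) * (f_wave_freq_pct / 100.0)

# Illustrative values only: with progression, CMAP amplitude and F-wave
# frequency fall while distal latency lengthens, so all three changes
# lower the index.
ni_early = neurophysiological_index(8.0, 3.2, 80.0)  # 2.0
ni_late = neurophysiological_index(4.0, 4.0, 40.0)   # 0.4
```

Because the three inputs come from a standard nerve-conduction study, the index needs no special equipment, which is the practical advantage noted above.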
A change in electrical-impedance myography has been demonstrated to be a candidate biomarker for ALS disease progression and severity. During impedance myography, surface electrodes are placed over the muscle and a low-intensity, high-frequency current is applied through the electrodes to measure the underlying muscle impedance. Muscle atrophy that occurs in motor neuron disease will result in increased current resistance. This technique is currently being tested as a diagnostic tool and, in the future, may contribute to monitoring disease progression or modulation of muscle atrophy by therapeutic intervention.
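In impedance terms, the measurement resolves the muscle's opposition to the applied current into a resistive component (R) and a reactive component (X), from which an impedance magnitude and phase angle are derived; the phase angle is one commonly reported EIM summary. A generic sketch of that calculation, with hypothetical readings rather than any device's calibration:

```python
import math

def eim_parameters(resistance_ohm, reactance_ohm):
    """Return impedance magnitude |Z| (ohms) and phase angle (degrees)
    from the resistive and reactive components measured over the muscle."""
    magnitude = math.hypot(resistance_ohm, reactance_ohm)
    phase_deg = math.degrees(math.atan2(reactance_ohm, resistance_ohm))
    return magnitude, phase_deg

# Hypothetical readings at one frequency: atrophic muscle shows higher
# resistance and lower reactance, hence a smaller phase angle.
z_healthy, phase_healthy = eim_parameters(40.0, 12.0)
z_atrophic, phase_atrophic = eim_parameters(60.0, 8.0)
```

The comparison of the two hypothetical electrode readings illustrates the direction of change described in the text: atrophy raises resistance and shrinks the phase angle.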
Nogo-A is a protein involved in preventing neurite outgrowth and nerve regeneration in the CNS. The detection of Nogo-A in muscle biopsies has been demonstrated to predict ALS disease progression with high accuracy. Importantly, Nogo-A can be detected in clinically and electrophysiologically unaffected areas, thereby permitting the initiation of potentially disease-modifying therapies at an earlier stage. However, this test is invasive and this limits its potential as a biomarker assay of disease progression. In addition, follow-up studies have not yet verified the initial findings regarding Nogo-A as a biomarker for ALS.
Current therapies target motor neuron survival and function. Peripheral measures of motor units or muscle mass may be insufficiently sensitive as biomarkers for drugs that enhance central motor neuron survival and, thus, may increase patient survival. Future clinical trials that incorporate stem cell therapies may generate increased motor-unit activity or increase muscle mass such that these neurophysiologic markers may become useful surrogate markers of these therapies.
The study of protein-based biomarkers that may be useful to monitor disease progression or drug efficacy in clinical trials has focused on specific body fluids, including blood, cerebrospinal fluid (CSF) and urine. Since CSF is within the CNS and contains proteins and peptides released by cells prior to elimination from the CNS, it can be expected that changes in CSF protein levels will reflect any underlying neurological disorder. A major disadvantage of using CSF to monitor ALS disease progression or drug efficacy in a clinical trial is the invasive nature of the procedure and the increased potential for adverse effects compared with peripheral blood sampling. Nevertheless, a few longitudinal human studies have measured CSF proteins. Grundstrom and colleagues demonstrated that glial-derived neurotrophic factor is elevated in ALS compared with controls, but no statistically significant differences were seen in the six patients who had CSF taken before and 3 weeks after taking riluzole. A recent report on a Phase I clinical trial of memantine in 20 ALS patients measured total Tau and phosphorylated neurofilament heavy chain (pNFH) prior to and during 12-month treatment with a combination of riluzole and memantine. It demonstrated that ALS patients who exhibited the greatest decline in the rate of disease progression, as measured by ALS-FRS, also exhibited the greatest reduction in CSF Tau and pNFH levels. This suggests that levels of CSF protein may provide a measure of drug efficacy in future memantine clinical trials. Cyclooxygenase-2 has been demonstrated to be elevated in the SOD1 transgenic mouse models and is thought to produce prostaglandins that trigger neuronal and astrocytic release of glutamate. A double-blind controlled study of celecoxib in ALS, designed by the Northeast ALS (NEALS) consortium, was carried out using prostaglandin as a secondary end point in the clinical trial.
No beneficial effect was detected for celecoxib and no change was detected in CSF levels of prostaglandin; however, ALS patients had failed to exhibit increased levels of CSF prostaglandin prior to treatment. This result highlights the importance of successful verification and validation of a biomarker prior to its use in clinical trials. Finally, a small clinical trial of edaravone, a free-radical scavenger, utilized CSF levels of 3-nitrotyrosine as a secondary end-point biomarker for oxidative stress. The results demonstrated a slightly reduced rate of decline in ALS-FRS, which corresponded to a reduced level of 3-nitrotyrosine in the CSF. While 3-nitrotyrosine is not a protein biomarker but rather a metabolic marker of reactive nitrogen species acting on protein, we wished to highlight its use as a secondary end point in a small clinical trial for ALS.
Other proteins have also been correlated with clinical parameters or measures of disease progression. VEGF has been linked to ALS. The VEGF gene is stimulated by hypoxia through the binding of hypoxia-inducible factor to a hypoxia-responsive element in the promoter region. Moreau and colleagues found a positive correlation between VEGF levels and hypoxemia specifically in ALS patients, with a reverse correlation in hypoxemic neurological controls. Erythropoietin has a specific receptor in the brain and is synthesized in astrocytes as well as neurons. Erythropoietin has been demonstrated to act as a neurotrophic factor supporting differentiation and regeneration of neurons. Brettschneider and colleagues demonstrated that low erythropoietin levels were associated with rapid progression of disease. VGF, a nerve growth factor-inducible peptide, has been demonstrated to distinguish ALS from controls, which is thought to reflect abnormal metabolism. Zhao and colleagues demonstrated that levels of full-length VGF in the CSF of ALS patients decrease as a function of progression of muscle weakness, characterized by an increasing number of weak muscles on manual muscle testing. Tanaka and colleagues demonstrated that intrathecal macrophage chemoattractant protein-1 was negatively correlated with ALS-FRS and positively correlated with disease progression rate. More recently, Kuhle and colleagues also demonstrated higher levels of macrophage chemoattractant protein-1 in those who are known to have a shorter survival. Substance P is a neurotrophic factor that has been reported by Matsuishi and colleagues to be elevated in the CSF, particularly where the disease duration was less than 2.5 years. Fujita and colleagues investigated transglutaminase activity and demonstrated that levels were higher than controls in the early stages of disease and then lower in the later stages.
Markers of axonal degeneration, including neurofilament and Tau, have been investigated as potential markers of disease progression. Brettschneider and colleagues demonstrated that pNFH levels were fivefold higher in the CSF of ALS patients compared with controls, and that high levels of pNFH are associated with more rapid disease progression. In addition, pNFH levels were higher in those with predominantly UMN symptoms. Zetterberg and colleagues demonstrated that neurofilament light-chain levels were eightfold higher than disease controls and 12-fold higher than healthy controls. A high level of neurofilament light chain was found to be associated with more rapid progression to death. Interestingly, neurofilament light-chain levels were lower in patients with SOD1 mutations, underlining the heterogeneity of ALS, with different underlying pathophysiological pathways being involved, and the need for stratification.
Blood-based biomarkers are based on the assumption that changes in the CNS will be relayed to the systemic circulation. Muscle-derived biomarkers may also accumulate in the peripheral blood during the course of disease. An advantage of blood-based biomarkers is the relative ease of sample collection and the ability to enroll subjects in longitudinal studies with serial blood draws. Ono and colleagues investigated components of the connective tissue and reported that plasma levels of fibronectin demonstrated a negative correlation with duration of disease. In addition, hyaluronic acid was demonstrated to be significantly increased in ALS and positively correlated with disease duration. Cytokines are believed to play a role in neurodegenerative diseases and numerous studies have examined levels in motor neuron disease patients. Ono and colleagues reported elevated levels of IL-6 in serum that correlated with disease. Shi and colleagues measured the intracellular production of cytokines from T cells and found that CD4+IL-13+ T-cell percentages demonstrated a significant negative correlation with revised ALS-FRS scores and a significant positive correlation with disease-progression rate. Growth factors have also been investigated in ALS. It had been reported that TGF-β1 caused reactive astrogliosis in the wobbler mouse, a rodent model of ALS. Houi and colleagues reported that plasma TGF-β1 levels were raised and positively correlated with the duration of disease. Hosback and colleagues demonstrated that IGF plasma levels were raised in ALS patients and were more prominent in those with long survival times.
Bogdanov and colleagues demonstrated that 8-oxo-deoxyguanosine, a marker of DNA damage and repair, was elevated in the CSF, plasma and urine of sporadic ALS patients, supporting the oxidative stress hypothesis. This was further confirmed by Mitsumoto and colleagues.
Several studies have reported a link between collagen abnormalities of the skin and ALS. Degradation of collagen can be detected by an increase in urinary excretion of the collagen metabolites glucosylgalactosyl hydroxylysine (Glu–Gal–Hyl) and galactosyl hydroxylysine (Gal–Hyl). Glu–Gal–Hyl is decreased in ALS and this decrease becomes more pronounced throughout the duration of ALS. A summary of these examples is provided in Table 1.
Despite these initial discoveries, large validation studies are required to verify the utility of any of these protein-based biomarkers to monitor disease progression. The main limitation of the studies discussed previously is the small number of subjects typically included in the discovery experiments. While correlations have been found between biomarkers and disease severity and duration, longitudinal studies using a large patient cohort are lacking. Future targets for longitudinal biomarkers should include pathways mechanistically linked to ALS. Recently, mutations in TAR DNA-binding protein (TDP)-43 and the RNA-binding protein FUS have been discovered in familial ALS [71-73]. Both are RNA-binding proteins that likely regulate RNA processing from transcription through to translation, thereby linking this biochemical pathway to ALS. Recent studies have demonstrated elevated TDP-43 levels in ALS [74,75]. To date, the pathology associated with TDP-43 mutations has not been found in familial SOD1-positive patients, and FUS mutations have not been detected in sporadic ALS [72,73]. Therefore, it is plausible that different biomarker(s) may be applicable to different subgroups of ALS patients. This would further compound the potential heterogeneous nature of ALS.
For clinical research studies, ALS is currently diagnosed using the El Escorial criteria together with the revised version of the Airlie House criteria . These criteria do not have prognostic value and were not designed for such purposes. However, these clinical criteria are used for entry into clinical drug trials, with the exception of atypical cases [8,9]. While diagnosis is a clinical matter, we must recognize that ALS encompasses a variety of etiological factors, including both genetic and environmental. For the design of clinical trials and biomarker discovery studies, the heterogeneity of ALS poses a significant challenge. Inclusion of a divergent patient population may result in masking drug effects for a particular subgroup.
Traditionally, ALS has been classified according to the type of motor neuron (upper or lower) and the region of the neuroaxis affected (i.e., bulbar vs limb onset). Over the past 5 years, various genetic mutations have been discovered in addition to SOD1. It was initially felt that these mutations were associated with particular clinical syndromes. However, it soon became apparent that each mutation displayed phenotypic heterogeneity that also overlapped with other motor neuron syndromes. Phenotypic differences have even been observed in those harboring SOD1 mutations, in which motor and sensory neurons, as well as spinocerebellar and corticospinal tracts, can be variably affected. This concept is known as the ‘divergent clinical spectrum–convergent molecular etiology’. Mechanisms that have been postulated in ALS include defects in the neuronal cytoskeleton, abnormalities in the molecular motors of axonal transport, disruption of endosomal trafficking, mitochondrial toxicity, defects in RNA machinery and an abnormal motor neuron cell environment.
Dissecting out these subgroups at the molecular level is an important but challenging task. It is plausible that interactions between the various pathways lead to inconsistent patterns of motor neuron degeneration, further confounding our ability to identify subgroups with a more homogeneous presentation. Gordon and colleagues attempted to initiate this process using a strategy of identifying an extreme and readily identified phenotype, primary lateral sclerosis. Similarly, other categories can be dissected out on a prognostic basis, including more aggressive forms, indolent forms, and focal or monomelic forms, such as brachial amyotrophic diplegia (flail arm) and pseudopolyneuritic forms (flail leg), which have been demonstrated to have a better prognosis than classical ALS.
Lessons can be learned from other heterogeneous diseases on how to subclassify ALS. Post and colleagues used a cluster analysis to identify Parkinson's disease subgroups and then validated these patient groups in terms of their level of impairment, disability, perceived quality of life and use of dopaminergic therapy. Three distinct subgroups were found using this method. The authors suggest only including variables that are biologically plausible in relation to the disease studied. The main advantage of this data-driven approach is that no assumption is needed in advance regarding the classification. This method can be applied to genomic, proteomic, pathological, phenotypical or a combination of approaches. However, it should be noted that the choice of variables and the number of clusters sought influences the results.
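To make the data-driven approach concrete, the sketch below clusters a synthetic cohort with plain k-means. The clinical variables (age at onset, progression rate) and all values are purely illustrative assumptions, not drawn from any real cohort or from the study cited above.

```python
import random

# Hypothetical clinical variables per patient: (age at onset, progression rate).
# All values are illustrative only and do not come from any real cohort.
random.seed(0)
patients = (
    [(55 + random.gauss(0, 3), 1.2 + random.gauss(0, 0.1)) for _ in range(20)]
    + [(70 + random.gauss(0, 3), 0.4 + random.gauss(0, 0.1)) for _ in range(20)]
)

def kmeans(points, k, iters=50):
    """Plain k-means: no assumptions about subgroup labels are made in advance."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        centers = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

clusters = kmeans(patients, k=2)
print([len(c) for c in clusters])
```

As the text cautions, the variables chosen, their scaling and the number of clusters requested (k) all influence the subgroups that emerge.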
Other neurodegenerative conditions pose similar issues with regard to heterogeneity. There is now pathological evidence of overlapping protein inclusions between diseases and of shared biochemical pathways, so further progress in biomarker discovery for other neurodegenerative conditions is also likely to benefit ALS patients. Recent advances in diagnostic biomarkers for Alzheimer's disease (AD) have demonstrated the strength of combining imaging and biochemical information for making early and accurate predictions of who will develop AD.
Further clinicopathological correlations must be examined in light of the recent discovery of new mutations associated with the aforementioned pathological inclusions in motor neuron diseases.
As many as 50% of late-phase clinical trials across all diseases fail to demonstrate drug efficacy between cases and controls. The incorporation of biomarkers within clinical trials may reduce this ‘drug attrition’ rate. Such biomarkers can be subdivided into ‘target’ biomarkers (does the drug hit its target and, therefore, provide benefit?) and ‘efficacy’ biomarkers (indicators of positive drug effect via the mechanism of drug action). An efficacy biomarker that changes in level upon administration of a drug must demonstrate correlation with clinical benefit (Figure 2).
Target biomarkers can provide important information regarding pharmacokinetic and pharmacodynamic relationships to aid clinical drug-trial design, thereby speeding drug development. Such biomarkers can be bound to the target receptor and then displaced upon administration of the drug, reducing the amount of bound marker (see Figure 2). This information can assist in making rapid ‘go/no-go’ decisions regarding which therapeutic compounds to move forward in clinical development; an important issue for an orphan disease like ALS, where patients may only have one opportunity to enroll in a clinical trial and the ability to rapidly identify drugs that ‘hit the target’ is critical. In an ideal situation, a target biomarker would be used in preclinical models to demonstrate that the drug candidate reaches the appropriate location within the body, and an efficacy biomarker would demonstrate the effectiveness of drug action during the subsequent clinical trial. The addition of disease-progression biomarkers in late-stage clinical trials may aid demonstration of disease modification by the drug.
Krishna and colleagues provided a case study of sitagliptin, a novel DPP4 inhibitor for Type 2 diabetes, using this approach. The average time taken for a new molecule to reach Phase III trials is approximately 3.5 years. The use of target and efficacy biomarkers in this case study reduced the time to 2.1 years. Enhanced inflammation, as noted by gliosis or increased levels of inflammatory cytokines, is apparent in many neurodegenerative diseases, including ALS. Increased levels of peripheral benzodiazepine receptors on activated microglial cells are typically observed during neuroinflammation. PET ligands have been developed that bind peripheral benzodiazepine receptors and, thus, may be used as a biomarker for neuroinflammation. Doorduin and colleagues suggest that anti-inflammatory neuroprotective drugs, such as minocycline and cyclooxygenase inhibitors, could be monitored in clinical trials by the use of these PET ligands.
A target biomarker will not only aid preclinical drug development, but could also assist in determining optimal drug dosage in early human studies. For example, a PET ligand to the brain neurokinin 1 receptor is used to monitor receptor occupancy of the drug aprepitant during treatment. In vivo measurements of receptor occupancy guided the dose of aprepitant selected for late-stage clinical trials to maximize therapeutic effects, and aided the decision-making process regarding the termination of a Phase III trial with aprepitant.
For a biomarker to be used as a surrogate marker and clinical end point, it must meet certain criteria including:
While it is important to consider the use of biomarkers as surrogate end points in clinical drug trials, it is equally important to consider the regulatory context of such use. The use of biomarkers to obtain information early in drug development is desirable yet often difficult to achieve owing to a lack of appropriate biomarkers or lack of knowledge concerning the mechanism of drug action. For example, an imaging biomarker may allow the assessment of whether a particular drug is reaching its desired target.
A drug must demonstrate “substantial evidence of effectiveness” as defined by the regulatory or oversight agency of the respective country or region, such as the FDA or the EMA. This is further defined as “…evidence consisting of adequate and well controlled investigations, including clinical investigations”. Demonstrating clinical benefit (i.e., extended survival) requires large, long-duration clinical trials, greatly slowing the rate of drug development towards regulatory approval. Enactment of the FDA Modernization Act of 1997 (FDAMA) generated a ‘fast-track’ provision for single-trial demonstration of effectiveness and approval of drugs that address unmet medical needs. Approval may result from demonstrating efficacy with a surrogate end point, but such approval is subject to postapproval validation of the surrogate end point or confirmation of the clinical benefit to patients. Widespread use of the FDAMA provisions has not yet occurred, owing to the absence of validated surrogate end points and of routine methods for advancing biomarkers to surrogate end point status. The EMA has largely resisted expediting the drug approval process and has only recently initiated strategies for the fast-tracking of drugs for pandemic diseases. However, drugs for motor neuron disease would meet EMA guidelines for orphan medicinal product designation.
The surrogate marker must predict clinical benefit based on epidemiological, therapeutic, pathophysiological or other evidence. It is therefore critical to recognize that current FDA guidelines permit the use of ‘unvalidated’ surrogate markers. Validated surrogate markers are those with established evidence for a drug-induced effect, whereas an unvalidated surrogate marker need only demonstrate that clinical benefit is ‘reasonably likely’. Under these requirements, drugs approved on the basis of a surrogate marker introduce uncertainty regarding the drug's true clinical benefit. However, it can be argued that a surrogate marker that correlates with disease progression in the untreated state would be acceptable. Examples include longitudinal measurements of total brain volume and hippocampal volume in AD as surrogates of disease progression. However, as Fleming and DeMets have observed, such correlations in the untreated state will not necessarily translate to the treated state. For example, it is possible that the drug may have a desired effect on clinical benefit without any benefit on the surrogate marker, and vice versa. To overcome this, it is important to understand the biological relationship between the surrogate and the clinical outcome.
The use of surrogate markers offers the potential to reduce the size of clinical trials and potentially shorten the duration of the trial. Another advantage of using surrogate end points is the ability to more accurately test specific mechanisms of disease, particularly in diseases of multifactorial causations, such as ALS.
Surrogate end points are limited in that benefit demonstrated in short clinical studies may not translate into persistent benefit of the drug in the medium to long term. Other factors, such as loss to follow-up, drop-outs, crossovers and the confounding effects of other drugs and therapies, reveal themselves in relatively long clinical drug trials but may not be present in shorter trials using surrogate end points, resulting in false-positive results. The validity of the marker needs to be tested rigorously so that correct decisions can be made by the clinician. Safety is another potential shortcoming of using surrogate end points, as large randomized clinical trials are much more suitably powered for the evaluation of safety and risk–benefit relationships.
There are also statistical considerations when analyzing results involving surrogate end points. A common problem is missing or censored clinical data. Whilst uninformative censoring occurs independently of the variable studied, informative censoring depends on the variable of interest, which can introduce bias. One approach to counteract this is to use statistical methods that correct for the missing data. Another problem is using inclusion criteria based on an abnormal value of the surrogate outcome, leading to a truncated distribution. Heterogeneity of variance for the marker can sometimes be observed; that is, some individuals display higher variability in measurements of the baseline variable. Finally, there may be a temporal shift in the distribution of the baseline variable that must be taken into account when enrolling patients at different stages of the disease process.
Owing to these concerns, it is now accepted in therapeutic communities that surrogate end points should not be used as sole end points in clinical trials unless there is extensive evidence to demonstrate their validity . As mentioned previously, ALS-FRS is a current tool used to monitor disease progression. It has been validated despite criticisms, such as small changes reducing the sensitivity. However, a biomarker would have the dual purpose of monitoring disease progression as well as elucidating the underlying mechanisms. It is likely that ALS-FRS would initially be used in conjunction with potential surrogate markers in clinical drug trials.
The biomarker pipeline consists of discovery, verification, validation and qualification phases (Figure 3). Candidate biomarkers are filtered out at each stage, resulting in only one or two biomarkers reaching FDA or EMA approval each year across all diseases. There are bottlenecks at each stage of the pipeline, some of which are discussed in this section.
The discovery stage focuses on the generation of candidate biomarkers from a ‘training set’ of samples, including those with the disease of interest and controls. Controls may include both healthy and disease states at this early-discovery stage. Since disease-specific protein or metabolic-based biomarkers may be of low abundance in body fluids or tissue, the sensitivity of existing technologies is a major confounding factor during the discovery phase. In addition, the large dynamic range in biological fluids is problematic with the highly abundant proteins making up a large percentage of the total proteome. Correlation between DNA alterations or mRNA copy number and protein abundance is imperfect. Therefore, candidates generated from genomic technologies may not be translated to the protein level. Owing to limitations in biofluid availability, tissues have been used for biomarker discovery, but the results may not translate to similar changes in biofluids. Lastly, discovery efforts are typically underpowered with respect to sample size and, therefore, may not provide an accurate assessment of biomarker sensitivity or specificity.
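The power problem in discovery studies can be made concrete with the standard two-group sample-size formula; the effect size and standard deviation below are assumed values for illustration only.

```python
from math import ceil

def samples_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Subjects per group needed to detect a mean difference of delta at
    two-sided alpha = 0.05 with 80% power, assuming a common SD of sigma."""
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# A biomarker shifted by half a standard deviation between cases and controls
# requires roughly 63 subjects per group; discovery studies enrolling only
# 10-20 subjects can therefore reliably detect only much larger effects.
print(samples_per_group(delta=0.5, sigma=1.0))  # -> 63
```

This is one way to see why underpowered discovery cohorts cannot provide an accurate assessment of biomarker sensitivity or specificity: modest but clinically meaningful differences simply cannot be distinguished from noise at those sample sizes.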
The verification stage involves replication of the results generated in the discovery phase using a separate set of samples and, ideally, in a different laboratory. The purpose of the verification stage is to provide sufficient evidence to merit further biomarker development in a larger validation study. Different platforms may be utilized at this stage, and the selection of an assay platform will affect the regulatory pathway and utility of the biomarker. Another bottleneck is the often limited resources or clinical samples, prohibiting the verification of all candidate biomarkers. Therefore, priority is often given to those that may ultimately affect clinical care decision-making. Another option is to prioritize candidates based on the potential mechanism by which the biomarker may contribute to disease. Unfortunately, our biological knowledge of most neurological diseases, including ALS, is too incomplete to rely on this alone.
A successful biomarker must perform reliably and economically in a given clinical scenario. The requirements differ for each clinical scenario and may, therefore, affect the required level of sensitivity and specificity of the biomarker assay. For example, the overall sensitivity and specificity required of a biomarker used for general population screening typically differ from those required in a specialized neuromuscular clinic to differentiate between patients presenting with neurological symptoms. A large validation study to determine specificity, sensitivity, and false-positive and false-negative rates is required during this phase. The biomarker assay must demonstrate robust, reproducible analytical performance that is economically viable in a clinical environment. The validation stage is costly and, therefore, priority should only be given to the most highly credentialed candidates.
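Why the same assay behaves so differently in population screening versus a specialist clinic follows directly from Bayes' rule. The sensitivity, specificity and prevalence figures below are assumed purely for illustration.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: probability of disease given a positive test."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical assay (90% sensitive, 95% specific) applied in two settings:
# general population screening of a rare disease vs a specialist neuromuscular
# clinic where, say, a third of referred patients have the disease.
print(round(ppv(0.90, 0.95, 0.0001), 4))  # screening: almost all positives are false
print(round(ppv(0.90, 0.95, 0.30), 4))    # specialist clinic: most positives are true
```

At a prevalence of 1 in 10,000, fewer than 1% of positive results reflect true disease, whereas in the clinic setting the positive predictive value approaches 90%, which is why the required sensitivity and specificity depend on the intended clinical scenario.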
Technological bottlenecks remain during this phase with respect to technologies that can be used in a typical clinical setting and provide the necessary sensitivity for measuring low-abundant analytes.
The qualification phase is for regulatory approval of the biomarker and involves a large, prospective study performed in the same clinical setting in which the biomarker will ultimately be used. Qualification studies are organized in a similar manner to Phase III clinical trials, using multiple sites, and incur similarly high costs. Regulatory agencies should be involved at this stage to ensure acceptance of the results for the regulatory process. To date, no prospective validation studies have been performed for candidate protein biomarkers for motor neuron disease. A coordinated effort involving multiple centers is required.
The field of oncology has pioneered the use of biomarkers in the clinic. A panel of predictive biomarkers (HER2 and estrogen receptor expression) was used to predict which patients would benefit from adjuvant chemotherapy . Gene-expression profiling can also be used to predict outcome. A DNA microarray-based diagnostic kit approved by the FDA measures the gene expression level of 70 genes in breast tissue biopsies, generating a score used to determine risk . The Oncotype DX® assay has been validated in women with lymph node-negative, estrogen receptor-expressing breast cancer. It is a 21-gene panel assay associated with chemotherapy response. Predictive biomarkers can help stratify patients entering into clinical drug trials .
Pharmacodynamic biomarkers can help determine on-target activity and can, therefore, guide the selection of drug dose and schedule. They allow the investigator to monitor whether the drug is reaching its desired target, and then provide more quantitative information on exposure, kinetics and the percentage of inhibition of the target. An example in oncology was the use of imatinib mesylate to block the protein-kinase activity of BCR-ABL, where the percentage of inhibition of BCR-ABL activity was demonstrated to correlate with clinical outcome.
However, despite these early promises in oncology, significant challenges have impeded progress even in this field. It is now recognized that large research consortia, retrospective clinical trial data and population-based biomarker samples are required to allow links to be made between biomarker-generated data and patient observational information or the relevant pharmacological phenotypes. Another requirement is standardized assay development, which includes sample collection, processing and storage. The standards used in the laboratory discovery phase may not be reproducible in the clinical setting. For example, as many as 20% of HER2 assays had a different outcome when repeated in a high-volume central laboratory.
For the clinician, biomarkers typically do not give black or white answers and are often used as a tool in conjunction with other clinical information. Each biomarker panel will have its own accuracy, sensitivity and specificity that will, in turn, influence the level of diagnostic or prognostic confidence given to it by the clinician.
Despite these limitations, the use of biomarkers in clinical drug trials allows for the development of drugs with an improved personalized efficacy:safety ratio, which will remain attractive to the pharmaceutical industry despite the reduction in total market size after patient stratification. Biomarkers will also provide more sensitive measures for comparing drugs, or combinations of drugs, to determine the best treatment strategy for patients. While, historically, there is little evidence of biomarkers having had a great impact on drug development, current and future drug development will often be performed in combination with various biomarkers.
Over the next 5–10 years, high-throughput microarray and sequencing systems will be developed to speed up the identification of disease-causing genes, as well as novel variants potentially associated with risk of disease. Similar technological improvements in antibody or protein microarrays, or in mass spectrometry, with improved sensitivity to overcome the large dynamic range seen in biological fluids, will allow investigators to dig deeper into the proteome to identify new and more disease-specific biomarkers.
The last 10 years have seen many potential biomarker candidates identified but, unfortunately, these have often failed to be replicated in the verification and validation stages of biomarker development. Collaboration amongst key interest groups is required to agree on standardized protocols for the collection, processing and storage of samples. Standard operating procedures common across technology platforms will permit results to be replicated in numerous laboratories. Large biobanks for storage of biological samples collected using standardized protocols, containing critical clinical information, will permit verification and initial validation studies to be completed more rapidly using retrospective samples. Furthermore, participants must agree upon standards for the clinical information collected for each subject. Similar efforts are being seen in the field of AD through the AD Neuroimaging Initiative (ADNI), funded by the NIH and the Foundation for the NIH.
It has been argued that if surrogate markers simply correlate with ALS-FRS, then ALS-FRS could be used instead, as it is cheaper and easier to interpret. However, stratification of ALS cases is key to the design of both biomarker studies and clinical trials, since a heterogeneous patient population may mask potentially good biomarker candidates or drugs for subgroups of ALS. Diagnostic biomarkers may identify patients at an earlier or preclinical disease state and aid the stratification of patient subpopulations, in contrast to ALS-FRS, which is only useful after a clinical diagnosis that currently takes up to 12–14 months from symptom onset. We have seen examples of stratification in the oncology field, for example, when Herceptin® alone or in combination with chemotherapy was tested in HER2-positive metastatic breast cancer.
We should not be disheartened by the lack of success thus far in biomarker development for motor neuron disease. Technological improvements, as highlighted in this article, will yield large amounts of data. The challenge will be in their careful analysis and downstream validation studies. Highly standardized criteria should be required for publication acceptability, whereby potential candidates must be demonstrated with a ‘proof of concept’.
At the other end of the pipeline, the use of candidates in the clinic must be carefully considered, particularly where they are being used in the clinical decision-making process. While statistical analyses are carried out on large subject groups to generate statistical significance in the early stages of development, it is important to illustrate that similar trends are also observed at the individual level, demonstrating the ability to diagnose or monitor disease progression within individual patients. It is likely that a panel of biomarkers across different platforms will be required in order to achieve the high level of sensitivity, specificity and accuracy demanded at a clinical level. Finally, the use of biomarkers to monitor pharmacokinetics and pharmacodynamics of candidate drugs will surely streamline drug development and reduce costs by reducing the drug attrition rate in late-phase clinical trials. Biomarker development in conjunction with drug development and personalized medicine is entering an exciting phase, which should yield more promising new therapies over the next 5–10 years.
Financial & competing interests disclosure
R Bowser is a cofounder of Knopp Neurosciences, a therapeutics company for amyotrophic lateral sclerosis. R Bowser has also received funding support from NIH grant NS061867.
The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.
No writing assistance was utilized in the production of this manuscript.