

Front Neuroengineering. 2010; 3: 2.

PMCID: PMC2856627

Edited by: Ulrich Hofmann, University of Lübeck, Germany

Reviewed by: Nabeel Anwar, Tokyo Institute of Technology, Japan; Martin Schürmann, University of Nottingham, UK; Mehrnaz Hazrati, University of Lübeck, Germany; Lisa Marshall, University of Lübeck, Germany

*Correspondence: Antti K. O. Paukkunen, Helsinki University of Technology, Department of Electronics, Applied Electronics Unit, Otakaari 7B, Espoo 02150, Finland. e-mail: if.kkt@nenukkuap.ittna

Received 2009 December 21; Accepted 2010 February 25.

Copyright © 2010 Paukkunen, Leminen and Sepponen.

This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.


Reliable measurements are mandatory in clinically relevant auditory event-related potential (AERP)-based tools and applications. Variations in the remaining measurement error degrade the comparability of the results. A method is studied that allows the length of the recording session to be optimized according to the concurrent quality of the recorded data. In this way, the sufficiency of the trials can be better guaranteed, which enables control of the remaining measurement error. The suggested method is based on monitoring the signal-to-noise ratio (SNR) and the remaining measurement error, which are compared to predefined threshold values. The SNR test is well defined, but the criterion for the measurement error test still requires further empirical testing in practice. According to the results, the reproducibility of average AERPs in repeated experiments is improved in comparison to a case where the number of recorded trials is constant. The test-retest reliability is not significantly changed on average, but the between-subject variation in its value is reduced by 33–35%. The optimization of the number of trials also prevents excessive recordings, which might be of practical interest especially in the clinical context. The efficiency of the method may be further increased by implementing online tools that improve data consistency.

The averaging of evoked potentials is a convenient way to reduce the level of interference when highlighting event-locked electroencephalographic (EEG) responses (Picton et al., 1995). Thus, in spite of the increasing interest in single-trial analysis, averaging is still the most commonly used method to recover event-related potentials (ERPs) from noise (Davila and Mobin, 1992). The average ERPs, however, are distorted by, for example, latency jitter (e.g. Gibbons and Stahl, 2007), amplitude variations, external interference, physiological artifacts, noise, and measurement errors during the experiment (Leonowicz et al., 2005). In addition, the within-subject and the between-subject variations in the responses also affect the measurement outcome (e.g. Pivik et al., 1993; Lang et al., 1995). Even though the average level of interference is reduced in this process (Picton et al., 1995), the test-retest reliability of the responses on a single-subject level is too low for clinical applications (Lang et al., 1995; Sinkkonen and Tervaniemi, 2000; Dalebout and Fox, 2001; Beauchemin and De Beaumont, 2005; Kujala et al., 2007; Duncan et al., 2009). The mismatch negativity, for example, has been estimated to have a test-retest reliability of just over 0.5 (Sinkkonen and Tervaniemi, 2000), 0.46–0.71 (Frodl-Bauch et al., 1997), or 0.56–0.67 (Pekkonen et al., 1995).

The replicability of the responses depends on the test subject, the type and presentation of the stimulus, the response studied, and the electrode location (Pekkonen et al., 1995; Frodl-Bauch et al., 1997; Sinkkonen and Tervaniemi, 2000). The test-retest reliability of the N100 response, for example, is higher than that of the mismatch negativity (MMN) because of its better latency stability (Pekkonen et al., 1995). In addition, the variation in the remaining level of interference also affects the test-retest reliability (e.g. Sinkkonen and Tervaniemi, 2000). This explains why the replicability of the response to the standard sound tends to be superior to that of the less frequent ones (Pekkonen et al., 1995). The test-retest reliability can be influenced by careful experiment design. However, as the initial amount of noise and the level of interference will vary anyway, the quality variation should also be compensated for.

The basic problem is that the number of trials recorded is typically constant, while the initial quality of the signal changes between and during experiments. Consequently, the remaining measurement error is subject to variation, which affects the comparability of the responses and the test-retest reliability. Several methods have been suggested for cleaning the data (e.g. Effern et al., 2000; Quiroga, 2000; He et al., 2004) and for adjusting the averaging process to cope with varying signal quality (e.g. Woody, 1967; Davila and Mobin, 1992; Woldorff, 1993; Jaskowski and Verleger, 1999; Wang et al., 2001; Leonowicz et al., 2005; Gibbons and Stahl, 2007). However, even though these can improve the average signal quality, they do not compensate for an insufficient amount of data. Alternatively, it would also be possible to increase the length of the experiments. In practice, however, fatigue, for example, may affect the neuropsychological phenomena (Picton et al., 1995; Ding and Ye, 2003; Boksem et al., 2005; Muller-Gass et al., 2005; Thornton, 2008) and the risks of equipment-related errors also increase with time (Mühler and von Specht, 1999; Rahne et al., 2008). As the amount of artifacts and sudden changes in the signal quality are not predictable, a sufficient number of trials cannot be reliably predefined (Möcks et al., 1988; Sinkkonen and Tervaniemi, 2000). Thus, the length of the experiment should rather be defined online with respect to the concurrent data quality.

In this study, the implementation of a novel autoadaptive recording procedure for auditory event-related potentials (AERP) is suggested. It aims at optimizing the length of the experiment with respect to the initial quality of the data recorded. The qualities being monitored are the contribution of noise and the remaining measurement error. The idea is to keep on recording until they meet a predefined threshold. This way, the quality of the results is guaranteed and the length of the experiment is optimized. The objectives of the study are to define appropriate estimators for the quality tests and to study the influence of the compensation on the test-retest reliability in a simulated AERP experiment. Feasibility, technical requirements and developmental aspects will be discussed on the basis of the results.

The procedure involved cycling through three phases (Figure 1): preprocessing, quality estimation, and decision making. The analysis was performed trial-by-trial and the decision as to whether to continue the experiment or not was made at the end of each cycle. First, the trial being analyzed was filtered and tested to detect possible artifacts, and the contaminated trials were rejected. Then, the maturity of the accumulating average was estimated on the basis of the contribution of noise and the remaining measurement error. These were compared to predefined threshold values and the experiment was concluded if both criteria were met.
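The cycle can be sketched as a simple control loop. This is an illustrative assumption, not the authors' implementation: the name `adaptive_average`, the estimator arguments, and the threshold defaults (apart from the ±40 μV artifact limit and the 0.69 SNR criterion mentioned elsewhere in the text) are invented for the sketch.

```python
import numpy as np

def adaptive_average(next_trial, estimate_snr, estimate_error,
                     snr_threshold=0.69, error_threshold=1.5e-6,
                     artifact_limit=40e-6, min_trials=10, max_trials=1000):
    """One cycle per trial: preprocess/reject, estimate the quality of
    the accumulating average, then decide whether to continue.

    `next_trial` yields one filtered, baseline-corrected trial (1-D
    array, volts); `estimate_snr` and `estimate_error` map the stack of
    accepted trials to the quality measures (Eqs 1-3 and 6 in the
    text). If the criteria are never met, recording stops at
    `max_trials`. Returns (average waveform, number of trials used)."""
    accepted = []
    for _ in range(max_trials):
        trial = next_trial()
        # Artifact test: reject trials whose amplitude exceeds the limit.
        if np.max(np.abs(trial)) > artifact_limit:
            continue
        accepted.append(trial)
        if len(accepted) < min_trials:
            continue  # need a few trials before quality can be estimated
        stack = np.asarray(accepted)
        # Conclude the experiment only when both criteria are met.
        if (estimate_snr(stack) >= snr_threshold
                and estimate_error(stack) <= error_threshold):
            break
    return np.mean(accepted, axis=0), len(accepted)
```

The estimators are passed in as callables so that the SNR and measurement-error tests described below can be swapped without changing the loop itself.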

The data used in the demonstration was taken from an MMN study (Pakarinen et al., 2007). Each trial lasted from −50 to +450 ms from the stimulus onset. The data was lowpass filtered (0–30 Hz) and the baseline was corrected on the basis of the 50 ms prestimulus interval. These are typical filter settings for this kind of data (Pekkonen et al., 1995; Pakarinen et al., 2007; Duncan et al., 2009). Artifacts were detected by testing the amplitude of the response, and trials exceeding ±40 μV were rejected.
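This preprocessing step could look roughly as follows in NumPy/SciPy. The sampling rate, the filter order, and the function name `preprocess` are assumptions of the sketch; the text specifies only the 0–30 Hz band, the 50 ms prestimulus baseline, and the ±40 μV criterion.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(trial, fs=500.0, lp_cutoff=30.0, baseline_samples=25,
               reject_uv=40.0):
    """Lowpass-filter, baseline-correct, and apply the amplitude test.

    Assumes `trial` is in microvolts, sampled at `fs` Hz, with the
    first `baseline_samples` points covering the 50 ms prestimulus
    window (25 samples at the assumed 500 Hz rate). Returns the
    corrected waveform and whether the trial passed the +/-40 uV test."""
    # Zero-phase 4th-order Butterworth lowpass (order is an assumption).
    b, a = butter(4, lp_cutoff / (fs / 2.0), btype="low")
    filtered = filtfilt(b, a, trial)
    # Baseline correction against the prestimulus mean.
    corrected = filtered - filtered[:baseline_samples].mean()
    accepted = np.max(np.abs(corrected)) <= reject_uv
    return corrected, accepted
```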

Additionally, variation in the physiological response may also affect the accumulating average. Latency variation, in particular, is harmful because it flattens the amplitude of the peaks in the average waveform (e.g. Thornton, 2008). Several methods have been suggested to correct the error (e.g. Woody, 1967; Woldorff, 1993; Wang et al., 2001). None of these, however, were considered to be feasible online and, thus, were not included into the procedure.

The purpose of the noise test was to evaluate the mean level of interference in the accumulating average waveform. In particular, the test was supposed to indicate the contribution of white noise so that the significance of the responses could be assessed.

The *t*-test is commonly used to test the statistical significance of noise over the single trials in AERP studies (e.g. Picton et al., 1995). It tests the null hypothesis that the value being tested is a random sample from a normal distribution with a mean of zero and unknown variance, against the alternative that the mean is different from zero. Typically, if the resulting *p*-value is smaller than 0.05, the contribution of noise is considered negligible or small (Ewens and Grant, 2005).
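As a concrete illustration (with invented numbers, not data from the study), the one-sample test can be run with SciPy:

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical single-trial peak amplitudes (uV) sampled in an N100
# window; the sample size and effect size are illustrative only.
rng = np.random.default_rng(1)
n100_amplitudes = -3.0 + 1.5 * rng.standard_normal(60)

# One-sample t-test against a zero mean: a small p-value suggests the
# averaged peak is unlikely to be explained by zero-mean noise alone.
t_stat, p_value = ttest_1samp(n100_amplitudes, popmean=0.0)
noise_negligible = p_value < 0.05
```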

In the current context, the one-sample *t*-test could be applied by testing a peak in the average response, the statistical significance of which would represent the general quality of the whole waveform. The auditory N100 peak in the average waveform, for example, is relatively stable and typically easy to obtain. Thus, it could be a well-suited option for the purpose. On the other hand, the application of the *t*-test requires exact knowledge of the latency of the peak to be tested and the localization may prove laborious because the latency varies between trials. Thus, a simpler estimate would be preferable in terms of computational efficiency.

Alternatively, the statistical significance of noise may also be estimated from the respective signal-to-noise ratio (SNR) (Sackett, 2001). Compared to the one-sample *t*-test, the computation of the SNR is simpler and it can also be applied to test the whole waveform at once. Although the accuracy of the estimation might be worse, it may still be capable of providing the number of trials required to obtain significant responses with reasonable accuracy. Derived from Möcks et al. (1988), the effective SNR can be estimated from the difference of the consecutive trials by eqs 1–3.

$${P}_{\text{avg},n}=\frac{1}{2\left(N-1\right)}\sum_{i=1}^{N-1}P\left({x}_{i+1}-{x}_{i}\right)$$

(1)

$${P}_{\text{avg},s}=P\left({\mu}_{N}\right)-\frac{{P}_{\text{avg},n}}{N}$$

(2)

$$\mathit{SNR}=\frac{N\cdot {P}_{\text{avg},s}}{{P}_{\text{avg},n}}$$

(3)

where

$$P({x}_{i})=\frac{1}{T-{t}_{0}}\int_{{t}_{0}}^{T}{x}_{i}^{2}(t)\,dt$$

(4)

Here, *P*_{avg,n} represents the average power of the noise, *P*_{avg,s} represents the power of the average response, μ_{N} is the average of *N* trials, *x*_{i} represents the *i*th trial included into the sum, and *t*_{0} and *T* define the time interval over which the power is computed.
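Eqs 1–3 can be sketched directly in NumPy. The function names and the discrete stand-in for Eq. 4 (mean of squared samples, with the interval length absorbed by the sample mean) are assumptions of this sketch:

```python
import numpy as np

def power(x):
    # Discrete stand-in for Eq. 4: mean power of a waveform, with the
    # 1/(T - t0) factor absorbed into the sample mean.
    return np.mean(x ** 2)

def effective_snr(trials):
    """Effective SNR of the accumulating average (Eqs 1-3).

    `trials` is an (N, samples) array of accepted trials, N >= 2.
    Noise power is estimated from differences of consecutive trials,
    in which the event-locked response cancels out."""
    trials = np.asarray(trials, dtype=float)
    n = trials.shape[0]
    diffs = trials[1:] - trials[:-1]
    p_noise = np.mean([power(d) for d in diffs]) / 2.0   # Eq. 1
    p_signal = power(trials.mean(axis=0)) - p_noise / n  # Eq. 2
    return n * p_signal / p_noise                        # Eq. 3
```

Because Eq. 3 multiplies the per-trial signal-to-noise power ratio by *N*, the estimate grows roughly linearly with the number of averaged trials, which is what makes it usable as a stopping criterion.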

The purpose of testing the remaining measurement error was to be able to estimate the stability of the accumulating results and the magnitude of the change that could still be expected. In particular, the test was supposed to indicate the magnitude of the remaining interference peaks and the uncertainty in the average waveform resulting from variations in the physiological response. In order to estimate the measurement error, two alternative estimators were considered: a convergence-based estimator and a direct estimator of the measurement error. Both alternatives derive from comparing partial averages of the data and the computation of the estimates is simple.

The convergence rate represents the magnitude of change in the average waveform as new trials are included into the sum. It can be calculated online by comparing the consecutive averages. The comparison is made with respect either to the amplitude or the form of the responses. The similarity of the form of the responses can be estimated on the basis of the correlation. The calculation of the parameter is straightforward and it provides a reasonable estimate of the similarity of the whole waveform. On the other hand, the form of the whole curve is often irrelevant and it would be more interesting to estimate the maximal error in the peak amplitudes. This could be assessed better from the differences in the consecutive averages. Thus, the convergence rate-based error estimate ε_{c} was calculated according to eqn. 5.

$${\epsilon}_{c}=\max\left[\frac{1}{N}\sum_{i=1}^{N}{x}_{i}-\frac{1}{N-1}\sum_{i=1}^{N-1}{x}_{i}\right]$$

(5)

where *N* is the total number of averaged trials and *x*_{i} represents the *i*th trial included into the sum.
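Eq. 5 amounts to comparing the running average before and after the newest trial is included. A minimal sketch (the function name is illustrative; the maximum is taken over time samples, without an absolute value, as in the equation):

```python
import numpy as np

def convergence_error(trials):
    """Convergence-based error estimate (Eq. 5): the largest change in
    the average waveform caused by including the newest trial.

    `trials` is an (N, samples) array of accepted trials, N >= 2."""
    trials = np.asarray(trials, dtype=float)
    # Average over all N trials minus the average over the first N-1.
    return float(np.max(trials.mean(axis=0) - trials[:-1].mean(axis=0)))
```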

Alternatively, the measurement error can also be estimated directly by computing the difference of two independent averages extracted from the data. Following Schimmel (1967), the direct estimate of the measurement error ε_{d} was calculated from the difference of the average of the odd and the even trials according to eqn. 6.

$${\epsilon}_{d}=\frac{1}{2}\cdot \max\left\{\left|\frac{2}{N}\sum_{i=1,3,\ldots}^{N-1}{x}_{i}-\frac{2}{N}\sum_{i=2,4,\ldots}^{N}{x}_{i}\right|\right\}$$

(6)

where *N* is the total number of averaged trials and x_{i} represents the *i*th trial included into the sum. The difference of the even and the odd average represented the remaining error in the response. Thus, the maximum of the waveform could be considered to be representative of the maximal error.
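A minimal sketch of Eq. 6, splitting the trials into odd- and even-numbered subaverages (the function name is illustrative; the paper's "odd" trials, i.e. the 1st, 3rd, ..., correspond to 0-based indices 0, 2, ... here):

```python
import numpy as np

def direct_error(trials):
    """Direct error estimate (Eq. 6): half the maximum absolute
    difference between the averages of the odd- and even-numbered
    trials (Schimmel's (+/-) reference).

    `trials` is an (N, samples) array; if N is odd, the last trial is
    dropped so both subaverages use the same number of trials."""
    trials = np.asarray(trials, dtype=float)
    n = trials.shape[0] - (trials.shape[0] % 2)  # use an even count
    odd = trials[0:n:2].mean(axis=0)
    even = trials[1:n:2].mean(axis=0)
    return float(0.5 * np.max(np.abs(odd - even)))
```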

The methods applied in this study were evaluated by simulating their performance with real data. With the authors’ permission, the data used were taken from Pakarinen et al. (2007). In that study, the multi-feature MMN paradigm was applied to obtain an auditory discrimination profile by using harmonic sinusoidal sounds. The standard tones were composed of three sinusoidal partials (523, 1046 and 1569 Hz). They were presented at an intensity of 60 dB above the subject's hearing threshold and the length of the stimuli was 75 ms (including 5 ms rise and fall times). The deviant tones differed from the standard tones in frequency, intensity, duration, or perceived sound-source location and the magnitude of the deviation varied across six levels. The probability of the standard tone was 0.5 and the probability of each deviant tone at each level was 0.125/6. The length of each recording session was 90 min and the number of trials recorded was 5472 for the standard stimulus and 225 for each deviant stimulus (Pakarinen et al., 2007).

For the concurrent study, data from eight test subjects was available. These data were used to simulate a series of repeated experiments with each test subject and they were each permuted 100 times to create artificial test runs. However, in order to avoid creating permutations that were too similar, datasets that were too small had to be rejected. The expected length of a single simulated experiment was about 200–600 trials. Therefore, only the standard datasets were applied. In addition, the evaluation was further limited to only the one channel having the largest SNR with respect to the physiological response (F_{z} referenced to the mean of the mastoids). Thus, the test data accepted for the study included eight datasets with over 5000 trials, from each of which 5% to 19% was still rejected because of artifacts.

In short, the quality estimators were tested first in order to verify the plausibility of the tests and to be able to define appropriate thresholds to be applied in the online procedure. Then, the procedure was finalized on the basis of the results and the benefits from using the active compensation were studied by comparing the application with the use of a fixed number of 200 trials. The tests prepared for the evaluation are summarized in Table 1.

The plausibility of the SNR estimator was evaluated by comparing the application with the use of a one-sample *t*-test. The *t*-test was made for the N1 component of the average AERP waveform. N1 was parameterized using an average time window of 40 ms placed at the peak maximum of an individual waveform at 80–140 ms. First, the SNR was computed as a function of the statistical significance of noise in order to determine a criterion that would correspond to the *t*-test (*p*_{noise} < 0.05). Then, both parameters were applied to estimate the required number of trials to obtain a statistically significant response in order to see how they match. In order to justify the use of the SNR test, the correlation was supposed to be high. The limit for approval was chosen to be 0.8, which indicates high correlation.

The measurement error estimators were evaluated by studying their relevance to the validity of the measurement outcome. First, the estimators were computed as a function of the validity of the respective partial averages. Then they were compared on the basis of the sensitivity, distortion and the amount of the calculus required. Validity was estimated by the correlation of the partial averages and the expected outcome. The expected outcome was estimated by computing the sorted average (Rahne et al., 2008) of each dataset. The trials were sorted on the basis of the respective interference level and they were averaged starting from the best one, until the SNR of the average started to decrease. The remaining trials were discarded and the average with the maximum SNR was considered to be the optimal outcome. Sorted averaging maximizes the SNR of the average curve (Rahne et al., 2008). Thus, it was considered to be a well-suited reference for the test.
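Sorted averaging, used above as the reference for the optimal outcome, can be sketched as follows. The per-trial noise estimate (variance around the grand mean) and the SNR proxy used for the stopping rule are assumptions of this sketch, not details given by Rahne et al. (2008):

```python
import numpy as np

def sorted_average(trials):
    """Sorted averaging: order trials by estimated noise level and
    accumulate from the best until the SNR of the running average
    stops improving; the remaining trials are discarded.

    `trials` is an (N, samples) array. Noise per trial is approximated
    by its variance around the grand mean (an assumption), and the SNR
    of a partial average by k * P(average) / P(residuals)."""
    trials = np.asarray(trials, dtype=float)
    grand = trials.mean(axis=0)
    noise = np.var(trials - grand, axis=1)   # per-trial noise estimate
    order = np.argsort(noise)                # cleanest trials first
    best_avg, best_snr = None, -np.inf
    for k in range(2, len(trials) + 1):
        avg = trials[order[:k]].mean(axis=0)
        resid = trials[order[:k]] - avg
        snr = k * np.mean(avg ** 2) / np.mean(resid ** 2)
        if snr <= best_snr:
            break                            # SNR started to decrease
        best_avg, best_snr = avg, snr
    return best_avg
```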

The last parts of the evaluation dealt with the actual application of the procedure. First the use of the selected measurement error estimator was studied with respect to the feasibility in order to define two alternative threshold values for the test. Two such values were defined: one that would lead to recording 200 trials on average and one that represented the most stringent requirements still plausible in most cases (*p*=0.6). Then, the application was simulated again with the concluded thresholds and the resulting test-retest reliability was compared with the results obtained when the total number of trials was fixed at 200. The feasibility was estimated by the probability of the requirements being reached with fewer than 300 trials on average. Test-retest reliability was computed using the one-way intra-class correlation coefficient (ICC), because it provides more accurate results than the simple Pearson's correlation coefficient (Farahat et al., 2003).
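The one-way ICC used for test-retest reliability can be computed from the usual one-way ANOVA mean squares; `icc_oneway` is an illustrative name for the sketch:

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects intra-class correlation, ICC(1).

    `scores` is a (subjects, sessions) array of repeated measurements
    of the same quantity."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    # Between-subject and within-subject mean squares (one-way ANOVA).
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfectly reproducible sessions give ICC = 1, while uncorrelated repeated measurements give a value near zero.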

According to the results (Figure 2A), in general, the SNR estimation indicated a smaller significance than the *t*-test and the correlation of the single observations was only moderate (*p* ≈ 0.5). A probable reason for this was that the SNR test assessed the average significance of noise in the whole waveform, while the *t*-test focused on a single peak in the response. On the other hand, the SNR estimator could still be applied to verify the realization of the significance criterion (*p*_{noise} < 0.05). Generally, it was met when the SNR exceeded 0.69. Using this value as a detection threshold, the required number of trials only exceeded the results from the *t*-test by 4 ± 17 (mean ± SD) trials (Figure 2B). The correlation of the estimations was about 0.86, which suggests that the SNR estimator could be considered plausible.

According to the results (Figure 3), the convergence-based estimate was a linear function of the validity, while the direct estimate had a quadratic relation. Thus, the convergence-based estimator would be simpler to apply. On the other hand, both estimators were quite distorted and the direct estimator was more sensitive (3.6 μV/unit) than the other one (0.42 μV/unit). In this respect, the direct estimate would be a better choice. In terms of computation time, the difference between the alternatives was considered to be insignificant. They both included an update operation, a difference calculation and finding the maximum. The direct estimator also included an additional absolute value operation which, however, did not cause a significant difference to the computation time. Thus, as a whole, the direct estimator was considered to be preferable and was chosen to be implemented into the procedure.

Figure 4A presents the required number of trials, the feasibility of the test, and the test-retest reliability with respect to the remaining measurement error as given by the direct error estimate. The criterion that is achieved with 200 trials on average is 1.5 μV and the criterion that has a feasibility of 0.6 is 1.2 μV. These values were applied in the last phase of the evaluation. The resulting test-retest reliabilities in the simulated repeated experiments are presented in Figure 4B.

On the basis of the results (Table 2), the variation in the test-retest reliability between test subjects was reduced by 0.1 and the average test-retest reliability was improved by 0.01–0.06, depending on the measurement error threshold applied. The required number of trials in the novel procedure was 209 ± 60 (mean ± SD) when the threshold was 1.5 μV and about 310 ± 78 (mean ± SD) when the threshold was 1.2 μV. Thus, the variation between the test subjects was improved, although the average number of trials recorded was not greatly altered. The increase in the mean test-retest reliability, on the other hand, was achieved by increasing the number of trials. Thus, the feasibility dropped with the improved test-retest reliability.

The results of this study show that the compensation for the variation in the data quality has significant relevance for the remaining measurement error and the reproducibility of the average AERPs. In the simulated series of repeated experiments, the test-retest reliability was improved by 2–9%, on average, and the variation in the value between the test subjects was reduced by 33–35%. In addition, the compensation also improved the measurement efficiency. When the number of trials recorded was a constant 200, the average resulting test-retest reliability was about 0.69. In the novel procedure, the number of recorded trials was optimized and similar results could be achieved with about 209 ± 60 (mean ± SD) trials, depending on the data quality. Thus, the length of the experiment could be decreased when the signal quality was high enough (about 40% of simulation runs).

The findings of the study might be of practical interest particularly in the clinical context. On the one hand, the method suggested will simplify the comparison of results from repeated measurements. The level of distortion is minimized, while the number of trials is optimized. Thus, the differences between repeated measurements are more likely to occur because of the differences in the neuropsychological condition of the test subject. On the other hand, the improvement of the measurement efficiency is also an important factor. If the number of recorded trials is constant and the quality of the outcome is supposed to be high, the number of trials recorded must also be high in order to minimize the possibility of error. Some groups of patients, such as children or acutely ill patients, may not tolerate long experiments. The optimization of the number of trials prevents excessive recording, while the sufficiency of the data to obtain meaningful AERPs is still guaranteed. Thus, application of this kind of active compensation might reduce the negative impact of the investigation on them.

Regarding future developments, the efficiency of the method might be increased if the cleaning procedure and the artifact processing scheme were improved. Although most of the results were in line with each other, the results obtained using dataset 8 seem to deviate from the others. The estimation of the statistical significance was less accurate and the test-retest repeatability deviated from the others, particularly when the number of trials included was kept constant. This suggests that the variation in the quality of the data was higher than in the other datasets. Thus, the implementation of advanced means for signal processing might improve the quality of the results.

The findings of the study indicate that the method suggested could be applied to compensate for the changes in the quality between repeated AERP measurements. Practical investigations, however, are needed to confirm the results and to gain further information about the application of the method, the possible issues and the impact in practice. From the practical perspective, the predefinition of the measurement error criterion is particularly interesting. The suitable threshold is application-specific and depends on the response studied. Thus, the recommendations have to be prepared case by case. In general, the successful implementation of the method requires high signal quality, a reliable method for online artifact rejection, and a device capable of both recording and analyzing the EEG data. The efficiency may be further improved by implementing an advanced tool for online data processing.

The authors declare that the research was conducted in the absence of any commercial or financial relationship that could be construed as giving rise to a potential conflict of interest.

We thank M. Linnavuo and P. Eskelinen for valuable discussions. The work was supported in part by the Graduate School of Electrical and Communications Engineering, the Society of Electronics Engineers and the Finnish Centre of Excellence in Interdisciplinary Music Research.

- Beauchemin M., De Beaumont L. (2005). Statistical analysis of the mismatch negativity: to a dilemma, an answer. Tutorials Quant. Methods Psychol. 1, 18–24.
- Boksem M. A. S., Meijman T. F., Lorist M. M. (2005). Effects of mental fatigue on attention: an ERP study. Cogn. Brain Res. 25, 107–116. doi: 10.1016/j.cogbrainres.2005.04.011
- Dalebout S. D., Fox L. G. (2001). Reliability of the mismatch negativity in the responses of individual listeners. J. Am. Acad. Audiol. 12, 245–253.
- Davila C. E., Mobin M. S. (1992). Weighted averaging of evoked potentials. IEEE Trans. Biomed. Eng. 39, 338–345. doi: 10.1109/10.126606
- Ding H. Y., Ye D. T. (2003). The extraction of MMN modulated by attention. IEEE Int. Conf. Neural Signal Processing 2003, Dec 14–17, China.
- Duncan C. C., Barry R. J., Connolly J. F., Fischer C., Michie P. T., Näätänen R., Polich J., Reinvang I., Van Petten C. (2009). Event-related potentials in clinical research: guidelines for eliciting, recording, and quantifying mismatch negativity, P300, and N400. Clin. Neurophysiol. 120, 1883–1908. doi: 10.1016/j.clinph.2009.07.045
- Effern A., Lehnertz K., Fernandez G., Grunwald T., David P., Elger C. E. (2000). Single trial analysis of event related potentials: non-linear de-noising with wavelets. Clin. Neurophysiol. 111, 2255–2263. doi: 10.1016/S1388-2457(00)00463-6
- Ewens W. J., Grant G. (2005). Statistical Methods in Bioinformatics. New York: Springer-Verlag. doi: 10.1007/b137845
- Farahat M. F., Rohlman D. S., Storzbach D., Ammerman T., Anger W. K. (2003). Measures of short-term test-retest reliability of computerized neurobehavioral tests. Neurotoxicology 24, 513–521. doi: 10.1016/S0161-813X(03)00079-2
- Frodl-Bauch T., Kathmann N., Möller H.-J., Hegerl U. (1997). Dipole localization and test-retest reliability of frequency and duration mismatch negativity generator processes. Brain Topogr. 10, 3–8. doi: 10.1023/A:1022214905452
- Gibbons H., Stahl J. (2007). Response-time corrected averaging of event-related potentials. Clin. Neurophysiol. 118, 197–208. doi: 10.1016/j.clinph.2006.09.011
- He P., Wilson G., Russel C. (2004). Removal of ocular artifacts from electro-encephalogram by adaptive filtering. Med. Biol. Eng. Comput. 42, 407–412. doi: 10.1007/BF02344717
- Jaskowski P., Verleger R. (1999). Amplitudes and latencies of single-trial ERPs estimated by a maximum-likelihood method. IEEE Trans. Biomed. Eng. 46, 987–993. doi: 10.1109/10.775409
- Kujala T., Tervaniemi M., Schröger E. (2007). The mismatch negativity in cognitive and clinical neuroscience: theoretical and methodological considerations. Biol. Psychol. 74, 1–19. doi: 10.1016/j.biopsycho.2006.06.001
- Lang A. H., Eerola O., Korpilahti P., Holopainen I., Salo S., Aaltonen O. (1995). Practical issues in the clinical application of mismatch negativity. Ear Hear. 16, 118–130. doi: 10.1097/00003446-199502000-00009
- Leonowicz Z., Karvanen J., Shishkin S. (2005). Trimmed estimators for robust averaging of event-related potentials. J. Neurosci. Methods 142, 17–26. doi: 10.1016/j.jneumeth.2004.07.008
- Möcks J., Gasser T., Köhler W. (1988). Basic statistical parameters of event-related potentials. J. Psychophysiol. 2, 61–70.
- Mühler R., von Specht H. (1999). Sorted averaging – principle and application to auditory brainstem responses. Scand. Audiol. 28, 145–149. doi: 10.1080/010503999424716
- Muller-Gass A., Stelmack R. M., Campbell K. B. (2005). “…and were instructed to read a self-selected book while ignoring the auditory stimuli”: the effects of task demands on the mismatch negativity. Clin. Neurophysiol. 116, 2142–2152. doi: 10.1016/j.clinph.2005.05.012
- Pakarinen S., Takegata R., Rinne T., Huotilainen M., Näätänen R. (2007). Measurement of extensive auditory discrimination profiles using mismatch negativity (MMN) of the auditory event-related potential (ERP). Clin. Neurophysiol. 118, 177–185. doi: 10.1016/j.clinph.2006.09.001
- Pekkonen E., Rinne T., Näätänen R. (1995). Variability and replicability of the mismatch negativity. Electroenceph. Clin. Neurophysiol. 96, 546–554. doi: 10.1016/0013-4694(95)00148-R
- Picton W. T., Lins O. G., Scherg M. (1995). The recording and analysis of event-related potentials. In: Boller F., Grafman J. (eds), Handbook of Neuropsychology. Amsterdam: Elsevier Science B.V., pp. 3–73.
- Pivik R. T., Broughton R. J., Coppola R., Davidson R. J., Fox N., Nuwer M. R. (1993). Guidelines for the recording and quantitative analysis of electroencephalographic activity in research contexts. Psychophysiology 30, 547–558. doi: 10.1111/j.1469-8986.1993.tb02081.x
- Quiroga R. Q. (2000). Obtaining single stimulus evoked potentials with wavelet denoising. Physica D 145, 278–292. doi: 10.1016/S0167-2789(00)00116-0
- Rahne T., von Specht H., Mühler R. (2008). Sorted averaging – application to auditory event-related responses. J. Neurosci. Methods 172, 74–78. doi: 10.1016/j.jneumeth.2008.04.006
- Sackett D. L. (2001). Why randomized controlled trials fail but needn't: 2. Failure to employ physiological statistics, or the only formula a clinician-trialist is ever likely to need (or understand!). CMAJ 165, 1226–1237.
- Schimmel H. (1967). The (±) reference: accuracy of estimated mean components in average response studies. Science 157, 92–94. doi: 10.1126/science.157.3784.92
- Sinkkonen J., Tervaniemi M. (2000). Towards optimal recording and analysis of the mismatch negativity. Audiol. Neurootol. 5, 235–246. doi: 10.1159/000013885
- Thornton A. R. D. (2008). Evaluation of a technique to measure latency jitter in event-related potentials. J. Neurosci. Methods 168, 248–255. doi: 10.1016/j.jneumeth.2007.09.031
- Wang K., Begleiter H., Porjesz B. (2001). Warp-averaging event-related potentials. Clin. Neurophysiol. 112, 1917–1924. doi: 10.1016/S1388-2457(01)00640-X
- Woldorff M. G. (1993). Distortion of ERP averages due to overlap from temporally adjacent ERPs: analysis and correction. Psychophysiology 30, 98–119.
- Woody C. D. (1967). Characterization of an adaptive filter for the analysis of variable latency neuroelectric signals. Med. Biol. Eng. Comput. 5, 539–553.
