# Related Articles

Background

In biomedical research, response variables with bounded support on the open unit interval (0,1) are often encountered. Traditionally, researchers have estimated covariate effects on these types of response data using linear regression. Alternative modelling strategies include beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model with those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models.

Methods

In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided.

Results

If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the response data are generated from a discrete multinomial distribution with support on (0,1).

Conclusions

The linear regression model, the variable-dispersion beta regression model and the fractional logit regression model all perform well across the simulation experiments under consideration. When employing beta regression to estimate covariate effects on (0,1) response data, researchers should ensure their dispersion sub-model is properly specified, else inferential errors could arise.

doi:10.1186/1471-2288-14-14

PMCID: PMC3999882
PMID: 24461057

Regression modelling; Linear regression; Beta regression; Variable-dispersion beta regression; Fractional Logit regression; Beta distribution; Multinomial distribution; Monte Carlo simulation

Background

The spatial and space-time scan statistics are commonly applied for the detection of geographical disease clusters. Monte Carlo hypothesis testing is typically used to test whether the geographical clusters are statistically significant as there is no known way to calculate the null distribution analytically. In Monte Carlo hypothesis testing, simulated random data are generated multiple times under the null hypothesis, and the p-value is r/(R + 1), where R is the number of simulated random replicates of the data and r is the rank of the test statistic from the real data compared to the same test statistics calculated from each of the random data sets. A drawback to this powerful technique is that each additional digit of p-value precision requires ten times as many replicated datasets, and the additional processing can lead to excessive run times.
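The p-value construction described above is straightforward to sketch. A minimal illustration (the function name and simulated null statistics are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_p_value(observed_stat, null_stats):
    """Monte Carlo p-value r/(R + 1): R is the number of random
    replicates, r the rank of the observed statistic (1 = most
    extreme) among the observed value plus the replicates."""
    null_stats = np.asarray(null_stats, dtype=float)
    R = null_stats.size
    r = 1 + np.sum(null_stats >= observed_stat)  # replicates at least as extreme
    return r / (R + 1)

# Illustrative null: 999 replicates of a chi-square-like statistic
null_stats = rng.chisquare(df=1, size=999)
p = monte_carlo_p_value(9.0, null_stats)
```

With R = 999 replicates the smallest attainable p-value is 1/1000, which is exactly the precision drawback the abstract refers to: each extra digit of precision multiplies R by ten.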

Results

We propose a new method for obtaining more precise p-values with a given number of replicates. The collection of test statistics from the random replicates is used to estimate the true distribution of the test statistic under the null hypothesis by fitting a continuous distribution to these observations. The choice of distribution is critical, and for the spatial and space-time scan statistics, the extreme value Gumbel distribution performs very well while the gamma, normal and lognormal distributions perform poorly. From the fitted Gumbel distribution, we show that it is possible to estimate the analytical p-value with great precision even when the test statistic is far out in the tail beyond any of the test statistics observed in the simulated replicates. In addition, Gumbel-based rejection probabilities have smaller variability than Monte Carlo-based rejection probabilities, suggesting that the proposed approach may result in greater power than the true Monte Carlo hypothesis test for a given number of replicates.
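The tail-fitting idea can be sketched as follows, using a simple moment-based Gumbel fit in place of whatever fitting procedure the authors used (all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the test statistics from R = 999 random replicates
# (simulated here as true Gumbel draws for illustration)
null_stats = rng.gumbel(loc=10.0, scale=2.0, size=999)

# Moment-based Gumbel fit (maximum likelihood is typically used in practice)
scale_hat = null_stats.std(ddof=1) * np.sqrt(6.0) / np.pi
loc_hat = null_stats.mean() - 0.5772156649 * scale_hat  # Euler-Mascheroni constant

def gumbel_sf(x, loc, scale):
    """Upper-tail probability of the Gumbel (extreme value) distribution."""
    return 1.0 - np.exp(-np.exp(-(x - loc) / scale))

# A test statistic far beyond every replicate: the plain Monte Carlo p-value
# saturates at 1/(R + 1), while the fitted tail still resolves it
observed = null_stats.max() + 10.0
p_mc = (1 + np.sum(null_stats >= observed)) / (null_stats.size + 1)
p_gumbel = gumbel_sf(observed, loc_hat, scale_hat)
```

Here `p_mc` is floored at 1/1000 no matter how extreme the observed statistic is, whereas `p_gumbel` continues to shrink far out in the tail, which is the precision gain the abstract describes.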

Conclusions

For large data sets, it is often advantageous to replace computer intensive Monte Carlo hypothesis testing with this new method of fitting a Gumbel distribution to random data sets generated under the null, in order to reduce computation time and obtain much more precise p-values and slightly higher statistical power.

doi:10.1186/1476-072X-9-61

PMCID: PMC3018432
PMID: 21167043

Concentrations of metabolites of illicit drugs in sewage water can be measured with great accuracy and precision, thanks to the development of sensitive and robust analytical methods. Based on assumptions about factors including the excretion profile of the parent drug, routes of administration and the number of individuals using the wastewater system, the level of consumption of a drug can be estimated from such measured concentrations. When presenting results from these ‘back-calculations’, the multiple sources of uncertainty are often discussed, but are not usually explicitly taken into account in the estimation process. In this paper we demonstrate how these calculations can be placed in a more formal statistical framework by assuming a distribution for each parameter involved, based on a review of the evidence underpinning it. Using a Monte Carlo simulation approach, it is then straightforward to propagate uncertainty in each parameter through the back-calculations, producing a distribution for daily or average consumption instead of a single estimate. This can be summarised, for example, by a median and credible interval. To demonstrate this approach, we estimate cocaine consumption in a large urban UK population, using measured concentrations of two of its metabolites, benzoylecgonine and norbenzoylecgonine. We also demonstrate a more sophisticated analysis, implemented within a Bayesian statistical framework using Markov chain Monte Carlo simulation. Our model allows the two metabolites to simultaneously inform estimates of daily cocaine consumption and explicitly allows for variability between days. After accounting for this variability, the resulting credible interval for average daily consumption is appropriately wider, representing additional uncertainty. We discuss possibilities for extensions to the model, and whether analysis of wastewater samples has potential to contribute to a prevalence model for illicit drug use.
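The Monte Carlo propagation described here is simple to sketch: draw each input parameter from its assumed distribution, push every draw through the back-calculation, and summarise the resulting distribution. Every distribution and constant below is an illustrative placeholder, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000  # number of Monte Carlo draws

# Every distribution and constant below is an illustrative placeholder,
# not a value taken from the paper.
conc_ng_l  = rng.normal(800.0, 50.0, n)    # metabolite concentration (ng/L)
flow_l_day = rng.normal(2.5e8, 2.0e7, n)   # daily wastewater flow (L/day)
excreted   = rng.uniform(0.30, 0.40, n)    # fraction excreted as this metabolite
mass_ratio = 303.35 / 289.33               # approx. cocaine/benzoylecgonine molar masses
population = 1.0e6                         # population served (assumed fixed)

# Back-calculation propagated over all draws:
# mg of cocaine per day per 1000 inhabitants
consumption = (conc_ng_l * flow_l_day / excreted * mass_ratio
               / 1e6 / (population / 1000.0))

median = np.median(consumption)
lo, hi = np.percentile(consumption, [2.5, 97.5])  # 95% interval
```

The point of the exercise is that `consumption` is a full distribution rather than one number, so the median and interval carry the parameter uncertainty through to the reported estimate.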

Highlights

• Analysis of wastewater allows estimation of illicit drug consumption.
• However, it is crucial to formally acknowledge the many sources of uncertainty.
• The simple and flexible Monte Carlo simulation approach allows this.
• There are many software options: we provide an Excel spreadsheet and R code.
• Bayesian modelling using Markov chain Monte Carlo allows interesting extensions.

doi:10.1016/j.scitotenv.2014.02.101

PMCID: PMC4039139
PMID: 24636801

Sewage epidemiology; Monte Carlo simulation; Uncertainty propagation; Bayesian modelling; Illicit drugs

Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
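A shortest-interval bootstrap of the kind described can be sketched as follows; the simulated per-history scores and the variance-ratio definition of efficiency gain are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative per-history scores; a heavy tail mimics the rare
# high-weight photons discussed in the abstract
scores_conv = rng.lognormal(0.0, 1.0, 2000)   # conventional MC
scores_corr = rng.normal(0.0, 0.1, 2000)      # correlated-sampling dose differences

def efficiency_gain(a, b):
    # Gain as a ratio of sample variances (equal run time assumed)
    return np.var(a, ddof=1) / np.var(b, ddof=1)

def shortest_interval(samples, level=0.95):
    """Shortest interval containing `level` of the bootstrap samples."""
    s = np.sort(samples)
    k = int(np.ceil(level * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

# Bootstrap the sampling distribution of the gain estimate
boot = np.array([
    efficiency_gain(rng.choice(scores_conv, scores_conv.size, replace=True),
                    rng.choice(scores_corr, scores_corr.size, replace=True))
    for _ in range(2000)
])
lo, hi = shortest_interval(boot)
```

Because the gain distribution is skewed by heavy-tailed scores, the shortest interval is generally asymmetric about the point estimate, which is why a symmetric normal-theory interval can underperform here.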

doi:10.1016/j.apradiso.2011.09.015

PMCID: PMC3242326
PMID: 21992844

Monte Carlo; correlated sampling; efficiency; uncertainty; bootstrap

A new method for reconstruction of the interatomic distance distribution, P(r), directly from two-dimensional detector images of solution scattering data is developed and tested. This method employs Bayesian inference and a Markov chain Monte Carlo method to simultaneously estimate indirect transform coefficients and beam and detector parameters, while also evaluating the covariance among all parameters.

The interatomic distance distribution, P(r), is a valuable tool for evaluating the structure of a molecule in solution and represents the maximum structural information that can be derived from solution scattering data without further assumptions. Most current instrumentation for scattering experiments (typically CCD detectors) generates a finely pixelated two-dimensional image. In continuation of the standard practice with earlier one-dimensional detectors, these images are typically reduced to a one-dimensional profile of scattering intensities, I(q), by circular averaging of the two-dimensional image. Indirect Fourier transformation methods are then used to reconstruct P(r) from I(q). Substantial advantages in data analysis, however, could be achieved by directly estimating the P(r) curve from the two-dimensional images. This article describes a Bayesian framework, using a Markov chain Monte Carlo method, for estimating the parameters of the indirect transform, and thus P(r), directly from the two-dimensional images. Using simulated detector images, it is demonstrated that this method yields P(r) curves nearly identical to the reference P(r). Furthermore, an approach for evaluating spatially correlated errors (such as those that arise from a detector point spread function) is evaluated. Accounting for these errors further improves the precision of the P(r) estimation. Experimental scattering data, where no ground truth reference P(r) is available, are used to demonstrate that this method yields a scattering and detector model that more closely reflects the two-dimensional data, as judged by smaller residuals in cross-validation, than P(r) obtained by indirect transformation of a one-dimensional profile. Finally, the method allows concurrent estimation of the beam center and Dmax, the longest interatomic distance in P(r), as part of the Bayesian Markov chain Monte Carlo method, reducing experimental effort and providing a well-defined protocol for these parameters while also allowing estimation of the covariance among all parameters. This method provides parameter estimates of greater precision from the experimental data. The observed improvement in precision for the traditionally problematic Dmax is particularly noticeable.

doi:10.1107/S002188981300109X

PMCID: PMC3627411
PMID: 23596342

structure analysis; small-angle X-ray scattering; small-angle neutron scattering; Bayesian inference; Markov chain Monte Carlo methods

Background

An important step in strain optimization is to identify reactions whose activities should be modified to achieve the desired cellular objective. Preferably, these reactions are identified systematically, as the number of possible combinations of reaction modifications could be very large. Over the last several years, a number of computational methods have been described for identifying combinations of reaction modifications. However, none of these methods explicitly address uncertainties in implementing the reaction activity modifications. In this work, we model the uncertainties as probability distributions in the flux carrying capacities of reactions. Based on this model, we develop an optimization method that identifies reactions for flux capacity modifications to predict outcomes with high statistical likelihood.

Results

We compare three optimization methods that select an intervention set comprising up- or down-regulation of reaction flux capacity: CCOpt (chance-constrained optimization), DetOpt (deterministic optimization), and MCOpt (Monte Carlo-based optimization). We evaluate the methods using a Monte Carlo simulation-based method, MCEval (Monte Carlo Evaluations). We present two case studies analyzing a CHO cell and an adipocyte model. The flux capacity distributions required for our methods were estimated from maximal reaction velocities or elementary mode analysis. The intervention set selected by CCOpt consistently outperforms the intervention set selected by DetOpt in terms of tolerance to flux capacity variations. MCEval shows that the optimal flux predicted based on the CCOpt intervention set is more likely to be obtained, in a probabilistic sense, than the flux predicted by DetOpt. The intervention sets identified by CCOpt and MCOpt were similar; however, the exhaustive sampling required by MCOpt incurred significantly greater computational cost.

Conclusions

Maximizing tolerance to variable engineering outcomes (in modifying enzyme activities) can identify intervention sets that statistically improve the desired cellular objective.

doi:10.1186/1752-0509-7-29

PMCID: PMC3626866
PMID: 23548040

Enzyme activity modification; Flux capacity; Uncertainty; Chance-constrained optimization

This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15–25% of the scattering parameters.

doi:10.1364/BOE.4.001946

PMCID: PMC3799658
PMID: 24156056

(170.0170) Medical optics and biotechnology; (170.3660) Light propagation in tissues; (170.6510) Spectroscopy, tissue diagnostics; (170.6935) Tissue characterization

Classification studies with high-dimensional measurements and relatively small sample sizes are increasingly common. Prospective analysis of the role of sample sizes in the performance of such studies is important for study design and interpretation of results, but the complexity of typical pattern discovery methods makes this problem challenging. The approach developed here combines Monte Carlo methods and new approximations for linear discriminant analysis, assuming multivariate normal distributions. Monte Carlo methods are used to sample the distribution of which features are selected for a classifier and the mean and variance of features given that they are selected. Given selected features, the linear discriminant problem involves different distributions of training data and generalization data, for which 2 approximations are compared: one based on Taylor series approximation of the generalization error and the other on approximating the discriminant scores as normally distributed. Combining the Monte Carlo and approximation approaches to different aspects of the problem allows efficient estimation of expected generalization error without full simulations of the entire sampling and analysis process. To evaluate the method and investigate realistic study design questions, full simulations are used to ask how validation error rate depends on the strength and number of informative features, the number of noninformative features, the sample size, and the number of features allowed into the pattern. Both approximation methods perform well for most cases but only the normal discriminant score approximation performs well for cases of very many weakly informative or uninformative dimensions. The simulated cases show that many realistic study designs will typically estimate substantially suboptimal patterns and may have low probability of statistically significant validation results.
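The full-simulation baseline this abstract describes (select features on training data, fit a linear discriminant, measure validation error on fresh data, and average over Monte Carlo repetitions) can be sketched as follows; all dimensions, effect sizes and the identity-covariance simplification are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_validation_error(n_per_class=25, n_features=1000, n_informative=20,
                               effect=0.8, n_selected=10, n_test=1000, n_reps=10):
    """Monte Carlo estimate of expected validation error for a
    'select top features, then linear discriminant' pipeline.
    Identity covariance is assumed throughout (our simplification)."""
    mu = np.zeros(n_features)
    mu[:n_informative] = effect  # class-mean shift on the informative features
    errs = []
    for _ in range(n_reps):
        # training data for the two classes
        X0 = rng.normal(0.0, 1.0, (n_per_class, n_features))
        X1 = rng.normal(mu, 1.0, (n_per_class, n_features))
        # select the features with the largest mean differences
        sel = np.argsort(np.abs(X1.mean(0) - X0.mean(0)))[-n_selected:]
        m0, m1 = X0[:, sel].mean(0), X1[:, sel].mean(0)
        w = m1 - m0                 # discriminant direction (identity covariance)
        c = w @ (m0 + m1) / 2.0     # midpoint decision threshold
        # fresh data estimate the generalization (validation) error
        T0 = rng.normal(0.0, 1.0, (n_test, n_features))[:, sel]
        T1 = rng.normal(mu, 1.0, (n_test, n_features))[:, sel]
        errs.append(0.5 * ((T0 @ w > c).mean() + (T1 @ w <= c).mean()))
    return float(np.mean(errs))

err = simulated_validation_error()
```

Because feature selection sees only the noisy training data, some selected features are uninformative, so `err` sits above the error an oracle classifier on the truly informative features would achieve; the paper's approximations aim to predict this quantity without running the full simulation.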

doi:10.1093/biostatistics/kxp001

PMCID: PMC2697341
PMID: 19234308

Biomarker discovery; Experimental design; Generalization error; Genomic; Pattern recognition; Proteomic

Abstract

High computational requirements restrict the use of Monte Carlo algorithms for dose estimation in a clinical setting, despite the fact that they are considered more accurate than traditional methods. The goal of this study was to compare mean tumor absorbed dose estimates using the unit density sphere model incorporated in OLINDA with previously reported dose estimates from Monte Carlo simulations using the dose planning method (DPM) particle transport algorithm. The dataset (57 tumors, 19 lymphoma patients who underwent SPECT/CT imaging during I-131 radioimmunotherapy) included tumors of varying size, shape, and contrast. OLINDA calculations were first carried out using the baseline tumor volume and residence time from SPECT/CT imaging during 6 days post-tracer and 8 days post-therapy. Next, the OLINDA calculation was split over multiple time periods and summed to get the total dose, which accounted for the changes in tumor size. Results from the second calculation were compared with results determined by coupling SPECT/CT images with the DPM Monte Carlo algorithm. Results from the OLINDA calculation accounting for changes in tumor size were almost always higher (median 22%, range −1%–68%) than the results from OLINDA using the baseline tumor volume because of tumor shrinkage. There was good agreement (median −5%, range −13%–2%) between the OLINDA results and the self-dose component from the Monte Carlo calculations, indicating that tumor shape effects are a minor source of error when using the sphere model. However, because the sphere model ignores cross-irradiation, the OLINDA calculation significantly underestimated (median 14%, range 2%–31%) the total tumor absorbed dose compared with Monte Carlo. These results show that when the quantity of interest is the mean tumor absorbed dose, the unit density sphere model is a practical alternative to Monte Carlo for some applications. For applications requiring higher accuracy, computer-intensive Monte Carlo calculation is needed.

doi:10.1089/cbr.2011.0965

PMCID: PMC3189007
PMID: 21939358

radiation dosimetry; radioimmunotherapy; SPECT; Monte Carlo dosimetry

Background

To use a Monte Carlo simulation to predict postoperative results with the AcrySof® Toric lens, evaluating the likelihood of over- or under-correction using various toric lens selection criteria.

Methods

Keratometric data were obtained from a large patient population with preoperative corneal astigmatism ≤ 2.50 D (2,000 eyes). The probability distributions for toric marking accuracy, surgically induced astigmatism and lens rotation were estimated using available data. Anticipated residual astigmatism was calculated using a Monte Carlo simulation under two different lens selection scenarios.
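A toy version of such a simulation can be written with the double-angle vector representation of astigmatism, in which misalignment rotates the toric correction vector and residual astigmatism is a vector sum. Every input distribution and magnitude below is an illustrative assumption, not a value from the study:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000  # simulated eyes

# Illustrative inputs (not AcrySof-specific): corneal astigmatism and the
# toric correction at the corneal plane, nominally aligned at the steep axis
corneal_d = 1.50
toric_d = 1.03

# Random errors, all distributions assumed for illustration:
marking_err = rng.normal(0.0, 2.0, n)    # axis-marking error (degrees)
rotation_err = rng.normal(0.0, 3.0, n)   # postoperative lens rotation (degrees)
sia_d = rng.normal(0.3, 0.2, n)          # surgically induced astigmatism (D)
sia_axis = rng.uniform(0.0, 180.0, n)    # SIA axis, uniform over the half-circle

def to_vec(mag, axis_deg):
    """Double-angle vector representation of an astigmatism."""
    a = np.deg2rad(2.0 * axis_deg)
    return np.stack([mag * np.cos(a), mag * np.sin(a)], axis=-1)

# Residual = corneal astigmatism - misaligned toric correction + SIA
misalign = marking_err + rotation_err
residual_vec = (to_vec(corneal_d, 0.0)
                - to_vec(toric_d, misalign)
                + to_vec(sia_d, sia_axis))
residual = np.linalg.norm(residual_vec, axis=-1)
```

Summarising `residual` over the simulated eyes (mean, percentiles, fraction above a clinical threshold) is how such a model compares lens selection criteria and quantifies the small loss of effect from random misalignment.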

Results

This simulation demonstrated that random errors in alignment, surgically induced astigmatism and lens rotation slightly reduced the overall effect of the toric lens. Residual astigmatism was statistically significantly higher under the simulation of surgery relative to an exact calculation (p < 0.05). The simulation also demonstrated that more aggressive lens selection criteria could produce clinically significant reductions in residual astigmatism in a high percentage of patients.

Conclusion

Monte Carlo simulation suggests that surgical variability and lens orientation/rotation variability may combine to produce small reductions in the correction achieved with the AcrySof® Toric® IOL. Adopting more aggressive lens selection criteria may yield significantly lower residual astigmatism values for many patients, with negligible overcorrections. Surgeons are encouraged to evaluate their AcrySof® Toric® outcomes to determine if they should modify their individual lens selection criteria, or their default surgically induced astigmatism value, to benefit their patients.

doi:10.1186/1471-2415-8-22

PMCID: PMC2586009
PMID: 18954445

The Monte Carlo method provides the most accurate dose calculations on a patient computed tomography (CT) geometry. The increase in accuracy is, at least in part, due to the fact that instead of treating human tissues as water of various densities as analytical algorithms do, the Monte Carlo method allows human tissues to be characterized by elemental composition and mass density, and hence allows the accurate consideration of all relevant electromagnetic and nuclear interactions. On the other hand, the algorithm that converts CT Hounsfield numbers to tissue materials for Monte Carlo dose calculation introduces uncertainties: there is no simple one-to-one correspondence between Hounsfield numbers and tissue materials. To investigate the effects of Hounsfield number conversion on proton Monte Carlo dose calculations, clinical proton treatment plans were simulated using the Geant4 Monte Carlo code. Three Hounsfield number to material conversion methods were studied. The results were compared in the form of dose-volume histograms of the gross tumor volume and clinical target volume. The differences found are generally small but can be dosimetrically significant. Further, different methods may cause deviations in the predicted proton beam range, in particular for deep proton fields. Typically, slight discrepancies in mass density assignments play only a minor role in the target region, whereas more significant effects are caused by different assignments in elemental compositions. In the presence of large tissue inhomogeneities, as in head and neck treatments, treatment planning decisions could be affected by these differences because of deviations in the predicted tumor coverage. Outside the target area, differences in elemental composition and mass density assignments may both play a role. This can lead to pronounced effects for organs at risk, in particular in the spread-out Bragg peak penumbra or distal regions. In addition, the significance of the elemental composition effect (dose to water vs. dose to tissue) is tissue-type dependent and is also affected by nuclear reactions.

doi:10.1118/1.2715481

PMCID: PMC2292645
PMID: 17500475

Geant4; Monte Carlo; proton therapy; CT Hounsfield conversion

Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8.
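The power estimation described here can be sketched for the two-sided single-outlier Grubbs statistic; to keep the sketch self-contained we estimate the critical value by simulation under the null rather than from tables, and all settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def grubbs_stat(x):
    """Two-sided single-outlier Grubbs statistic: max |x_i - mean| / s."""
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def power_grubbs(n=10, delta=4.0, alpha=0.05, reps=20_000):
    # Critical value estimated under the null by simulation
    null = np.array([grubbs_stat(rng.normal(size=n)) for _ in range(reps)])
    crit = np.quantile(null, 1.0 - alpha)
    # Contaminated samples: one observation slipped in location by delta
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        x[0] += delta  # slippage of central tendency for a single observation
        hits += grubbs_stat(x) > crit
    return hits / reps

p = power_grubbs()
```

Sweeping `delta` (and an analogous scale factor for slippage of dispersion) over a grid, with far more replicates than used here, is how power curves like those in the paper are traced out; the replicate count directly controls the Monte Carlo error of each power estimate.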

doi:10.1155/2014/746451

PMCID: PMC3967640
PMID: 24737992

Botta, F | Mairani, A | Hobbs, R F | Vergara Gil, A | Pacilio, M | Parodi, K | Cremonesi, M | Coca Pérez, M A | Di Dia, A | Ferrari, M | Guerriero, F | Battistoni, G | Pedroli, G | Paganelli, G | Torres Aroche, L A | Sgouros, G
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of a homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry in nuclear medicine.

doi:10.1088/0031-9155/58/22/8099

PMCID: PMC4037810
PMID: 24200697

In recent years, the Monte Carlo method has been used in a large number of research studies in radiation therapy. For applications such as treatment planning, it is essential to validate the dosimetric accuracy of the Monte Carlo simulations in heterogeneous media. The AAPM Report no 105 addresses issues concerning clinical implementation of Monte Carlo based treatment planning for photon and electron beams; however, for proton-therapy planning, such guidance is not yet available. Here we present the results of our validation of the Monte Carlo model of the double scattering system used at our Proton Therapy Center in Houston. In this study, we compared Monte Carlo simulated depth doses and lateral profiles to measured data for a range of beam parameters. We varied simulated proton energies and widths of the spread-out Bragg peaks, and compared them to measurements obtained during the commissioning phase of the Proton Therapy Center in Houston. Of 191 simulated data sets, 189 agreed with measured data sets to within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra size difference. The two simulated data sets that did not agree with the measured data sets were in the distal falloff of the measured dose distribution, where large dose gradients potentially produce large differences on the basis of minute changes in the beam steering. Hence, the Monte Carlo models of medium- and large-size double scattering proton-therapy nozzles were valid for proton beams in the 100 MeV–250 MeV interval.

doi:10.1088/0031-9155/53/16/016

PMCID: PMC4131262
PMID: 18670050

Background

Multivariate assays (MVAs) for assisting clinical decisions are becoming commonly available, but due to complexity, are often considered a high-risk approach. A key concern is that uncertainty on the assay's final results is not well understood. This study focuses on developing a process to characterize error introduced in the MVA's results from the intrinsic error in the laboratory process: sample preparation and measurement of the contributing factors, such as gene expression.

Methods

Using the PAM50 Breast Cancer Intrinsic Classifier, we show how to characterize error within an MVA, and how these errors may affect results reported to clinicians. First we estimated the error distribution for measured factors within the PAM50 assay by performing repeated measures on four archetypal samples representative of the major breast cancer tumor subtypes. Then, using the error distributions and the original archetypal sample data, we used Monte Carlo simulations to generate a sufficient number of simulated samples. The effect of these errors on the PAM50 tumor subtype classification was estimated by measuring subtype reproducibility after classifying all simulated samples. Subtype reproducibility was measured as the percentage of simulated samples classified identically to the parent sample. The simulation was thereafter repeated on a large, independent data set of samples from the GEICAM 9906 clinical trial. Simulated samples from the GEICAM sample set were used to explore a more realistic scenario where, unlike archetypal samples, many samples are not easily classified.

Results

All simulated samples derived from the archetypal samples were classified identically to the parent sample. Subtypes for simulated samples from the GEICAM set were also highly reproducible, but there were a non-negligible number of samples that exhibit significant variability in their classification.

Conclusions

We have developed a general methodology to estimate the effects of intrinsic errors within MVAs. We have applied the method to the PAM50 assay, showing that the PAM50 results are resilient to intrinsic errors within the assay, but also finding that in non-archetypal samples, experimental errors can lead to quite different classification of a tumor. Finally we propose a way to provide the uncertainty information in a usable way for clinicians.

doi:10.1186/2043-9113-1-37

PMCID: PMC3275466
PMID: 22196354

Multivariate Assays; PAM50; Monte Carlo Simulations; Breast Cancer

This paper presents results of Monte Carlo modeling of the SRP-68-01 survey meter used to measure exposure rates near the thyroid glands of persons exposed to radioactivity following the Chernobyl accident. This device was not designed to measure radioactivity in humans. To estimate the uncertainty associated with the measurement results, a mathematical model of the SRP-68-01 survey meter was developed and verified. A Monte Carlo method of numerical simulation of radiation transport was used to calculate the calibration factor for the device and evaluate its uncertainty. The SRP-68-01 survey meter scale coefficient, an important characteristic of the device, was also estimated in this study. The calibration factors of the survey meter were calculated for 131I, 132I, 133I, and 135I content in the thyroid gland for six age groups of the population: newborns; children aged 1 yr, 5 yr, 10 yr, and 15 yr; and adults. A realistic scenario of direct thyroid measurements with an “extended” neck was used to calculate the calibration factors for newborns and one-year-olds. Uncertainties in the device calibration factors due to variability of the device scale coefficient, variability in thyroid mass, and the statistical uncertainty of the Monte Carlo method were evaluated. Relative uncertainties in the calibration factor estimates were found to range from 0.06 for children aged 1 yr to 0.1 for children aged 10 yr and 15 yr. Errors in positioning the detector during measurements were found to bias the estimated calibration factors mainly in one direction: deviations of the device position from the proper measurement geometry led to overestimation of the calibration factor by up to 24 percent for adults and up to 60 percent for 1-yr-old children. The results of this study improve the estimates of 131I thyroidal content and, consequently, the thyroid dose estimates derived from direct thyroid measurements performed in Belarus shortly after the Chernobyl accident.

doi:10.1016/j.apradiso.2011.12.032

PMCID: PMC3430078
PMID: 22245289

Chernobyl; Thyroid; Measurement; Survey meter; Monte Carlo

Background

Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecological sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature.

Methods

Field and remote-sampled data were collected from July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, and distributions, and to generate global autocorrelation statistics from the ecological sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction values from the spatial configuration matrices were then used to define expectations for prior distributions using a Markov chain Monte Carlo (MCMC) algorithm. A set of posterior means was defined in WinBUGS 1.4.3®. After the model had converged, samples from the conditional distributions were used to summarize the posterior distribution of the parameters. Thereafter, a spatial residual trend analysis was used to evaluate variance uncertainty propagation in the model using an autocovariance error matrix.

Results

By specifying coefficient estimates in a Bayesian framework, the covariate number of tillers was found to be a significant predictor, positively associated with An. arabiensis aquatic habitats. The spatial filter models accounted for approximately 19% of the redundant locational information in the ecological sampled An. arabiensis aquatic habitat data. In the residual error estimation model there was significant positive autocorrelation (i.e., clustering of habitats in geographic space) based on log-transformed larval/pupal data and the sampled covariate depth of habitat.

Conclusion

An autocorrelation error covariance matrix and a spatial filter analysis can help prioritize mosquito control strategies by providing a computationally attractive and feasible description of variance uncertainty estimates for correctly identifying clusters of prolific An. arabiensis aquatic habitats based on larval/pupal productivity.

doi:10.1186/1475-2875-8-216

PMCID: PMC2760564
PMID: 19772590

Objective

Develop fast sequential Bayesian inference for disease outbreak counts.

Introduction

Development of effective policy interventions to stem disease outbreaks requires knowledge of the current state of affairs, e.g., how many individuals are currently infected, a strain’s virulence, etc., as well as our uncertainty about these values. A Bayesian inferential approach provides this information, but at a computational expense. We develop a sequential Bayesian approach based on an epidemiological compartment model and noisy count observations of the transitions between compartments.

Methods

For simplicity, consider an SIR epidemiological compartment model where compartments exist for susceptible, infected, and recovered individuals. Transitions between compartments occur in discrete time with transition numbers given by Poisson random variables (the tau-leaping approximation) whose means depend on the current compartment occupancy and some unknown fixed parameters, e.g., virulence. Binomial observations, with a possibly unknown sampling proportion, are made on these transitions.

The standard sequential Bayesian updating methodology is sequential Monte Carlo (SMC), a.k.a. particle filtering. The original bootstrap filter is effective when the system has no fixed parameters, but exhibits marked degeneracy otherwise [1]. An approach based on resampling the fixed parameters from a kernel density estimate provides a generic approach with less degeneracy [2].

We build methodology based on a particle learning framework [3]. In this framework, each particle carries a set of parameter-specific sufficient statistics and samples parameter values whenever necessary. In addition, the methodology promotes a resample-move approach based on the predictive likelihood that reduces degeneracy in the first place.

An improvement on the particle learning framework in this model is that some fixed parameters can be integrated out of the predictive likelihood. This Rao-Blackwellization provides an SMC methodology with reduced Monte Carlo variance.
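The tau-leaping SIR model with binomial observations can be sketched with a plain bootstrap particle filter. This is a simplified illustration, not the paper's particle learning method: the rates `beta` and `gamma`, the sampling proportion, and the initial state are all treated as known here, which is precisely the regime in which the bootstrap filter avoids degeneracy.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

# Illustrative values treated as known; the paper's focus is inferring
# unknown fixed parameters, which this sketch omits.
N, beta, gamma, obs_p = 1000, 0.5, 0.25, 0.5

def step(state):
    """One tau-leaping step: Poisson counts for S->I and I->R transitions."""
    S, I, R = state
    n_si = min(S, rng.poisson(beta * S * I / N))
    n_ir = min(I, rng.poisson(gamma * I))
    return np.array([S - n_si, I + n_si - n_ir, R + n_ir]), n_si

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def particle_filter(y, n_particles=500):
    """Bootstrap filter: propagate each particle, weight by the binomial
    observation likelihood of the reported count, then resample."""
    states = np.tile([N - 5, 5, 0], (n_particles, 1))
    est_infected = []
    for y_t in y:
        proposals, weights = [], np.empty(n_particles)
        for i in range(n_particles):
            new_state, n_si = step(states[i])
            proposals.append(new_state)
            weights[i] = binom_pmf(y_t, n_si, obs_p)
        if weights.sum() == 0:          # guard against total degeneracy
            weights[:] = 1.0
        weights /= weights.sum()
        idx = rng.choice(n_particles, n_particles, p=weights)
        states = np.array(proposals)[idx]
        est_infected.append(states[:, 1].mean())
    return est_infected

# Simulate a "true" outbreak and noisy counts, then filter the counts
# to recover the latent number of infected individuals over time.
truth, y = np.array([N - 5, 5, 0]), []
for _ in range(20):
    truth, n_si = step(truth)
    y.append(rng.binomial(n_si, obs_p))
est = particle_filter(y)
```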

Results

For a fixed number of particles or computational expense, we show improvements in accuracy relative to the kernel density approach and an alternative approach based on sufficient statistics [4] when compared with a gold-standard Markov chain Monte Carlo analysis.

Conclusions

Many surveillance systems collect counts of adverse events related to some disease. These counts are expected to be a fraction of the true underlying disease extent. The methodology developed here allows a fully Bayesian analysis that uncovers the true number of infected individuals as well as disease virulence based on these count data. This statistical approach can be combined with an optimal policy map to help public health officials react effectively to initial disease reports.

PMCID: PMC3692928

surveillance; Bayesian; sequential Monte Carlo; particle learning

Background

There is considerable uncertainty in disease rate estimation for aggregated area maps, especially for small population areas. As a consequence, the delineation of local clustering is subject to substantial variation. Consider the most likely disease cluster produced by any given method, like SaTScan, for the detection and inference of spatial clusters in a map divided into areas; if this cluster is found to be statistically significant, what could be said of the external areas adjacent to the cluster? Do we have enough information to exclude them from a health program of prevention? Do all the areas inside the cluster have the same importance from a practitioner's perspective?

Results

We propose a method to measure the plausibility of each area being part of a possible localized anomaly in the map. In this work we assess the problem of finding error bounds for the delineation of spatial clusters in maps of areas with known populations and observed numbers of cases. A given map with the vector of real data (the number of observed cases for each area) is considered just one of the possible realizations of the random variable vector with an unknown expected number of cases. The method is tested in numerical simulations and applied to three different real-data maps with sharply and diffusely delineated clusters. The intensity bounds found by the method reflect the degree of geographic focus of the detected clusters.

Conclusions

Our technique is able to delineate irregularly shaped and multiple clusters, making use of simple tools like the circular scan. Intensity bounds for the delineation of spatial clusters are obtained and indicate the plausibility of each area belonging to the real cluster. This tool employs simple mathematical concepts, and the intensity function is intuitive to interpret in terms of the importance of each area in delineating the possible anomalies of the map of rates. The Monte Carlo simulation requires effort similar to the circular scan algorithm and is therefore quite fast. We hope this tool will be useful in public health decision making about which areas should be prioritized.

doi:10.1186/1476-072X-10-1

PMCID: PMC3024210
PMID: 21214924

Background

All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision.

Discussion

We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States.
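The Monte Carlo combination of multiple uncertainty sources described above can be sketched in a few lines: draw each uncertain input from its distribution, push the draws through the calculation, and report percentiles. The input distributions and numbers below are hypothetical stand-ins, not the paper's actual foodborne illness inputs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000  # Monte Carlo draws

# Two uncertain inputs (both hypothetical): a reported case count with
# normal error, and a lognormal underdiagnosis multiplier.
reported = rng.normal(50_000, 5_000, n)
multiplier = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=n)

# Propagate both sources through the calculation and summarize.
total = reported * multiplier
lo, mid, hi = np.percentile(total, [2.5, 50, 97.5])
```

The resulting interval is typically far wider than the sampling-error-only interval, which is the paper's point about overstated precision.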

Summary

Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.

doi:10.1186/1471-2288-3-9

PMCID: PMC166164
PMID: 12892568

Phylogenetic comparative methods may fail to produce meaningful results when either the underlying model is inappropriate or the data contain insufficient information to inform the inference. The ability to measure the statistical power of these methods has become crucial to ensure that data quantity keeps pace with growing model complexity. Through simulations, we show that commonly applied model choice methods based on information criteria can have remarkably high error rates; this can be a problem because methods to estimate the uncertainty or power are not widely known or applied. Furthermore, the power of comparative methods can depend significantly on the structure of the data. We describe a Monte Carlo based method which addresses both of these challenges, and show how this approach both quantifies and substantially reduces errors relative to information criteria. The method also produces meaningful confidence intervals for model parameters. We illustrate how the power to distinguish different models, such as varying levels of selection, varies both with number of taxa and structure of the phylogeny. We provide an open-source implementation in the pmc (“Phylogenetic Monte Carlo”) package for the R programming language. We hope such power analysis becomes a routine part of model comparison in comparative methods.

doi:10.1111/j.1558-5646.2011.01574.x

PMCID: PMC3448289
PMID: 22759299

Comparative method; phylogenetics; model choice; information criteria; parametric bootstrap

Background

Task Group 40 (TG-40) of the American Association of Physicists in Medicine (AAPM) has recommended calibration of any brachytherapy source before its clinical use. The GZP6 afterloading brachytherapy unit is a 60Co high-dose-rate (HDR) system recently introduced in some Iranian radiotherapy centers.

Aim

In this study, the air kerma strength (AKS) of 60Co source number three of this unit was estimated by Monte Carlo simulation and in-air measurements.

Materials and methods

Simulations were performed with the MCNP-4C Monte Carlo code. Self-absorption in the source core and its capsule was taken into account when calculating air kerma strength. In-air measurements were performed according to the multiple-distance method, in which a specially designed jig and a 0.6 cm3 Farmer-type ionization chamber were used. Monte Carlo simulation, in-air measurement, and GZP6 treatment planning results were compared for primary air kerma strength (as of November 8th, 2005).

Results

The Monte Carlo-calculated and in-air-measured air kerma strengths were 17240.01 μGy m2 h−1 and 16991.83 μGy m2 h−1, respectively. The value provided by the GZP6 treatment planning system (TPS) was 15355 μGy m2 h−1.

Conclusion

The calculated and measured AKS values are in good agreement. The calculated-TPS and measured-TPS AKS values also agree within the uncertainties of our calculations and measurements and those certified by the GZP6 manufacturer. Considering these uncertainties, the TPS value for AKS is validated by our calculations and measurements; however, it carries a large uncertainty.

doi:10.1016/j.rpor.2010.10.002

PMCID: PMC3863161
PMID: 24376948

GZP6 HDR system; Air kerma strength; MCNP 4C Monte Carlo code; In air measurement

Background

Cronbach's alpha is widely used as the preferred index of reliability for medical postgraduate examinations. A value of 0.8-0.9 is seen by providers and regulators alike as an adequate demonstration of acceptable reliability for any assessment. Of the other statistical parameters, the Standard Error of Measurement (SEM) is mainly seen as useful only in determining the accuracy of a pass mark. However, the alpha coefficient depends both on the SEM and on the ability range (standard deviation, SD) of the candidates taking an exam. This study investigated the extent to which the necessarily narrower ability range of candidates taking the second of the three-part MRCP(UK) diploma examinations biases assessment of reliability and SEM.

Methods

a) The interrelationships of standard deviation (SD), SEM and reliability were investigated in a Monte Carlo simulation of 10,000 candidates taking a postgraduate examination. b) Reliability and SEM were studied in the MRCP(UK) Part 1 and Part 2 Written Examinations from 2002 to 2008. c) Reliability and SEM were studied in eight Specialty Certificate Examinations introduced in 2008-9.

Results

The Monte Carlo simulation showed, as expected, that restricting the range of an assessment only to those who had already passed it dramatically reduced the reliability but did not affect the SEM of a simulated assessment. The analysis of the MRCP(UK) Part 1 and Part 2 written examinations showed that the MRCP(UK) Part 2 written examination had a lower reliability than the Part 1 examination, but, despite that lower reliability, the Part 2 examination also had a smaller SEM (indicating a more accurate assessment). The Specialty Certificate Examinations had small Ns and, as a result, wide variability in their reliabilities, but SEMs were comparable with MRCP(UK) Part 2.
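The range-restriction effect can be reproduced with a short Monte Carlo sketch (a hypothetical item model and noise level, not the study's actual simulation): alpha falls when only passing candidates are retained, while the SEM barely moves.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cand, n_items = 10_000, 100

# Hypothetical exam: each item score = true ability + independent item noise.
ability = rng.normal(0.0, 1.0, n_cand)
items = ability[:, None] + rng.normal(0.0, 2.0, (n_cand, n_items))

def alpha_and_sem(scores):
    """Cronbach's alpha and SEM = SD_total * sqrt(1 - alpha)."""
    k = scores.shape[1]
    total = scores.sum(axis=1)
    alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                           / total.var(ddof=1))
    return alpha, total.std(ddof=1) * np.sqrt(1 - alpha)

a_all, sem_all = alpha_and_sem(items)
# Restrict to candidates above the median total score ("already passed").
passed = items.sum(axis=1) >= np.median(items.sum(axis=1))
a_res, sem_res = alpha_and_sem(items[passed])
```

Because restriction shrinks the SD of true ability but not the measurement error, alpha (which compares the two) drops while the SEM (which depends only on the error) stays put.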

Conclusions

An emphasis upon assessing the quality of assessments primarily in terms of reliability alone can produce a paradoxical and distorted picture, particularly in the situation where a narrower range of candidate ability is an inevitable consequence of being able to take a second part examination only after passing the first part examination. Reliability also shows problems when numbers of candidates in examinations are low and sampling error affects the range of candidate ability. SEM is not subject to such problems; it is therefore a better measure of the quality of an assessment and is recommended for routine use.

doi:10.1186/1472-6920-10-40

PMCID: PMC2893515
PMID: 20525220

In-vivo measurement of bone lead by means of K-X ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by use of Monte Carlo simulations, results obtained using EIV regression models versus those obtained by the standard procedures. Results of the simulations show that Ordinary Least Square (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF.
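The attenuation that biases OLS, and the reliability-based correction underlying EIV, can be illustrated with a minimal simulation. All numbers are hypothetical, and a simple method-of-moments disattenuation stands in for a full errors-in-variables fit.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5_000

x = rng.normal(20.0, 8.0, n)             # true bone lead (hypothetical units)
w = x + rng.normal(0.0, 6.0, n)          # KXRF-style measurement with error
y = 2.0 * x + rng.normal(0.0, 10.0, n)   # health outcome, true slope = 2.0

# OLS on the error-prone measurement is attenuated by the reliability
# coefficient lambda = var(X) / (var(X) + var(error)).
b_ols = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
reliability = 8.0**2 / (8.0**2 + 6.0**2)   # known error SD -> lambda = 0.64
b_eiv = b_ols / reliability                # disattenuated estimate
```

With reliability 0.64, OLS recovers roughly 1.3 instead of the true slope 2.0, while the corrected estimate is nearly unbiased, mirroring the simulation results reported above.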

doi:10.1016/j.envres.2010.10.011

PMCID: PMC3026095
PMID: 21092947

Lead; KXRF; measurement error; errors-in-variables model; simulations

Background

Tight glycemic control (TGC) in critical care has shown distinct benefits but has also proven difficult to achieve. The risk of severe hypoglycemia (<40 mg/dl) raises significant concerns for safety. Added clinical burden has also been an issue. Continuous glucose monitors (CGMs) offer frequent automated measurement and thus the possibility of using them for early detection and intervention of hypoglycemic events. Additionally, regular measurement by CGM may also be able to reduce clinical burden.

Aim

This in silico study investigates the potential of CGM devices to reduce clinical effort in a published TGC protocol.

Methods

This study uses retrospective clinical data from the Specialized Relative Insulin Nutrition Titration (SPRINT) TGC study covering 20 patients from a benchmark cohort. Clinically validated metabolic system models are used to generate a blood glucose (BG) profile for each patient, resulting in 33 continuous, separate BG episodes (6881 patient hours). The in silico analysis is performed with three different stochastic noise models: two Gaussian and one first-order autoregressive. The noisy, virtual CGM BG values are filtered and used to drive the SPRINT TGC protocol. A simple threshold alarm is used to trigger glucose interventions to avert potential hypoglycemia. The Monte Carlo method was used to get robust results from the stochastic noise models.
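A first-order autoregressive sensor noise model of the kind listed above can be sketched as follows. The persistence `phi` and noise SD are assumed values for illustration, not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def ar1_noise(n, phi=0.8, sd=5.0):
    """AR(1) sensor error in mg/dl with marginal standard deviation `sd`."""
    e = np.empty(n)
    e[0] = rng.normal(0.0, sd)
    innov_sd = sd * np.sqrt(1.0 - phi**2)  # keeps the marginal SD at `sd`
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0.0, innov_sd)
    return e

true_bg = np.full(288, 100.0)        # a day of 5-min samples at 100 mg/dl
cgm = true_bg + ar1_noise(288)       # virtual CGM trace
alarm = cgm < 80.0                   # simple threshold hypoglycemia alarm
```

Unlike independent Gaussian noise, AR(1) errors persist across consecutive readings, so a threshold alarm sees sustained rather than isolated excursions, which changes both false-alarm and detection behavior.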

Results

Using SPRINT with simulated CGM noise, BG time in the 80–110 mg/dl band was reduced by no more than 4.4%, to 45.2%, compared to glucometer sensors. Antihypoglycemic interventions had negligible effect on time in band but eliminated all recorded hypoglycemic episodes in these simulations. Assuming 4–6 calibration measurements per day, the nonautomated clinical measurements are reduced from an average of 16 per day to as low as 4. At 2.5 min per glucometer measurement, a daily saving of ∼25–30 min per patient could potentially be achieved.

Conclusions

This paper has analyzed in silico the use of CGM sensors to provide BG input data to the SPRINT TGC protocol. A very simple algorithm was used for early hypoglycemia detection and prevention and tested with four different-sized intravenous glucose boluses. Although a small decrease in time in band (still clinically acceptable) was experienced with the addition of CGM noise, the number of hypoglycemic events was reduced. The reduction in time in band depends on the specific CGM sensor error characteristics and is thus a trade-off for reduced nursing workload. These results justify a pilot clinical trial to verify this study.

PMCID: PMC2901040
PMID: 20513329

alarm; blood glucose; continuous glucose monitor; glycemic control; hypoglycemia; sensor