


Journal of Medical Systems

J Med Syst. 2010 August; 34(4): 579–590.

Published online 2009 April 28. doi: 10.1007/s10916-009-9271-z

PMCID: PMC2904461

Wheyming Tina Song, Email: wheyming@ie.nthu.edu.tw.

Received 2008 November 5; Accepted 2009 March 2.

Copyright © The Author(s) 2009


The objective of our project was to improve the efficiency of the physical examination screening service of a large hospital system. We began with a detailed simulation model to explore the relationships between four performance measures and three decision factors. We then attempted to identify the optimal physician inquiry starting time by solving a goal-programming problem, where the objective function includes multiple goals. One of our simulation results shows that the proposed optimal physician inquiry starting time decreased patient wait times by 50% without increasing overall physician utilization.

**Electronic supplementary material** The online version of this article (doi:10.1007/s10916-009-9271-z) contains supplementary material, which is available to authorized users.

Physical examination (PE) services are commonly used for routine annual screening examinations and pre-employment check-ups in various institutional settings throughout Taiwan. These services are time-consuming for both the patients and the clinical personnel at the medical clinics. As such, maximal patient throughput and physician efficiency are critically important.

This analysis considers a routine medical physical examination consisting of three ordered stages: (1) registration, (2) a series of diagnostic sub-stages (x-ray, ultrasound, blood analysis, and electrocardiogram) performed in no particular order, and finally (3) physician inquiry. The three ordered stages are illustrated in Fig. 1. We define “the confined constraint” to be the inclusion of all three stages in the PE service; that is, removing any stage from the PE service is not an option.

The intake procedure at the studied PE service is as follows: up to 25 patients (required to refrain from eating or drinking after 9:00 pm the day before their clinical exam) are scheduled per morning, and all patients are asked to arrive at the hospital by 8:00 am. The physician inquiry starting time (pist) for stage 3 is scheduled for 10:30 am. Throughout this paper, we use “original” PE service or policy to refer to the aforementioned PE service and policy.

We quantify the impact on the four performance measures of three decision factors for the original system: (i) patient dispatching rules, (ii) physician inquiry starting time, and (iii) scheduled patient arrival time. We first built a discrete-event simulation model to generate corresponding data. We then constructed a meta-model from our regression model (based on simulation input and output data). Finally, we adopted goal-programming to reach an integrated objective, which included multiple goals: minimizing the patient wait time, the probability of a patient experiencing a prolonged (longer than 150 min) wait, and the mean physician total inquiry time; and maximizing physician utilization.

Most studies regarding PE services focus on the improvement of physical diagnosis skills [4, 28, 32] and examination reliability [11, 21, 24, 29]. The scant attention paid to improving the efficiency of PE services may be due to PE patients having pre-scheduled arrival times, unlike walk-in outpatients who have random arrival times without specific lower and upper limits. Studies which examine the efficiency of health care systems generally (i.e., not focused on PE systems) include [1–3, 5, 13, 18, 23, 26, 27, 36]. All of the aforementioned papers discuss optimal policy in terms of a single performance measure, such as physician utilization or patient wait time; none discuss combining multiple measures, such as simultaneously maximizing physician utilization and minimizing both the physicians’ total shift time and patient wait time, as the objective function.

To our knowledge, all PE services in Taiwan include a physician inquiry stage. Further, no existing papers discuss the advantage of the proposed approach of omitting or postponing the physician inquiry stage. Although the original motivation for this study was to improve the efficiency of a specific PE service, the insights gained via the detailed simulation model and goal-programming are of more general value. The overall objectives of this paper are to share these insights and to propose improved strategies that have wide applicability.

We constructed a discrete-event simulation (DES) model and attempted to identify the optimal pist to reach multiple goals. Specifically, two types of goals are simultaneously considered in the objective function. From the physician/clinic point of view, we would like to maximize physician utilization and minimize physician total shift time; from the patients’ point of view, we would like to minimize patient wait time and the prolonged wait rate. The integrated measure that combines these four measures is adopted in the goal-programming problem in this paper.

A DES model was constructed to develop a general PE flow model.
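To make the flow concrete, the following is a minimal event-driven sketch of the three-stage flow in plain Python (not the SIGMA model used in the paper): a single registration server, four single-server diagnostic sub-stages visited in random order, and a physician inquiry station that cannot begin service before the pist. All inter-arrival and service-time distributions below are illustrative placeholders, not the fitted distributions of Table 1.

```python
import heapq
import random
from collections import deque

def simulate_pe(n_patients=20, pist=150.0, seed=1):
    """Minimal PE-flow sketch; times are minutes after 8:00 am."""
    rng = random.Random(seed)
    substages = ["xray", "ultrasound", "blood", "ecg"]
    # illustrative service-time samplers (placeholders, not the fitted models)
    samplers = {
        "reg": lambda: rng.uniform(1, 3),
        "xray": lambda: rng.uniform(2, 6),
        "ultrasound": lambda: rng.uniform(3, 8),
        "blood": lambda: rng.uniform(1, 4),
        "ecg": lambda: rng.uniform(2, 5),
        "inq": lambda: rng.uniform(3, 7),
    }
    not_before = {name: 0.0 for name in samplers}
    not_before["inq"] = pist                  # inquiry cannot start before pist
    queues = {name: deque() for name in samplers}
    busy = {name: False for name in samplers}
    events, seq = [], 0

    def push(t, kind, station, pid):
        nonlocal seq
        heapq.heappush(events, (t, seq, kind, station, pid))
        seq += 1

    def try_start(station, now):
        # start serving the next queued patient if the server is free
        if not busy[station] and queues[station]:
            pid = queues[station].popleft()
            busy[station] = True
            start = max(now, not_before[station])
            push(start + samplers[station](), "end", station, pid)

    arrival, depart, remaining = {}, {}, {}
    t = 0.0
    for pid in range(n_patients):
        t += rng.uniform(1, 9)                # placeholder inter-arrival times
        arrival[pid], remaining[pid] = t, set(substages)
        push(t, "req", "reg", pid)

    while events:
        now, _, kind, station, pid = heapq.heappop(events)
        if kind == "req":
            queues[station].append(pid)
            try_start(station, now)
        else:                                  # a service just completed
            busy[station] = False
            if station == "inq":               # patient leaves the clinic
                depart[pid] = now
            else:
                remaining[pid].discard(station)
                if remaining[pid]:             # random dispatching rule
                    push(now, "req", rng.choice(sorted(remaining[pid])), pid)
                else:
                    push(now, "req", "inq", pid)
            try_start(station, now)
    return {pid: depart[pid] - arrival[pid] for pid in depart}
```

In this toy setting, the default pist of 150 min (10:30 am) dominates every patient's time in system; lowering pist shrinks the waits, mirroring the qualitative effect studied below.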

The four measures are: (a) physician utilization, *p*_{u}, defined as the probability that the doctor in the inquiry stage is busy; (b) the mean physician total shift time in the inquiry stage, E(*D*), where *D* is the time that the doctor serves in the inquiry stage; (c) the mean wait time in the system of a randomly selected patient, E(*W*), where *W* = *W*_{i} with probability 1/20, *i* = 1, 2, ..., 20; and (d) the patient prolonged wait rate, measured by the probability that a patient stays in the clinic more than 150 min, P(*W* > 150). We remind readers that wait time must be defined carefully because patient wait times are not in a steady state. We use the notation *W* for a randomly selected patient's wait time, rather than any particular patient's wait time, because the *i*^{th} patient wait time, *i* = 1, 2, ..., 20, is in a transient rather than steady state.
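Given the raw output of one replication, the four measures can be estimated in a few lines. The argument names and shapes below are illustrative assumptions, not taken from the paper's code.

```python
import statistics

def performance_measures(wait_times, inquiry_busy, inquiry_shift, threshold=150.0):
    """Estimate the four measures from one replication's raw output.

    wait_times: list of each patient's time in system (minutes);
    inquiry_busy / inquiry_shift: minutes the physician was busy vs. on shift.
    """
    p_u = inquiry_busy / inquiry_shift              # (a) physician utilization
    e_d = inquiry_shift                             # (b) total shift time D
    e_w = statistics.fmean(wait_times)              # (c) random patient's mean wait
    prolonged = sum(w > threshold for w in wait_times) / len(wait_times)  # (d)
    return p_u, e_d, e_w, prolonged
```

Averaging these per-replication values over the 5000 replications yields the point estimates and standard errors reported later.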

The decision variables are: (i) three patient dispatching rules; (ii) 31 physician inquiry starting times *t*=0, 5, 10, 15, 20, ..., 150, where *t*=0 denotes a physician inquiry starting time of 8:00 am; and (iii) two types of patient arrival policies. Therefore, there are 3 ×31 ×2=186 possible scenarios for the simulation experiments.
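The scenario grid can be enumerated directly; a quick sketch confirming the count:

```python
from itertools import product

dispatch_rules = ["random", "LPT", "SPT"]
pist_minutes = list(range(0, 151, 5))      # t = 0, 5, ..., 150 after 8:00 am
arrival_policies = ["one-group", "staggered"]

scenarios = list(product(dispatch_rules, pist_minutes, arrival_policies))
assert len(scenarios) == 3 * 31 * 2 == 186
```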

Simulation modeling first requires the collection of actual data. For each patient, we recorded registration time and the times that the patient began and ended each stage of their examination. These data were used to fit distributions for the patient inter-arrival times; and processing times for registration, x-ray, ultrasound, blood draw, electrocardiogram, and physician inquiry.

The hypothesized probability density functions considered in this paper are uniform or beta distributions because these two families have finite range. Chi-square goodness-of-fit tests (Montgomery and Runger [22] p.316), a commonly used measure of how well sample data fit a hypothesized probability density function, were used to evaluate the fits of the candidate distributions to the observed data. Maximum-likelihood estimators were used to estimate the distribution parameters.
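As a sketch of the goodness-of-fit step, the Pearson chi-square statistic against a hypothesized Uniform(lo, hi) can be computed by binning the sample; the bin count k = 10 is an arbitrary illustrative choice, not the paper's.

```python
import random

def chi_square_uniform(samples, lo, hi, k=10):
    """Pearson chi-square statistic for H0: samples ~ Uniform(lo, hi),
    using k equal-width bins (textbook goodness-of-fit sketch)."""
    n = len(samples)
    observed = [0] * k
    width = (hi - lo) / k
    for x in samples:
        i = min(int((x - lo) / width), k - 1)   # clamp x == hi into last bin
        observed[i] += 1
    expected = n / k                             # equal expected count per bin
    return sum((o - expected) ** 2 / expected for o in observed)

rng = random.Random(7)
data = [rng.uniform(2.0, 6.0) for _ in range(1000)]
stat = chi_square_uniform(data, 2.0, 6.0)
# under H0 (and known parameters), stat is roughly chi-square with k-1 = 9 df
```

When parameters are estimated from the data, the degrees of freedom drop by the number of estimated parameters.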

There are two types of random processes involved in the studied PE service: the arrival process and service processes. For the arrival process, we fit the patient inter-arrival times as 0.01 + 28.13 beta(0.74, 3.4) with a mean time of 5.05 min. For service processes, the fitted distributions for registration, x-ray, ultrasound, blood draw, electrocardiogram, and physician inquiry are listed in Table 1. The corresponding mean and variance for each fitted distribution are also shown.
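The fitted inter-arrival model can be sampled directly with the standard library; `random.betavariate` draws the beta part, and the analytic mean 0.01 + 28.13 · 0.74/(0.74 + 3.4) ≈ 5.04 min is close to the reported 5.05 min.

```python
import random

rng = random.Random(2009)
# fitted inter-arrival model from the paper: 0.01 + 28.13 * Beta(0.74, 3.4)
samples = [0.01 + 28.13 * rng.betavariate(0.74, 3.4) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should land near the ~5.04-min analytic mean
```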

We executed 5000 replications of the simulation experiments for each of the 186 scenarios. One replication generates one observation of each performance measure. From the 5000 replications, we obtain the point estimate and the standard error of each performance measure, such as the mean patient wait time.

To reduce sampling error in our simulation experiments, common random numbers (CRN, a variance reduction technique [19, 33]) are used to investigate performance under the different scenarios. In our study, each replication *j* uses a set of eight random number streams,

*s*_{j} = (*s*_{j,1}, *s*_{j,2}, ..., *s*_{j,8}),

where *s*_{j,1}, ..., *s*_{j,7} are the random number streams used for generating the seven distributions, as shown in Column 3 of Table 1, and *s*_{j,8} is the stream used for generating each registered patient's next sub-stage in Stage 2. To carry out CRN, we use the same stream set *s*_{j} for all 186 combinations of scenarios within replication *j*.
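A minimal illustration of this stream discipline, with a toy response standing in for the real simulation (the eighth stream, reserved for sub-stage routing, is left unused here; all names and distributions are illustrative):

```python
import random

def service_time(stream):
    # placeholder service-time model standing in for the seven fitted distributions
    return stream.uniform(2.0, 8.0)

def run_scenario(pist, streams):
    # toy response: the policy parameter plus the same seven service draws
    return pist + sum(service_time(streams[i]) for i in range(7))

def crn_replication(rep_seed, pist_values):
    results = {}
    for pist in pist_values:
        # re-create the eight streams identically for every scenario, so
        # scenarios differ only through the policy, never through the noise
        streams = [random.Random(rep_seed * 100 + i) for i in range(8)]
        results[pist] = run_scenario(pist, streams)
    return results

r = crn_replication(42, [0, 60, 150])
```

Because every scenario sees identical draws, scenario differences here reflect only the policy change, which is exactly the variance-reduction effect CRN provides when estimating differences.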

For this paper we used SIGMA [31] to create our discrete-event model. SIGMA is based on an intuitive event-graph [30] approach to simulation modeling. We chose SIGMA rather than other simulation software because SIGMA models can be automatically translated into C code, and implementing CRN in C is easier than in other simulation packages. Other simulation software includes Arena [14], Extend [16], ExpertFit [20], and Flexsim [25]. A thorough review of simulation software can be found in Swain [35].

We verified and validated the simulation logic and model from three perspectives. First, the program logic was verified by an expert programmer and medical practitioner. Second, we applied the single-step run mode, which is a SIGMA feature, for model verification. Specifically, the run mode permits a user to monitor variable changes and the list of scheduled events to verify the logic of a simulation. Finally, we compared observed data and simulated estimates for the means of patient waiting times in the system and physician utilization (see Table 2) and concluded that the simulation model reliably mimics observed performance.

The numerical values corresponding to the column of means in Table 2 show that the observed and simulated mean patient wait time differ by 1.7%, and that the observed and simulated utilization differ by 2%. All reported simulation values in Table 2 are correct to within one or two units of the last reported digit [34].

This section discusses the selection of the best pist via solving a goal-programming problem, where the objective function needs to be estimated. The multiple goals we considered simultaneously are the maximization of physician utilization and the minimization of the other three performance measures.

A prototype multiple-goal programming problem is defined as (P1) (Hillier and Lieberman [10] p.332):

minimize *z* = Σ_{i=1}^{4} (*c*_{i}^{+} *d*_{i}^{+} + *c*_{i}^{−} *d*_{i}^{−})

subject to *f*_{i}(*t*) + *d*_{i}^{−} − *d*_{i}^{+} = *g*_{i}, *d*_{i}^{+}, *d*_{i}^{−} ≥ 0, *i* = 1, 2, 3, 4,

where the four performance measures *f*_{1}(*t*) = *p*_{u}, *f*_{2}(*t*) = E(*D*), *f*_{3}(*t*) = E(*W*), and *f*_{4}(*t*) = P(*W* > 150) are functions of *t*; the penalty weights *c*_{i}^{±} are built from the pre-specified values *c*_{1} and *c*_{2}; *g*_{1} is the upper bound (goal) for utilization; *g*_{2}, *g*_{3}, and *g*_{4} are the lower bounds (goals) for E(*D*), E(*W*), and P(*W* > 150), respectively; and *d*_{i}^{+} and *d*_{i}^{−} represent the amounts beyond *g*_{i} and below *g*_{i}, respectively.

The values *c*_{1} and *c*_{2} are used as penalty costs when a response value does not satisfy its goal. We set *c*_{1}=2 to indicate that one unit of physician benefit (such as utilization) is as important as 2 units of patient benefit (such as prolonged wait rate). We set *c*_{2}=5 to indicate that one unit (one percent) of physician utilization is as important as 5 units (minutes) of total shift time and, similarly, that one unit (percent) of patient prolonged wait rate is as important as 5 units (minutes) of mean wait time. We set the upper bound of utilization as *g*_{1}=1. We set *g*_{2}=*g*_{3}=*g*_{4}=0 because the true lower bounds for the mean patient wait time and the mean physician total shift time are unknown, and setting the values *g*_{2}, *g*_{3}, *g*_{4} lower than their true lower bounds will not change the solution of problem (P1).

Inserting *c*_{1}=2, *c*_{2}=5, *g*_{1}=1, and *g*_{2}=*g*_{3}=*g*_{4}=0 into the problem and unifying the objective function and constraints of (P1), we can rewrite problem (P1) as (P2): minimize over *t* the weighted sum of goal deviations, which we call the integrated performance measure *z*(*t*). Note that *z*(*t*) must be estimated via simulation.
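One possible reading of the integrated measure can be sketched as follows; the exact placement of the weights is our assumption, not the paper's (P2).

```python
def integrated_measure(p_u, e_d, e_w, prolonged, c1=2.0, c2=5.0):
    """One reading of z(t): penalize deviations from the goals g1 = 1
    (utilization) and g2 = g3 = g4 = 0, weighting physician-side terms
    by c1 and percent-valued terms by c2.  Assumed form, not the paper's."""
    d1 = max(1.0 - p_u, 0.0)         # shortfall below full utilization (fraction)
    z = (c1 * c2 * 100.0 * d1        # physician utilization, in percent units
         + c1 * e_d                  # physician total shift time (min)
         + e_w                       # mean patient wait (min)
         + c2 * 100.0 * prolonged)   # prolonged wait rate, in percent units
    return z
```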

We can solve problem (P2) using one of three methods: the interactive approach (IA), ranking and selection (R&S), or multiple comparison procedures (MCPs) [9]. The concept of standard error underlies all three approaches; they differ in that R&S and MCPs must determine the sample size needed to ensure a probability guarantee, whereas IA does not.

We adopted the IA approach to select the optimal pist. That is, we first simulate 31 scenarios with respect to 31 values of *t*, then select the best *t* in terms of the integrated performance measure. Moreover, we apply leading-digit rules [34] to report all estimates. That is, we report the point estimate through the leading digit of its standard error.
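The IA selection step reduces to an argmin over the 31 candidate values of *t*, with standard errors flagging near-ties; the numbers below are illustrative, not the paper's estimates.

```python
def select_best_pist(z_hat, se):
    """Interactive-approach sketch: pick the t with the smallest estimated
    z(t), and report neighbors whose estimates lie within the combined
    standard errors, since those are statistically indistinguishable."""
    best = min(z_hat, key=z_hat.get)
    near = [t for t in z_hat if abs(z_hat[t] - z_hat[best]) <= se[t] + se[best]]
    return best, sorted(near)

z_hat = {50: 410.0, 55: 392.0, 60: 388.0, 65: 395.0}   # illustrative estimates
se = {t: 3.0 for t in z_hat}
best, near = select_best_pist(z_hat, se)
```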

The simulation results demonstrate that the studied PE service is not a stationary system. Further, among the three decision variables, only physician inquiry starting time has a significant impact on the four performance measures.

The transient behavior of the *i*^{th}, *i*=1, 5, 11, 20, patient wait times in the original PE service is illustrated in Fig. 2, in which the estimated mean and variance are stated below each of the four plots. As expected, the distributions of the *i*^{th}, *i*=1, 5, 11, 20, patient wait times of the PE service are not identical. Further, neither the means nor variances of patient wait time of the PE services converge on a fixed value. Specifically, the mean and variance for the first patient are 157.3 min and 2.8 min^{2} (see Fig. 2a); and for the last patient are 198.3 min and 529 min^{2} (see Fig. 2d), respectively. That is, patient wait times are in a transient state, not steady state.

We investigate whether different dispatching rules affect patient and clinic performance. Recall that the original patient dispatching rule is random: patients randomly choose the next stage for service. The alternative dispatching rules we considered were longest processing time (LPT) and shortest processing time (SPT).

The results from the four plots in Fig. 3 show that the three patient dispatching rules have a negligible influence on physician utilization, physician total shift time in the inquiry stage, patient wait time, and the patient prolonged wait rate. The *x*-axis of the four plots in Fig. 3 is pist, and the *y*-axes of plots (a), (b), (c), and (d) are *p*_{u}, E(*D*), E(*W*), and P(*W*> 150), respectively. When pist is 9:00 am (corresponding to 60 on the *x*-axis), the LPT policy provides slightly better E(*W*) and *p*_{u} than the Random and SPT policies. Although this difference is statistically significant, there is no practical difference.

We investigated whether the two types of patient arrival policies affect the four previously described measures: (1) the original (one-group) policy has a fixed registration time of 8:00 am for all patients, and (2) the staggered policy adopts two registration times, 8:00 am and 9:00 am. Figure 4 is similar to Fig. 3, with the same *x*-axis and *y*-axis, except that the plots in Fig. 4 represent the two different patient arrival schedules.

The two plots in Fig. 4a and b are almost identical; the two patient arrival policies did not affect any of the four measures. Although the two plots in Fig. 4c and d are not identical, their differences are not practically significant.

We investigate whether the physician inquiry starting time affects patient and clinic performance. The plots in Figs. 3 and 4 clearly demonstrate that the physician inquiry starting time has a strong impact on performance. For example, the mean physician utilization *p*_{u} rises from 0.70 to 0.98 to 1.0 as the pist increases from 8:00 am to 9:00 am to 9:10 am, as seen in Fig. 4a. The mean physician total shift time decreases from 200 min to 147 min as the pist increases from 8:00 am to 9:00 am (Fig. 4b). The mean patient wait time increases from 90 min to 178 min, and the patient prolonged wait rate increases from 0 to 0.96, as the pist increases from 9:00 am to 10:30 am.

Because pist has a strong impact on all four performance measures, we constructed regression models for the four measures as functions of pist. Such regression models are also referred to as meta-models [8, 15] because they are models built on simulation models. The advantage of a meta-model is its functional form, which can be used to approximate a performance measure at any physician inquiry starting time. We did not, however, substitute the meta-models for the unknown performance measures in the goal-programming problem (P2) and solve it for the optimal solution.

For each performance measure, we consider two types of meta-models: quadratic and mixed (linear and quadratic). That is, the physician utilization, *p*_{u}; the mean physician total shift time, E(*D*); the mean patient wait time, E(*W*); and the patient prolonged wait rate, P(*W*>150), are written as functions of *t*, the pist, in two functional forms. The value of *R*^{2} inside the parentheses is a commonly used measure of the fit of the corresponding meta-model.

- Fitted physician utilization: quadratic model (*R*^{2}=0.98); mixed model (*R*^{2}=0.99).
- Fitted physician total shift time: quadratic model (*R*^{2}=0.98); mixed model (*R*^{2}=0.99).
- Fitted waiting time in system: quadratic model (*R*^{2}=0.99); mixed model (*R*^{2}=0.99).
- Fitted patient prolonged wait rate: quadratic model (*R*^{2}=0.99); mixed model (*R*^{2}=0.99).

Although the values of *R*^{2} for all quadratic and mixed meta-models are above 0.98, the mixed models fit better than the quadratic models (see Fig. 5). Note that *R*^{2} cannot measure the appropriateness of a model. For example, in Fig. 5a the quadratic model suggests some values of *p*_{u} larger than the feasible upper bound of 1, and in Fig. 5d it suggests some values of P(*W*>150) lower than the feasible lower bound of 0. That is, the quadratic models suggest infeasible values for *p*_{u} and P(*W*>150), even though the corresponding *R*^{2} is as high as 0.98.
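A quadratic meta-model of this kind can be fitted by ordinary least squares; the self-contained sketch below solves the normal equations for y = b0 + b1·t + b2·t² directly and reports *R*^{2}. The data in the test are synthetic, not the paper's simulation output.

```python
def fit_quadratic(ts, ys):
    """Least-squares fit of y = b0 + b1*t + b2*t^2 via the normal
    equations (3x3 Gaussian elimination); returns (b0, b1, b2, R2)."""
    n = len(ts)
    # build X^T X and X^T y for the design matrix columns [1, t, t^2]
    s = [sum(t ** k for t in ts) for k in range(5)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * 3
    for i in (2, 1, 0):                       # back substitution
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, 3))) / A[i][i]
    y_hat = [beta[0] + beta[1] * t + beta[2] * t * t for t in ts]
    y_bar = sum(ys) / n
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, y_hat))
    ss_tot = sum((y - y_bar) ** 2 for y in ys)
    return beta[0], beta[1], beta[2], 1.0 - ss_res / ss_tot
```

As the text notes, a high *R*^{2} says nothing about feasibility: nothing in the least-squares machinery keeps predictions inside [0, 1] for a probability-valued response.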

In this section, we propose two policies to improve the PE services. Policy 1 provides an optimal pist under the confined constraint and policy 2 is an innovative approach that relaxes the confined constraint.

Policy 1 suggests the pist be 9:00 am, which is the optimal solution of the goal program (P1) under the IA approach. We estimated the objective function, shown in problem (P1), for all 31 possible pist values (every 5 min between 8:00 am and 10:30 am).

The comparison between the original policy and Policy 1 is given in Table 3. The two policies differ in that the pist is 10:30 vs. 9:00 am; both policies apply random dispatching rules and have no staggered arrival rules. Policy 1 reduces the mean patient wait time by half (179 min to 90 min) (Column 7 of Table 3) and decreases the patient prolonged wait rate from 0.96 to 0.003 (Column 8), but the physician utilization (1 vs. 0.98) (Column 5) and the mean physician total service time (145 vs. 148 min) (Column 6) essentially remain the same.

In addition to the above, we believe that we have identified a further improvement to Policy 1. In this section, we explore an approach that changes the original PE system by either postponing (after all diagnostic studies have been completed) or omitting the physician inquiry stage (if there are no abnormalities to discuss) from the original PE service.

In the original PE service, there is a trade-off between decreasing the mean patient wait time and increasing physician utilization. Therefore, the optimal solution proposed was a compromise instead of an ideal solution for both patients and physicians. Recognizing this potential conflict and realizing that most of the laboratory and radiology results for each patient are not available in the inquiry stage, Policy 2 proposes omitting or postponing the physician inquiry stage from the current PE service. After patients receive their complete PE results via postal mail (normally one week after the patient receives the PE services), they would only be asked to return to the clinic for physician inquiry service if abnormal results are identified.

From the patient’s point of view, the mean patient wait time would decrease by more than half (from 179 min to 65 min; see Column 7 of Table 3). Furthermore, the patient would receive more complete advice from the clinic as subsequent referral and scheduling of a return visit (if necessary) would be directed to the appropriate specialist. Thus, return visits would be more time-efficient for patients.

We recognize the importance and diagnostic value of the physician inquiry and acknowledge that some serious medical problems may not be apparent if relying on laboratory tests alone. However, we believe that for physical examination situations in which the goal is to screen a large number of people for common medical problems, delaying or omitting the physician inquiry stage will lead to overall improved efficiencies while still providing quality medical care.

Increasing the efficiency of PE services in terms of patient wait time and physician utilization is an important topic, especially because PE services are commonly used for routine pre-employment and annual screening examinations for employees in various institutional settings throughout Taiwan. This paper proposed a discrete-event simulation model, then adopted goal programming to reach an integrated objective, which included multiple goals.

Many studies have applied computer simulation for optimization problems in health care systems. For example, Kropp and Carlson [17] adopted a recursive optimization-simulation approach to ambulatory health care settings. Wullink et al. [37] developed a discrete-event simulation model to investigate the optimal policy for reserving operating room capacity. In the last few years, several articles have been devoted to comprehensive surveys of applying such simulations to health care systems. England and Roberts [6] provided a framework for computer simulation in health care. Jun et al. [12] published a review of such applications, covering the early 1960s to the late 1990s. Fone et al. [7] conducted a systematic review evaluating the extent, quality, and value of computer simulation modeling for population health and health care delivery.

To our knowledge, all of the abovementioned papers discuss optimal policy in terms of a single performance measure, such as physician utilization or patient wait time; none discuss combining multiple measures, such as simultaneously maximizing physician utilization and minimizing both the physicians’ total shift time and patient wait time, as the objective function. In addition to addressing multiple goals via goal-programming methodology, this paper implements common random numbers in its simulation experiments to reduce sampling error, and also discusses meta-models to establish a mathematical relationship among the performance measures.

In our model we considered various feasible, optimal, and ideal solutions. A feasible solution achieved the minimum requirement for clinical efficiency, whereas the optimal solution was obtained among all feasible solutions within confined constraints. The ideal solution, however, was reached via relaxing existing confined constraints. Once a potential solution was reached, we then considered whether it was a compromise solution between two performance measures and how we could solve any conflicts in order to identify an ideal solution. We solved any identified conflicts by thinking beyond the existing confined constraints.

Figure 6 illustrates the performance measures, decision variables, methods, and solution types adopted in this paper. The framework shown in Fig. 6 could also be used as a generalized framework for any clinic service system.

The simulation results illustrate that while the three patient dispatching rules and two types of patient arrival policies have a negligible impact on any of the four outcome measures, the physician inquiry starting time has a strong impact on performance. We propose two improved policies. Policy 1 proposes changing the physician inquiry starting time from 10:30 am to 9:00 am. Policy 1 decreases patient wait time by 50% and decreases the patient prolonged wait rate from 0.96 to 0.003 without increasing overall physician utilization or total physician shift time. Policy 2 suggests the postponement or omission of the physician inquiry stage, with further examination reserved for patients with abnormal results. Following Policy 2, the mean patient wait time decreases by more than half (from 179 min to 65 min) without the requirement of additional clinic resources. Furthermore, the patient will receive more complete advice from the clinic, as subsequent referral and scheduling of a return visit (if necessary) will be directed to the appropriate specialist.

Below is the link to the electronic supplementary material

SIGMA (DOC 642 kb)

This research is supported by the National Science Council of the Republic of China under Grant No. NSC-97-2221-E-007-097. We thank Davis Senior High School students Michael Chuang and Samping Chuang for helping collect data.

** Open Access** This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

1. Aharonson-Daniel L, Paul RJ, Hedley AJ. Management of queues in out-patient departments: The use of computer simulation. J. Manage. Med. 1996;10:50–58. doi: 10.1108/02689239610153212. [PubMed] [Cross Ref]

2. Babes M, Sarma GV. Out-patient queues at the Ibn-Rochd Health Centre. J. Oper. Res. Soc. 1991;42:845–855.

3. Clague JE, Reed PG, Barlow J, Rada R, Clarke M, Edwards RHT. Improving outpatient clinic efficiency using computer simulation. Int. J. Health Care Qual. Assur. 1997;10:197–201. doi: 10.1108/09526869710174177. [PubMed] [Cross Ref]

4. Dull P, Haines DJ. Methods for teaching physical examination skills to medical students. Fam. Med. 2003;35:343–348. [PubMed]

5. Edwards RH, Clague JE, Barlow J, Clarke M, Reed PG. Operations research survey and computer simulation of waiting times into medical outpatient clinic structures. Health Care Anal. 1994;2:164–169. doi: 10.1007/BF02249741. [PubMed] [Cross Ref]

6. England W, Roberts SD. Applications of computer simulation in health care. In: Nielsen NR, Hull LG, Highland HJ, editors. Proceedings of the 1978 Winter Simulation Conference. Piscataway, NJ: Institute of Electrical and Electronics Engineers; 1978. pp. 665–677.

7. Fone D, Hollinghurst S, Temple M, Round A, Lester N, Weightman A, Roberts K, Coyle E, Bevan G, Palmer S. Systematic review of the use and value of computer simulation modeling in population health and health care delivery. J. Public Health Med. 2003;25:325–335. doi: 10.1093/pubmed/fdg075. [PubMed] [Cross Ref]

8. Friedman LW, Pressman I. The metamodel in simulation analysis: Can it be trusted? J. Oper. Res. Soc. 1988;39:939–948.

9. Goldsman D, Nelson BL, Schmeiser B. Methods for selecting the best system. In: Nelson BL, Kelton WD, Clark GM, editors. Proceedings of the 1991 Winter Simulation Conference. Piscataway, NJ: Institute of Electrical and Electronics Engineers; 1991. pp. 177–186.

10. Hillier FS, Lieberman GJ. Introduction to operations research. 7. New York: McGraw-Hill; 2001.

11. Indra R, Patil SS, Joshi R, Pai M, Kalantri SP. Accuracy of physical examination in the diagnosis of hypothyroidism: A cross-sectional double blind study. J. Postgrad. Med. 2004;50:7–10. [PubMed]

12. Jun JB, Jacobsen SH, Swisher JR. Application of discrete-event simulation in health care clinics: A survey. J. Oper. Res. Soc. 1999;50:109–123.

13. Kalton AG, Singh MR, August DA, Parin CM, Othman EJ. Using simulation to improve the operational efficiency of a multidisciplinary clinic. J. Soc. Health Syst. 1997;5:43–62. [PubMed]

14. Kelton WD, Sadowski RP, Sturrock DT. Simulation with Arena. 4. Boston, MA: McGraw-Hill; 2006.

15. Kleijnen JPC. DASE: Design and analysis of simulation experiments. 1. New York, NY: Springer; 2007.

16. Krahl D. Extend: An interactive simulation tool. In: Chick S, Sánchez PJ, Ferrin D, Morrice DJ, editors. Proceedings of the 2003 Winter Simulation Conference. New York, NY: Institute of Electrical and Electronics Engineers; 2003. pp. 188–196.

17. Kropp DH, Carlson RC. Recursive modeling of ambulatory health care settings. J. Med. Syst. 1977;1:123–135. doi: 10.1007/BF02285280. [PubMed] [Cross Ref]

18. Kwak NK, Kuzdrall PJ, Schmitz HH. The GPSS simulation of scheduling policies for surgical patients. Manage. Sci. 1976;22:982–989. doi: 10.1287/mnsc.22.9.982. [Cross Ref]

19. Law AM. Simulation modeling and analysis. 4. Boston, MA: McGraw-Hill; 2007.

20. Law AM. Simulation modeling and analysis with expertfit software. 4. Boston, MA: McGraw-Hill; 2006.

21. Miller D, Garza J, Tuggle D, Mantor C, Puffinbarger N. Physical examination as a reliable tool to predict intra-abdominal injuries in brain-injured children. Am. J. Surg. 2006;192:738–742. doi: 10.1016/j.amjsurg.2006.08.036. [PubMed] [Cross Ref]

22. Montgomery DC, Runger GC. Applied statistics and probability for engineers. 3. New York: Wiley; 2006.

23. Nardone DA, James KE, Finck DL. Impact of gowning on visit length and physical examinations. J. Gen. Intern. Med. 1998;13:489–490. doi: 10.1046/j.1525-1497.1998.00140.x. [PMC free article] [PubMed] [Cross Ref]

24. Nomden JG, Slagers AJ, Bergman GJD, Winters JC, Kropmans TJB, Dijkstra PU. Interobserver reliability of physical examination of shoulder girdle. Man. Ther. 2009;14:152–159. doi: 10.1016/j.math.2008.01.005. [PubMed] [Cross Ref]

25. Nordgren WB. Flexsim simulation environment. In: Chick S, Sánchez PJ, Ferrin D, Morrice DJ, editors. Proceedings of the 2003 Winter Simulation Conference. New York, NY: Institute of Electrical and Electronics Engineers; 2003. pp. 197–200.

26. Podgorelec V, Kokol P. Towards more optimal medical diagnosing with evolutionary algorithms. J. Med. Syst. 2001;25:195–219. doi: 10.1023/A:1010733016906. [PubMed] [Cross Ref]

27. Raouf A, Ben-Daya M. Outpatient clinic’s staffing: A case study. Int. J. Health Care Qual. Assur. 1997;10:229–232. doi: 10.1108/09526869710185012. [PubMed] [Cross Ref]

28. Reilly BM. Physical examination in the care of medical inpatients: An observational study. Lancet. 2003;362:1100–1105. doi: 10.1016/S0140-6736(03)14464-9. [PubMed] [Cross Ref]

29. Salerno DF, Franzblau A, Werner RA, Chung KC, Schultz S, Becker MP, Armstrong TJ. Reliability of physical examination of the upper extremity among keyboard operators. Am. J. Ind. Med. 2000;37:423–430. doi: 10.1002/(SICI)1097-0274(200004)37:4<423::AID-AJIM12>3.0.CO;2-W. [PubMed] [Cross Ref]

30. Schruben LW. Simulation modeling with event graphs. Commun. Assoc. Comput. Mach. 1983;26:957–963.

31. Schruben LW. Simulation graphical modeling and analysis (SIGMA) tutorial. In: Balci O, Sadowski RP, Nance RE, editors. Proceedings of the 1990 Winter Simulation Conference. Piscataway, NJ: Institute of Electrical and Electronics Engineers; 1990. pp. 158–161.

32. Schwind CJ, Boehler ML, Folse R, Dunnington G, Markwell SJ. Development of physical examination skills in a third-year surgical clerkship. Am. J. Surg. 2001;181:338–340. doi: 10.1016/S0002-9610(01)00573-6. [PubMed] [Cross Ref]

33. Shechter SM, Schaefer AJ, Braithwaite RS, Roberts MS. Increasing the efficiency of Monte Carlo cohort simulation with variance reduction techniques. Med. Decis. Mak. 2006;26:550–553. doi: 10.1177/0272989X06290489. [PubMed] [Cross Ref]

34. Song W-MT, Schmeiser BW. Omitting meaningless digits in point estimates: The probability guarantee of leading-digit rules. Oper. Res. 2009;57(1):109–111. doi: 10.1287/opre.1080.0529. [Cross Ref]

35. Swain JJ. “Gaming” reality: Biennial survey of discrete-event simulation software tools. OR/MS Today. 2005;32:44–55.

36. Vogt W, Braun SL, Hanssmann F, Liebl F, Berchtold G, Blaschke H, Eckert M, Hoffmann GE, Klose S. Realistic modeling of clinic laboratory operation by simulation. Clin. Chem. 1994;40:922–928. [PubMed]

