Stud Health Technol Inform. Author manuscript; available in PMC 2013 June 26.
Published in final edited form as:
PMCID: PMC3693465

Clinical Breast Examination Simulation: Getting to Real


Verschuren and Hartog’s six-stage methodology for design-oriented research is a process that is ideally suited to the development of artifacts that meet a desired outcome. We discuss the methodology and its relevance to simulation development for establishing a wide variety of realistic models that can be used for assessment.


For over 30 years, the American Cancer Society has put forth guidelines for the early detection of breast cancer [1]. Patients who present for screening or, more commonly, for symptoms such as pain or a palpable mass/lump require a clinical breast examination by a physician who is competent in the skill. While there is good evidence that clinician training on breast models generalizes to performance with patients, there is a paucity of realistic models that simulate the wide range of clinical presentations necessary for training [2–4].

Verschuren and Hartog’s six-stage methodology for design-oriented research provides explicit guidance on evaluating the processes and criteria that inform design [5]. An iterative approach is employed to incorporate stakeholder needs in the conception, research, development and evaluation of a design. The end result is a design that meets stakeholder expectations. In this paper, we apply Verschuren and Hartog’s six-stage methodology to design a set of clinical breast examination (CBE) simulators. We discuss the methodology and its relevance to establishing a wide variety of realistic models that can be used for assessment.


Defining Goals

The first stage of Verschuren and Hartog’s (VH) methodology was to define a small set of goals to be realized with each simulator. We defined the goals for three pilot test groups: Groups 1, 2, and 3. For Group 1, we employed a “first hunch”, initiative-driven approach to identifying what is required for each simulator. For Group 2, we refined the goals based on the Group 1 feedback and any additional initiatives. For Group 3, we continued with goal refinement based on Group 2 feedback and any other initiatives.

Design-Centered Development

We designed three sets of clinical breast exam simulators, each intended to simulate a specific clinical presentation of a female left breast. The first set included a breast with a cyst (Simulator A), a breast with a cancer (Simulator B), and a breast with a cancer and ribs (Simulator C), Figure 1(a)–(c). The second set comprised a breast with a cyst (Simulator D) and two breasts with cancers (Simulators E and F), Figure 1(d)–(f). The third set included a breast with a cyst (Simulator G) and two breasts with deep tissue cancers (Simulators H and I), Figure 1(g)–(i).

Figure 1
Shown are textbook-defined clinical presentations and their prototype implementation, (a), (b) and (c) (Stage 2 through 5). A total of three sets of three prototypes for each pilot test group are designed based on iterative executions of the methodology. ...

Having defined the goals (VH-Stage 1), we captured initial design requirements (VH-Stage 2) and translated them into detailed structural specifications (VH-Stage 3) to develop the functioning simulators (VH-Stage 4). Having completed the simulators, we gathered feedback from end users for further simulator and presentation refinement (VH-Stage 5). We iteratively cycled through the six VH-stages until the simulators met the design requirements. The simulators were presented to their pilot test group only when we felt ready for formal evaluation (VH-Stage 6).

The Group 1 venue was the 2010 Lynn Sage Breast Cancer Symposium. The conference brought together individuals involved in diagnostic and therapeutic radiology, oncology, surgery, gynecology, family practice, and genetics. The Group 2 and Group 3 venues occurred on two different days at the annual meeting of the American Society of Clinical Oncology (ASCO).

We presented three simulators with a brief patient history at each venue. Volunteer examiners were asked to provide demographic information, read the history, and perform a clinical breast examination on the simulators. After completing the examination, the volunteers were asked to indicate a probable diagnosis by drawing the location of the diagnosis on a diagram of the left breast and by selecting one of three choices on a survey: Fibroadenoma, Cyst, or Cancer. An “Other” category was provided for volunteers who chose to give an alternate diagnosis. In addition, volunteers provided specific comments on the usefulness of the simulators as an assessment tool.

Design-Centered Evaluation

The sixth stage in the VH-design methodology was the “Evaluation Stage”. Our main focus during this stage was to refine the simulators and the way that they were presented in response to feedback from each group.

In addition, we analyzed differences in clinical background and compared probable diagnosis results between Group 1 and Groups 2 and 3 combined. We discussed how clinical background might affect probable diagnosis. Finally, we summarized our assessment of the methodology in light of our desire to build a wide variety of realistic models that can be used for assessment.


Table 1 lists the set of goals for the three pilot test groups. For Group 1, our “first hunch” technology development initiative was based on researching cases in standard medical textbooks. This search helped us to define the most commonly articulated clinical breast presentations. Once defined, we translated the textbook cases into three clinical scenarios:

Table 1
A small set of goals is established (Stage 1 of the Verschuren and Hartog methodology)
  • A 35-year-old female with a recurrent, painful cyst.
  • A 40-year-old female with a large, palpable breast mass.
  • An 80-year-old female with a large, palpable breast mass.

In preparation for Group 2, we modified the original three simulators and patient histories based on feedback from Group 1. In addition, we developed a new breast skin to use with the modified models for Group 2. For Group 3, we refined our simulators based on feedback from Group 2. The goal was to make the models more difficult. In addition, we placed real-life photographs of a representative patient next to each simulator.

Figure 1 shows a pictorial chart of the technology development process used to create the models for each of the three pilot test groups. For Group 1, Simulators A and C achieved close to 80% clinician agreement on the clinical presentation. For Group 2, Simulator D, we attempted to fixate the previously mobile cyst (Group 1, Simulator A). This modification resulted in 37% of Group 2 participants interpreting the cyst as a cancer. For Group 2, Simulator E, we also fixated the tumor (previously Group 1, Simulator B). This modification increased clinician agreement on the clinical presentation. For Group 2, Simulator F, we used a new breast skin; there were no significant changes in clinician agreement when comparing Group 1, Simulator C with Group 2, Simulator F. Using feedback from Group 2, the three breast models were modified once again, this time with the goal of making the clinical presentations more subtle and hence more difficult to diagnose. For Simulator G (previously Simulator D), there was an increase in the diagnosis of cancer. As such, our final conclusion is to return to the original cyst design (Simulator A) and find a way to fixate the cyst without making it more firm or cancer-like. For Simulators H and I, we made the cancer masses more subtle; as a result, more clinicians in Group 3 missed the diagnosis.

In the final evaluation of our design process for the nine breast models, we now have a good understanding of what it takes to “get to real”. The models we will use for testing basic clinical breast examination skills include Simulators A, E, C and F. The remaining models will be used for higher order testing such as palpation techniques and perception studies.

Table 2 shows the frequency and percentage breakdown of the three pilot test groups by clinician type. Physicians were the most highly represented clinicians in Groups 1, 2, and 3 at 62%, 88%, and 92%, respectively. Table 3 shows the frequency and percentage breakdown for each group by specialty. Surgery and surgical oncology topped this list at 36% for Group 1. The latter two groups were overwhelmingly represented by medical oncology at 86% (Group 2) and 88% (Group 3). Table 4 shows predominant practice differences between the pilot test groups. These factors may also play a role in diagnosing the breast models.

Table 2
Frequency and percentage breakdown for each Pilot Test Group by type of clinician
Table 3
Frequency and percentage breakdown for each Pilot Test Group by specialty
Table 4
Predominant practice differences between pilot test groups


Design-Centered Development: Simulation Refinement

Following Stage 1 of the Verschuren and Hartog six-stage methodology, we defined the simulator design goals. Verschuren and Hartog note that, “Experience has shown that users often do not know what they want, which makes validation of user requirements ex ante very difficult if not impossible”. As such, a “first hunch” initiative became the primary basis for defining the goals for Group 1. Once we collected and evaluated the feedback from Group 1, we iteratively cycled through the VH-stages and made strategic design modifications for Groups 2 and 3. The iterative approach of defining the goals, implementing the design, collecting feedback, and evaluating the results successfully established a framework for continuous design refinement of the simulators and their presentation.

Simulator Refinement

The volunteers in Group 1 said that the masses on Simulators A and B should be less mobile. In response to this feedback, we made changes for Group 2 to immobilize the masses, resulting in the new Simulators D and E. While these changes improved clinician agreement on the diagnosis of Simulator B (now Simulator E), they led to a different interpretation of Simulator A.

Presentation Refinement

In response to the feedback from Group 1, we removed the identification of the mass as a cyst in the patient history of Simulator A. In addition to fixating the mass, changing the clinical presentation history may have affected participants’ diagnosis of Simulator D. Our next step is to go back to the original design for Simulator A, fixate the cyst without making it firm, and use a history that does not state the word “cyst”. This will enable us to determine whether the language of the initial history contributed to the 78% agreement. At this stage, it is not clear whether the real-life photos had an effect on diagnosis. This will need to be tested separately.


The iterative approach of the Verschuren and Hartog methodology is key to successfully establishing a framework for continuous user-centered design refinement. It explicitly establishes achievable goals through end-user feedback and initiative-driven approaches that can be evaluated such that continuous refinement is practically and incrementally realized.

A review of medical device development has shown that the stakeholders are heterogeneous in several aspects, such as needs, skills and working environments [7]. While simulator differences affect probable diagnosis, practice differences and treatment patterns may also be a contributing factor when considering motivations. Hence, the design and evaluation of CBE simulators on a wide variety of groups is critically important to the ultimate understanding of effective clinical breast examination skills. Designing a wide variety of realistic models that can be used for assessment is an achievable goal if an explicitly outlined methodology such as Verschuren and Hartog’s is thoughtfully implemented.


We would like to thank Abby Kaye, Brandon Andrews and Adam Cohen for assisting us with our research.


1. Smith RA, et al. Cancer screening in the United States, 2011: A review of current American Cancer Society guidelines and issues in cancer screening. CA Cancer J Clin. 2011;61(1):8–30. [PubMed]
2. Barton MB, Harris R, Fletcher SW. The rational clinical examination. Does this patient have breast cancer? The screening clinical breast examination: should it be done? How? JAMA. 1999;282(13):1270–80. [PubMed]
3. Fletcher SW, O’Malley MS, Bunce LA. Physicians’ abilities to detect lumps in silicone breast models. JAMA. 1985;253(15):2224–8. [PubMed]
4. Pennypacker HS, et al. Why can’t we do better breast examinations? Nurse Pract Forum. 1999;10(3):122–8. [PubMed]
5. Verschuren P, Hartog R. Evaluation in design-oriented research. Quality &amp; Quantity. 2005;39(6):733–762.
6. Marsh SK, Archer TJ. Accuracy of general practitioner referrals to a breast clinic. Ann R Coll Surg Engl. 1996;78(3 Pt 1):203–5. [PMC free article] [PubMed]
7. Shah SG, Robinson I. User involvement in healthcare technology development and assessment: structured literature review. Int J Health Care Qual Assur Inc Leadersh Health Serv. 2006;19(6–7):500–15. [PubMed]