For many years, various charities have sent doctors to developing countries to perform cleft lip and palate surgery. In the course of these missionary trips, visiting doctors can operate on only a limited number of patients. It has become clear that empowering the local surgeon to solve the problem is the solution. Smile Train commissioned our laboratory to create a series of CD-ROMs that would instruct surgeons using a combination of three-dimensional animation and live surgical footage. During the production of these CDs, we programmed custom plug-ins for commercial animation software (Maya®, Alias) that allowed us to perform surgical techniques on digital models of a unilateral and a bilateral patient. The next phase of the project was to create a surgical simulator. From the animation project, we had developed a workflow for virtual surgery simulation. Using many of the concepts from the animation project, we were able to program a real-time virtual surgical simulator in C++. The deformer-based surgery simulator is a stand-alone application that allows a doctor to practice, record, and review a surgery in a safe digital environment.
For many years various charities have been sending doctors to developing countries to perform cleft lip and palate surgery. In the course of these missionary trips, doctors can operate on only a limited number of patients. One of us (CC) participated in several of these surgical cleft missions. In a 2-week period, nearly 200 children were operated on. Although the intention was always to teach the local surgeons, the language barrier, lack of effective teaching materials, and the need to operate as quickly as possible to maximize the number of children treated usually resulted in very little effective education for the local surgeons. Although the 200 children were certainly helped, a more global perspective forces us to realize that this is no long-term answer to the problem. In China alone, tens of thousands of children are born with clefts every year.
It became clear to several of us that empowering the local surgeon to solve the problem in his or her own country was the only definitive solution. The Smile Train charity arose on the basis of that concept. Its motto was “give a man a fish and he'll eat for a day—teach a man to fish and he'll eat for a lifetime.” Since its inception, Smile Train has supplied free training, education, equipment, and financial support to developing countries across the globe. This new strategy has resulted in more than 110,000 successful repairs in 55 countries since 1999. “This is three times the number of surgeries completed in the last 20 years by conventional charities,” says DeLois Greenwood, The Smile Train's vice president. One cornerstone of the Smile Train effort is education.
Smile Train decided to employ high-technology tools to maximize information transfer to local surgeons. Smile Train has been closely linked to a major computer software firm, so computer-based multimedia educational tools seemed a logical choice. A cleft surgeon with extensive computer graphics experience was sought. For the past 25 years, Dr. Cutting has been conducting research using three-dimensional (3D) computer graphics in the study of craniofacial malformations. Together with Smile Train, he reasoned that the best way to communicate the 3D concepts necessary to teach cleft lip and palate surgery would be through 3D animation and simulation. Three years later, the Smile Train Virtual Surgery Cleft Training CD set was released. It is available at little or no cost on the internet through www.smiletrain.org.
This project marks the first time 3D computer animation has been used to illustrate traditional surgical techniques.1 The advantages of using 3D animation are clear; nevertheless, static textbook illustrations continue to be the norm. Three-dimensional visualization is particularly useful for teaching cleft lip and palate surgery, which involves many flaps in a complicated 3D environment. The representation of motion allows modeling of anatomical mechanics that cannot be conveyed with static illustrations. At the beginning of this project, we assumed that using the then-current generation of animation software (Maya 2.5) for surgical animation would be straightforward. It became clear several months into the project that the tools supplied in standard animation programs were insufficient to illustrate surgery.
To develop the animations contained in the CD set, our group had to adapt the 3D animation software. The creation of surgical animations is done in several steps, which include modeling, texturing, animation, and composition of rendered files.
To create accurate models, we used laser light and computed tomography (CT) scans of two Chinese children with unrepaired clefts (one unilateral and one bilateral). Each of the anatomic reference models was created using software previously developed in this laboratory (Fig. 1). Three-dimensional dense surface models were constructed from slice data.2,3,4 These reference models were imported into an animation package (Maya by Alias, Toronto, Ontario, Canada). The CT models were used only as a reference because of their high polygonal face count and artifacting. The next step of the process was to reconstruct the models into lightweight or “clean” models using several techniques developed in our laboratory (Figs. 1 and 2).5
It quickly became clear that even the best commercial animation software had never needed to alter the topological structure of its characters. We came to the realization that “you don't cut Mickey Mouse.” Animation characters deform frequently, and often dramatically, but the way the surfaces that make up a character are connected is never changed. Unfortunately, the essence of surgery is topological change. Surfaces are cut, flaps are created, and tissue is transposed and connected to another part of the character to which it has never before been connected. We therefore needed to write several software “plug-ins” for the animation software to enable it to simulate surgery.
An incision tool plug-in was programmed in C++ to make an incision on a single-layer model. This tool allowed us to quickly create incisions that were otherwise time consuming to make with Maya's standard split-polygon tool (Fig. 3). The model is topologically changed with each new incision, which forces the animator to generate a new “scene” file. To create a continuous animation, each scene needed to be composited in postproduction. If there are 15 incisions, there have to be 15 scenes that must be linked together (Fig. 3).
A forceps tool software plug-in was created to mimic the deformation of soft tissue. Retraction of tissue is an important surgical activity, but when the animations were first produced, Maya 2.5 supplied only a set of standard deformers that were useful for character and commercial animation work; none could replicate the biophysics of folding back a skin flap. To solve this problem, it was necessary to create two custom deformer plug-ins, including the “forceps tool” deformer, to approach realistic-looking tissue retraction. These plug-ins allow the user to give a selected flap the appearance of surgically transposed skin (Fig. 4). Maya 6.0 has since addressed this problem with the new soft-modification deformer (Fig. 4).
Originally, we began our surgical simulation work on dual-layer models. Over time, we learned that dual-surface, “solid” models were difficult to manipulate and did not produce a desirable final animation. CT scans were used to provide all of the surface reference information for the external skin (the top layer). In our first attempt to create a surface model, the skin was represented by a block of tissue composed of a top and a bottom layer connected by a cut edge. Much time was wasted on this modeling structure: manipulation of either the outer or inner surface of the skin caused interpenetration and rendering artifacts. Finally, we realized that the bottom and edge surfaces of a flap always follow the same motion as the top layer. We now model the skin as a single layer and automatically create the underlying surface and edge just before rendering the animation. To create an underlying layer of fat on single-layer skin models, it was necessary to program a fat tool plug-in. The fat tool automatically applies “fat” to any designated part of the model to create the illusion of full-thickness skin; the fat simply follows the motion of the model's deformation (Fig. 5). The advent of the fat tool greatly improved the look and realism of the animation.
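The core idea of the fat tool, generating an underside that simply follows the top layer, can be sketched in a few lines of C++. This is an illustrative reconstruction, not the actual plug-in code: the `Vertex` structure, function name, and fixed-thickness offset are all assumptions. It shows the basic operation of pushing each skin vertex inward along its surface normal to form a “fat” bottom layer just before rendering.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// One vertex of a single-layer skin mesh: position and unit surface normal.
struct Vertex {
    std::array<float, 3> pos;
    std::array<float, 3> normal;
};

// Hypothetical fat-tool core: given the top (skin) layer, generate a bottom
// layer by pushing every vertex a fixed thickness along the inward normal.
// Because the offset is recomputed from the deformed top layer, the fat
// automatically follows any deformation of the skin.
std::vector<Vertex> makeFatLayer(const std::vector<Vertex>& skin, float thickness) {
    std::vector<Vertex> bottom;
    bottom.reserve(skin.size());
    for (const Vertex& v : skin) {
        Vertex b = v;
        for (int i = 0; i < 3; ++i)
            b.pos[i] = v.pos[i] - thickness * v.normal[i]; // inward offset
        for (int i = 0; i < 3; ++i)
            b.normal[i] = -v.normal[i]; // bottom surface faces the other way
        bottom.push_back(b);
    }
    return bottom;
}
```

In practice the real tool would also stitch the top and bottom layers together along the cut edge; only the offsetting step is shown here.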
Several animation tools were useful for demonstrating surgical concepts, and there are numerous advantages to using 3D animations rather than 2D illustrations or intraoperative video. The virtual animation “camera” made it possible to visualize anatomy from angles that are impossible to obtain during live surgery, giving the viewer the best vantage point for each maneuver. Another useful surgical animation feature is transparency: the skin and fat obscuring the view of deeper anatomy can be animated from opaque to transparent, revealing the underlying dynamic motion of the anatomy. The use of animated motion allows the modeling of anatomical mechanics that cannot be shown with still illustrations; the simulation of the pump mechanism of the eustachian tube is an excellent example of using animation to describe a dynamic system (Fig. 3). Computer graphic representations of anatomy also allow the selective removal of objects usually present in a surgical environment. In a surgical video, for example, the surgeon's hands, blood, drapes, instruments, and so forth often obscure the surgery (Fig. 6); animated surgery reduces the operating field to its essential elements. Surgical video also tends to be two-and-a-half dimensional: the surgical camera is usually fixed in position, diminishing the 3D nature of the subject. Animated cameras, by contrast, can use different rotations and positions to show the three-dimensionality of a procedure, and virtual cameras can zoom in to very small areas that are impossible to view intraoperatively (Fig. 6).
Compositing is a field of digital imaging that allows pointers, words, and images to be overlaid onto animations and surgical footage. For example, it is possible to composite the surgical footage onto the animation to show a particular concept. Another advantage of using digital surgery is the ability to illustrate inferior techniques. For example, the triangle repair for the unilateral cleft lip is still a widely used procedure in developing countries. Using our animations, we are able to show the consequences of this outdated repair, whereas it would be unethical to demonstrate this procedure on a live patient (Fig. 7).
The next step in the development of our technology has been the creation of interactive simulators. After successfully completing the surgical animations, we realized that the code we used for the deformer plug-ins could be used for a real-time surgical simulator. The training surgeon can now practice and interactively observe the procedure and make mistakes without putting patients at risk. Recovery from various disaster scenarios may also be practiced, as is commonly done with airline pilots learning to fly a new plane on a simulator.
In contrast to other virtual surgical simulators that employ a finite element approach, this one makes extensive use of deformers with a minimal finite element foundation.6 Combining deformers and a mass spring network to limit movements of these deformers allows real-time performance on common personal computers.7
The custom deformer plug-ins were the principal tools used to create the morphed forms required for the surgical animation. Limitation of soft tissue movement was an artistic task rather than a computational-scientific one.
Following the spirit of the animation project, our group worked in the realm of medical illustration rather than exact science. This ideology provided a new freedom in the development of deformers that looked like, but did not exactly mimic, reality. It was natural to export these deformers into a stand-alone surgical simulator program. Movements of the points of these deformers are determined by a sparse spring network, which is created and adjusted dynamically as more points are added. The simulator produces a realistic sequence of forms that can be used as a teaching tool for cleft lip and palate surgery.
The software structure takes much of its foundation from the animation work described in this article. Tissues are represented using the single-layer model technique. We found it necessary to further reduce the polygon count of the original surfaces by once again resurfacing the reference models. The models were then exported into the surgical simulator environment. The target machine is a Microsoft Windows® personal computer with an OpenGL™-capable graphics card. The simulator was programmed in C++ using the Microsoft Visual C++ .NET™ compiler, rendering to an OpenGL™ graphics window. The user interface was implemented using the Microsoft Foundation Classes. Surgical simulation is performed using four tools.
The incision tool creates an incision on the surface of the model. The user creates the incision by clicking the mouse along the path of the desired cut. A new incision is created when the enter key is pressed. The incision is represented both graphically and topologically.
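The click-then-commit interaction described above can be sketched as a small C++ class. This is a hypothetical reconstruction of the input handling only (the class name, methods, and two-point minimum are assumptions); the topological mesh split that the real tool performs on commit is not shown.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

using Point3 = std::array<float, 3>;

// Hypothetical sketch of the incision tool's input handling: mouse clicks
// accumulate a path of surface points, and pressing Enter commits the
// path as a finished incision.
class IncisionTool {
public:
    // Called on each mouse click along the desired cut.
    void click(const Point3& surfacePoint) { pending_.push_back(surfacePoint); }

    // Called when the Enter key is pressed; a valid incision needs at
    // least two points. Returns whether an incision was created.
    bool commit() {
        if (pending_.size() < 2) return false;
        incisions_.push_back(pending_);
        pending_.clear();
        return true;
    }

    std::size_t incisionCount() const { return incisions_.size(); }

private:
    std::vector<Point3> pending_;                // points clicked so far
    std::vector<std::vector<Point3>> incisions_; // committed incision paths
};
```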
The hook tool allows the user to create a virtual hook for the purposes of retraction and tissue mobilization. These “hooks” serve as control points for the underlying deformers and are linked by a spring network to the underlying bone, which controls the limits of their mobility. The hooks can also be deleted by the user (Fig. 8).
The suture tool creates a virtual suture between two selected points on any incision edge. This tool creates and attaches two hooks (from the hook tool) with a powerful spring that binds them together to create a new suture. The suture will come together only if the biological spring network allows the connection to be made.
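A minimal way to express the suture's binding spring is as a stiff zero-rest-length Hooke spring between the two hook points. The sketch below is an assumption about the form of that force, not the simulator's actual code; the biological network that may prevent the edges from meeting is outside this fragment.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Hypothetical suture binding: force on point a from a stiff Hooke spring
// (rest length zero) pulling it toward point b. An equal and opposite
// force would act on b. Whether the edges actually close depends on the
// surrounding biological spring network, not modeled here.
Vec3 sutureForce(const Vec3& a, const Vec3& b, float stiffness) {
    return { stiffness * (b[0] - a[0]),
             stiffness * (b[1] - a[1]),
             stiffness * (b[2] - a[2]) };
}
```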
The undermine tool disassociates an area of soft tissue from its underlying connectivity to the model's topology. A “flap” is created by making an incision to form a peninsula of skin or muscle and undermining this tissue to a NURBS plane connecting two base points. The flap deformer is used to simulate movements of this peninsula resulting from movements of hooks placed along its cut edge. A “deep surface” of a flap is generated using a system similar to the fat tool described in the animation section. The deep surface is projected back along the flap surface normals, producing a “flap bottom.” Along with the incision edge that already bounds it, the flap bottom produces a realistic-looking flap for the user. Large volumes of composite soft tissues can also be dissociated from the underlying bone using a process similar to the undermine tool. This composite undermining or “bone undermining” means that hooks placed on overlying soft tissues will no longer be connected to the area of bone as the spring connections are created (Fig. 9).
Two types of deformers are used to create the simulation, the “jello” deformer and the “flap” deformer. The simulator automatically generates either a jello or flap deformer when a hook is placed on the model. A jello deformer is created if the point selected by the user is not undermined. Conversely, a flap deformer is created if the selected point has been undermined with the undermine tool.
The jello deformer's algorithm treats the affected volume of tissue much like a gelatinous mass. This deformer acts on the vertices that neighbor a single control point on nonundermined tissue: when the user places a hook on a nonundermined point in the model, a jello deformer is generated. Both the shape and the overall volume of the jello deformer are determined by the distance from the selected skin or muscle point to the nearest bone point; the deformer analyzes the relationship between the bone and soft tissue models to create a realistic deformation when a hook is manipulated by the user. A single jello control vertex can affect multiple soft tissues that share a common bone foundation. In this way, a hook on overlying skin will move not only connected neighboring skin vertices but also muscle and cartilage vertices that may lie underneath.
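One plausible form for the jello behavior is a smooth falloff weight: vertices near the hook move almost as far as the hook, and the influence decays to zero at a radius derived from the skin-to-bone distance. The exact falloff used by the simulator is not published, so the smoothstep-style curve below is purely illustrative.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical jello falloff: the fraction of the hook's displacement
// applied to a neighboring vertex, decaying smoothly from 1 at the hook
// to 0 at the deformer radius. Per the text, the radius would be derived
// from the distance between the hooked point and the nearest bone point.
float jelloWeight(float distToHook, float radius) {
    if (distToHook >= radius) return 0.0f;
    float t = distToHook / radius; // 0 at the hook, 1 at the rim
    float s = 1.0f - t;
    return s * s * (3.0f - 2.0f * s); // smoothstep-shaped falloff
}
```

A displaced vertex would then move by `jelloWeight(d, r)` times the hook displacement, which keeps the deformation continuous at the rim of the affected region.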
The flap deformer simulates the movement of a surgically undermined flap. When a user places a hook on an undermined area of tissue, a flap deformer is generated by the simulator. The flap deformer simulates the folding of a peninsula of a single undermined tissue when a hook placed on a cut edge is moved. For a simple flap hook, the position of each incision edge point is a function of two (occasionally one) vertices along the flap base. The deformation behavior of the flap deformer is determined by the shape of the peninsula of tissue that has been undermined (Fig. 10).
To limit the movement of any hook with a flap or jello deformer, a sparse spring network is used to connect and activate control points. Spring connections are very simple and are made when the user attaches a hook to the model. There are no a priori spring connections in the model.
If the user creates a jello deformer by placing a hook on a nonundermined area of tissue, three spring connections are made to the selected hook point. One spring is connected to the nearest bone point. The second, “skyhook” spring is connected to a point well above the vertex along the line connecting the bone point and the vertex. A third, stabilizing “homing” spring attempts to bring the vertex back to its initial position. If two jello hooks have overlapping deformation areas, an additional spring is connected between the two control vertices (Fig. 10).
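The three-spring arrangement can be sketched as follows. The geometry matches the description above (bone point, skyhook along the bone-to-vertex line, homing anchor at the rest position), but the function name, `Spring` structure, stiffness values, and skyhook height are all illustrative assumptions.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

using Vec3 = std::array<float, 3>;

struct Spring {
    Vec3 anchor;     // fixed point the spring pulls toward
    float stiffness; // illustrative stiffness value
};

// Hypothetical setup of the three springs attached when a jello hook is
// placed on a nonundermined vertex: one to the nearest bone point, one
// "skyhook" above the vertex along the bone-to-vertex line, and one
// "homing" spring back to the vertex's rest position.
std::vector<Spring> makeJelloSprings(const Vec3& vertex, const Vec3& nearestBone,
                                     float skyhookHeight) {
    Vec3 dir = { vertex[0] - nearestBone[0],
                 vertex[1] - nearestBone[1],
                 vertex[2] - nearestBone[2] };
    float len = std::sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
    if (len > 0.0f)
        for (float& c : dir) c /= len; // unit bone-to-vertex direction
    Vec3 sky = { vertex[0] + skyhookHeight * dir[0],
                 vertex[1] + skyhookHeight * dir[1],
                 vertex[2] + skyhookHeight * dir[2] };
    return {
        { nearestBone, 1.0f }, // spring to the nearest bone point
        { sky,         0.5f }, // "skyhook" spring above the vertex
        { vertex,      0.2f }, // "homing" spring to the rest position
    };
}
```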
If the hook has been assigned to a flap, a spring is connected to the nearest base point of the flap, and the jello deformer and spring setup described previously are attached to that base point. If the hook is on a subflap, a spring is connected to each of the two base points of the subflap, which are in turn handled as hook points on the parent flap. In a complete surgical simulation, the interconnected active spring network can become progressively complex (Fig. 11). All springs are “biological,” recreating the nonlinear attributes of stretched skin. User hooks are not connected directly to deformer vertices on the model but are attached to them by strong springs with simple Hooke's-law behavior. If the user moves a hook beyond the stretch limits of the biological springs, this “hook spring” simply stretches after the vertex reaches its limit (Fig. 11).
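The distinction between “biological” and hook springs comes down to their force-versus-stretch curves: skin resists stretching nearly linearly at first and then stiffens sharply, whereas a hook spring stays linear. The specific stiffening function below is an assumption chosen only to illustrate that contrast; the simulator's actual curve is not published.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical "biological" spring response: nearly linear for small
// stretch, stiffening steeply as the stretch limit is approached, to
// mimic the nonlinear behavior of skin described in the text.
float biologicalForce(float stretch, float limit, float k) {
    float t = stretch / limit;                  // fraction of the limit used
    return k * stretch * (1.0f + 9.0f * t * t * t); // cubic stiffening term
}

// Hook springs, by contrast, keep simple linear Hooke's-law behavior,
// so they just keep stretching once the tissue vertex hits its limit.
float hookSpringForce(float stretch, float k) {
    return k * stretch;
}
```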
To combine jello and flap deformers smoothly, we developed a deformer blender system. Jello and flap deformers produce different displacements at overlapping vertices, and it is necessary to blend the effects of each deformer to create a realistic simulation. In a complex scene with many jello and flap deformers, updates must be triggered as various hooks are manipulated during the surgery. This deformer blender system is carefully optimized for real-time performance.
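One simple blending scheme consistent with the description is a normalized weighted average of the competing displacements, so that overlapping deformers interpolate rather than sum. This is a sketch of that idea only; the simulator's real blending rule and weighting are assumptions here.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<float, 3>;

// Hypothetical deformer blender: when several deformers displace the same
// vertex, blend their displacements by a normalized weighted average
// rather than summing them, avoiding exaggerated motion where jello and
// flap regions overlap.
Vec3 blendDisplacements(const std::vector<Vec3>& displacements,
                        const std::vector<float>& weights) {
    Vec3 out = {0.0f, 0.0f, 0.0f};
    float total = 0.0f;
    for (float w : weights) total += w;
    if (total <= 0.0f) return out; // no active deformer affects this vertex
    for (std::size_t i = 0; i < displacements.size(); ++i)
        for (int c = 0; c < 3; ++c)
            out[c] += (weights[i] / total) * displacements[i][c];
    return out;
}
```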
It is possible to create a variety of different modules for the simulator, for example, a module for a palate repair or a unilateral lip repair. The simulator has been developed and tested on a model of a unilateral cleft lip and a cleft palate. Figures 9 and 10 demonstrate elevating simple and compound flaps, respectively, on the cleft lip model. Figure 11 illustrates several stages of a lateral port control pharyngeal flap procedure. A complete demonstration of the program is available at www.smiletrain.org. The file system for each module consists of binary triangle objects and texture maps. The module is contained in files with a “.sim” extension. The .sim file describes the properties of each triangle object and its texture maps for each of skin, bone, cartilage, muscle, eyes, hair, and so on. When the file is opened in the simulator, the appropriate relationships between the tissues listed in the .sim file module are established. The great advantage of this module system is its flexibility: a .sim file can easily be modified to create more complex modules as the internal structure of the simulator evolves.
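The .sim format itself is not published, so as a sketch of the module idea one can imagine a line-oriented description listing each tissue with its mesh and texture files. Everything in this fragment (the line layout, field names, and parser) is a hypothetical stand-in for the real format.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical .sim-style module entry. The real .sim layout is not
// published; this sketch assumes one whitespace-separated line per
// tissue of the form:  <tissue-name> <mesh-file> <texture-file>
struct TissueEntry {
    std::string name;
    std::string meshFile;
    std::string textureFile;
};

// Parse a module description into its tissue entries, skipping lines
// that do not have all three fields.
std::vector<TissueEntry> parseModule(const std::string& text) {
    std::vector<TissueEntry> entries;
    std::istringstream in(text);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        TissueEntry e;
        if (fields >> e.name >> e.meshFile >> e.textureFile)
            entries.push_back(e);
    }
    return entries;
}
```

The appeal of such a flat description is the flexibility the article points out: adding a tissue to a module is a one-line edit, with the simulator establishing the inter-tissue relationships at load time.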
Until now, recording physical movement, such as the performance of a great ballet dancer, has been impossible. When a great dancer can no longer perform, his or her conceptual knowledge may still be available, but the exact 3D nuances of movement and subtle rhythm are lost. With computers, it is possible to track a great dancer's motion and truly preserve all aspects of his or her work. In the same spirit, when a master surgeon is no longer able to operate, the surgeon's precise technique may be lost. We have written a recorder and player that allow the user to make a recording of the virtual surgery of a master surgeon.
A “history” file (.hst suffix) contains the recorded sequence of a user's actions during a simulated surgery. In record mode, the program creates or appends to a history file, recording the currently loaded module as well as all of the user's actions and camera movements. In play mode, the simulator loads a previously recorded history file, and the user can step through each of the recorded actions. History allows a student surgeon to study, in three dimensions, the technique of an expert surgeon. The user can set loop points to rewind to any point and replay a part of a simulation that was not previously understood. The user can also turn off the recorded camera motion and examine the simulation from any desired viewpoint. A history file may branch to alternate history files from a loop point to illustrate different surgical approaches or to intentionally illustrate a surgical mishap. History files also trigger audio and video files in the context of the simulation. The program keeps its audio and titling files in language-specific directories; this multilingual capability is useful for distributing teaching materials to non–English-speaking countries. The history file is a simple text file and constitutes a “surgical scripting language.” These files can be hand edited to share incisions, flap definitions, hook movements, and suture placements between multiple history files. For the programmer, history files have also become useful in finding errors in the simulator: user actions that trigger programming bugs can be recorded to find, duplicate, and correct algorithmic mistakes.
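The record-and-step-through mechanism can be sketched as a small class that appends plain-text action lines and replays them one at a time. The .hst command vocabulary is not published, so the class shape, method names, and the example action strings below are all illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical history recorder/player: actions accumulate as plain-text
// lines (a "surgical scripting language") that can be serialized to a
// hand-editable file and stepped through in play mode.
class History {
public:
    // Record mode: append one action line.
    void record(const std::string& action) { actions_.push_back(action); }

    // Serialize to the hand-editable text form described in the article.
    std::string toText() const {
        std::ostringstream out;
        for (const std::string& a : actions_) out << a << '\n';
        return out.str();
    }

    // Play mode: fetch the next recorded action; returns false at the end.
    bool step(std::string& action) {
        if (cursor_ >= actions_.size()) return false;
        action = actions_[cursor_++];
        return true;
    }

private:
    std::vector<std::string> actions_; // one line per recorded action
    std::size_t cursor_ = 0;           // playback position
};
```

Loop points and branching to alternate history files would layer on top of this by resetting or swapping the playback cursor; they are omitted here.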
The simulator also has internet capabilities that allow distance learning and real-time training. The system we have created uses a one teacher–multiple student model. All participants first connect to a central server site, and each must have an installed version of the simulator running on the local machine. The teacher sends out control signals to the students describing the operation using the simulator. A student can request to become the teacher and gain temporary control of the simulator to make a point or demonstrate a step in the procedure. This system creates the possibility of a real-time surgical symposium in which multiple participants at distant locations all operate on the same virtual patient. These internet capabilities can offer significant advantages for teaching surgeons in developing countries when used in conjunction with videoconferencing software.
We believe that the next step for our surgical simulation project is to steadily increase the realism of tissue deformation by incorporating more finite element code into the structure of the simulator. We have already begun work on a fast nonlinear finite element system that will replace the current biological spring system. As computers become more powerful, we can further reduce the deformer base and increase the finite element content. Our final goal is a scientifically accurate finite element simulator that works in real time.
The work presented in this article has been supported by contributions from Smile Train, NFFR, and Alias.