Presurgical planning is a broad term, encompassing everything from simple visualization of two-dimensional (2D) radiographic images to a true rehearsal of complex three-dimensional (3D) surgical movements based on computed tomography (CT) or magnetic resonance imaging (MRI). Over the last few decades, physical model surgery has given way to digital techniques, with 3D reconstruction of CT images now commonplace and complex software available that is capable of simulating osteotomies and bony repositioning. These presurgical plans can then be accurately translated to the operating room via a variety of 3D-printed patient-specific guides, templates, and/or implants (Fig. 24.1).
These techniques of presurgical planning have gained widespread acceptance and can be considered the gold standard for many areas of craniomaxillofacial surgery, including orthognathic surgery and microvascular jaw reconstruction. However, some challenges remain with the current techniques and workflows:
Software complexity. Currently available software is multifunctional and quite powerful but, as a result, presents a complicated control interface with a steep learning curve. This presents a barrier to entry for many surgeons; consequently, most presurgical planning sessions are driven by a software engineer rather than the surgeon.
Lack of depth perception and feel. Digitally reconstructed 3D models of CT images provide a significantly improved means of visualizing the 3D anatomy of complex structures. However, when viewed on a typical flat-screen monitor, such a model is best described as a “pseudo-3D” image, lacking the depth perception of a physical object. As a result, fine manipulation or measurement of the object can be more difficult and/or imprecise.
Limited software tools. The software tools used to create digital osteotomies share many features with those found in more traditional 3D engineering software. As a result, they work well for linear or geometric cuts, such as a Le Fort I osteotomy. However, the limitations of these tools become quite apparent when attempting to simulate a more complex curved osteotomy, such as that required for a fronto-orbital advancement (Fig. 24.2).
Inability to adapt to intraoperative findings. A variety of surgical splints, guides, templates, or custom implants can be manufactured based on the outputs of presurgical planning and can help to ensure the reproducibility of the plan on the day of surgery. However, the accuracy of surgery based on these guides assumes that the patient’s anatomy is unchanged from the preoperative imaging; if intraoperative findings necessitate a change in the surgical plan, these pregenerated aids can be rendered useless.
Since 2006, the Division of Plastic Surgery and the Department of Industrial and Mechanical Engineering at the University of Illinois, Chicago, have been working in collaboration to address the abovementioned limitations. The outcome of this collaboration is a purpose-built virtual-reality-based software platform named ImmersiveView (ImmersiveTouch, Inc, Chicago). ImmersiveView provides a solid foundation and comprehensive virtual toolkit, enabling surgeon-led presurgical visualization and true 3D-VR planning of a wide variety of surgical procedures. In addition, the digital outputs of VR-based planning have been designed according to industry standards, facilitating the production of familiar “gold-standard” surgical guides/splints, as well as integration with existing, new, and emerging technologies, such as intraoperative navigation and the rapidly developing field of augmented reality (AR). These intraoperative guidance modalities can potentially provide an ideal complement to the state-of-the-art presurgical planning afforded by the VR environment, and the marrying of these technologies provides a completely digital workflow, from imaging and diagnosis to the completion of surgery.
For an emerging technology such as VR or AR to be clinically useful, it must meet the following criteria:
Tangible improvement in at least one aspect of the surgical workflow
Wide availability
Cost-effectiveness
Ability to be utilized in a timely fashion
The benefits of VR are well established and include improved visualization, environmental immersion, and, importantly, depth perception. These benefits have been realized in a wide variety of fields, including aviation, military training, video gaming, and entertainment.
The concepts behind stereoscopic photography and animation can be traced back as far as 1832, and true head-mounted VR displays were first developed in 1960. There were steady improvements over the next several decades, but the introduction of low-cost, high-performance consumer devices, such as the Oculus Rift in 2012 (Meta Platforms, Inc., Menlo Park, CA; formerly Oculus VR), marked a watershed in the popularity and availability of the technology. The rapid surge in the market for VR videogaming has helped to drive down the cost of entry, and continual improvement in hardware performance now produces high-resolution images with a rapid refresh rate, well suited to the display of anatomical images.
Surgical applications for VR range from simple preoperative anatomical visualization, and resident and student education, to realistic simulation of complex osteotomies and bony movements, placement of implants, and reconstruction planning. In this chapter, we will describe these applications in detail.
Depth perception is one of the most important benefits of the virtual reality environment because it allows the user to reach out and touch or grab a virtual object as they would in real life. This, in turn, allows for simplification of the user interface, which is controlled by motion-controlled handpieces with minimal requirement for button presses. The result is a very shallow learning curve, allowing the surgeon (or resident, or student) to control the simulation or planning session without the need for an intermediary technician. This puts control of surgical planning back into the hands of the surgeon.
Another key requirement relates to the time required to prepare images for surgical planning or simulation. This is especially true for nonelective surgeries, such as facial trauma, where surgery often takes place within 24 hours of patient presentation. In such cases, the recently developed ability to work directly with raw Digital Imaging and Communications in Medicine (DICOM) data means that the surgeon can take images directly from the scanner and load them into the VR space for manipulation and immediate planning without unnecessary delay. This has important implications not only in the trauma setting, but also potentially in the setting of intraoperative changes to the surgical plan and the need to adapt to unexpected anatomy or pathology (ImmersiveView, ImmersiveTouch, Inc., Chicago, IL).
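As a rough illustration of what “working directly with raw DICOM data” involves, the sketch below stacks per-slice pixel arrays into a 3D volume and rescales the stored values to Hounsfield units using the standard RescaleSlope/RescaleIntercept tags. The `Slice` class is a hypothetical stand-in for a DICOM dataset (e.g., as read by pydicom); it is not part of any commercial planning suite.

```python
import numpy as np

class Slice:
    """Minimal stand-in for one DICOM slice -- illustrative only."""
    def __init__(self, z, pixels, slope=1.0, intercept=-1024.0):
        self.ImagePositionPatient = [0.0, 0.0, z]   # z locates the slice
        self.pixel_array = pixels
        self.RescaleSlope = slope
        self.RescaleIntercept = intercept

def load_ct_volume(slices):
    """Sort slices by z-position, stack them into a 3D array, and
    rescale stored values to Hounsfield units (HU = stored * slope + intercept)."""
    slices = sorted(slices, key=lambda s: float(s.ImagePositionPatient[2]))
    vol = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    return vol * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

# Two out-of-order 2x2 slices; a stored value of 1024 maps to 0 HU (water).
s1 = Slice(z=1.0, pixels=np.full((2, 2), 1024, dtype=np.int16))
s0 = Slice(z=0.0, pixels=np.zeros((2, 2), dtype=np.int16))
volume = load_ct_volume([s1, s0])   # shape (2, 2, 2), slice 0 = -1024 HU (air)
```

In a real workflow, the slice objects would come straight from the scanner’s DICOM files; the point of the sketch is that no intermediate export or format conversion is needed before the volume can be handed to the VR renderer.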
The variety of procedures that can benefit from VR and AR visualization and planning is broad, including trauma and elective bony reconstruction, both within the craniofacial skeleton and elsewhere in the body. However, the bony anatomy only tells part of the story, and for many procedures, the ability to visualize the soft tissues can be equally important. To that end, MRI image compatibility and the option to overlay images from different modalities potentially opens up VR-based planning for tumor resections, vascular surgeries, and more (ImmersiveTrauma 4.2, ImmersiveTouch, Inc).
AR provides the potential to take surgical planning (performed in VR or otherwise) into the operating room. AR, by definition, is the fusion of computer-generated data with the environment in real time. This allows a surgeon to view information while simultaneously viewing the operative field. The pinnacle of AR’s potential is to provide patient-specific anatomic or imaging data, linked via anatomic points to the patient, allowing a surgeon to “see through” the patient and identify deep anatomic structures and treatment targets. However, significant hurdles remain in workflow, device availability, fidelity, and ease of use before immersive AR can become a technology used routinely for a wide range of cases.
For the VR workflow, see eFig. 24.1: Virtual reality workflow.ppt.
For the AR workflow, see eFig. 24.2: Augmented reality workflow.ppt.
The hardware requirements for VR have remained fairly constant since its introduction, and include the following:
A separate display device for each eye.
Motion tracking of the user’s head movements.
A control mechanism.
Dual display devices provide separate images, which are then interpreted by the user’s brain and “reconstructed” into a true 3D image with accurate depth perception. The motion tracking is used to adjust the user’s viewpoint according to head position, providing the natural sensation of moving around a physical object.
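The per-eye viewpoints behind this effect reduce to simple vector arithmetic: each frame, the renderer offsets the tracked head position by half the interpupillary distance (IPD) along the head’s right vector to obtain the two camera positions. The sketch below is a minimal illustration; the 63 mm default IPD and all names are assumptions, not any vendor’s API.

```python
import numpy as np

def eye_positions(head_pos, right_dir, ipd=0.063):
    """Offset the tracked head position by half the interpupillary
    distance along the head's right vector, yielding the left- and
    right-eye viewpoints used to render the two stereo images."""
    right = np.asarray(right_dir, dtype=float)
    right = right / np.linalg.norm(right)     # normalize the right vector
    half = 0.5 * ipd * right
    head = np.asarray(head_pos, dtype=float)
    return head - half, head + half

# Head at the origin with "right" along +x:
# each eye sits 31.5 mm to either side of the head position.
left_eye, right_eye = eye_positions([0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

Rendering the scene once from each of these two positions, and updating them from the motion-tracked head pose every frame, is what produces the depth perception described above.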
Although VR headsets resembling current hardware have been commercially available for several decades, constant improvement in computer processing power and graphic displays has resulted in high-resolution, high-refresh-rate images with extremely precise motion tracking of both the user’s viewpoint and hand movements. The latest generation of headsets incorporates motion-tracking sensors into the headset and controllers, negating the need for external sensors, and can even function without a separate computer (Oculus Quest, Meta Platforms, Inc., Menlo Park, CA). While currently available VR surgical planning solutions continue to use a high-performance laptop to drive the software, continued improvement in the performance of these stand-alone headsets is expected, and they are likely to become predominant in the near future.
Perhaps the most important aspect of the hardware for most surgeons relates to the control mechanism, since the means of physical interaction with the VR environment dictates the achievable simplicity of the software interface. The most common controllers are deceptively simple handheld triggers with just a few buttons for each hand (Fig. 24.3). However, the handpieces also incorporate internal gyroscopes, magnetometers, and accelerometers that translate the user’s natural hand movements into the virtual environment, limiting the need for complex button use. The user can simply reach out toward the VR anatomy, “grab” it by squeezing the controller, and move it as they would a physical object.
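The “grab” interaction described above is typically implemented by caching the object’s pose relative to the controller at the moment the trigger is squeezed, then re-applying that offset as the hand moves. A minimal sketch using 4×4 homogeneous transforms follows; all function names are illustrative, not taken from any particular engine.

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def grab(obj_pose, ctrl_pose):
    """On trigger squeeze: cache the object's pose in controller space."""
    return np.linalg.inv(ctrl_pose) @ obj_pose

def update_held(ctrl_pose, cached_offset):
    """Each frame while held: the object rigidly follows the hand."""
    return ctrl_pose @ cached_offset

# Object 1 m in front of a controller at the origin...
obj = translation(0.0, 0.0, -1.0)
ctrl = np.eye(4)
offset = grab(obj, ctrl)
# ...then the hand moves 0.2 m to the right: the object follows,
# ending at (0.2, 0, -1).
moved = update_held(translation(0.2, 0.0, 0.0), offset)
```

Because the cached offset is a full rigid transform, the same two functions handle rotation of the held object as well as translation.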
Unlike VR, AR can be experienced through several different mechanisms. At its core, AR requires a few critical elements:
A sensor to capture images in real time.
Registration to link sensor images to computer-generated data.
One or more displays to present the merged image.
At its simplest, all-in-one solutions are available, such as those on a smartphone, which utilize an onboard camera and light detection and ranging (LiDAR) scanner as sensors, combined with in-device registration computations, and display the result through the phone’s screen. Although this type of more rudimentary 2D display can have uses in the operating room, technology that allows stereoscopic vision has more clinical potential. Stereoscopic vision allows a surgeon to visualize deep structures in their correct anatomic position, simultaneously with the operative field, in three dimensions. Such a system is significantly more complex, requiring appropriate registration and visualization in three dimensions. The majority of these systems utilize an optical see-through head-mounted display (HMD) that allows the user to see the world around them directly while images are projected into the user’s view. Only recently have all-in-one HMDs become widely available, such as the HoloLens (Microsoft, Redmond, WA) and Magic Leap One (Magic Leap, Plantation, FL). These products have significantly improved access to AR for surgeons as well as the general population. The HoloLens has been more commonly utilized in surgery thus far.
The natural application of AR to the operating room presents additional challenges. Maintaining the surgeon’s visualization of the operative field requires that the HMD not obstruct vision and that it offer a wide field of view. It must also be light enough to wear while operating, especially for extended periods. Both weight and field of view are improving with each successive generation of AR devices.
Furthermore, registration presents a central challenge in AR, especially in cases where the operative field is mobile and/or the targets are soft tissue rather than bone, because high levels of precision are required. Several techniques can be utilized, depending on the type of procedure being performed. For facial procedures, occlusion-based registration guides can be used, but it is not practical for these to remain in place throughout surgery; instead, they would be used during critical portions of the operation to improve fidelity. Fiducial registration via a bone-anchored skull post or a facemask may be used, as in standard intraoperative navigation. In cases where the operative field can be completely immobilized, such as a pinned skull, registration is least complicated and can typically be accomplished through standard fiducial registration. Even more challenging is the utilization of AR for deformable soft structures, such as the liver; these structures are soft and mobile, and change shape and location with respiration. Utilizing AR for such structures will likely require complex physics-based algorithms.
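For the standard rigid, fiducial-based case mentioned above, registration reduces to a closed-form least-squares problem: given paired fiducial locations in image space and patient (tracker) space, the best-fit rotation and translation follow from a singular value decomposition (the Kabsch/Horn method). The sketch below assumes the fiducial pairs are already matched; the function name is illustrative and this is not the algorithm of any specific navigation product.

```python
import numpy as np

def register_rigid(fixed, moving):
    """Closed-form least-squares rigid registration (Kabsch/Horn):
    returns rotation R and translation t such that
    fixed ~= moving @ R.T + t, given paired fiducial points (N x 3)."""
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)   # centroids
    H = (moving - cm).T @ (fixed - cf)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t

# Four non-coplanar fiducials displaced by a known 90-degree rotation
# about z plus a translation; the solver should recover the transform.
moving = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([10., -2., 5.])
fixed = moving @ R_true.T + t_true
R, t = register_rigid(fixed, moving)
```

In practice, the residual misfit after solving (the fiducial registration error) is routinely reported by navigation systems as a check on registration quality; the deformable soft-tissue case described above is precisely where this rigid model breaks down.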