Augmented Reality as an Aid in Neurosurgery


This chapter includes an accompanying lecture presentation that has been prepared by the authors.

Key Concepts

  • Augmented reality enhances the neurosurgeon’s perception of the surgical field with information of a virtual nature that is relevant to the procedure.

  • It represents an advanced form of image guidance, integrated into the physical-world surgical scene and enabling the surgeon to visualize hidden anatomy.

  • This technology aids in tailoring surgeries to individual patients’ needs and anatomies, and can help to avoid complications.

  • It also allows continuous monitoring of the system’s accuracy and reliability.

Introduction

Augmented reality refers to a physical-world environment enhanced—or augmented—by information of a synthetic, computer-generated, and virtual nature. It is therefore distinct from virtual reality, where the environment is wholly artificial. By meaningfully enriching its user’s perception of the world, this mode of visualization can help overcome limitations encountered while performing tasks. Recognition of this potential has led to growing interest in recent years in applying augmented reality to neurosurgical procedures. In neurosurgery, augmented reality has the potential to provide more intuitive surgical image guidance than traditional neuronavigation. The basic principle is to use the physical world as a reference map and to anchor and deliver relevant information within it, thereby improving perception and understanding.

Neuronavigation functions as a coordinate transformation coupling the three-dimensional (3D) virtual space of the preoperative imaging study with the physical space of the patient’s anatomy, thereby greatly aiding intraoperative orientation and identification of relevant anatomy. Although it is one of the most useful tools in the neurosurgical armamentarium, traditional neuronavigation has the significant drawback of being point-based, relying on the use of a neuronavigation probe. The spatial information conveyed by this device is translated onto two-dimensional (2D) planes on the neuronavigation station’s screen—classically, transverse, sagittal, and coronal planes, supplemented by “in-line” planes orthogonal to the axis of the navigation probe. The surgeon therefore needs to (1) divert attention away from the surgical field in order to (2) engage in the mental task of integrating the surgical view with the fragmented information on the neuronavigation screen. Augmented reality neuronavigation, on the other hand, obviates the need for the former and greatly helps with the latter by spatially integrating select virtual information from the imaging study directly into the surgical view (see Fig. 30.1). Furthermore, this integration allows continuous assessment of the accuracy of the virtual models and of their registration with the physical world.
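At its core, this coordinate coupling is a single rigid transform between image space and patient space. The following minimal sketch (in Python with NumPy; the matrix values are hypothetical, not those of any navigation vendor) illustrates the idea:

```python
import numpy as np

def to_homogeneous(p):
    """Append a 1 so a 3D point can be multiplied by a 4x4 transform."""
    return np.append(p, 1.0)

# Hypothetical image-to-patient rigid transform: a rotation about the
# z-axis plus a translation (values are illustrative only).
theta = np.deg2rad(12.0)
T_image_to_patient = np.array([
    [np.cos(theta), -np.sin(theta), 0.0,  42.0],
    [np.sin(theta),  np.cos(theta), 0.0, -15.5],
    [0.0,            0.0,           1.0,  88.0],
    [0.0,            0.0,           0.0,   1.0],
])

# A point (in mm) in the preoperative MRI's coordinate system.
p_image = np.array([10.0, 20.0, 30.0])

# Map it into the physical space of the patient's anatomy...
p_patient = (T_image_to_patient @ to_homogeneous(p_image))[:3]

# ...and invert the transform to go back, e.g., to display the tracked
# probe tip on the MRI slices.
p_back = (np.linalg.inv(T_image_to_patient) @ to_homogeneous(p_patient))[:3]
assert np.allclose(p_back, p_image)
```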

Figure 30.1, Augmented view through a navigated microscope of the left frontotemporal region, prior to surgical draping, in a patient positioned for clipping of a left middle cerebral artery (MCA) aneurysm.

Contemporary technological advances allowed the advent of frameless stereotaxy (i.e., neuronavigation) in the mid-1980s and 1990s, providing not only a response to the limitations of frame-based stereotaxy but also wholly novel applications that have since become standard. In the same way, augmented reality today not only addresses certain limitations specific to neuronavigation but also carries the potential for a paradigm shift in the way neurosurgical procedures will be carried out in the future.

Prerequisites

Three requirements are recognized for a system based on augmented reality: (1) as already mentioned, virtual models need to be generated and merged with a physical environment; (2) the virtual models need to be registered in 3D to the physical environment; and (3) projections of the virtual models, corresponding to the user’s viewpoint, need to be calculated and displayed in real time.

In neurosurgical terms, point 2 refers to the spatial registration of radiologic images to the patient’s anatomy. This is customarily performed using a neuronavigation station. For this, a reference array attached to the patient is required so that spatial coordinates can be mapped from the physical environment to the virtual space of the image data sets. Virtual models are obtained by segmenting structures of interest from these image data sets using dedicated software. Because they are created in the same virtual 3D space as the imaging study, the models are themselves also registered to the patient.
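For illustration, one standard way such a registration can be computed is paired-point matching: given the positions of corresponding fiducials in image space and in patient space, the least-squares rigid transform between the two is recovered with a singular value decomposition (the Kabsch method). The sketch below is a generic textbook formulation, not the algorithm of any particular navigation station:

```python
import numpy as np

def paired_point_registration(image_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping image-space fiducials
    (N x 3 array) onto their patient-space counterparts, via SVD."""
    c_img = image_pts.mean(axis=0)          # centroids of both point clouds
    c_pat = patient_pts.mean(axis=0)
    H = (image_pts - c_img).T @ (patient_pts - c_pat)  # cross-covariance

    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_pat - R @ c_img
    return R, t

def fiducial_registration_error(R, t, image_pts, patient_pts):
    """Mean residual distance after registration, a routine accuracy check."""
    mapped = image_pts @ R.T + t
    return np.linalg.norm(mapped - patient_pts, axis=1).mean()
```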

In order to augment the operation with these models, the actual means of viewing the surgical field also needs to be registered to the patient’s 3D space. Of note, the process of displaying digital information on a screen on top of a real-time video stream of the surgical field is, in fact, image merging and not augmented reality. In an image-merging system, the surgeon is disconnected from the physical world by a digital device and is therefore exposed to the risk of system delays, or even to the risk of being shown digital content unrelated to the ongoing surgical action. For these reasons of safety, it is important that the surgeon always physically visualize the surgical field. As a consequence, optical devices used to display overlays must themselves be registered and tracked. In Fig. 30.1, for example, the operating microscope is equipped with a reference star of its own, as well as a calibrated optical apparatus; the microscope can thereby be tracked in space. Because the microscope is connected to the neuronavigation station, the calculated projections of the virtual models, as seen from the surgeon’s viewpoint, can be injected into the microscope’s eyepiece. The shape and size of the vessel model in Fig. 30.1 are recalculated in real time in accordance with the microscope’s trajectory of view, point of focus, and degree of zoom, fulfilling point 3.
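The real-time projection of point 3 is, in essence, a camera problem: the tracker supplies the microscope’s pose, the calibrated optics supply a zoom-dependent focal length, and each vertex of a virtual model is projected onto the overlay plane. A minimal pinhole-camera sketch follows; the names and values are illustrative assumptions, not the interface of any actual microscope:

```python
import numpy as np

def project_model(vertices_patient, T_patient_to_scope, focal_px, principal_pt):
    """Project 3D model vertices (patient space, mm) onto the 2D overlay
    plane of a tracked optical device, using a pinhole camera model."""
    # Express the vertices in the device's own coordinate frame.
    homo = np.hstack([vertices_patient, np.ones((len(vertices_patient), 1))])
    v_scope = (T_patient_to_scope @ homo.T).T[:, :3]

    # Perspective divide: pixel = f * (x/z, y/z) + principal point.
    z = v_scope[:, 2:3]
    return focal_px * v_scope[:, :2] / z + principal_pt

# Each time the tracker reports a new pose, or the surgeon changes zoom
# (altering focal_px), the overlay is simply recomputed from the same models.
```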

Overview of Current Systems

There is significant variability in the modes of application of the augmented rendering. Nonetheless, all are attempts at simplifying the merging of imaging data with the surgical field, although not all setups adhere to the three defining points for augmented reality stated previously.

In the absence of dedicated hardware, an augmented image can be obtained by using a personal computer and freely accessible software to overlay 3D magnetic resonance imaging (MRI) reconstructions on digital photographs of patients’ heads and cortices. Another report makes use of a projector during surgery to cast an MRI slice onto the patient’s head. This setup, however, is confronted with the image distortion inherent to superimposing a 2D image on the curved 3D surface of a head, in addition to parallax error. The use of a semitransparent mirror has also been described, both for superimposing autostereoscopic 3D images during cranial procedures and for superimposing axial 2D computed tomography (CT) and MRI slices to guide percutaneous spinal needle procedures.

The drawback of having to resort to additional equipment for augmented reality navigation underlies the interest in integrating hardware that is already part of the neurosurgical operating theatre; moreover, the precisely calibrated optical apparatus of the operating microscope appears inherently suited to such a role. In 1982, pioneering work by Kelly and colleagues demonstrated the potential of computer linkage of imaging data with the operating microscope. Four years later, Roberts and coworkers introduced the concept of frameless stereotaxy—or neuronavigation—with their development of a microscope coupled to CT image data through scalp fiducial registration, whose spatial position and focal plane were tracked in real time, allowing image injection into the microscope’s eyepiece. The injected models of the segmented structures, however, were limited to outlines or filled-in planes.

In an attempt to fully exploit the potential of 3D registration, Edwards, King, and associates developed a system dubbed MAGI (microscope-assisted guided interventions), which enhances the augmented visual experience with stereoscopic 3D-rendered model overlays. They tackled the crucial problems of depth perception of the virtual models and of their fluid integration into the view of the surgical field. They recognized that their system requires a high degree of reproducible accuracy for its advantages to supersede those of standard neuronavigation; this was achieved through automated microscope calibration, but also at the price of implanting bone-anchored fiducial markers for registration.

Advances in imaging techniques and registration algorithms have since allowed current commercially available neuronavigation systems to achieve clinically acceptable accuracy not only with skin-fiducial paired-point registration but also with the more recent and straightforward method of laser surface-based registration. Our group and others have used similar microscope-based setups to investigate the usefulness of intraoperative augmented reality guidance during neurosurgical interventions. The advantage of these systems is that they augment not only the surgical field but, more importantly, the surgeon’s view of the surgical field, obviating the need to look away toward a separate screen and thereby optimizing the surgical workflow. Figs. 30.1, 30.2A, 30.4, 30.8A, 30.8C, 30.8E, 30.9, 30.10B, 30.11, 30.12A, 30.13–30.15, 30.16B, and 30.16C all show augmented surgical views through the operating microscope as seen by the surgeon.
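Surface-based registration is commonly formulated as an iterative-closest-point (ICP) style alignment of the laser-scanned facial point cloud to the skin surface segmented from the imaging study. The loop below is a bare-bones sketch under that assumption, reusing the paired_point_registration() solver sketched earlier; production systems add robust outlier rejection and convergence criteria:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_surface_registration(scan_pts, skin_pts, iterations=50):
    """Toy ICP loop: align a laser-scanned facial point cloud (scan_pts)
    to the skin surface extracted from MRI/CT (skin_pts), both N x 3.
    A real system would accumulate R, t into one overall transform."""
    tree = cKDTree(skin_pts)
    current = scan_pts.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)        # pair each point with its nearest
        R, t = paired_point_registration(   # surface point, solve for the
            current, skin_pts[idx])         # best rigid transform...
        current = current @ R.T + t         # ...and apply it before re-pairing.
    return current
```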

Although our group uses the microscope’s augmented overlay prior to draping for preincisional orientation (see Figs. 30.1, 30.4A-B, 30.8A, and 30.13A), others have argued that the microscope is too bulky a tool for this purpose. Accordingly, systems providing a real-time augmented 3D rendering and relying on lighter hardware have been developed, making use of tracked handheld or head-strapped cameras calibrated and registered to a neuronavigation station. These devices, however, convey the augmented images onto a separate screen, requiring the surgeon to look away from the surgical field. Moreover, the camera’s direction of view is not necessarily that of the surgeon, requiring additional mental adjustment between the two scenes; when the camera is not in line with the surgeon’s trajectory of view, the surface projection of the augmented deep target model can fall on a point on the skin different from the one in the surgeon’s line of sight.

Augmenting the video stream from the back-facing camera of a handheld portable tablet has also been reported; like the microscope, this approach presents the advantage of augmenting the surgical field in the user’s line of view. Although intuitively convenient for the predraping phase of a procedure, it loses its edge after draping: while it is conceivable to drape a tablet in sterile fashion, having to hold one while operating, or having it held by an assistant, is less than ideal.

Similarly, head-mounted displays have been explored as less cumbersome alternatives to the microscope, in particular for surgeries in which the microscope is typically not used. These systems are still in the developmental phase and have not reached true clinical application, primarily because of the inherent difficulty of calibrating such a device to the individual user’s visual parameters. Uncertainty with regard to accuracy, especially for deep-seated pathology, is also a current limitation. The relative bulkiness of older-generation headsets and the fact that the augmented view is not “shared” with the rest of the surgical team can be seen as additional deterrents to practical implementation, although solutions to both of these points are imaginable. Note can also be made here of recent reports describing commercialized head-up displays that provide an inset, in a corner of the surgeon’s visual field, of the neuronavigation or fluoroscopy screen, for catheter insertion during ventriculoperitoneal shunting and for spine instrumentation. Although these systems do not truly augment the surgical field, they do provide information in the field of view that improves surgical understanding. Hand-eye coordination and surgical ergonomics are thereby enhanced, as the surgeon does not need to look away repetitively toward the neuronavigation screen.

The endoscope is a well-established channel of vision into deep-seated neurosurgical fields and, like the microscope, has a precisely tuned optical apparatus amenable to the integration of augmented reality technology. In contrast to the microscope, however, the endoscope has so far attracted comparatively little interest in this regard. A system providing impressive 3D neuronavigation reconstructions on an autostereoscopic display has recently been reported for transsphenoidal approaches, but this information is nonetheless still visualized on a separate screen. In 2002, Kawamata and colleagues published their clinical experience with an endoscopic augmented reality navigation system that provides real-time, patient-registered, 3D-rendered anatomic segmentations on the endoscope’s screen. Moreover, the system corrects the virtual overlay for the lens distortion of endoscopic views. Although a pioneering setup, its augmented rendering does not truly integrate into the surgical field, and the 3D virtual models are visualized monoscopically on a 2D screen. The advent of 3D screens, combined with the application of appropriate visual cues to computer graphics, could significantly help address this limitation.
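Correcting an overlay for endoscopic lens distortion generally means warping the rendered graphics with the same radial (barrel) distortion model that calibration measured for the lens, so that virtual and video content line up. Below is a short sketch of the standard polynomial (Brown-Conrady) radial model, with illustrative coefficients rather than values from any published system:

```python
import numpy as np

def distort_overlay_points(pts_norm, k1=-0.32, k2=0.09):
    """Apply radial (barrel) distortion to normalized image coordinates so
    rendered overlay points line up with the distorted endoscopic video.
    k1 and k2 are illustrative calibration coefficients, not measured values."""
    r2 = (pts_norm ** 2).sum(axis=1, keepdims=True)  # squared radius per point
    return pts_norm * (1.0 + k1 * r2 + k2 * r2 ** 2)
```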

Current Applications

With the increase in the amount of pre- and perioperative information requiring consideration during a neurosurgical procedure, the neurosurgical paradigm is progressively shifting toward the customization of well-established, standardized approaches to the specific needs of the individual patient. Finding ways to translate these data to the surgical setting can improve surgical ergonomics and increase the likelihood of achieving the operative goals set for a given case. Accordingly, augmented reality has the potential to meaningfully integrate into the surgical field those data that can be virtually modeled in the current state of technology. Nonetheless, significant forethought must be given to each individual case in order to anticipate (1) which information may prove useful during surgery, and at what stage of surgery, so as not to overload the surgical view; and (2) in which form to augment this information for it to best serve its purpose.

Augmented Reality in Craniotomy Planning

Skin incision and craniotomy (Fig. 30.2) can be customized to the individual patient’s anatomy through foresight of underlying key structures. The goal is to create a targeted surgical corridor tailored to the individual intraoperative needs while avoiding unnecessary tissue exposure and disruption. This is illustrated by Cutolo and coworkers in their mannequin-based task of accessing an intra-axial lesion adjacent to an eloquent region with and without augmented reality, and by our group’s reported experience with progressively smaller craniotomies for aneurysm clipping (Fig. 30.3) and for extracranial-to-intracranial (EC-IC) bypass surgeries since the introduction of augmented reality into our workflow. Furthermore, the possibility of visualizing underlying anatomy beforehand can help avoid complications through foreknowledge, for example, of the position of venous sinuses (Fig. 30.4B and C), cortical draining veins, or cranial sinuses and air cells.

Figure 30.2, Use of augmented reality in skin incision and craniotomy planning.

Figure 30.3, Mini–pterional craniotomy performed with augmented reality guidance for the clipping of an unruptured right middle cerebral artery bifurcation aneurysm.

Figure 30.4, Augmented microscope views in a patient undergoing a left retrosigmoid approach for resection of a vestibular schwannoma.
