Microscope Integration and Heads-Up Display in Brain Surgery


Key Concepts

  • Heads-up display is a type of augmented reality in which digital representations of patient anatomy are cast over real anatomy. It is currently available for neurosurgeons through microscope integration.

  • Segmentation software allows the clinician to create a three-dimensional representation of patient-specific anatomy for use in augmented and virtual reality environments.

  • Both heads-up display and three-dimensional models help neurosurgeons translate two-dimensional preoperative imaging into a working understanding of the anatomic relationships relevant for surgical interventions.

  • Registration errors create inaccuracies in image-guided surgery through scalar and translational shifts in the patient’s anatomy relative to the locations registered at the beginning of the procedure.

  • Segmentation errors create inaccuracies in image-guided surgery by depicting anatomic relationships that do not correctly describe reality.
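The registration error described in the key concepts above is commonly summarized as a root-mean-square residual over registered fiducial points. As an illustrative sketch only (not any vendor's implementation), a least-squares rigid fit and its fiducial registration error might look like the following; the function names are hypothetical:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (Kabsch method) mapping src fiducials onto dst."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """Root-mean-square distance (mm) between transformed fiducials and their targets."""
    residuals = dst - (src @ R.T + t)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```

A perfect rigid correspondence yields a near-zero error, while a translational shift of the anatomy after registration (e.g., brain shift at one fiducial) produces a residual the rigid fit cannot absorb.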

Augmented reality (AR) platforms are a promising means of translating the voluminous and complex information from preoperative imaging into digital representations that can be used to guide neurosurgical intervention. AR can enhance the user experience by applying known information to a real-world situation so that both real and digital information are immediately available for interpretation. Most current applications display digital information alongside or within the video screen, headset, or eyepieces of an operating microscope or endoscope. A benefit of this approach is that it expedites discovery and can inform decision making without forcing particular actions. This differs from virtual reality (VR), in which no real-world elements are present; although the real world may be simulated, the user has no direct influence over events that occur within it. This chapter describes how current integrated AR platforms work and includes a proposed workflow for AR implementation, case-based examples of successful implementation, and areas where further development is needed.

What Does Augmented Reality Add to Neurosurgery?

Well-done surgery has a logical progression. It begins with an appreciation of normal anatomy and physiology. Surgery progresses as we use the framework of normal anatomy and physiology to elucidate exactly how pathology has occurred, and we use this framework again to correct it. Sometimes, these relationships are simple or obvious: an epidural hematoma causes neurological dysfunction through mass effect on the brain that increases as the hematoma grows. Other times, these relationships are cryptic: a glioblastoma causes dysfunction through mass effect and epileptogenesis but also through brain invasion and destruction. In either case, we anchor the surgical plan in what we understand about the pathologic relationships (e.g., that an epidural hematoma will exist between the dura and the skull) and use that to answer questions that may not be as obvious (e.g., the location of the blood vessel responsible for the hematoma development). Every time we bring a navigation probe into the surgical field, use a stimulation probe to find a facial nerve, or verify arterial blood flow with a Doppler probe, we are establishing the “ground truth” for a particular question.

Establishing ground truth is the intermediate step in the surgical decision-making process and the point at which AR is most beneficial. The hypotheses that we generate about pathology must be verified by real-world observations before a solution can be designed. We identify an epidural hematoma as such because of its appearance as a blood clot and its location between the dura and the skull. In the operating room, we must verify these two ground truths about the pathology to appropriately frame our search for the culprit blood vessel. For example, if we identify a blood clot that is instead in the subdural space, the identification of the responsible vessel will proceed much differently.

To understand the potential power of AR for the neurosurgeon, it is important to reflect on how successful operations are executed but also on how avoidable errors occur. A seasoned neurosurgeon draws on clinical experience to estimate the likelihood of operative success and to anticipate the steps needed to achieve a desired outcome. Favorable outcomes are more likely when this estimation is grounded in greater experience, which benefits the surgeon primarily in two ways: refinement of motor skills for specific tasks and comprehension of the set of problems to be encountered during the case. We believe that problems can frequently be avoided with additional cognitive preparation. Surgery is most efficient when the surgeon can visualize the anatomy that is about to be encountered. Standard preoperative imaging may suggest lateral displacement of the optic nerves by a midline suprasellar lesion, but the relationship of this displacement to the internal carotid arteries and chiasm may be less clear. AR allows the surgeon to understand these relationships in the plane of the surgical approach at the time when that knowledge is most critical. In our experience, most errors are related to inadequate preparation for the anatomic characteristics unique to the case. This contrasts with VR, in which anatomic relationships can be viewed in any plane but not in the operative field.

In recognizing that intraoperative errors can also be systematic, we can use AR to develop systems that prevent their occurrence. Even though the transverse sinus is not part of the critical anatomy of most posterior fossa cases, we include this structure in the AR for retrosigmoid approach cases to improve our awareness of its position during drilling. Our AR workflow incorporates detailed anatomy specific to the patient, the intended operation, and the patient’s neurological condition. When it works well, it reduces cognitive load on the surgeon, permitting greater procedural efficiency and technical accuracy.

What is Augmented Reality?

AR is an immersive environment that contains real and computer-generated elements. Digital information (e.g., visual, somatosensory, auditory) is used to enhance real-world experiences. A heads-up display (HUD) is a type of AR environment in which visual information is overlaid on a real-world background. Coregistration of the microscope to produce a HUD of preoperative imaging was identified early as a way to achieve AR in the operating room. Other methods under development include head-mounted displays, half-silvered mirrors, projectors, and smart glasses. Early use of smart glasses for navigation relied on motion capture cameras and three-dimensional (3D) models created preoperatively. Current modeling and holographic navigation techniques use open-source software with largely manual segmentation. Accuracy at this time depends on the depth and complexity of the lesion as well as the MRI and CT data used to build the AR model. Although it is unclear whether these techniques are clinically acceptable, the developing technology offers many hands-free workflow advantages and addresses some visual distraction concerns as the integration of patient-specific models and navigation evolves.

Microscope coregistration is available for Brainlab and Medtronic navigation systems. Both systems use a fixed optical registration star on the microscope registered to the optical star fixed to the patient ( Fig. 32.1 ). The microscope is focused to a specific central point on the star so that the focal point in the eyepiece can be tracked in the same way as the stereotactic probes. A video display projection of the preoperative plan is injected into the eyepieces of the microscope. This digital overlay information is visualized on the normal confocal optical field. Because it is tracked with the navigation system, viewing perspective, focus depth, and zoom magnification are all depicted within the image and on the navigation screen ( Fig. 32.2 ).
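The tracking chain described above amounts to composing rigid transforms: the focal point in the microscope-star frame is carried through the tracking camera and the patient reference star into image space. The frame names and functions below are hypothetical, a sketch of the geometry rather than any vendor's software:

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def focal_point_in_image(p_scope, T_cam_scope, T_cam_ref, T_img_ref):
    """Map the microscope focal point into image space (illustrative frame names).

    p_scope:     focal point in the microscope-star frame.
    T_cam_scope: microscope star -> tracking camera.
    T_cam_ref:   patient reference star -> tracking camera.
    T_img_ref:   patient reference star -> image space (from registration).
    """
    # scope frame -> camera -> patient reference star -> image space
    T = T_img_ref @ np.linalg.inv(T_cam_ref) @ T_cam_scope
    return (T @ np.append(p_scope, 1.0))[:3]
```

Because every transform in the chain is tracked continuously, the same composition yields the viewing perspective and focal depth used to place the overlay.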

Figure 32.1, Microscope coregistration and registration array/camera.

Figure 32.2, Case 1.

Currently, preoperative imaging can be displayed in two ways: direct display (“picture-in-picture”) of preoperative imaging and heads-up display (HUD). Direct display brings digital two-dimensional (2D) radiology into the eyepiece of the operator (see Fig. 32.2H–I ). The display changes with the focus of the microscope so that only the planes intersecting the focal point are represented. The advantages of this representation scheme include constant access to certain preoperative imaging (not all scan parameters are currently compatible with this display) without information cluttering the focal point. The disadvantages are that the surgeon must look away from the operative field to the picture-in-picture, the quality of the digital image is relatively poor, and only one imaging sequence can be viewed at a time.

HUD is possible through 3D processing and volume rendering of the preoperative imaging (see Fig. 32.2E–L ). Segmentation is a technique of postprocessing radiologic data to recreate a 3D model of the anatomy so that specific structures, rather than projection planes, can be viewed. Brainlab, Medtronic, Synaptive, and Surgical Theater have proprietary programs designed to automatically segment brain structures. All platforms also allow the user to paint manually, the simplest method of segmenting structures or areas of interest. Brainlab and Medtronic systems allow these segmented structures (manual or automatic) to be overlaid into the eyepieces of the microscope to create a HUD that is registered to the patient. The painted objects can be displayed in 2D with depth information conveyed by solid and dotted lines, where the solid line represents the plane of view of the microscope. More recent representations provide a 3D holographic-like rendering of the objects overlaid from video output into the eyepieces of the microscope (see Fig. 32.2E–F ). The advantage of HUD is that the structure-based segmentation can be gathered from a variety of preoperative scan sequences (e.g., arterial anatomy from an angiogram and cranial nerve anatomy from a FIESTA sequence), allowing the user to synthesize the preoperative radiology into a 3D representation of the most important features of the pathologic anatomy. This segmented model, rather than one particular sequence, is then projected into the eyepiece so that it is constantly accessible and overlaps with the real structures. The disadvantages include a learning curve to make use of the data, the need to optimize the setup to facilitate ease of use, visualization limitations, and the sources of error outlined in the following sections.
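The solid-versus-dotted depth cue described above can be reduced to a simple rule: a contour point drawn solid when it lies near the current focal plane, dotted otherwise. The function name and tolerance below are illustrative assumptions, not vendor parameters:

```python
def contour_style(point_depth_mm, focal_depth_mm, plane_tol_mm=2.0):
    """Solid line for contour points near the current focal plane, dotted otherwise.

    plane_tol_mm is a hypothetical tolerance defining how close to the focal
    plane a point must be to be drawn as 'in plane'.
    """
    in_plane = abs(point_depth_mm - focal_depth_mm) <= plane_tol_mm
    return "solid" if in_plane else "dotted"
```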

Implementation

Many of the elements required for implementation of HUD and navigation tracking in the operative microscope are already part of the standard preoperative workflow. Acquiring the correct scans, performing effective structure segmentation, and setting up the operating room so that the information is constantly accessible are the main areas in which HUD planning diverges from the standard workup ( Fig. 32.3 ). Beyond this, the process of optimizing the HUD to enhance surgeon perception and understanding is user-dependent.

Figure 32.3, Workflow diagram of augmented reality implementation into clinical practice.

Preoperative Preparation

The first phase of our AR workflow involves gathering data to create a basic 3D VR rendering of the patient’s pathology. The VR model is then brought into the patient consultation as the foundation for patient education and to help build understanding of the pathology and surgical plan ( Fig. 32.4 ). This step has helped us appreciate subtle (and sometimes not-so-subtle) neurological findings that were not the primary complaint but are important to the operative plan and the patient’s recovery. These models can also be utilized in clinical teaching conferences and resident education. 3D reconstruction has been shown to enhance anatomic understanding for learners as well as nonclinicians, and we use this as a basis to engage the patient and family in a discussion about the pathology. Finally, the VR model is used to help guide the segmentation strategy for the AR model that will be used in HUD.

Figure 32.4, Three-dimensional simulation consultation performed by an advanced practice provider.

Scan Acquisition Parameters

Imaging studies play a crucial role in planning a surgical approach for resection of tumors and other space-occupying intracranial pathologies. High-resolution, small field of view (FoV) MR sequences provide a more detailed analysis of tumor location, including its relationship to critical adjacent structures such as major vessels and eloquent brain parenchyma. These sequences can also be utilized directly by the interventionalist for in-procedure guidance.

Volumetric sequences are often T1 weighted and contrast enhanced, typically providing the best assessment of the margins of the lesion of interest, as well as any invasion of local structures. When evaluating vascular malformations, a volumetric axial T2-weighted sequence may be preferred because the major arterial and venous structures can be visualized as a result of the flow voids that occur in both small and large vessels. Unlike some MR sequences, which may be acquired in a sagittal plane and then reformatted to create an axial view, a volumetric sequence is obtained as a true axial sequence of the full head with a large FoV. Thus, all imaging data correspond directly to the patient’s anatomy and do not suffer from the artifacts that may occur when reconstructing an axial sequence.

Simulation sequences are obtained using the highest possible resolution from the MR magnet. The sequences obtain thin slices, less than 1 mm, and are isotropic. Like volumetric sequences, they are obtained as a true axial. Certain software can take advantage of the data from simulation sequences to produce 3D models that demonstrate the tumor, normal brain parenchyma, the ventricles, and the calvarium including the skull base. These models can be enhanced with dedicated CT angiography (CTA) of the head for improved vascular resolution and volumetric CT imaging of the head for improved bony resolution. Diffusion-tensor imaging, which takes advantage of the diffusion of water along white matter tracts to reconstruct probabilistic tract locations, can also enhance the 3D model. All of the data from these simulation sequences can be uploaded to certain software that can project the resulting models directly onto the patient during the surgery, improving the accuracy and confidence of the surgeon’s approach.
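The acquisition requirement described above (thin, sub-millimeter slices with isotropic voxels) can be expressed as a simple check on voxel spacing. The function name and isotropy tolerance are illustrative assumptions, not a scanner specification:

```python
def is_simulation_quality(spacing_mm, max_spacing_mm=1.0, iso_tol=0.05):
    """Check that voxel spacing (x, y, z in mm) is sub-millimeter and near-isotropic.

    iso_tol is a hypothetical tolerance: the spread of the three spacings must
    be within 5% of the smallest spacing to count as isotropic.
    """
    sub_mm = all(s < max_spacing_mm for s in spacing_mm)
    near_iso = (max(spacing_mm) - min(spacing_mm)) <= iso_tol * min(spacing_mm)
    return sub_mm and near_iso
```

A typical reformatted clinical series with thick slices (e.g., 0.9 x 0.9 x 3.0 mm) fails this check, whereas a true simulation acquisition (e.g., 0.9 mm isotropic) passes.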

The choice of sequences for segmentation depends on the patient’s pathology and is slightly influenced by the intended approach. For example, when planning a frontal approach to an anterior skull base parasellar lesion, it is helpful to identify the anterior communicating artery and carotid arteries encountered in the surgical corridor (from the CTA), the optic nerves (high-resolution T2), and the tumor (volumetric T1). Table 32.1 describes the most commonly ordered preoperative imaging for standard locations and pathologies.

TABLE 32.1
Three-Dimensional Reconstruction Scanning Parameters by Pathology Location

| Pathology Location | MRI | CT/CTA | Additional MRI Studies | Other Studies |
|---|---|---|---|---|
| Anterior skull base | Axial volumetric | CT head; CT sinus; CTA for vascular involvement | Pituitary: small FOV T1 and T2 high-resolution (CISS, FIESTA). Orbital: small FOV T1 and T2 high-resolution (CISS, FIESTA) plus oblique T2 fat-saturation sequences | |
| Middle skull base | Axial volumetric | CT head; CTA for vascular involvement | IAC: small FOV T1 and T2 high-resolution (CISS, FIESTA) | |
| Posterior fossa | Axial volumetric | CTA for vascular involvement | Small FOV T1 and T2 high-resolution (CISS, FIESTA) | |
| Aneurysm | Axial volumetric | Full-head CTA | | 3D spin angiogram |
| AVM/AVF | Axial volumetric; sagittal T2 Cube; MRV, MRA | Full-head CTA | If cortical: include simulation protocol with DTI | 3D spin angiogram |

3D, Three-dimensional; AVF, arteriovenous fistula; AVM, arteriovenous malformation; CT, computed tomography; CTA, CT angiography; DTI, diffusion tensor imaging; IAC, internal auditory canal; MRA, magnetic resonance angiography; MRV, magnetic resonance venography.

Surgical Planning

Determining which structures to include in the VR or AR model depends on the approach and requires the operator to have had enough experience with the approach to know what information is likely to be most helpful and what is less relevant. Table 32.2 displays the structures we typically segment for the most commonly performed approaches and pathologies. In general, the pathology (e.g., tumor, aneurysm, arteriovenous malformation, fistulous point) is identified as a distinct structure. From there, critical adjacent structures are identified as necessary. Most final AR models contain between two and four separate structures (see Table 32.2 ). We have found that atlas-based segmentation (autosegmentation) is helpful and reliable for proximal arteries and the intraorbital portions of the optic nerves. It is less reliable when these structures are significantly altered by pathology and not reliable for other cranial nerves or structures smaller than 2 mm. Semimanual segmentation is helpful in describing contrast-enhanced structures, bony anatomy, and tractography. These programs are useful in creating VR environments and in isolating specific functional tracts. Manual segmentation (“painting”) can be used to distinguish structures that have anatomic boundaries but similar radiologic characteristics to adjacent tissues. It is also the only way that structures can currently be represented in HUD for Brainlab and Medtronic systems.

TABLE 32.2
Critical Structures Outlined by Pathology Location
Location of Pathology Painted Tumor Painted Aneurysm Painted AVM Other Lesion Vessel Major Sinus Painted Nerve Total (179)
ACA 3 1 1 1 3
Anterior skull base 54 1 46 45 54
Cerebellar 3 2 1 1 2 5
Frontal 18 1 1 1 7 20
ICA 6 2 2 6
MCA 8 7 9
Middle skull base 32 2 1 1 28 19 34
Occipital 4 2 1 2 4
Parietal 3 1 2 4
PCA 1 1
PICA 3 3
Posterior skull base 5 4 4 5
SCA 1 1
Temporal 9 1 2 5 12
Other 14 3 1 7 9 18
ACA, Anterior cerebral artery; ICA, internal carotid artery; MCA, middle cerebral artery; PCA, posterior cerebral artery; PICA, posterior inferior cerebellar artery; SCA, superior cerebellar artery.

Before painting any particular structure, we review our preoperative data to find the sequences or series that are most sensitive and specific to that structure. Almost every simulation model involves one or more vascular structures. We prefer CTAs for most arterial segmentation, but it is frequently possible to segment venous anatomy based on this imaging as well. Very-high-quality vascular segmentation can be achieved if volumetric data from selective catheter angiograms are used. Brainlab and Stealth software allow the user to paint structures based on subjective assessment. We start by painting structures where the anatomic location is highly consistent among individuals (e.g., internal carotid artery) and use this to reason the location of the efferent branches.
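Threshold-based vascular segmentation of the kind described above can be sketched with generic array tools: keep bright, connected voxels and discard isolated specks. The threshold and minimum component size below are illustrative assumptions, not defaults of any vendor's software:

```python
import numpy as np
from scipy import ndimage

def segment_vessels(volume_hu, threshold_hu=200.0, min_voxels=50):
    """Keep bright, connected regions plausibly representing contrast-filled vessels.

    Thresholds the CTA volume, labels connected components (face connectivity,
    the scipy default), and discards components too small to be vessels
    (noise or small calcifications). Parameter values are illustrative.
    """
    mask = volume_hu >= threshold_hu
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    vessel_ids = np.nonzero(sizes >= min_voxels)[0] + 1
    return np.isin(labels, vessel_ids)
```

Real planning software adds interactive painting and review on top of such automatic steps; as the chapter notes, the user's subjective assessment remains central.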

Cranial nerve segmentation is best done with sequences that are sensitive for cerebrospinal fluid (e.g., T2, FIESTA, CISS). We begin painting in regions far enough away from the pathology that specific identification of the nerve is possible. Using this information as a framework, we then work toward the area of interest and identify the nerve at each step.

In many cases, we find that we eventually reach a limit in our ability to positively identify structures. At some point the vascular lumen becomes too narrow, the cranial nerve is no longer distinct from the surrounding anatomy, or the tracts cannot be further constrained. When significant uncertainty arises, we will not make predictions on the location of the structure. We feel that in these situations, the absence of information will help inform the surgeon of the degree of uncertainty, and we believe it helps mitigate the possibility that incorrect information will be used to establish ground truth decisions. This limit appears to be somewhat user-dependent, and team members who are less familiar with the anatomy or the interface tend to approach this limit sooner than more seasoned members.
