The future of robotic surgery


Introduction

Surgeons have been at the forefront of integrating robotic systems into routine clinical practice. Radical prostatectomy is now commonly performed using a surgical robot in the majority of large centers in developed nations. Surgeons continue to push the boundaries by integrating new technologies into their clinical practice and training to improve outcomes, safety, and quality in surgery. This chapter discusses the major new technologies that will drive further improvements in robotic surgery.

Image guided robotic surgery

One of the major benefits of robotic systems, such as the da Vinci, is the improved visualization provided by the high-definition three-dimensional (3D) camera (Fig. 11.1). Indeed, a key area of development in the newer generations of the da Vinci system has been the optics of the endoscope, providing higher-quality images and greater magnification. Advances in optics have been especially important for current robotic systems given the absence of haptic feedback and the need for the surgeon to rely on visual feedback to judge force and tension. Alongside ongoing improvements in the quality of the camera image and the size of the endoscope, image guided surgery (IGS) offers the opportunity to further drive improvements by combining preoperative imaging with the intraoperative endoscopic video. The aim is to provide further information to the surgeon, such as allowing them to “see” hidden internal structures. This overlay of images onto the operative field is known as augmented reality (AR). Image guidance was first used in neurosurgery to plan open procedures. Computed tomography (CT) and magnetic resonance imaging (MRI) images were used to create a 3D model of the patient to help surgical planning. The same models were also used to track instruments in real time during the procedure to avoid injury to critical structures.

Fig. 11.1, da Vinci SP System.

The principle for all forms of IGS is to combine intraoperative images, usually from the endoscopic camera, with other imaging modalities, providing the surgeon with maximal information. The additional imaging may come from preoperative cross-sectional modalities, commonly MRI, CT, or ultrasound (US), or from intraoperative sources, for example, near-infrared fluorescence (NIRF; e.g., the Firefly system; see below).

In order to allow construction of a 3D model, the relevant images, or parts of the images, must first be identified, a process known as segmentation. To date, this is most frequently performed manually, which is very labor intensive. The additional images then require careful alignment with the patient or the live operative images through a process known as registration, whereby specific points on the preoperative imaging are matched to corresponding points on the patient or the intraoperative images. Commonly, certain anatomic landmarks are used, such as the tragus of the ear.

Alternatively, specific fiducial markers can be attached to the patient to aid registration. In neurosurgery, fiducial markers are commonly used in conjunction with stereotactic frames attached to the skull. Importantly, registration is classified as either rigid or nonrigid. In rigid systems, the subject does not change in shape or relative position, greatly simplifying registration. Orthopedic surgery and neurosurgery often involve rigid registration given the fixed nature of the bony skeleton or the fixed relationship between the skull and brain.
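
To make the rigid case concrete, point-based rigid registration reduces to a classic least-squares problem: given a handful of fiducial positions measured both on the preoperative scan and in theatre, the best-fitting rotation and translation can be recovered in closed form (the Kabsch, or orthogonal Procrustes, solution). The following is a minimal sketch in Python with NumPy; the fiducial coordinates are invented for illustration and do not come from any particular navigation system.

    import numpy as np

    def rigid_register(source, target):
        """Least-squares rotation R and translation t mapping source
        landmarks onto target landmarks (Kabsch/Procrustes solution)."""
        # Centre both landmark sets on their centroids.
        src_centroid, tgt_centroid = source.mean(axis=0), target.mean(axis=0)
        src_c, tgt_c = source - src_centroid, target - tgt_centroid
        # SVD of the cross-covariance matrix gives the optimal rotation.
        U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = tgt_centroid - R @ src_centroid
        return R, t

    # Hypothetical fiducials: positions on the CT scan vs. the tracked
    # positions in theatre (mm; a pure shift here, for the example).
    ct_pts = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0],
                       [0.0, 40.0, 0.0], [0.0, 0.0, 30.0]])
    or_pts = ct_pts + np.array([10.0, 5.0, 2.0])
    R, t = rigid_register(ct_pts, or_pts)
    aligned = ct_pts @ R.T + t  # CT landmarks in theatre coordinates

Because the anatomy is assumed not to change shape, a few well-spread fiducials are enough to constrain all six degrees of freedom, which is precisely why the rigid case is so much simpler than the nonrigid one discussed next.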

In contrast, nonrigid systems, in which structures are liable to move and deform, present a much greater challenge. Registration requires a more complex overlay of the images, which often needs to be performed manually, introducing further potential for human error. The last important component of IGS is the user interface. Information needs to be displayed clearly to the surgeon in a way that is not distracting. In robotic surgery, this is often provided as images superimposed onto the laparoscopic video feed. At the most basic level, preoperative images may simply be displayed on the surgeon’s console; TilePro technology makes this possible on the da Vinci.
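
To illustrate the display step, the sketch below alpha-blends a rendered model image onto a video frame using OpenCV. It shows only the final compositing: it assumes the model has already been registered and rendered from the endoscope's viewpoint into an image (model_render, a name invented here) that is black wherever no model is present. A TilePro-style display, by contrast, shows the images side by side rather than superimposed.

    import cv2

    def overlay_model(frame, model_render, alpha=0.4):
        """Alpha-blend a pre-registered, pre-rendered model image onto an
        endoscopic video frame; black model pixels are left untouched."""
        mask = model_render.any(axis=2)  # pixels where the model is drawn
        blended = cv2.addWeighted(frame, 1.0 - alpha, model_render, alpha, 0.0)
        out = frame.copy()
        out[mask] = blended[mask]
        return out

    # Typical use inside a video loop (the capture source is hypothetical):
    # ok, frame = capture.read()
    # cv2.imshow("AR view", overlay_model(frame, model_render))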

IGS has been successfully applied to several areas of robotic surgery. NIRF has been used by a number of specialties; it offers enhanced anatomic views of the surgical field and helps to identify critical structures. The Firefly system was developed by Intuitive for use with the da Vinci Si and Xi systems. It uses a water-soluble dye, indocyanine green (ICG), which is detected by the NIRF camera.

ICG has the advantage of remaining largely within the vascular compartment and may be used to identify both vessels and areas of vascular perfusion within soft tissues. The NIRF camera detects ICG by directing an 805 nm laser at the target anatomy. This excitation causes the ICG dye to emit light at a wavelength of 830 nm, which is detected by the camera.
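
The shift from 805 nm excitation to 830 nm emission is the expected fluorescence behaviour: the emitted photon carries slightly less energy than the exciting one, since photon energy E = hc/λ falls as wavelength rises. A quick back-of-the-envelope check (constants rounded):

    # Photon energy E = h*c / wavelength: the 830 nm emission is lower in
    # energy than the 805 nm excitation, as expected for fluorescence.
    h, c = 6.626e-34, 2.998e8  # Planck constant (J s), speed of light (m/s)
    for wavelength_nm in (805, 830):
        E = h * c / (wavelength_nm * 1e-9)
        print(f"{wavelength_nm} nm -> {E:.3e} J ({E / 1.602e-19:.2f} eV)")
    # 805 nm -> ~1.54 eV; 830 nm -> ~1.49 eV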

The Firefly system has been used in a wide variety of robotic surgeries, including both benign and oncologic procedures. Robot assisted partial nephrectomy is one of the commonest procedures in which Firefly is used. ICG is used to identify the precise vascular anatomy and aid selective arterial clamping instead of main renal artery clamping.

In one study, ICG allowed selective clamping to be performed in 80% of patients. Other common uses of ICG/NIRF include identifying sentinel lymph nodes during extended pelvic lymph node dissection and identifying tumors during adrenal surgery. However, despite the range of applications, the level of evidence remains low, and the actual benefit to the patient and surgeon is yet to be determined.

As described above, the use of AR to enable the overlay of preoperative and intraoperative imaging is far more complex. Various applications have been trialed in robotic surgery. One of the first image guidance systems was developed by Thompson et al. in 2013 for robotic radical prostatectomy.

Preoperative MRI images were successfully overlaid onto the laparoscopic video and used during 13 live surgeries. However, early models, such as that described by Thompson et al., used rigid 3D models that could not match the tissue deformation. Porpiglia et al. developed the technology further by enabling the 3D MRI model to deform to match the prostate during surgery. The models were further used to identify areas of extracapsular extension (ECE).

A trial was conducted in which areas identified intraoperatively using the 3D MRI models were analysed histologically: 100% of the identified areas contained ECE, compared with only 41% when the 3D MRI model was not used. However, this technology remains in its infancy, and further developments in nonrigid registration and automatic organ tracking will be needed to allow far greater clinical application.
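
Nonrigid registration of this kind is commonly built on smooth interpolating warps such as thin-plate splines: a displacement field is fitted to a set of matched control points and then applied to every vertex of the preoperative model. The sketch below uses SciPy's RBFInterpolator for the fit; the control-point coordinates are invented for illustration, and nothing here is claimed to reproduce the Porpiglia et al. system.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Matched control points: where each landmark on the preoperative model
    # sits before and after tissue deformation (mm; values are made up).
    before = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0],
                       [0, 0, 30], [15, 15, 15]], dtype=float)
    after = before + np.array([[0, 0, 0], [2, 0, 0], [0, 3, 0],
                               [0, 0, 1], [1, 2, 1]], dtype=float)

    # Fit a thin-plate-spline displacement field to the control points.
    warp = RBFInterpolator(before, after - before, kernel="thin_plate_spline")

    # Any vertex of the preoperative mesh can now follow the deforming tissue.
    vertices = np.array([[10.0, 10.0, 10.0], [25.0, 5.0, 5.0]])
    deformed = vertices + warp(vertices)

In a live system the matched control points would have to come from automatic organ tracking on the video feed, which is exactly the component identified above as still maturing.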
