The history of transoral robotic surgery (TORS) began just over 10 years ago with the first animal model in 2005 [ ], followed by first-in-man procedures in 2006 [ , ]. Across the world, one medical robot, the da Vinci® system (Intuitive Surgical Inc., California, USA), is used for almost all transoral surgeries. Since January 2010, it has been approved by the US Food and Drug Administration (FDA) for “surgical procedures restricted to benign and malignant tumors classified as T1 and T2, […] for adult and pediatric use (except for transoral otolaryngology surgical procedures)” [ ].
Since its first application in ENT, the use of robots has steadily increased worldwide. TORS was developed as an alternative treatment option to overcome the limitations of traditional approaches. Studies have shown safety at least equivalent to that of traditional surgery [ , , ]. Despite these advantages, however, many publications have pointed out its limitations, both technical and financial, and have undertaken the development of specific tools, including for vocal-fold pathologies [ , ].
On the basis of these same findings, a European Consortium of 5 institutions [ ] in 3 countries (Italy, France and Germany) was funded by the European Union under the 7th Framework Program for Research and Technological Development (led by the Commission for Research and Innovation) ( http://ec.europa.eu/research/fp7/index_en.cfm ). This μRALP project ( http://www.microralp.eu/ ) proposes a different conception of surgical robotics: rather than starting from a “universal” robot that is then adapted to a given surgical practice, we start from a given need for which a tool is then created. The need here is microscopic phonosurgery with its laser tool, whose limitations concern access to and visibility of the surgical site, as well as the remote control and accuracy of the aiming system; to address these limitations, we propose an endoscopic surgical system assisted by a micro-robot. The project also has an important economic specificity: each technological solution developed and described below can be used independently in other applications to come.
Leibniz Universität Hannover (Leibniz University of Hannover): A. Schoob, D. Kundrat, L.A. Kahrs, T. Ortmaier
The μRALP endoscope is equipped with stereo vision and allows three-dimensional perception when inserted through the patient's mouth and positioned close to the vocal folds ( Figure 9.1A ), instead of the conventional transoral laser microsurgery (TLM) approach with a stereo microscope. μRALP's stereo vision system provides computable depth information for more precise and safer tissue manipulation and laser cutting. As described in the following sections, stereoscopic imaging facilitates vision-guided intraoperative planning within the μRALP workflow. The augmented reality system is based on a trifocal arrangement of integrated stereo vision and a surgical laser unit ( Figure 9.1B ).
Due to the limited depth-of-field of the fixed-focus laser integrated in the endoscopic tip, a constant distance to the tissue has to be set after inserting the endoscope in the larynx. To assist precise endoscope and field-of-view positioning, surface information is acquired by image-based reconstruction of the vocal fold tissue, as shown in Figure 9.2A . Detailed descriptions of the methods are given by Schoob et al. [ ]. In combination with registration to the integrated laser scanning unit, the area of intersection between tissue surface and laser workspace can be highlighted in the live surgical view [ ]. Registration enables transfer of image-based incision planning to three-dimensional laser cutting, yielding a maximum deviation of only 0.2 mm between planned and executed laser incision.
Experimental studies have shown that color-coding the laser focal distance provides visual feedback to the surgeon to position the endoscopic system with submillimeter accuracy in just a few seconds [ ]. In detail, the laser depth-of-field, which is characterized by the beam waist ( Figure 9.1B ), is represented by a color gradient ranging from red (near) to blue (far), and optimal focusing is highlighted in green. The lateral border of the color-coding indicates the maximum scanning range of the laser. Figure 9.2B illustrates color-coding applied to in-vivo data acquired with a commercial stereo endoscope (VSii®, Visionsense Ltd., Israel) and a sequence obtained with the μRALP endoscope (chip-on-the-tip camera modules MO-BS0804P®, MISUMI Electronics Corp., Taiwan) in human cadaver trials. Results demonstrated that an increase in color-coded distance easily reveals misalignment with the laser focal range, i.e., surface regions where incisions are expected to be unfocused and thus inefficient. In summary, the endoscope can be inserted through the patient's mouth and positioned at the correct distance to the lesion on the vocal folds.
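The color gradient described above can be understood as a simple mapping from tissue distance to an RGB cue. The following sketch illustrates the idea; the focal distance, depth-of-field width, and blending scheme are hypothetical placeholders, not the actual μRALP optics.

```python
def focus_color(distance_mm, focus_mm=40.0, dof_mm=4.0):
    """Map a tissue-surface distance to an RGB cue: red when too near,
    green when within the depth-of-field, blue when too far.
    All parameter values are illustrative, not the real laser optics."""
    half = dof_mm / 2.0
    t = (distance_mm - focus_mm) / half   # normalized offset, -1..1 inside DOF
    t = max(-1.0, min(1.0, t))            # clamp beyond the DOF borders
    if t < 0:
        # near side: blend from red (t = -1) to green (t = 0)
        return (-t, 1.0 + t, 0.0)
    # far side: blend from green (t = 0) to blue (t = 1)
    return (0.0, 1.0 - t, t)
```

In such a scheme, a surface patch sitting exactly at the focal distance renders pure green, while regions outside the depth-of-field saturate to red or blue, making misalignment immediately visible.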
After positioning the endoscopic tip, the surgeon plans an incision using the stylus-tablet-based interface. To achieve temporally consistent planning on vocal folds undergoing deformation, image-based non-rigid motion estimation is implemented. The target area containing the lesion is represented by a deformation model and tracked in the stereo view [ ]. A detailed description of the most advanced methods regarding this topic was published by Schoob et al. [ ]. As shown in Figure 9.3A , the target region is represented by a Thin Plate Spline-based mesh model that is able to represent soft tissue motion and deformation, as induced by endoscopic movements or tissue exposure by surgical grasping forceps. As a result, an incision line can be planned inside this region and adapted to underlying motion ( Figure 9.3B ). Optical triangulation of the corresponding points in the two views gives the three-dimensional motion vector of the tracked surface area. Finally, the path of a planned incision line can be followed on the deforming tissue by an integrated laser scanning unit. Experimental results on ex-vivo image data showed tracking accuracy of 0.83 ± 0.61 mm, with an update rate of 30 frames per second [ ].
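The triangulation step mentioned above can be sketched for the simplest case of a rectified stereo pair with a pinhole model: the disparity between corresponding points yields depth, and differencing two triangulated positions over time gives the motion vector. This is a minimal illustration under assumed camera parameters, not the actual μRALP reconstruction pipeline described by Schoob et al.

```python
def triangulate(xl, yl, xr, focal_px=500.0, baseline_mm=4.0):
    """Triangulate one point from rectified left/right image coordinates
    (pixels, principal point at the origin). Returns (X, Y, Z) in mm.
    Focal length and baseline are hypothetical example values."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in view")
    z = focal_px * baseline_mm / disparity   # depth from disparity
    x = xl * z / focal_px                    # back-project to metric X
    y = yl * z / focal_px                    # back-project to metric Y
    return (x, y, z)

def motion_vector(p_prev, p_curr):
    """3-D displacement of a tracked surface point between two frames."""
    return tuple(c - p for p, c in zip(p_prev, p_curr))
```

For example, tracking a correspondence across two frames and subtracting the triangulated positions gives the displacement that the laser scanning unit must compensate.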
Istituto Italiano di Tecnologia (Italian Institute of Technology): L.S. Mattos, N. Deshpande, B. Davies, J. Ortiz, L. Fichera, E. Olivieri, G. Barresi, D. Pardo, F. Mora, A. Laborai
Intraoperative control of the μRALP surgical system is performed entirely through the surgeon-robot interface. This is a teleoperation control console specifically designed to place the surgeon in an ergonomic position and to offer intuitive control over all system components. Its set-up is based on an open-frame structure that does not obstruct communication and interaction between the surgeon and the operating room (OR) staff. In addition, its compact cart structure can be easily rolled in and out of the OR and can be placed in the vicinity of the patient, thus facilitating direct surgeon-patient interaction.
During the operation, the surgical site is visualized through μRALP's Virtual Microscope interface [ ]. This is an immersive stereoscopic display specially configured to offer an improved visualization experience compared to the surgical microscope, which is the standard visualization equipment surgeons are trained with and accustomed to using for delicate microsurgery. Stereo images captured from μRALP's endoscope cameras are processed and displayed in the system in real time, allowing relevant information and augmented reality features to be added directly onto the surgeon's field of view. Examples of the use of such capabilities include dynamic planning of laser incision lines with graphic overlay, as shown in Figure 9.4 .
The surgical laser is controlled using a graphics tablet and stylus device, which is a highly intuitive interface able to significantly improve laser-beam control, aim precision, and the overall feasibility of laser microsurgery systems [ , ]. In the μRALP system, the tablet interface is used for several functions: 1) real-time laser aim control; 2) incision planning; 3) ablation planning; and 4) definition of operative regions (safe and forbidden areas for laser operation).
Real-time laser aim control involves directly mapping inputs received through the tablet interface by the motion controller of the laser-steering micro-robot. In this operating mode, the tablet interface works similarly to a computer mouse: the laser spot is instantly moved to follow the movement of the stylus.
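This mouse-like relative mapping can be sketched as follows: stylus displacements are scaled by a gain and accumulated into a commanded spot position, clamped to the scanner workspace. Gain and workspace values are hypothetical; the real motion controller of the laser-steering micro-robot is not described at this level in the text.

```python
class LaserAimController:
    """Mouse-like relative mapping from stylus motion to a laser spot
    command (illustrative gain and workspace limits)."""

    def __init__(self, gain=0.05, limit_mm=2.5):
        self.gain = gain        # mm of spot motion per tablet unit
        self.limit = limit_mm   # half-width of the scanner workspace
        self.x = 0.0
        self.y = 0.0

    def on_stylus_move(self, dx, dy):
        """Accumulate a stylus displacement and return the clamped
        (x, y) spot command sent to the micro-robot controller."""
        clamp = lambda v: max(-self.limit, min(self.limit, v))
        self.x = clamp(self.x + self.gain * dx)
        self.y = clamp(self.y + self.gain * dy)
        return (self.x, self.y)
```

Clamping to the workspace border corresponds to the maximum scanning range of the laser, beyond which the spot simply cannot follow the stylus.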
In incision planning mode, on the other hand, the stylus is used to precisely draw incision lines over the region of interest; these are displayed through graphics overlays added to the live video stream. Once planning is completed and confirmed by the surgeon, the desired laser trajectory is sent to the micro-robot controller for high-precision autonomous execution. The surgeon can stop the planning process or reprogram the system by simply pressing a button on the stylus. This makes the process highly intuitive and dynamic.
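Before autonomous execution, a hand-drawn incision line is typically converted into a trajectory the controller can follow. One simple way to do this, shown here as a hedged sketch (the actual μRALP trajectory format is not specified in the text), is to resample the drawn polyline into evenly spaced waypoints.

```python
import math

def resample_polyline(points, step=0.25):
    """Resample a stylus-drawn polyline (mm coordinates) into waypoints
    spaced `step` mm apart along its length. The step size is an
    illustrative choice, not a documented system parameter."""
    out = [points[0]]
    dist_next = step      # arc length at which the next waypoint falls
    travelled = 0.0       # arc length covered by previous segments
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        while dist_next <= travelled + seg:
            t = (dist_next - travelled) / seg   # position within segment
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
            dist_next += step
        travelled += seg
    return out
```

Even spacing along the path lets the micro-robot execute the incision at a constant feed rate, which matters for uniform laser energy delivery.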
Ablation planning is performed in a very similar way. The stylus is used to precisely draw the perimeter of the area to be ablated, which is marked using graphic overlays. Once this planning is completed and approved by the surgeon, the system computes the optimal laser trajectory for total coverage and ablation of the defined region. This trajectory is then sent to the micro-robot controller, which executes the defined trajectory by high-speed scanning. Once again, re-planning is simple and dynamic: a click on the stylus button cancels the previous plan and allows a new programming cycle to be started.
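The coverage computation described above can be illustrated with a basic boustrophedon (back-and-forth) raster scan over the drawn perimeter. This is only a sketch of the idea: the polygon test, grid spacing, and scan pattern are assumptions, and the actual μRALP planner computes an optimized trajectory.

```python
def raster_coverage(polygon, spacing=0.5):
    """Back-and-forth raster scan covering a planned ablation region.
    `polygon` is a list of (x, y) vertices in mm; returns the scan
    points falling inside the region (illustrative sketch only)."""

    def inside(px, py):
        # even-odd ray-casting point-in-polygon test
        hit = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > py) != (y2 > py):
                if px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                    hit = not hit
        return hit

    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    path, flip = [], False
    y = min(ys)
    while y <= max(ys):
        row = []
        x = min(xs)
        while x <= max(xs):
            if inside(x, y):
                row.append((round(x, 3), round(y, 3)))
            x += spacing
        path.extend(reversed(row) if flip else row)  # alternate direction
        flip = not flip
        y += spacing
    return path
```

Alternating the scan direction on each row shortens the travel between rows, which is one plausible reason such patterns suit high-speed scanning of the whole region.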