Computed Tomography for Electrophysiology


This chapter reviews the technical background and current uses of cardiac computed tomography in the diagnosis and management of atrial and ventricular arrhythmias.

Imaging plays a fundamental role in the diagnosis and treatment of cardiac arrhythmias. The increased use of imaging has led to an improved ability to understand and successfully treat complex tachycardia circuits. Traditional imaging methods used in the electrophysiology laboratory, such as x-ray fluoroscopy, coupled with intracardiac recordings obtained from bipolar electrode catheters, form the basis for the conventional study of electrical conduction inside the human heart. Additionally, the advent of radiofrequency, cryothermal, pulsed field, and radiation energy sources to target specific arrhythmia-enabling structures moved cardiac electrophysiology from a merely diagnostic field to one of rapidly improving therapeutic success in the management of different arrhythmias. Because of the inherent limitations of fluoroscopy for imaging cardiac soft tissue structures, the ablation of complex arrhythmias such as scar-mediated ventricular tachycardia (VT) and left atrial (LA) tachycardias proved very challenging. The development of three-dimensional (3D) mapping systems coupled with advanced cardiac imaging allows for detailed tissue characterization and highly refined anatomical delineation, creating an environment for accurate, safe, and effective treatment of complex arrhythmias.

Technical Background of Cardiac Computed Tomography Imaging

The process of computed tomography (CT) imaging of the heart, commonly referred to as cardiac computed tomography (CCT), consists of four main steps: image acquisition, data processing and reconstruction, display, and storage.

CCT is an imaging modality that uses x-ray beams projected in a fan-shaped configuration. These are focused onto a region of interest encompassing the heart and great vessels using a collimator that eliminates x-ray beams not traveling parallel to a prespecified direction, thus creating an image within a two-dimensional coordinate system (X, Y) known as the imaging plane. This planar image, although of high resolution, cannot resolve contrast differences because of the superimposition of multiple adjacent structures. Such images alone would serve little clinical use, but when combined with the CT capability to provide multiple anatomical cross sections (stacks of transverse axial images), reconstructed CCT data sets can provide images of very high temporal-spatial resolution with clear differentiation of adjacent cardiac structures.

For accurate reconstruction of cardiac anatomy, it is important to minimize the time of image acquisition and to account for movement during breathing and the different phases of the cardiac cycle. Multislice technology and helical (spiral) scanning shortened the acquisition of image data sets from several minutes to seconds, minimizing motion artifact. Current CT scanners have multirow detectors (up to 320 rows) with subsecond gantry rotation speeds; coupled with electrocardiogram (ECG) and respiratory gating, they enable accurate reconstruction of cardiac structures and assessment of functional parameters. An aluminum beam filter absorbs very-low-energy x-rays, which would not penetrate body tissue and would therefore increase patient exposure without contributing to image quality. This filtration, referred to as beam hardening, yields a more uniform, higher average beam energy and can significantly reduce patient radiation exposure.

Synchronization of the x-ray tube and detector movements in the gantry with the horizontal movement of the table on which the patient is positioned is essential for spatial resolution and correct reconstruction of the multiplanar images, avoiding image overlap and false spacing. This concept is known as pitch in helical CT imaging and is defined as the ratio of the patient's movement through the gantry during one 360-degree beam rotation relative to the beam collimation. Another factor that defines image slices and their relative positions is the increment, defined as the distance between reconstructed axial slices along the Z-axis. The increment can be manipulated, for example during 3D image reconstruction, to permit some overlap and improve image resolution in specific areas of interest.
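Written out as a formula (the numeric values below are illustrative assumptions, not parameters from this chapter), pitch in multidetector helical CT is

\[
\text{pitch} \;=\; \frac{\text{table travel per } 360^{\circ} \text{ gantry rotation}}{\text{total beam collimation}} \;=\; \frac{d}{N \times T},
\]

where \(d\) is the table travel per rotation (mm), \(N\) is the number of active detector rows, and \(T\) is the collimated width of each row (mm). For example, with \(N = 64\), \(T = 0.6\) mm, and \(d = 38.4\) mm per rotation, pitch \(= 38.4/(64 \times 0.6) = 1.0\); values below 1 produce overlapping helical turns, whereas values above 1 leave gaps between turns that must be interpolated.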

Acquisition of a correct cardiac imaging data set (a stack of axial imaging slices of the heart, great vessels, and adjacent structures) requires coordination between scan length, collimation, increment, and the number of slices obtained. During this process, continuous radiation, gantry rotation, imaging table movement, and data transfer from the detector array occur. Digital data received from the detectors are transmitted back to a central processing unit using high-speed radiofrequency signals. The computer then synchronizes the gantry and table motion, acquiring data from known positions of the gantry rotation and the imaging table position, allowing for accurate, high-speed data acquisition.

The data acquired from the gantry’s helical motion are the projection data representing the attenuated beam of radiation, from which a specific algorithm determines an attenuation coefficient for each pixel inside an image matrix. Once the full volume of the image data set is obtained, the scan’s raw data are reconstructed to obtain clinically useful images. The width of the beam, set by the detector size and collimator, ultimately determines the maximal resolution of the image data set.
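As a reminder of the underlying physics (a standard relation, not stated explicitly in this chapter), each detector reading reflects exponential attenuation of the beam along its path, and reconstruction inverts many such line integrals to assign an attenuation coefficient to each voxel:

\[
I = I_{0}\,e^{-\int \mu(s)\,ds} \quad\Longrightarrow\quad p = -\ln\!\left(\frac{I}{I_{0}}\right) = \int \mu(s)\,ds,
\]

where \(I_{0}\) is the unattenuated beam intensity, \(I\) is the intensity measured at the detector, \(\mu(s)\) is the linear attenuation coefficient along the ray, and \(p\) is the projection value stored in the raw data.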

The process of image reconstruction is complex and requires special algorithms to convert the helical projection data. Reconstructed volume elements (voxels) are isotropic (equal along the X-, Y-, and Z-axes). Pixel values, which quantify a position on the grayscale spectrum, are measured in Hounsfield units (HU) and essentially represent the amount of x-ray attenuation. They are obtained by calculating the relative difference between the linear attenuation coefficients of tissue and water. As an example, the HU value for water is zero, for air is −1000, and for bone varies between 500 and 1000 depending on bone density. Average HU values for cardiac tissue are between 10 and 60.
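Expressed as a formula, the rescaling described above is

\[
\mathrm{HU} = 1000 \times \frac{\mu_{\text{tissue}} - \mu_{\text{water}}}{\mu_{\text{water}}},
\]

so that water maps to 0 HU and air (\(\mu \approx 0\)) maps to approximately −1000 HU, consistent with the reference values quoted above.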

Before image reconstruction can be achieved from the acquired raw data, three essential parameters must be defined. The field of view (FOV) determines the area of the image to be reconstructed, and the matrix determines the pixel size for each image plane; the relation pixel size = FOV/matrix is then applied. In addition, the reconstruction kernel defines the degree of smoothing during image reconstruction. Different kernels can be applied depending on the anatomical region or clinical application: low kernel values provide smoother images, whereas higher values generate sharper images. Finally, accurate cardiac image reconstruction requires ECG and respiratory gating. Breath-holding in expiration for a few seconds during scanning is usually sufficient to avoid diaphragmatic and chest excursion. The QRS complex obtained from the ECG determines the boundaries of systole and diastole. Images acquired during consecutive cardiac cycles can be combined to obtain an image of the heart at the exact same phase of the cardiac cycle, in either a prospective (triggering image acquisition at a specific phase of the cardiac cycle) or retrospective (acquiring images throughout the complete cardiac cycle and then selecting the specific cardiac timing) fashion. The latter naturally results in higher radiation exposure.
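As a worked example of the FOV/matrix relation (the values are illustrative, not taken from this chapter): a 250 mm reconstruction FOV on a 512 × 512 matrix gives

\[
\text{pixel size} = \frac{\mathrm{FOV}}{\text{matrix}} = \frac{250\ \text{mm}}{512} \approx 0.49\ \text{mm per pixel}.
\]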

Finally, CCT images are displayed in a grayscale of 256 intensity values ranging from black to white. Digital image size is measured in bytes (each byte consists of 8 bits). Images can be viewed as slices (two-dimensional planes) or volumes (three-dimensional reconstructions). Three-dimensional images are particularly useful in electrophysiology because they provide accurate anatomical reconstruction of cardiac structures and great vessels, which is essential for catheter navigation, lead implantation, and ablation therapies. Two techniques for volumetric image reconstruction are maximum intensity projection (MIP) and shaded surface display (SSD). MIP reconstruction uses a stack of image slices and projects the brightest pixel encountered along a specific path. SSD reconstruction uses shading and artificial light sources to create 3D-rendered anatomical images. Once the images are acquired and processed, they are stored and backed up on a server. Images are stored in the standard Digital Imaging and Communications in Medicine (DICOM) format, which allows the sharing of radiographic image studies across different vendors regardless of the workstation or program used for postprocessing or review.
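A minimal sketch of the MIP idea described above, assuming the CCT volume has already been loaded as a NumPy array of HU values; the synthetic array and the bright region below are stand-ins, not real data:

```python
import numpy as np

# Synthetic CCT-like volume in Hounsfield units, axes ordered (slice, row, column).
rng = np.random.default_rng(0)
volume = rng.normal(loc=40, scale=15, size=(120, 256, 256))  # soft tissue background (~10-60 HU)
volume[40:60, 100:140, 100:140] = 400                        # bright contrast-enhanced region

# Maximum intensity projection along the slice (Z) axis: for each projection
# path, keep only the brightest voxel encountered, producing a single 2D image.
mip_axial = volume.max(axis=0)
print(mip_axial.shape, mip_axial.max())                      # (256, 256) 400.0
```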

Image Integration

Since the mid-2000s, the integration (or fusion) of preprocedural CCT with intraprocedural 3D maps has become routine practice in many electrophysiology centers for complex arrhythmia ablations, including atrial fibrillation (AF), postprocedural atrial tachycardia, and VT. The high temporal-spatial resolution provided by tomographic scans allows for detailed 3D anatomical reconstruction of the cardiac structures, great vessels, and adjacent relevant structures (esophagus, phrenic nerves, and epicardial coronary arteries). When merged with the electroanatomical information from a 3D mapping system, these reconstructions provide a very accurate road map of the heart for catheter manipulation and guidance for the mapping and ablation of arrhythmia substrates. Prospective randomized and nonrandomized studies suggest some clinical benefits when image integration is used to guide the ablation of AF, including better long-term AF-free survival, reduced fluoroscopy time, reduced procedural time, and a decreased incidence of procedure-related complications. However, other studies have failed to demonstrate a benefit of image integration techniques compared with conventional 3D mapping for ablation of AF in terms of clinical outcomes. A meta-analysis of published studies of image integration for AF ablation showed a trend toward, but no statistically significant improvement in, clinical outcomes with image integration compared with conventional mapping. These discrepant results are likely caused by differences in methodology, the image fusion technique used, and center experience with image processing. Two commercially available 3D mapping systems are equipped with image integration (image fusion) software modules (CARTOMERGE, Biosense Webster; and EnSite Verismo, St. Jude Medical). Both systems allow for 3D reconstruction and registration of preprocedural CCT images with 3D electroanatomical maps, and both software modules continue to improve with newer versions.

Fundamental Concepts of Image Integration

There are three steps involved in any image integration process: preprocedural image acquisition, segmentation of the planar CCT images, and registration of the images from the CCT and the 3D mapping system. To minimize motion artifact and improve accurate anatomical reconstruction, CCT image acquisition is ECG and respiratory gated (although, with fast acquisition times, current CT scanners can obtain images in end expiration within a few seconds or even <1 second). Once the preprocedural images are obtained, segmentation consists of digital separation of the different cardiac and extracardiac structures using a 3D volume-reconstructed model obtained from the stack of planar CT slices. The use of iodine-based contrast in CCT allows for the delineation of intravascular (aorta, pulmonary arteries, or coronary arteries) and intracardiac (atria and ventricles) volumes. Because the blood pool displays high signal intensity, the striking contrast with adjacent low-intensity structures allows for accurate delineation of endocardial and epicardial contours. The process of segmentation is semiautomatic and uses vendor-specific software that employs a combination of signal threshold, boundary detection, and regional identification algorithms. The segmented images are finalized with some degree of visual editing to generate a final CCT volumetric data set. The final step is registration, which is crucial for an accurate fused map. This step requires a thorough understanding of cardiac anatomy along with the technical expertise for image manipulation within the 3D-merging software environment. Accurate registration allows for intraprocedural catheter manipulation inside the CCT-generated anatomical 3D images.

Technical Aspects of Image Segmentation and Registration

The most common method for image segmentation is thresholding, which consists of classifying every pixel of an image as above or below a predetermined threshold value and grouping the pixels accordingly. Once the pixels are divided into the two groups, boundary extraction methods are employed to differentiate values between adjacent pixels within a group, which further divides them into regions. These regions represent different chambers depending on their contrast uptake and the timing of the contrast bolus relative to image acquisition. For example, in a CCT obtained to guide an AF ablation, a region of high pixel signal intensity after intravenous contrast will represent the left atrium and pulmonary veins, whereas regions of low pixel signal intensity will correspond to the esophagus, lungs, and phrenic nerves. More advanced algorithms include shape-based assumptions in which the software performs reconstructions using expected geometries extracted from the anatomy of a normal patient cohort (Fig. 64.1).

Fig. 64.1, CARTO segmentation module.
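A minimal sketch of the threshold-plus-region-identification approach described above, assuming a contrast-enhanced CCT volume in HU; the 250 HU cutoff and the synthetic volume are illustrative assumptions, not vendor parameters:

```python
import numpy as np
from scipy import ndimage

# Synthetic contrast-enhanced volume: soft tissue background with one
# bright, contrast-filled "chamber".
rng = np.random.default_rng(1)
volume = rng.normal(loc=40, scale=20, size=(80, 128, 128))
volume[20:50, 40:80, 40:80] = 350

# Thresholding: classify every voxel as above or below the cutoff.
blood_pool = volume > 250

# Regional identification: group connected above-threshold voxels. In a
# clinical data set, each connected region would correspond to a separately
# enhancing chamber or vessel, depending on contrast timing.
labels, n_regions = ndimage.label(blood_pool)
region_sizes = ndimage.sum(blood_pool, labels, index=range(1, n_regions + 1))
print(n_regions, int(region_sizes.max()))   # expected: 1 region of 30*40*40 voxels
```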

Registration, or fusion, of CCT and 3D maps is an intermodal process, meaning that the two image data sets originally reside in separate image spaces and special registration algorithms are required to transform (T) one image space into the other. For rigid (linear) transformation of two image data sets residing in different image spaces, six degrees of freedom (three translational and three rotational) are generally employed. If the voxel sizes of the two image data sets differ, additional calibration parameters along the X-, Y-, and Z-axes are necessary to reconcile the discrepancy. In contrast, if nonlinear transformation is employed, multiple (often >6) degrees of freedom are necessary to minimize image distortion. A cost function is additionally employed to measure discrepancies or similarities between the reference and transformed images.
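A minimal sketch of a six-degree-of-freedom rigid transform and a simple cost function of the kind described above, assuming row-vector points, an X-Y-Z rotation order, and arbitrary illustrative angles and offsets:

```python
import numpy as np

def rigid_transform(points, rx, ry, rz, tx, ty, tz):
    """Rotate an (N, 3) point set about X, Y, Z (radians), then translate (mm)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                       # combined rotation (three rotational DOF)
    return points @ R.T + np.array([tx, ty, tz])   # plus three translational DOF

def cost(transformed, reference):
    """Mean squared distance between paired points (lower = better alignment)."""
    return np.mean(np.sum((transformed - reference) ** 2, axis=1))

reference = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])  # example fiducials (mm)
moving = rigid_transform(reference, 0.05, -0.02, 0.1, 2.0, -1.5, 0.5)    # simulated misalignment
print(round(cost(moving, reference), 2))
```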

Registration methods are mainly geometry based or voxel-intensity based. Geometry-based registration can be further divided into point-based and surface-based techniques. In point-based techniques, fiducial points are selected on both images (CCT and 3D map) and are aligned or paired into a single imaging set. The number of fiducial points required for an accurate fusion depends on the available software, the chamber of interest, and the expertise of the operator. Alignment of fiducial points or superimposed cardiac surfaces can be done by visual estimation or landmark registration (essentially the superimposition of all fiducial points between the two images). Fiducial points can be automatically assigned by the fusion software, but usually they are manually selected by the operator; commonly used fiducial points include the atrioventricular valves, the pulmonary vein (PV) ostia, and the coronary ostia. During registration, the difference between the two image centroids (the central point of each image) is calculated, providing an estimate of the translation required to align all fiducial points in 3D space. The unified centroid is then used as a reference to further align the fiducial points, this time by rotation rather than translation, until the sum of the squared distances between each corresponding point pair is minimal. The fiducial registration error (FRE) is defined as the square root of the mean squared distance between the point pairs and quantifies the spatial accuracy of the fused image data set to be used during the procedure. In landmark registration, anatomically distinct endocardial surfaces (the PV ostia, carinas, LA appendage [LAA], and right ventricular [RV] or left ventricular [LV] septum) are used to align the two imaging data sets. Any manual registration inevitably introduces an error of alignment for each registered point, known as the fiducial localization error (FLE), which represents the vectorial distance from the intended to the actually acquired fiducial point. A related value, the target registration error (TRE), is derived from the FLE and is used clinically to quantify the accuracy of the fused images (Fig. 64.2).

Fig. 64.2, Registration steps.
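A minimal sketch of the centroid-plus-rotation alignment and FRE computation described above, using the standard SVD (Kabsch) solution for the rotation that minimizes the sum of squared point-pair distances; the fiducial coordinates are illustrative, not patient data:

```python
import numpy as np

def register_points(map_points, ct_points):
    """Align (N, 3) map fiducials to paired CT fiducials; return aligned points and FRE."""
    c_map, c_ct = map_points.mean(axis=0), ct_points.mean(axis=0)
    A, B = map_points - c_map, ct_points - c_ct          # centroid-aligned point sets (translation step)
    U, _, Vt = np.linalg.svd(A.T @ B)                    # optimal rotation via SVD (Kabsch)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T              # guard against a reflection solution
    aligned = A @ R.T + c_ct                             # rotate, then translate into the CT space
    fre = np.sqrt(np.mean(np.sum((aligned - ct_points) ** 2, axis=1)))   # fiducial registration error
    return aligned, fre

ct_points = np.array([[0.0, 0, 0], [30, 0, 0], [0, 25, 0], [5, 5, 20]])      # e.g., PV ostia, valve (mm)
map_points = ct_points + np.random.default_rng(2).normal(0, 1.5, ct_points.shape)  # noisy map fiducials
_, fre = register_points(map_points, ct_points)
print(f"FRE = {fre:.2f} mm")
```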

Surface-based registration is often the preferred method of image integration between 3D maps and CCT. Similar to point-based registration, both imaging surfaces are delineated, registered, and transformed (T). Registration is generally performed using three translations and three rotations, and calibration of each image is performed to determine the image scaling values. Unlike the alignment of fiducial points, the registration of endocardial contours or specific anatomical landmark surfaces is visually easier to achieve. Multiple studies have validated both point and surface registration using CCT, with acceptable registration errors in the range of 2.9 ± 0.7 to 6.9 ± 2.2 mm.
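A minimal sketch of how a surface registration error of this kind might be quantified after alignment: the distance from each electroanatomical map point to the nearest vertex of the segmented CCT surface is computed and summarized. The spherical shells below are synthetic stand-ins, and this illustrates the general concept rather than any vendor algorithm:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

def sphere_points(n, radius, jitter):
    """Random points on a noisy sphere, used here as a stand-in cardiac surface."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v * (radius + rng.normal(0, jitter, (n, 1)))

ct_surface = sphere_points(2000, radius=20.0, jitter=0.2)   # dense segmented CCT shell (mm)
map_points = sphere_points(150, radius=20.0, jitter=1.5)    # sparser 3D mapping points (mm)

tree = cKDTree(ct_surface)                                  # fast nearest-neighbour lookup
dist, _ = tree.query(map_points)                            # surface-to-point distances
print(f"mean ± SD error: {dist.mean():.1f} ± {dist.std():.1f} mm")
```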

Steps for Image Integration With Commercially Available Mapping Systems

Vendor-specific differences in commonly used 3D mapping systems with image fusion software are as follows.

The CARTO 3 System uses the image integration software known as the CARTOMERGE Module. The image fusion process begins with the creation of landmark pairs or surface points between the CCT and the 3D map, followed by visual or automated alignment of these landmark pairs or surface points between the two images; finally, surface registration algorithms are employed to align the two image data sets. Three registration methods are thus available: visual alignment, landmark registration, and surface registration. The accuracy of the CARTOMERGE Module was assessed in an animal study in which fiducial markers were surgically implanted in the epicardium and radiofrequency ablation lesions were delivered. The offset of the integrated CCT/3D voltage map fusion image in localizing the lesion sets was less than 3 mm on average (1.8 ± 1.5 mm for atrial flutter, 2.2 ± 1.3 mm for the fossa ovalis, and 2.1 ± 1.2 mm for AF ablations). In clinical practice, image fusion can be achieved by performing surface registration of an LA posterior wall electroanatomical map with preprocedural CCT images, achieving a mean registration error of 1.27 ± 0.23 mm (mean values ranging from 0.03 mm minimum to 3.9 mm maximum) (Fig. 64.3).

Fig. 64.3, CARTOMERGE module registration.

The EnSite NavX and Velocity mapping systems use a fusion registration module (FRM), which consists of three steps. The first step is segmentation of the preprocedural CCT to create a digital image fusion (DIF) model, which is imported into the 3D mapping system environment. Second, field scaling is performed by measuring the interelectrode spacing at multiple collected points and adjusting the image volume of the 3D map to approximate that of the preprocedural CCT. Finally, the fusion of both image data sets is accomplished by registration using paired locations (fiducial points). A dynamic registration algorithm locally adjusts the 3D shell geometry and surface to the size and shape of the DIF surface model. Several studies have assessed the registration accuracy of the EnSite system integrating preprocedural CCT with intraprocedural 3D maps for AF ablation; registration errors range from 1.9 ± 0.4 to 3.2 ± 0.9 mm, and good correlation between PV diameters has been consistently found (Fig. 64.4).

Fig. 64.4, EnSite velocity registration.
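A minimal conceptual sketch of the volume-matching idea behind the field scaling step described above, assuming example chamber volumes; this illustrates the concept only and is not the vendor's proprietary algorithm:

```python
import numpy as np

# Assumed example volumes: the impedance-based 3D mapping shell and the
# segmented CCT left atrium.
map_volume_ml = 95.0
ct_volume_ml = 120.0

# Isotropic linear scale factor that makes the map volume approximate the CCT volume.
scale = (ct_volume_ml / map_volume_ml) ** (1.0 / 3.0)

# Stand-in map geometry (mm), rescaled about its own centroid.
map_points = np.random.default_rng(4).normal(0, 15, (500, 3))
centroid = map_points.mean(axis=0)
scaled_points = centroid + (map_points - centroid) * scale
print(f"linear scale factor: {scale:.3f}")
```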

Despite the differences in the registration process between the CARTOMERGE Module (rigid registration using rotational alignment) and the EnSite FRM (dynamic registration with both rotation and field scaling), the two systems have similar registration accuracy when tested clinically.

During ablation procedures, it is not uncommon for the patient’s heart rhythm to change suddenly, which could affect registration accuracy. A study of patients undergoing AF ablation guided by image integration of CCT and 3D maps found that registration accuracy did not significantly differ between image fusion performed in sinus rhythm and in AF (surface-to-point distance of 1.91 ± 0.24 vs. 1.84 ± 0.38 mm, respectively; P = .60), suggesting that reregistration is not required to maintain map accuracy if the heart rhythm changes during the procedure.

In addition to the commercially available image integration algorithms, several research groups in the field of advanced cardiac mapping have developed their own software and image fusion workstations for both research and clinical applications in the field of complex arrhythmia mapping and ablation.

Current Applications of Cardiac Computed Tomography in Cardiac Electrophysiology

The clinical uses of CCT in today’s electrophysiology practice can be divided into preprocedural, periprocedural, and postprocedural imaging studies and can be further separated depending on the chamber of interest (atrial vs. ventricular).

Pre- and Periprocedural Computed Tomography Imaging for Guiding Atrial and Ventricular Arrhythmia Ablations
