Quality Improvement in Congenital Cardiac Disease


Introduction

The field of congenital heart disease is expansive in the breadth of patient complexity and exciting in the continuous improvement in patient outcomes. Several tools have been used to achieve improvement in patient outcomes over the past several decades. These tools include pioneering work by congenital heart surgeons; advances in anesthesiology, critical care, and nursing; improved diagnostic tools; and innovations in nonsurgical interventional techniques. The application of quality improvement methods should be considered an additional tool that has been used, and may continue to be used, to improve clinical outcomes in this field. Quality improvement strives to understand data, reduce variation, and implement changes to our practice so that patients receive the best care at the right time. This tool relies on the System of Profound Knowledge ( https://deming.org/explore/so-p-k ), developed by W. Edwards Deming, a framework that teaches that one must focus on four areas to make improvements to a system: (1) appreciation of the system; (2) knowledge of variation; (3) theory of knowledge; and (4) psychology. The system must also be understood through the lens of its culture. This chapter will define the role of quality improvement in congenital heart care and give perspective on where this tool fits in our efforts to improve outcomes in our patients.

Why This Matters: The Voice of Our Patients and Their Families

The news that your child has a congenital heart defect evokes a breadth of feelings and emotions. Even when the rush of emotions subsides, if it ever does, there remains a fear of the unknown. Like clinicians, parents find comfort with increased knowledge—both substantive knowledge and the knowledge of ongoing efforts to improve treatment and outcomes. Knowing the field of pediatric congenital cardiology is active in quality improvement efforts is extremely meaningful to patients and parents. The decision and commitment to undertake quality improvement demonstrate that clinicians are both invested in improvement and believe that improvement is possible.

Too often, our child's well-being feels out of our control, when in fact our role is vital to both short- and long-term outcomes. Quality improvement work provides parents the opportunity to capitalize on that role and collaborate with clinicians to become part of the solution instead of silent bystanders. As the field of cardiology continues to evolve, it is imperative to include patients and parents/caregivers in the cardiac team, including in quality improvement work. As seen through the work of the National Pediatric Cardiology Quality Improvement Collaborative (NPC-QIC; www.npcqic.org ), patient and parent partnership can positively impact improvement efforts. To ensure continuous collaboration, parents should consider being involved in various aspects of improvement, working side by side with clinicians in the design, planning, implementation, assessment, and analysis of data.

The ability to collect and analyze larger volumes of data through quality improvement work also allows for greater research possibilities. Additional research opens the door to new innovations. And hope comes when clinicians and centers actively share discoveries and ideas and identify best practices. In short, quality improvement provides ongoing hope to patients and families.

Improvement efforts within the field of pediatric cardiology are of great importance to patients and families and should be discussed in a transparent fashion. As parents, our number one desire is to “cure” our children of their congenital heart defect. Knowing that a cure is not possible, the next best thing is maximizing processes, protocols, care, and treatment, to ensure that our child—and every child—can (and will) obtain his or her best possible outcome.

Model for Improvement

There are several organizing approaches or methodologies in quality improvement science that can be used to guide action, including the Model for Improvement, Six Sigma-DMAIC (define, measure, analyze, improve, and control), and Lean. Each of these approaches has its advantages and disadvantages, and one may be better suited than another for a given type of improvement project. For example, Lean is particularly useful for operational improvement efforts such as improving throughput, flow, and efficiency through the emergency department. In our minds, the Model for Improvement ( Fig. 87.1 ), developed by Associates in Process Improvement ( http://www.apiweb.org/index.php ), is a conceptually straightforward starting point for most improvement efforts and is accessible even to those with no formal training in improvement science. At its most basic level, the model asks three questions:

  1. What are we trying to accomplish (i.e., the aim)? The aim statement is ideally described as a SMART aim—specific, measurable, actionable, relevant, and time-bounded (e.g., “decrease average postoperative length of stay after tetralogy of Fallot repair by 20% within 12 months”).

  2. How will we know that a change is an improvement (i.e., measurement)? This question is focused on metrics or measurement. Many informal improvement efforts, whether in our professional or personal lives, put the least emphasis on this question, although for scientifically minded and data-driven health care professionals it can often be the most important and influential question for successful change efforts. Given that data should ideally be examined over time rather than simply in a pre-post assessment, run charts and statistical process control (SPC) charts are the visual representation of this question.

    • Organizing metrics or measures into distinct categories facilitates optimal learning from interventions. Outcome measures, such as reductions in mortality, morbidity, or length of stay, are the ultimate results we are most interested in. Process measures are the steps that are logically connected to the outcomes of interest and that can be directly influenced. Process measures are necessary because changes are often apparent more rapidly through them than through outcome measures, particularly in many health care examples (e.g., adherence to a protocol for interstage monitoring of hypoplastic left heart syndrome after the Norwood operation may demonstrate improvement before interstage mortality decreases). Balancing measures capture the unintended consequences of an intervention or change. For example, if an improvement effort were designed to reduce length of stay after surgery for transposition of the great arteries by using an “early extubation protocol,” reintubation and readmission rates would be important balancing measures.

  3. What changes can we make that will result in an improvement? A list of potential change strategies can be found as an appendix in The Improvement Guide . Each change strategy is then undertaken and tested using a PDSA (plan, do, study, act) cycle. Specifically, plan the details of the “test of change” and develop a hypothesis of what is expected, do the change and collect data, study the data collected and compare them with predictions, and act based on what has been learned (i.e., continue the change, modify it, and/or try something else). A test of change may start with one patient, on one unit, on one day and gradually ramp up to an entire unit over several PDSA cycles ( Fig. 87.2 ; a minimal sketch of this ramp follows Fig. 87.1 below). PDSA is the framework to test, adapt, and implement change. Although deceptively simple, in our experience the PDSA concept may be unfamiliar in clinical settings.

    Fig. 87.2, Plan-do-study-act cycles begin with small tests of change, especially when based on hunches or expert opinion. As confidence builds in the process changes, tests may be performed on larger groups and across wider areas.

Fig. 87.1, Model for Improvement, developed by Associates in Process Improvement, provides a framework for quality improvement efforts.
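To make the ramped structure of PDSA cycles concrete, the following is a minimal sketch in Python. All names, sample sizes, and rates are hypothetical illustrations, not data from any improvement project; the point is only the plan/do/study/act loop, with each cycle testing the change at a larger scale.

```python
import random

random.seed(1)

# Hypothetical "true" effect of the change being tested; in practice this is
# unknown and is exactly what the do/study steps estimate.
TRUE_SUCCESS_RATE = 0.8

def do_step(n_patients):
    """Do: apply the change to n_patients and record the observed success rate."""
    successes = sum(random.random() < TRUE_SUCCESS_RATE for _ in range(n_patients))
    return successes / n_patients

prediction = 0.75        # Plan: hypothesized success rate for the change
scales = [1, 5, 20]      # Ramp: one patient, then a handful, then the unit

for cycle, n in enumerate(scales, start=1):
    observed = do_step(n)                      # Do
    gap = observed - prediction                # Study: compare with prediction
    action = "adopt/expand" if gap >= 0 else "adapt or abandon"  # Act
    print(f"PDSA cycle {cycle}: n={n}, observed={observed:.2f}, "
          f"predicted={prediction:.2f} -> act: {action}")
```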

There are several other methods and tools in quality improvement that are used at various stages of improvement efforts, such as process mapping, key driver diagrams, Pareto charts, failure mode and effects analysis, and A3 reports, to name a few. These have been well described elsewhere and are readily applicable to the care of children with heart disease.

Quality Improvement and Variation

A fundamental quality improvement principle is learning from data gathered through measurement over time. However, using data over time for improvement requires a sophisticated understanding of variation in data. Variation can be divided into “common” cause and “special” cause. Common cause variation is the variation that is inherently present in any system or process that is being measured. Special cause variation occurs when something outside of the system alters the results (positively or negatively). A classic exercise that demonstrates variation is the “Red Bead” experiment devised by W. Edwards Deming, who can be seen conducting this exercise with a group of his students in widely available recordings.
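A short simulation conveys the spirit of the exercise. The sketch below (in Python, with illustrative numbers of our choosing) draws repeated scoops of beads from a container with a fixed proportion of red beads; the differences among the “workers” are pure common cause variation produced by the system itself.

```python
import random

random.seed(42)

N_BEADS_PER_SCOOP = 50
P_RED = 0.20          # the system fixes the proportion of red beads

workers = ["W1", "W2", "W3", "W4", "W5", "W6"]
for day in range(1, 5):
    # Each worker scoops 50 beads; red counts differ only by chance.
    counts = {w: sum(random.random() < P_RED for _ in range(N_BEADS_PER_SCOOP))
              for w in workers}
    print(f"Day {day}:", counts)

# Rewarding the "best" worker or blaming the "worst" reacts to noise:
# only changing the system (fewer red beads) changes the results.
```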

Understanding variation, and the difference between common and special cause variation, helps the analyst avoid two mistakes when interpreting data. The first is interpreting common or routine variation as a meaningful change (improvement or decline) from the historical data (i.e., “interpreting noise as if it were a signal, since this mistake will lead to actions that are, at best, inappropriate, and at worst, completely contrary to the proper course of action…”). The second is failing to recognize when a true change has occurred in the process (i.e., failing to detect a signal when it is present), which can lead to similarly inappropriate action or inaction.

Data visualization is critical to understanding variation and its sources. A simple tool that can be used to interpret data over time is the run chart, acknowledging that we cannot truly differentiate common and special cause variation with this approach. The run chart is simply a figure with time on the x-axis and the outcome of interest on the y-axis, with the median of the dataset drawn as a centerline. There are probability-based “rules” that can help to determine whether a signal, or nonrandom evidence of change, has occurred. These rules can be seen in Fig. 87.3 .

Fig. 87.3, Statistical rules to assist in interpreting data analyzed using run charts.
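As a concrete illustration, the following minimal Python sketch builds the skeleton of a run chart analysis: it computes the median centerline and checks one commonly used rule, a “shift” of six or more consecutive points on the same side of the median. The monthly values are hypothetical.

```python
from statistics import median

values = [12, 14, 13, 15, 12, 14, 11, 10, 9, 10, 9, 8]  # e.g., monthly LOS
center = median(values)

# Keep only points not on the median, recording which side each falls on.
sides = [v > center for v in values if v != center]

def longest_run(seq):
    """Length of the longest run of consecutive equal elements."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

print(f"median = {center}")
print("possible shift (signal)" if longest_run(sides) >= 6 else "no shift rule met")
```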

When more data are available, the preferred tool is the SPC chart, sometimes referred to as a Shewhart chart or control chart. The advantage of SPC is that variation can be examined in far more depth, because these displays can distinguish common cause variation from special cause variation. These visualizations typically have the average of the dataset as the centerline, with “control limits” above and below the centerline. Like run charts, these charts can be annotated as key changes or interventions for a quality improvement project take place, in order to determine their impact. As with run charts, probability-based rules help to suggest when a nonrandom pattern or special cause has occurred in relation to the centerline and/or limits, ideally temporally related to an improvement intervention ( Fig. 87.4 ).

Fig. 87.4, Statistical rules to assist in interpreting data analyzed using statistical process control charts.

Moreover, as with hypothesis testing in traditional biomedical statistics, there are different types of charts with different mathematical calculations of limits depending on the type of data being analyzed (e.g., continuous data vs. attribute data, including count or classification data). Some of the more common chart types include a C-chart or U-chart for count data (e.g., central line bloodstream infection rates in patients awaiting heart transplant), a P-chart for classification data (e.g., percent compliance with a clinical practice guideline [CPG] or percent of patients meeting a defined clinical outcome), and an I-chart or X-bar/S-chart for continuous data, depending on the subgroup size (e.g., hours of mechanical ventilation after each tetralogy of Fallot repair or total average time in cardiology clinic each day to obtain an echocardiogram, respectively). More advanced charts to examine rare events such as mortality or specific rare hospital-acquired infections may use a G-chart or T-chart, a cumulative sum (CUSUM) chart, or an exponentially weighted moving average (EWMA) chart. These charts may be used to help detect small changes over time, because other charts may lack sensitivity in this setting. One example from the field has been the improvement in interstage mortality realized by the NPC-QIC ( Fig. 87.5 ). A more complete understanding of the use of basic and advanced control charts can be found in The Improvement Guide .
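For classification data of the kind just described, a P-chart's centerline and limits can be computed in a few lines. The sketch below, with hypothetical monthly compliance counts, uses the usual three-sigma binomial approximation; note that the limits widen and narrow as the monthly denominator changes.

```python
compliant = [18, 22, 25, 24, 27, 26]   # patients meeting the CPG each month
totals    = [30, 31, 33, 30, 32, 31]   # eligible patients each month

p_bar = sum(compliant) / sum(totals)   # centerline: overall proportion

for month, (x, n) in enumerate(zip(compliant, totals), start=1):
    p = x / n
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5   # binomial standard error
    ucl = min(1.0, p_bar + 3 * sigma)          # limits vary with subgroup size
    lcl = max(0.0, p_bar - 3 * sigma)
    flag = "special cause" if not (lcl <= p <= ucl) else ""
    print(f"month {month}: p={p:.2f} "
          f"(CL={p_bar:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}) {flag}")
```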

Fig. 87.5, The National Pediatric Cardiology Quality Improvement Collaborative initially monitored interstage mortality using a cumulative mortality chart, but the group realized that this chart was not sensitive enough to measure change in this rare condition. Therefore mortality measurement was changed to a G-chart where each point on the figure represented a patient who experienced interstage mortality. Over time the vertical and horizontal space between interstage mortalities increased and a point of special cause was seen. This shift corresponded with an approximately 40% reduction in interstage mortality.
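A minimal version of the G-chart logic described in Fig. 87.5 can be sketched as follows. The counts of cases between events are hypothetical, and the limits are frozen from a baseline period, a common practice when watching for improvement; the three-sigma upper limit shown is the standard geometric-distribution approximation.

```python
# Hypothetical counts of cases between successive rare events (e.g., deaths).
cases_between_events = [45, 30, 52, 61, 80, 95, 150, 260]

# Freeze the centerline and limit on a baseline period (first six events).
baseline = cases_between_events[:6]
g_bar = sum(baseline) / len(baseline)                 # centerline
ucl = g_bar + 3 * (g_bar * (g_bar + 1)) ** 0.5        # 3-sigma upper limit

for i, g in enumerate(cases_between_events, start=1):
    note = "special cause: event has become rarer" if g > ucl else ""
    print(f"event {i}: {g} cases since last event "
          f"(CL={g_bar:.0f}, UCL={ucl:.0f}) {note}")

# On a G-chart, points ABOVE the upper limit are good news: more cases are
# elapsing between events, consistent with improvement.
```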

There have been numerous research studies examining practice variation throughout pediatric cardiac care. Research has been ongoing since at least the mid-1990s, initially using survey-based approaches, ranging from the management of specific conditions such as dyslipidemia, aortic stenosis, hypoplastic left heart syndrome, or coronary anomalies to broader topics such as perioperative management and care delivery models. More recent efforts have used datasets from large clinical registries such as the Society of Thoracic Surgeons (STS) Congenital Heart Surgery Database (CHSD), administrative datasets such as the Pediatric Health Information System, or linkages of clinical and administrative databases. Examples include characterization of variation in delayed sternal closure, perioperative management of hypoplastic left heart syndrome, outcomes for benchmark operations, postoperative infection, perioperative mechanical circulatory support, and hospital costs, to name a few.

Defining Outcomes in Congenital Cardiac Care: Targets for Improvement

Donabedian, a physician and health services researcher, developed a conceptual model that frames health services and quality of care around three categories: structure, process, and outcomes. Structure describes the context in which care is delivered and includes physical structures, supplies, and equipment; process is the flow of patients through the care delivery system and their interactions with caregivers; finally, outcomes refer to the health status of the patient receiving care in the system. Porter further delineated the complexity of health outcomes in his model of value in health care. Outcomes, according to Porter, include short-term outcomes, such as mortality, but must also include long-term functional outcomes. Following this pattern, we often attribute the highest importance to reductions in mortality and morbidity in the clinical metrics we follow in congenital heart disease. These are the metrics that are primary in many of the clinical registries and learning networks. These metrics are important but often shortsighted; in other words, they are necessary but insufficient. As we move into the next phase of improving outcomes, we will certainly follow Porter's lead in defining and measuring metrics that track long-term outcomes in our patients.

In the current era of health care delivery, the concept of value is critical to any discussion of quality improvement in pediatric heart disease, a concept that has been most fully developed by Michael Porter and colleagues at Harvard Business School. Health care value is an overarching strategy that at its core is defined by a simple equation: outcomes/cost. Accordingly, maximizing value for patients means delivering the best outcomes at the lowest cost.
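The arithmetic of the value equation is simple, even though measuring its inputs is not. The toy Python sketch below compares two entirely hypothetical programs; real applications would require risk-adjusted, multidimensional outcome measures and carefully attributed costs.

```python
# Entirely hypothetical programs and numbers, for illustration only.
programs = {
    "Program A": {"outcome_score": 0.92, "cost_per_episode": 80_000},
    "Program B": {"outcome_score": 0.90, "cost_per_episode": 60_000},
}

for name, d in programs.items():
    value = d["outcome_score"] / d["cost_per_episode"]  # value = outcomes / cost
    print(f"{name}: value = {value:.2e} outcome units per dollar")

# Program B delivers nearly the same outcome at lower cost, hence higher value.
```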

The outcomes Porter refers to are specifically the outcomes that matter most to patients. What is distinctive about this approach is that, although mortality is of course a critical determinant of success in pediatric heart centers, for most conditions, except perhaps the most complex, mortality is low across most programs, with limited differences among them when appropriately risk adjusted. Outcomes beyond mortality in our field may include metrics such as postoperative length of stay, long-term neurodevelopment and quality of life, or even patient and family satisfaction. Porter believes not only that data on these outcomes that matter most to patients should be measured, but also that they should be shared transparently, both internally and externally, to drive improvement.

Cost is the denominator of the value equation, but its meaning is more nuanced than the way the term is often used. The cost of a specific service in health care is difficult to discern. Porter and Kaplan have asserted that the “cost of using a resource—a physician, nurse, case manager, piece of equipment, or square meter of space—is the same whether the resource is performing a poorly or a highly reimbursed service. Cost depends on how much of a resource's available capacity (time) is used in the care for a particular patient, not on the charge or reimbursement for the service, or whether it is reimbursed at all,” and most health systems do a poor job of measuring costs in this way. One of the fundamental problems to be solved over the next decade will be to better understand value in medicine and to take steps to maximize that value for our patients.

Data Sources in Congenital Cardiac Care

To improve outcomes in any clinical field, it is first critical to have measurable and valid clinical data. These clinical data are contained in several data sources. Insight and discovery in medicine have traditionally been constrained by fragmented approaches to data aggregation and use within hospital systems. There has been a tendency to create databases to support a single inquiry or set of inquiries in a manner that is not scalable to other questions or other data types. This results in database duplication within institutions and a needless loss of efficiency. A powerful solution to this problem is an institutional data warehouse. Data warehousing offers a single source of information for reporting across the organization that is subject to a consistent institutional approach to data quality and veracity. Additional important benefits include the flexibility to incorporate new data sources as they become relevant and a scalability that permits an increase in database size and complexity over time.

Data warehousing emphasizes a “self-service” model rather than the traditional report-driven model, allowing end users to access data directly using query tools. Self-service models lower barriers to data access, making relevant data available within a time frame that supports institutional decision-making. A final, underappreciated benefit of data warehousing is that the proximity of data types begins to erode artificial barriers between clinical intelligence and business intelligence, allowing the creation of a complete record of the interaction between an individual patient and the institution.
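To illustrate the self-service pattern, the sketch below runs a parameterized query directly against a warehouse table, using Python's standard sqlite3 module as a stand-in for a real warehouse; the table, columns, and rows are hypothetical.

```python
import sqlite3

# In-memory database stands in for the institutional warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE encounters (patient_id TEXT, procedure TEXT, los_days REAL);
INSERT INTO encounters VALUES
  ('p1', 'TOF repair', 6.0),
  ('p2', 'TOF repair', 9.5),
  ('p3', 'ASO',        11.0);
""")

# End-user query: average postoperative length of stay for one procedure,
# run directly by the analyst rather than requested as a custom report.
rows = conn.execute(
    "SELECT procedure, AVG(los_days), COUNT(*) FROM encounters "
    "WHERE procedure = ? GROUP BY procedure",
    ("TOF repair",),
).fetchall()

for procedure, avg_los, n in rows:
    print(f"{procedure}: mean LOS {avg_los:.1f} days over {n} encounters")
```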

The fragmented approach to database creation in medicine is at least partially a consequence of the heterogeneous nature of medical data. Most hospitals are faced with integrating data from administrative applications, an electronic health record, medical imaging applications, a variety of clinical patient monitors, and a laboratory information system. An increasingly important data source for understanding patient and community perspectives on hospital programs and initiatives, although at times biased, is data from social media platforms and blogs. Ideally these data sources will also be increasingly incorporated into institutional data-mining strategies and made amenable to analysis. Aggregation and time alignment of these heterogeneous data types is a technical challenge, particularly because only a minority of the data generated in the process of caring for patients is structured. The term “structured” refers to data that conform to traditional database techniques or have a predefined data model. Because structured data can be made available to traditional analytical techniques with relative ease, this type of data is used almost exclusively in current research paradigms, to the exclusion of most medical data, such as imaging, videos, physiologic waveforms, and natural language text, which are unstructured.
