Because of its prevalence, major morbidity, and associated public health burden in terms of total health care expenditures, heart failure (HF) is one of the top conditions targeted for quality of care improvement. Randomized clinical trials have established the efficacy of several therapies to reduce all-cause mortality and the risk of other adverse outcomes in patients with HF with reduced ejection fraction (HFrEF). This evidence has been translated into professional society guideline recommendations, including many class I recommendations for HF treatments such as angiotensin-converting enzyme inhibitors (ACEIs)/angiotensin receptor blockers (ARBs)/angiotensin receptor neprilysin inhibitors (ARNIs), beta (β)-blockers, aldosterone antagonists, implantable cardioverter defibrillators (ICDs), and cardiac resynchronization therapy. However, there has been wide variation in the implementation of this evidence into practice; years often pass before guideline-recommended therapies are routinely applied in clinical practice. This 15- to 20-year gap between evidence and routine practice is often referred to as the “Quality Chasm,” which received national attention in a landmark report published by the influential Institute of Medicine (IOM). Beyond the gap between evidence and care, there are also substantial disparities in care by age, sex, race, ethnicity, and socioeconomic background. While there are multiple reasons for this quality gap, clinical inertia is most often cited as a major barrier. To overcome inertia and other barriers, attention has increasingly turned toward aligning incentives in the delivery of health care by measuring and improving quality of care.
Historically, the pursuit of high-quality care has been challenging. Until it received scrutiny from the IOM, payers, and government agencies, the need to translate life-prolonging therapies into practice did not receive due attention, and systems and strategies for improving quality of care were not routinely in place until the early to mid-2000s. Unfortunately, the lack of systematic efforts to improve quality of care leaves a substantial proportion of patients at risk for hospitalizations and deaths that could be prevented by better implementation of evidence-based therapies. For example, if ARNI therapy were comprehensively used in eligible patients with chronic HFrEF, approximately 28,000 deaths per year could be prevented in the United States alone. This chapter reviews the existing framework for quality of care and how to integrate it into everyday practice through performance improvement systems designed to facilitate the use of evidence-based therapy and to improve outcomes for patients with HF in the inpatient and outpatient settings.
Defining quality of care is often controversial. While stakeholders such as patients, clinicians, and payers agree that quality is an important value in health care, it is often difficult to define precisely how quality should be measured and where its limits lie. Recognizing that quality is a difficult value to conceptualize, the IOM proposed a definition of quality health care based on six dimensions. This definition is now widely accepted as the means by which health care quality can be characterized.
The IOM defines health care quality as the degree to which health care services increase the likelihood of desired health outcomes and are consistent with current professional knowledge. Put simply, high-quality health care involves the delivery of appropriate care from which patients benefit.
In the influential IOM report, Crossing the Quality Chasm: A New Health System for the 21st Century, six aims are outlined for health care improvement. Health care should be safe, effective, patient-centered, timely, efficient, and equitable. As a result, efforts to improve quality of care generally target these six aims (Table 49.1).
Table 49.1 Six Aims for Health Care Improvement

| Aim | Description |
| --- | --- |
| Safe | Avoiding injuries to patients from the care that is intended to help them |
| Effective | Providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit (avoiding underuse and overuse, respectively) |
| Patient-centered | Providing care that is respectful of and responsive to individual patient preferences, needs, and values, and ensuring that patient values guide all clinical decisions |
| Timely | Reducing waits and sometimes harmful delays for both those who receive and those who give care |
| Efficient | Avoiding waste, including waste of equipment, supplies, ideas, and energy |
| Equitable | Providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographical location, and socioeconomic status |
While the goal of high-quality care is optimal outcomes, there needs to be a supportive framework capable of evaluating the steps toward those outcomes. The most frequently referenced is the Donabedian model, a conceptual framework for examining health services and evaluating quality of care. This model highlights three specific areas that may be targeted for quality measurement and improvement: structure, process, and outcomes.
Structure generally includes the context in which care is delivered, such as the physical facility and organizational characteristics (i.e., staff, equipment, and human resources). While structural factors of a health system, such as the number of critical care beds or access to invasive procedures, are easy to measure and observe, they are often more difficult to change. Structure may represent the overall ability to provide quality care and can sometimes be identified as an upstream barrier to an adequate process of care. An example of a potential structural problem is the financial ability of a health system to leverage health information technology or other resource-intensive services. This can be particularly troublesome for health systems that serve socioeconomically disadvantaged populations, where health policy may penalize organizations with high readmission rates. In health systems that serve vulnerable populations, many factors must be overcome to deliver high-quality care, including access to financial resources, staffing, clinical access, and other structural improvements that can prevent readmission for worsening HF. Procedural volume is a classic example of a structural relationship with outcome. Implantation of and care for left ventricular assist devices is quite complex for health systems. In general, hospitals with higher procedural volume have better survival outcomes, which may reflect the overall ability of the health system to deliver the service as well as the infrastructure to support care demands. Despite this intimate relationship, the connection between structure and optimal outcomes is often difficult to demonstrate, because the observed relationship is typically modest, if observed at all. In the case of left ventricular assist devices, there appears to be a modest volume–outcome relationship for mortality but not for readmission, making it difficult to determine what volume may be most appropriate for optimal outcomes. This is particularly difficult for structural relationships, such as procedural volume with readmission, an outcome putatively expected to be influenced by infrastructural support.
Process is the second area highlighted by the Donabedian model as a target for quality measurement and improvement, and it is most often the focus of measure development. In general, process represents the transactions or services that make up health care delivery, such as diagnosis, treatment, or other actions of care. Measuring process is common because of the difficulty of using outcome measures to quantify quality of care. For a process of care to be considered a measure of quality, it typically must have a strong link to an important health outcome. Nevertheless, some process measures are so fundamental to care (e.g., defining an HF patient’s left ventricular ejection fraction [LVEF]) that they serve as a means to systematically define the patient populations in which evidence-based therapies are targeted. Perhaps most importantly, a process measure must be actionable by a health care clinician or a health system.
Process measures are defined as the percentage of eligible patients receiving a given treatment or service, with the numerator indicating which patients received the treatment or service and the denominator defining the eligible population. Typically, for a process measure to be considered important enough to serve as a performance measure of quality of care, it must undergo a thorough review and endorsement by national bodies. The criteria for selecting quality evaluation process measures include whether the measure is easily interpretable, actionable, and feasible. Professional societies (e.g., the American Heart Association [AHA] and American College of Cardiology [ACC]) have traditionally formed task forces dedicated to defining process measures that may be considered quality performance measures. Programs such as the AHA’s Get With The Guidelines (GWTG) define process measures from class I recommendations of care guidelines. For process measures to be tied to either payment or public reporting, they normally must be approved by national organizations dedicated to quality, such as the National Quality Forum or The Joint Commission. These quality-focused organizations review a proposed measure based on its importance, feasibility, and the evidence that it represents best care. Payers such as the Centers for Medicare & Medicaid Services (CMS) ultimately influence health care once a process measure is implemented to assess the performance of a health care system.
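As a simple illustration of the numerator/denominator structure of a process measure, the following minimal Python sketch computes the percentage of eligible HFrEF discharges with an ACEI/ARB/ARNI prescribed. The field names, eligibility rule, and LVEF threshold are hypothetical simplifications for illustration, not an endorsed measure specification.

```python
# Illustrative sketch: a process measure as numerator / denominator.
# Field names and eligibility rules are hypothetical, not an endorsed specification.

def acei_arb_arni_process_measure(discharges):
    """Percent of eligible HFrEF discharges with an ACEI/ARB/ARNI prescribed."""
    eligible = [
        d for d in discharges
        if d["lvef"] <= 40                      # HFrEF by reduced ejection fraction
        and not d["contraindication"]           # exclude documented contraindications
    ]
    if not eligible:
        return None                             # no eligible patients; measure undefined
    numerator = sum(1 for d in eligible if d["acei_arb_arni_prescribed"])
    return 100.0 * numerator / len(eligible)

discharges = [
    {"lvef": 25, "contraindication": False, "acei_arb_arni_prescribed": True},
    {"lvef": 35, "contraindication": False, "acei_arb_arni_prescribed": False},
    {"lvef": 55, "contraindication": False, "acei_arb_arni_prescribed": False},  # not eligible
    {"lvef": 30, "contraindication": True,  "acei_arb_arni_prescribed": False},  # excluded
]
print(acei_arb_arni_process_measure(discharges))  # 50.0 (1 of 2 eligible patients treated)
```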
Challenges in defining process measures as measures of quality of care include the link to outcome, the trade-off between specificity and sensitivity, and how to combine measures into an overall quality measure. Ideally, the definition yields a strong link to outcome, but as noted earlier, some fundamental process measures that may be important for quality of care have high variability in delivery (e.g., measurement of LVEF). These basic measures may be difficult to link to outcome because of confounding factors or ceiling effects. For example, certain basic measures of HF quality (LVEF assessment, smoking cessation counseling, ACEI/ARB use in left ventricular systolic dysfunction [LVSD], and discharge summaries) do not have a strong link to short- or long-term outcomes. In contrast, other measures, such as prescribing a β-blocker for patients with LVSD, have a strong link to outcomes.
For any process measure, there will be a trade-off between sensitivity and specificity. For example, the evidence and guidelines strongly support the use of ACEIs, ARBs, or ARNIs because of the reduction in the risk of mortality and other important outcomes among patients with HFrEF. One could apply this evidence across all patients with HFrEF, thereby creating a very sensitive process measure. To be more specific and defensible, a measure intended for widespread endorsement and use by payers should account for absolute contraindications to ACEIs, ARBs, or ARNIs. To be even more specific, the eligible population could be further restricted to precisely those patients studied in clinical trials; however, applying such stringent standards would limit the general applicability of the process measure, as well as its potential impact on quality of care.
Another area of debate is how to measure the overall quality of health systems, particularly when multiple process measures may be available for evaluation. While outcome measures can provide an overall assessment of quality, they are often difficult to use due to their inability to fully address potential confounding factors, such as patient case mix or severity of illness. Furthermore, there is a need to summarize process measures that are considered more actionable. To address this need, an alternative approach to quantifying overall quality is to use composite measures that combine two or more measures, most often process measures. These composite performance measures allow data reduction when there is a large array of individual indicators. They also allow for combining measures into a summary measure to better profile or inform decisions on clinician performance, such as pay-for-performance programs.
The most commonly used composite measures are “opportunity” scoring and “any-or-none” scoring. Opportunity scoring counts the number of times a given care process was actually performed (numerator), divided by the number of chances a clinician had to give this care correctly (denominator). Unlike simple averaging, each item is based on the percentage of eligible patients, which may vary from clinician to clinician. Any-or-none scoring is similar to composite outcomes for a clinical trial where a patient counts once towards the primary endpoint as an event regardless of how many events the patient may have had during the observation period. In this method, a patient is counted as failing if he or she experiences at least one missed process from a list of two or more processes. The use of any-or-none composites may be misleading, since this method is driven by the processes most commonly failed and, therefore, may not be representative of the overall number and range of processes that can occur during the health care process.
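The distinction between the two composite approaches can be made concrete with a short sketch. The patient records and process names below are hypothetical; each record maps a process to whether the patient was eligible for it and whether it was delivered.

```python
# Illustrative sketch of composite scoring; patient data and process names are hypothetical.
# Each record maps process name -> (eligible, received).

patients = [
    {"lvef_assessed": (True, True),  "beta_blocker": (True, True),  "acei_arb_arni": (True, False)},
    {"lvef_assessed": (True, True),  "beta_blocker": (True, False), "acei_arb_arni": (False, False)},
    {"lvef_assessed": (True, True),  "beta_blocker": (True, True),  "acei_arb_arni": (True, True)},
]

def opportunity_score(patients):
    """Processes delivered divided by total opportunities across all patients."""
    delivered = opportunities = 0
    for p in patients:
        for eligible, received in p.values():
            if eligible:
                opportunities += 1
                delivered += received
    return 100.0 * delivered / opportunities

def any_or_none_score(patients):
    """Percent of patients who received every process for which they were eligible."""
    passed = sum(
        all(received for eligible, received in p.values() if eligible)
        for p in patients
    )
    return 100.0 * passed / len(patients)

print(round(opportunity_score(patients), 1))   # 75.0  (6 of 8 opportunities met)
print(round(any_or_none_score(patients), 1))   # 33.3  (1 of 3 patients met all eligible processes)
```

In this toy example, a single commonly missed process (the β-blocker for one patient and the ACEI/ARB/ARNI for another) drives the any-or-none score down to 33%, even though three quarters of all care opportunities were met, which illustrates why any-or-none composites can be misleading.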
Outcomes are the third area highlighted by the Donabedian model as a target for quality measurement and improvement. Outcomes are easily understood as the all-encompassing measure of quality and represent the end effects of health care on patients or populations. Outcome measures for HF may include mortality, morbidity, readmission, home time, and quality of life. The rationale behind outcome measures such as 30-day mortality or 30-day readmission after a hospital stay for HF is threefold. First, process measures typically focus on narrow aspects of care, and the number of process measures that can be feasibly measured is finite. Second, because of eligibility criteria, process measures may apply to only a small segment of the population, yet there remains a need to fully assess the overall quality of the health system. Finally, existing or typical process measures may have a limited relationship to the outcomes that matter most to patients. Therefore, outcome measures can provide a broad perspective on health system performance and spur local innovation to improve the end results desired from quality care.
Accurately measuring outcomes in a manner that is fair across health systems for performance profiling is challenging because of measured and unmeasured confounding factors such as patient case mix. Drawing conclusions about quality from outcome measurements requires large sample sizes, as well as rigorous adjustment methods for case mix and other factors. The challenge of using outcomes as quality measures may be best illustrated by considering HF mortality. At face value, mortality is an important outcome; however, for a group of patients under a single health care clinician with an atypical referral pattern (e.g., a cardiac transplant physician), mortality comparisons would generally be unfair. At the health system level, case mix may still be a factor, but it is more easily addressed because of larger sample sizes and analytic methods that allow conservative comparisons between health systems, particularly when considering outliers.
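One common way to account for case mix when profiling outcomes is to compare a hospital's observed event count with the count expected from a risk model, as in the simplified observed-to-expected (O/E) sketch below. The predicted risks are assumed to come from a case-mix model and the numbers are hypothetical; actual programs (e.g., the CMS readmission and mortality measures) use more elaborate hierarchical models rather than this plain ratio.

```python
# Simplified risk-adjustment sketch: observed-to-expected (O/E) mortality ratio.
# Each patient has an observed outcome (died within 30 days) and a predicted risk
# from a case-mix model; the values here are hypothetical.

def risk_standardized_rate(patients, national_rate):
    observed = sum(p["died_30d"] for p in patients)
    expected = sum(p["predicted_risk"] for p in patients)   # model-based expectation
    oe_ratio = observed / expected
    return oe_ratio * national_rate                         # risk-standardized rate

hospital_a = [
    {"died_30d": True,  "predicted_risk": 0.20},
    {"died_30d": False, "predicted_risk": 0.05},
    {"died_30d": False, "predicted_risk": 0.10},
    {"died_30d": True,  "predicted_risk": 0.25},
]
# Observed = 2 deaths, expected = 0.60 -> O/E ≈ 3.33; with a 10% national rate,
# the risk-standardized mortality rate is ~33%, flagging worse-than-expected outcomes.
print(round(risk_standardized_rate(hospital_a, national_rate=0.10), 3))
```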
Selecting the duration of the outcome assessment window also can be difficult and depends on the overall goal. For example, how far out from an admission should a hospital be accountable for a patient’s outcome? While 30 days may seem like a reasonable amount of time to hold a hospital accountable, there is increasing pressure for health systems to be held accountable for outcomes up to 1 year. The more time that passes after a hospital admission, the more difficult it becomes to define actionable factors; long-term outcomes more than 1 year after hospital admission are unlikely to be affected by care during the index hospitalization. On the other hand, shorter observation periods may not allow enough time to accurately assess the potential benefit of a given process, such as ICD implantation or β-blocker therapy.
Notably, measuring outcomes alone is generally an insufficient means of improving quality, as improvement efforts often have to be focused on discrete processes or segments of care such as admission, discharge, and transitional care. Another challenge with measuring outcomes is that organizations using continuous quality improvement techniques need to assess performance measures at relatively frequent intervals, and over short periods some outcome measures may involve event counts that are too small or that vary considerably (similar to day-to-day changes in the stock market). Receiving reliable outcomes data in a timely fashion may also be a challenge, especially for health systems that are not integrated or whose patient care is spread across disparate organizations.
In the end, quality is measured in a variety of ways. For many purposes, quality needs to be summarized across different domains into a composite measure. The most publicized example of a composite that includes structural, process, and outcome measures is the annual US News & World Report Index of Hospital Quality, yet whether this report reflects “true” quality is a matter of considerable debate. Looking toward the future, value of care will also be increasingly emphasized, and for some measures, value will be directly incorporated based on the cost-effectiveness of a given therapy.