Quality Measures and Performance Improvement for Breast Cancer Surgery


The first 15 years of the 21st century witnessed an exponential increase in the emphasis on quality and performance measurement for providers, as well as hospital systems. Furthermore, payers of care, including the United States government, have proposed shifting US medical reimbursement policy to payments that are tied to quality and performance instead of the number of services provided or procedures performed. For instance, the Department of Health and Human Services set a goal of tying 90% of Medicare payments to quality or value by 2018, with 50% flowing through Alternative Payment Models (APMs) and further increases anticipated thereafter. Currently, this target has not been achieved. In addition, public transparency of performance metrics for which providers will be held accountable will dramatically increase in the coming years. Because these shifts are so consequential—and likely irrevocable—it behooves us to understand the opportunities and pitfalls that we face in advance of these changes. In this rapidly evolving framework of health care measurement, surgeons and other providers of care must be engaged to ensure that developed measures are meaningfully related to clinical quality and are reliable and accurate. This chapter is a primer for breast surgeons on this critical issue of quality measurement and performance improvement.

Why measure quality?

The quality of health care received, including breast cancer care, varies by geographical region, institution, care provider, and patient characteristics such as race and socioeconomic status. These variations account for the known disparities and inequities that currently exist in health care delivery. Access to care and quality of care sometimes differ by socioeconomic status, level of education obtained, insurance status, and race. Documented quality concerns also include variations in appropriateness of care, with under- and overutilization of services resulting in differences in cost without adding value. The Institute of Medicine (IOM) provides notable examples of quality of care gaps in its seminal publications To Err Is Human, Crossing the Quality Chasm, and Delivering High Quality Cancer Care: Charting a New Course for a System in Crisis. Such gaps exist along the entire continuum of cancer care, from diagnostic evaluation to survivorship, and include safety issues, unacceptable variability of care, and examples of failure to adopt evidence-based best practices in cancer care. Examples of these gaps include the use of more invasive diagnostic surgical excisional biopsies instead of needle biopsies and disproportionate numbers of patients undergoing breast-conserving therapy without receiving breast irradiation, with rates varying by economic and insurance status. Some variability is evident even within institutions participating in national clinical trials.

There are other reasons to measure quality. Health care spending in the United States is currently rising faster than anywhere else in the world. Not only is this not sustainable, but more importantly, it has not been shown to be associated with improved outcomes. The Commonwealth Fund concludes that, when ranked on cost and outcomes, the overall health care performance of the United States is lower than that of nearly all of the countries of Western Europe, New Zealand, and Australia. In a cross-national comparison of health care spending per capita in 11 high-income countries, spending in the United States far exceeded that of the other countries. This disparity was largely driven by greater utilization of expensive technologies and notably higher prices. Despite this heavy investment, the United States ranked lower in many measures of population health, such as life expectancy and the prevalence of chronic conditions. An inverse association usually exists between quality and cost of care, such that improved quality of care would be expected to lower health care costs, which would in turn help to address our health care fiscal crisis. Increased cost of care is often associated with overutilization or waste of care. These and other observations motivated multiple professional organizations, including the American Society of Breast Surgeons (ASBrS), to participate in the American Board of Internal Medicine’s Choosing Wisely campaign, a national effort aimed at promoting appropriate care and reducing wasteful care. For example, routine systemic imaging to search for metastatic breast cancer preoperatively in patients with early-stage breast cancer is noncompliant with evidence-based guidelines and has not been shown to improve patient survival. Instead, it increases health care costs and leads to unnecessary additional testing and biopsies for false-positive findings.

Although there has been significant progress in developing quality measures (QMs), these data often do not make it into the hands of the frontline providers. As a result, simply developing valid and meaningful approaches to not only measure, but also report, quality can have a profound and lasting impact on the way in which we deliver care. Many studies have provided proof of concept that measuring breast cancer quality of care and providing peer performance comparison usually correlates with improved care. Providers often know what needs to be done to improve care and can be motivated to improve if they know where their current performance stands relative to benchmarks.

Who are the stakeholders for quality measurement?

The stakeholders interested in health care quality include not only patients and their providers, but also purchasers, payers, professional organizations, policy makers, patient advocacy groups, and governmental agencies that oversee health care delivery. Although all stakeholders desire high-quality care, their perspectives and goals may differ. A historical example of differing stakeholder opinions occurred more than a decade ago, when breast cancer patients desired reconstruction after mastectomy but could not afford it because it was not covered by their insurance plans. From the payer’s perspective, reconstruction was costly and characterized as an optional “cosmetic” operation. In contrast, from the provider and patient perspective, reconstruction was a critical component of care—a way to restore normality, both physically and emotionally, after cancer surgery. Resolution of these conflicting perspectives did not occur until federal legislation, the Women’s Health and Cancer Rights Act of 1998, mandated reimbursement. This regulatory action resulted from a collaboration between stakeholders that included policy makers.

What is the US history of surgical quality measurement?

Ernest Codman at Massachusetts General Hospital was one of the first surgeons to advocate for measurement of surgical outcomes, as he believed that these tools provided the foundation for improvements in quality of care. Dr. Codman reported the end results of 337 surgical patients treated between 1911 and 1916. He concluded that there were 123 errors and 4 “calamities.” He advocated for surgeon accountability and transparency, and he recommended development of national patient registries to monitor metrics over time. These were prescient concepts and remain relevant today. One such example is the use of morbidity and mortality conferences to review patient outcomes and address related patient safety or treatment concerns. Unfortunately, most of Dr. Codman’s peers disagreed with him and, as a result of his quality advocacy, he was the victim of professional rebukes and social ostracism. There were likely many other surgical leaders interested in quality and outcomes research after the era of Ernest Codman, but few seminal conceptual articles were published on quality measurement over the next half-century, perhaps reflecting Codman’s chastening experience.

In the 1960s, Avedis Donabedian, a health care researcher at the University of Michigan, began refining the concepts of quality measurement, moving away from unstructured peer-review methods, such as traditional morbidity and mortality conferences, toward an objective “outcome-driven process.” He created a taxonomy of quality measurement based on structure, process, and outcomes of care—later called the Donabedian trilogy—which is still in use today. Since the late 1990s, additional domains of quality to audit have been recommended and include patient access, patient experience, affordability, population health, safety, effectiveness, and efficiency.

Moving beyond theoretical concepts and motivated by recognition of disparate care in the Department of Veterans Affairs (VA) hospitals, methods to provide hospital-level peer performance comparison for surgical outcomes were eventually developed in 1991. Beginning in 1994, Khuri and colleagues described the system of auditing, peer comparison, and risk-adjusted analytics used by the American College of Surgeons (ACS) and the VA to report surgical outcomes. They provided hospitals with report cards of postoperative morbidity and mortality. In 2004 the VA hospital program evolved into the ACS National Surgical Quality Improvement Program (NSQIP). Quality measurement programs specific to cancer include the Commission on Cancer (CoC); for breast cancer specifically, there exist the National Accreditation Program for Breast Centers (NAPBC), the ASBrS Mastery of Breast Surgery Program, and the National Consortium of Breast Centers (NCBC). These programs have been recently summarized. The contemporary strategies to improve quality of care that are endorsed by multiple organizations are listed in Table 75.1 .

Table 75.1
National Health Care Policy Stakeholder Quality Improvement Objectives
Organization Objective Name Objectives
Institute of Medicine a Six Aims
  • Safe

  • Effective

  • Patient centered

  • Timely

  • Efficient

  • Equitable

Institute for Healthcare Improvement b Triple Aim
  • Improve the patient experience

  • Improve population health

  • Reduce per capita cost

Agency for Healthcare Research and Quality and the National Quality Strategy c Three Aims
  • Better care

  • Healthier communities

  • More affordable care

Six Priorities
  • Safer care

  • Person and family partnership in care

  • Effective communication and coordination of care

  • Effective prevention and treatment practices

  • Promoting best practices in communities for health

  • More affordable health for patients, employers, and governments

Nine Levers
  • Provider measurement and feedback

  • Public reporting of cost, care, and outcomes

  • Learning and technical assistance to help organizations

  • Certification, accreditation, and regulation to meet quality standards

  • Consumer incentives to adopt healthy behavior

  • Provider rewards and incentives

  • Improve health information technology efficiency

  • Foster innovation and rapid adoption

  • Workforce development—invest in next generation of health care providers

a Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century . Washington, DC: National Academies Press; 2001.

b The IHI Triple Aim. Institute for Healthcare Improvement http://www.ihi.org/engage/initiatives/TripleAim/Pages/default.aspx .

c The National Quality Strategy. Agency for Healthcare Research and Quality http://www.ahrq.gov/workingforquality .

What are quality and value?

Many organizations have defined quality (see Table 75.1 ). The IOM defines health care quality as an iterative process with six domains: safety, effectiveness, timeliness, efficiency, equitability, and patient centeredness.

Value of care has a broader meaning than quality of care. In addition to quality, cost is considered. Spending on breast cancer care is substantial—not surprising because it is the most commonly diagnosed nonskin cancer in women. Breast cancer expenditures account for the largest share of cancer-related spending. In 2006 Porter and colleagues defined a value metric as “patient health outcomes achieved per health care dollars spent.” Others describe it as the simple ratio of quality to cost of care. In his monograph Discovering the Soul of Service, Leonard Berry takes a patient-centered approach, defining the value of care as the ratio of quality to burdens, in which burdens can be both financial and human. Using a Surveillance, Epidemiology, and End Results (SEER)-Medicare database, Hassett and colleagues explored the relationship between breast cancer care, cost, and outcomes—as measured by adherence to recommended treatments and survival—in 99 geographic regions. They identified significant variability in all three. They failed to identify an association between survival and cost of care. Process measures that assessed necessary and unnecessary therapies did not correlate with survival, and they found evidence of expenditures for treatments that were not recommended. Other investigations have also provided evidence of unnecessary expenditures that did not aid survival, suggesting overutilization of some services with no gain. As the United States moves away from a system of fee-for-service and toward value-based reimbursement, many definitions of value are already in use or anticipated. Nearly all have or will have a quality metric in the numerator and a cost metric in the denominator.
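The ratio definitions above (outcomes per dollar, or quality over cost) can be made concrete with a minimal sketch. The function and all figures below are hypothetical illustrations, not a validated value metric:

```python
# Illustrative sketch of a value metric as a ratio of quality to cost.
# The quality scores and costs below are hypothetical, for illustration only.

def value_metric(quality_score: float, cost_dollars: float) -> float:
    """Return a simple value ratio: quality achieved per health care dollar spent."""
    if cost_dollars <= 0:
        raise ValueError("cost must be positive")
    return quality_score / cost_dollars

# Two hypothetical care episodes with equal quality but different cost:
value_a = value_metric(quality_score=0.92, cost_dollars=20_000)
value_b = value_metric(quality_score=0.92, cost_dollars=28_000)
assert value_a > value_b  # the same outcome at lower cost yields higher value
```

The comparison captures the chapter's point that identical outcomes achieved at higher cost represent lower value.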

What are safety in surgery and diagnostic errors?

Diagnostic errors and patient safety are commonly included under the rubric of quality . Surgical safety is of great importance for all surgical subspecialties, and safe surgery is necessary to achieve optimal outcomes in breast cancer care. Surgical safety checklists and safety improvement methodology have been reviewed elsewhere. Direct observation of care may identify root causes of safety issues.

Diagnostic errors in medicine are common but underreported, according to a recent comprehensive review. The IOM’s updated 2015 definition of diagnostic error is “the failure to (a) establish an accurate and timely explanation of the patient's health problem(s) or (b) communicate that explanation to the patient.” This definition frames this quality issue from the patient’s perspective, recognizing that errors may in turn lead to patient harm. A diagnostic error can occur at any stage in the diagnostic process, and there is a spectrum of patient consequences related to these errors ranging from no harm to severe harm. Diagnostic errors have been described as “missed opportunities,” either errors of omission (failure to order tests or procedures) or commission (ordering unnecessary tests, such as overutilization of systemic imaging in early-stage breast cancer patients). Relevant examples of diagnostic errors of omission in breast cancer care include delayed diagnoses attributable to misses on screening mammography, continued antibiotic treatment for a missed diagnosis of inflammatory breast cancer, and failure to recognize imaging-pathology discordance of breast lesions after image-guided needle biopsy that showed benign, indeterminate, or high-risk findings. Multiple reviews and publications have highlighted the reasons for a delayed diagnosis of breast cancer.

How do we identify a gap in the quality of care?

The World Health Organization (WHO) identifies gaps in health care quality by examining variability of care. Gaps are identified when measurements of actual care do not match achievable care and when variability of performance coexists with evidence that high levels of performance are obtainable. For breast care, searches for variability can be conducted in national databases such as the SEER and the National Cancer Database (NCDB). For example, interrogation of the NCDB and the ASBrS databases recently identified wide variability of reexcision rates after breast-conserving surgery for cancer, indicating a performance gap. Searching for variability has been the primary method for identification of disparities and inequities of care based on socioeconomic status, race, location of care, and other patient and provider characteristics.
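The WHO logic above—variability plus evidence that high performance is attainable—can be sketched as a simple screen over provider-level rates. All providers, rates, and the benchmark below are hypothetical:

```python
# Hedged sketch of quality-gap identification: compare hypothetical provider-level
# reexcision rates after breast-conserving surgery against an achievable benchmark.
# Providers, rates, and the benchmark are all illustrative, not real data.

reexcision_rates = {  # provider -> observed reexcision rate (hypothetical)
    "provider_a": 0.12,
    "provider_b": 0.31,
    "provider_c": 0.45,
    "provider_d": 0.18,
}

benchmark = 0.20  # an illustrative achievable level of performance

# A gap is suggested when variability coexists with evidence that the benchmark
# is attainable (i.e., some providers already meet it).
attainable = any(rate <= benchmark for rate in reexcision_rates.values())
outliers = sorted(p for p, r in reexcision_rates.items() if r > benchmark)

if attainable and outliers:
    print("Potential quality gap; providers above benchmark:", outliers)
```

In practice such screens require risk adjustment and adequate case volumes before any provider is labeled an outlier; this sketch shows only the variability-versus-benchmark logic.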

Where are the databases for quality and clinical outcomes research?

Institutions throughout the United States are generating an ever-increasing amount of electronic data via increased user adoption of electronic medical record platforms. The American Recovery and Reinvestment Act of 2009 included $17 billion in Medicare/Medicaid incentive payments for the adoption of electronic health records (EHRs) starting in 2011, and reductions in Medicare/Medicaid reimbursement for nonadopters beginning in 2015. In addition to the obvious economic implications, this mandate represents an enormous opportunity to evaluate quality and clinical outcomes if data are entered, stored, and extracted appropriately. The ability to access and analyze these data will markedly improve our ability to study practice patterns, compare the effectiveness of treatment beyond clinical trials, evaluate performance, and provide objective feedback to practitioners. It will also enable us to identify bottlenecks within the system that can be targeted for process improvement initiatives. Although significant progress is being made, we currently still rely primarily on the secondary use of administrative data and manual abstraction from cancer registries.

The secondary use of administrative data, the most common data source for such activities, is limited for breast cancer care by the need for accurate pathologic data, such as stage and histology. The abstraction of administrative data from hospital systems (e.g., discharge data) can be available from a variety of sources, such as state hospital associations or private consortiums, such as the University Health System Consortium. Other administrative data exist in the form of insurance claims. Such data are more flexible than discharge data but can be cumbersome to analyze and often represent a biased sample of the population. For example, Medicare claims are often used to evaluate quality and outcomes, but Medicare data come only from patients aged 65 years and older. Thus their utility is limited for cancers that primarily occur in young patients (e.g., testicular cancer) or where the biology and treatment of the disease in elderly patients are different from those in younger patients (e.g., breast cancer). Furthermore, claims data cannot be linked to the medical record for more specific information as advances are made (e.g., genomic profiling). These data are, however, extremely well suited to surgical questions. Because they are tied directly to payment, surgical and other procedure codes are reliable and represent a clear, distinct intervention.

Clinical cancer registries exist at the institutional, state, and national levels. State cancer registries vary greatly in terms of the completeness of data entered, as well as the specific variables collected. The most common national cancer registries are the SEER and the NCDB of the CoC. The NCDB abstracts data from the more than 1200 CoC-accredited institutions, representing more than 70% of the newly diagnosed cancers in the United States. It is an extremely powerful tool for investigating quality and outcomes that are directly tied to clinical care; however, limitations in the capture of treatment information can limit its application for such measures. It is also important to recognize that the NCDB is not a population-based data set. SEER, on the other hand, now collects data from 18 sites around the United States that were selected to ensure representation of socioeconomic and racial diversity. Because it includes all individuals living in those sites, it is a population-based data source, which can be important for generalizability. Another major and well-recognized limitation of cancer registries is the inability to accurately track recurrence. As patients move from one geographic region to another, and even between hospitals within a region, longitudinal outcomes can be difficult to track, and capture of recurrence in particular is noted to be unreliable. The latter can in some cases be addressed by institutional tumor registries; however, these are not mandated nationwide and thus data are severely lacking.

Such registries can provide accurate pathologic and other cancer-specific information, but often have less accurate information on treatment, and little to no information on comorbidity. In addition, data elements in cancer registries are manually collected and entered, making registries extremely resource-intensive to maintain and resulting in a considerable lag time before data are available for analysis. Several initiatives have linked cancer-specific variables from registry data to the details of treatment available in administrative data in order to take advantage of both sources. The best example is SEER-Medicare, which links cancer registry data from SEER with Medicare claims data.

Many of the data elements that are lacking in traditional administrative data sources, including cancer-specific variables, are available as unstructured free text in EHRs. To access these data in a cost-efficient manner, automated approaches can be used to extract key measures pertaining to patient, disease, and treatment characteristics. To date, a number of EHR-based initiatives abstract data from multiple institutions and systems to create a virtual data warehouse. Perhaps the most relevant for the quality and outcomes of breast cancer patients is the National Cancer Institute–funded Cancer Research Network, which draws from multiple health maintenance organizations. With the continued expansion of EHRs, a number of new national initiatives aim to increase the availability of these data for the evaluation of clinical effectiveness, quality, and outcomes. The National Patient-Centered Clinical Research Network, a major initiative of the Patient-Centered Outcomes Research Institute, funds the creation of multiple large data warehouses with standardized data structure. Further, it is increasingly recognized that abstraction of unstructured free text within the EHR is a resource-intensive and often inaccurate method of data collection. As such, the use of structured data elements is gaining momentum. This highlights the importance of informaticists on the health care team, whose expertise in the design of databases and registries optimizes their functionality.
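As a toy illustration of the automated extraction described above, a pattern match can pull a cancer-specific variable out of free pathology text. The report string, function name, and pattern are all hypothetical; production systems rely on far more robust natural-language processing:

```python
# Minimal sketch of automated extraction of a structured variable from
# unstructured pathology text, assuming a simple "ER: positive" report style.
# The report text and pattern are hypothetical illustrations.
import re

report = "Invasive ductal carcinoma, 1.4 cm. ER: positive. PR: negative."

def extract_receptor(text: str, receptor: str):
    """Return 'positive' or 'negative' for a named receptor, or None if absent."""
    match = re.search(rf"{receptor}:\s*(positive|negative)", text, re.IGNORECASE)
    return match.group(1).lower() if match else None

er_status = extract_receptor(report, "ER")  # 'positive'
pr_status = extract_receptor(report, "PR")  # 'negative'
```

Even this simple pattern fails on negation, abbreviations, and free-form phrasing, which is exactly why the chapter notes that free-text abstraction is often inaccurate and why structured data elements are gaining momentum.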
