Quality, Accountability, and Effectiveness in Addiction Treatment: The Measurement-Based Practice Model


"If you can't measure it, you can't improve it."
Lord Kelvin (William Thomson)

Introduction

Alcohol and other drug (AOD) use disorders and related conditions pose a major threat to public health and safety in middle- and high-income countries globally. These growing problems confer a prodigious burden of disease, disability, and premature mortality, as well as a mounting economic toll from lost productivity, criminal justice involvement, and healthcare costs. To address these endemic substance-related problems, many societies provide an array of formal prevention and treatment services (e.g., screening and brief intervention, medications, outpatient, inpatient, and residential treatments, recovery housing) across a broad range of healthcare settings, as well as informal services (e.g., community-based peer supports and mutual-help organizations such as Alcoholics Anonymous [AA]). In the United States, approximately 14,000 treatment facilities focus on treating AOD problems, at a cost in the tens of billions of dollars annually. Spending on substance use disorder (SUD) treatment was projected to grow from $24.3 billion in 2009 to $42.1 billion in 2020.
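As a back-of-envelope check (not a figure from the source), the projected rise from $24.3 billion to $42.1 billion over 2009–2020 implies roughly a 5% compound annual growth rate:

```python
# Implied compound annual growth rate (CAGR) for the projected
# SUD treatment spending cited above. Illustrative arithmetic only.
start, end = 24.3, 42.1      # $ billions, 2009 and 2020
years = 2020 - 2009          # 11-year projection window

cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 5.1% per year
```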

While the cost of AOD treatment may be increasing substantially, the quality and effectiveness of treatments remain largely unknown. This is due, in part, to an implicit assumption that implementing best practices will produce better outcomes than treatment as usual; the fiscal appropriation and provision of such services is thus presumed sufficient to reduce the public health impact of AOD on the population. However, among clinicians who adopt science-based standards of care, adherence to evidence-based protocols and the clinical competence with which those protocols are delivered vary greatly. Moreover, studies have found that patients receiving evidence-based care delivered with a high degree of adherence and competence do not necessarily have better clinical outcomes than those receiving treatment as usual. The large variability in the quality and benefit of implementing presumed best practices has been compounded by some treatment programs making outrageous claims of fantastically high success rates (despite rarely, if ever, defining success) and guaranteed recovery (despite no verification of these claims).

In sum, immense variation in the uptake and implementation of evidence-based practices; large variability in the quality of the delivery of care—even when evidence-based practices are adopted; inadequate demonstration of improved patient outcomes when programs deliver evidence-based practices with high fidelity; and the overselling of AOD treatment by some sectors of the field have led to calls for much greater accountability in addiction treatment.

One way of improving the standard model of care is to move from evidence-based practice to measurement-based practice (MBP). MBP has the potential to enhance accountability among providers and programs while improving the quality and effectiveness of AOD care through the longitudinal capture, scoring, summarization, and relational graphic and/or tabular representation of brief, psychometrically validated patient-reported outcomes at the point of care. This chapter provides the following: (1) The rationale for MBP; (2) The potential benefits of MBP for patients, providers, programs, and payors; (3) A brief review of scientific findings regarding the demonstrated impact of implementing MBP approaches on care; and (4) A discussion of the implications of these findings for addiction health care.
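The MBP workflow described above—capture, score, summarize, and tabulate patient-reported outcomes over time—can be sketched in a few lines of code. This is a minimal illustration only: the three-item measure, its 0–4 response scale, and the scoring rule are hypothetical stand-ins for the brief, psychometrically validated instruments an actual MBP deployment would use.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical 3-item patient-reported outcome measure, each item rated 0-4.
ITEMS = ["craving", "use_days", "functioning"]

@dataclass
class Session:
    when: date
    responses: dict  # item name -> 0..4 rating captured at the point of care

def score(session: Session) -> int:
    """Total score; higher = more severe (hypothetical scoring rule)."""
    return sum(session.responses[item] for item in ITEMS)

def summarize(history: list) -> str:
    """Tabular point-of-care summary of scores over time, with change
    from the previous session to flag improvement or deterioration."""
    lines = ["date        total  change"]
    prev = None
    for s in sorted(history, key=lambda s: s.when):
        total = score(s)
        change = "" if prev is None else f"{total - prev:+d}"
        lines.append(f"{s.when.isoformat()}  {total:>5}  {change:>6}")
        prev = total
    return "\n".join(lines)

history = [
    Session(date(2024, 1, 8),  {"craving": 4, "use_days": 3, "functioning": 3}),
    Session(date(2024, 1, 22), {"craving": 3, "use_days": 2, "functioning": 3}),
    Session(date(2024, 2, 5),  {"craving": 2, "use_days": 1, "functioning": 2}),
]
print(summarize(history))
```

In practice, a display like this at the point of care is what lets a provider see, session by session, whether a patient is responding to treatment.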

Rationale for Measurement-Based Practice

Healthcare costs continue to rise in the United States without concomitant improvements in patients' clinical outcomes. While insurers, practitioners, and clinical programs do their best to reimburse for and provide best practices to maximize health outcomes, system-wide inefficiencies remain. Health services are often based in clinical science; ideally, they consist of practices proven under rigorous research conditions to add benefit to patients' clinical outcomes and quality of life—hence the term evidence-based practices (EBPs). One of the major problems in addiction health care, however, is determining what type and how much service to provide, because there is little data on the degree of benefit actually accrued once EBPs are implemented.

This lack of detailed data on patient outcomes is pervasive across health care and is particularly acute for the highly stigmatized addiction disorders. As a result, addiction treatment programs have recently come under increasing pressure to demonstrate effectiveness and to be more accountable in justifying their costs. If, for example, a family has $30,000 in life savings that they are willing and able to contribute toward care for a loved one's addiction, should they spend it all on 28 days in "rehab"? If so, which program? Or should they spend the money on a few years of outpatient treatment? Or perhaps a combination of residential and outpatient treatment? Where is the evidence that can truly guide such decisions?

Research to Practice Gap and Barriers to Research Implementation

Evidence-based and evidence-informed practices have been buzzwords for the past 20 years. These concepts and terms emerged from calls by the Institute of Medicine of the National Academy of Sciences to bridge the observed chasm in quality between the clinical science base and actual clinical care—the so-called "research to practice gap". This effort mobilized a generation of researchers to develop new behavioral and pharmacological treatments and to conduct rigorous studies of their potential benefit, which in turn brought widespread attempts to adopt and implement EBPs based on the results of randomized clinical trials. However, several challenges hinder the timely acquisition of systematic clinical knowledge in the behavioral health and addiction fields, as well as the dissemination and implementation of science-based clinical practice to improve patients' outcomes. These challenges include the following: (1) The long time lag between recognizing the need to answer a specific clinical question through research and being able to answer it; (2) The wide variability among addiction providers and programs in the adoption and implementation of EBPs; (3) The poor fidelity with which EBPs may actually be delivered in clinical care; and (4) The lack of convincing evidence that patients' outcomes improve when EBPs are adopted and implemented with high fidelity.

The Long Time Lag Between Posing a Clinical Research Question and Answering It Through Standard Clinical Research Protocols

One of the challenges in devising and conducting clinical trials is their expense and the sheer length of time they take to complete. A clinical researcher, for instance, may pose an important research question that needs to be answered. If motivated, the researcher will conceive of a testable study design and spend several months writing a grant application to obtain the money needed to run the study. Once the grant is submitted to a funding institution (e.g., the United States National Institutes of Health), it might take four to five months for the institution to review the grant, assign a score, and provide feedback. Even if the response is favorable, funding institutions commonly ask applicants to submit a revised application, and it typically takes another year to submit and review that revision. If the funding institution approves the study, it may take several more months to obtain the money. Only then can the researcher begin the study. Clinical studies often last several years because of the need to recruit participants and follow clinical cases longitudinally over time. Thus, it may take five years of recruitment and follow-up, statistical analyses, and writing before the study's findings begin to be published. Even then, it can take another few years for the findings to be disseminated and discovered by treatment agencies and clinicians before the results are implemented in frontline practice settings. Therefore, while the clinical researcher needed that question answered at the time he or she first thought of it and decided to study it systematically, it may be 10 years before any semblance of an answer comes to light. By that time, intervening changes (e.g., in clinical practices or funding/reimbursement structures) may render the question irrelevant. Beyond this frustratingly long timeline, the exclusion criteria used in such studies create a further problem.
If relative improvements are found under rigorously controlled experimental conditions, the benefits may not transfer to real-world clinical settings, because certain types of more severe and medically and psychiatrically complex patients—the type seen in day-to-day clinical practice—may have been excluded from those studies. Furthermore, other unintended therapeutic artifacts may operate under ideal controlled research conditions, such as additional clinical researcher attention and support (which can itself be therapeutic) and assessment reactivity, both of which can enhance outcomes in clinical trials—therapeutic elements that are typically absent in frontline clinical care.

The Wide Variability Among Addiction Practitioners and Programs in the Adoption and Implementation of EBPs

It has long been known that there is wide variation in the degree to which clinical providers and programs adopt and implement EBPs. One reason is a simple lack of knowledge about new practices or innovations (e.g., "I didn't know about it"). Most practitioners and programs do not read academic journals and are therefore not exposed to the latest innovations in clinical science, leaving them to continue their existing practices undisturbed. A further barrier is a presumed (or actual) lack of incremental benefit large enough to justify the effort and expense of implementation (i.e., "it won't be worth the hassle"). Unless clinicians and programs perceive that the expense and effort of changing practices is worthwhile, they are unlikely to adopt new practices or innovations. A study by Miller et al. found that clinicians would be willing to adopt a new addiction care practice only if it produced at least a 10% improvement in patients' outcomes. Another obstacle is structural: insufficient time, financial resources, and system support to teach and train practitioners in new practices, or to supervise them over time to ensure the new practices are maintained. Many programs and providers must maximize the time spent in face-to-face clinical treatment delivery to maximize reimbursement and cover overhead costs, and adopting and learning to deliver new practices can be an expensive disruption to healthcare systems. Finally, ideological barriers may lead providers and programs to object to the adoption and implementation of a particular approach even when the evidence is strong (e.g., methadone/buprenorphine for the treatment of opioid use disorders) because they "don't believe in it" or believe "it's unethical."
