Introduction

The Relationship Between Safety and Quality

Health care is replete with examples of errors culminating in adverse events that compromise the quality and value of care. It comes as no surprise that optimum oncological outcomes depend on excellent design and execution of a radiotherapy plan. In the clinical trial setting, there are numerous reports associating protocol variations with inferior survival or worse toxicity. Considering that protocol violations classically involve inappropriate dosing or inappropriate targeting, the same ultimate outcomes produced by error, it is no stretch to understand that error can profoundly affect outcome, and that optimum quality depends on optimum safe practice. It is only surprising that this relationship is not seen in every post-hoc analysis of protocol deviations. Imagine, if you will, the complex relationship between safety and quality of care across the varied spectrum of patients seen in any clinic. There is an idealized survival rate for any cancer. We then must adjust this for a variety of factors: current medical knowledge, safe and accurate execution of that medical knowledge, competing morbidities of the patient, and the ability and desire of the patient to tolerate and adhere to recommended therapy. The survival rate for any disease can never be higher than what the most limiting factor allows (Fig. 19.1). It is likely that this interplay explains why protocol variation and errors in radiotherapy do not always exhibit measurable decrements in outcome.

Fig. 19.1, In this figure, the limitations on the maximum attainable survival rate for three patients are shown. For Patient "A," there is a huge burden of competing morbidity; thus, the major factors limiting the potential for survival are death from other causes and tolerance to treatment. The effectiveness of treatment based on our medical knowledge and its safe and accurate execution play a minor role. For Patient "B," knowledge of the disease and available medical treatments is extremely poor; thus, adherence to therapy and its execution have essentially no import. The main factors limiting survival are the disease itself and competing morbidity. Finally, for Patient "C," there is very little competing morbidity, and tolerance to treatment is certain. For this patient, the proper application of medical knowledge through treatment and appropriate execution of that treatment is of critical value.

Despite this, we recognize the unfortunate truth that the capacity for error is intrinsically human and, wherever humans exist, the capacity for error will exist. Historically, there has been an overarching tendency to look for a scapegoat and to blame a single individual or group of "bad apples" for the incident in question. Yet apart from reckless (defined here as deliberately risk-taking) behavior on the part of the individual, the majority of adverse events are attributable instead to any number of error-permeable conditions prevalent at the system level. There can be individual factors that fail to intercept an error on its way from its origin to the "sharp end" of patient care, but these usually occur alongside a much greater number of system factors that carry the weight of culpability. This has led to the popularity of "systems thinking," which refers to the need for solutions that address the system weaknesses behind the error. Indeed, the very bottom of the hierarchy of effectiveness (Fig. 19.2) is personal vigilance, underscoring exactly why exhortation should be an uncommon corrective action.

Fig. 19.2, The hierarchy of effectiveness—the top of the pyramid has the most effective strategies and the bottom has the least effective strategies for error prevention.

The Nature of Error

It is important to recognize that there are several categories of error, which center on whether the failure occurred at the planning stage or the execution stage. Execution failures—in which an appropriate intervention is performed, but performed poorly—are called slips or lapses. Slips typically involve failures of attention: confused perception, misordering of events, or reversing events. Lapses tend to involve memory failures: omission of steps, or performing tasks without appropriate intentionality. In contrast, planning failures involve rule-based and knowledge-based mistakes. In rule-based mistakes, the agent misapplies a rule that is good and appropriate, or applies a rule that is inappropriate or poorly made. With knowledge-based mistakes, the causes are various: there can be confirmation bias, one can engage in encysting (also referred to as situational unawareness, in which one pays attention to small details while overlooking the broader picture), one can experience search satisfying, or one can fall prey to any number of other biases that we will discuss shortly. Such behavior has also been classified under the broad term "unsafe acts," which can be errors or violations. Errors can be skill based, decisional, or perceptual, whereas violations can be routine (normalization of deviance in failure to follow an ill-regarded policy) or exceptional (intoxication at work). Additionally, one can think of "latent" or "active" failures. Active errors occur at the point of interaction between the health care provider and some aspect of a larger system. Latent errors are more subtle failures of organization or workflow design that contribute to error. For instance, an active failure would be an inappropriate manual override of the table tolerance for an incorrectly shifted patient. The latent failure behind this may be a complicated simulation procedure that inadvertently encourages error in the calculation of a shift. The reader should be advised that there are many ways to classify error. The important piece of this discussion is to comprehend the many pathways for error, most of which are distinct from negligent or reckless behavior. In the majority of the aforementioned pathways to error, there is no malice or deviation from professionalism.

The role of cognitive bias, both in the commission of error and in the failure to recognize it, is profound. The Joint Commission has recognized cognitive bias as a major issue in patient safety. The world of health care is a complex system, fraught with person factors, system factors, and patient factors that make cognitive bias more likely (Table 19.1). One can easily imagine a scenario in which a normally highly proficient health care provider, overloaded with patients, working through illness and little sleep, treats a complex admitted patient with spinal cord compression under great time pressure, across several different electronic medical record systems, amid a busy clinic day, with an error resulting from this convergence of factors.

TABLE 19.1
Factors Associated With Higher Prevalence of Cognitive Bias
Adapted from Ford EC, Evans SB. Incident learning in radiation oncology: a review. Med Phys. 2018;45(5):e100-e119.
Person Factors: Fatigue; Cognitive loading; Affective bias
Patient Factors: Complex patient presentation; Elevated number of comorbidities; Lack of complete history
System Factors: Workflow design (task complexity, reliance on memory, multiple hand-offs); Insufficient time to procure, integrate, and make sense of information; Inadequate processes to acquire information (e.g., transfer); Poorly designed/integrated or inaccessible health information technology; Poorly designed environment (e.g., distractions, interruptions, noise, poor lighting); Poor teamwork, collaboration, and communication; Inadequate culture to support decision-making (e.g., lack of resources, time, rigid hierarchical structure)

The field of cognitive bias encompasses more than 150 recognized human biases that can cloud judgment and result in poor decisions or actions. Readers are directed to the work of Croskerry for a comprehensive view of these biases. Anyone who has ever been in chart rounds understands the sunk cost fallacy: the reluctance to change one's course of action when considerable time and effort have already been expended on the current course. Likewise, posterior probability error is common in diagnostic error: one assumes that because the prior cause of a headache was a migraine three times in the past, this headache must also be a migraine and not brain metastases, despite new concerning features in the presentation. Gambler's fallacy refers to the reluctance to believe that a certain event can be repeated multiple times in a row, such as three patients with leg swelling in one clinic day all having deep venous thrombosis, when the truth is that the patients have no relationship to each other and each likelihood must be considered in isolation from the others. Confirmation bias is present during chart checks of a skilled dosimetrist's plan when the plan is erroneously judged to be perfect because that is what one subconsciously expects from a normally excellent team member. Availability bias is assigning a cause based more on what readily comes to mind than on a rigorous consideration of all reasonable possibilities. Anchoring is the tendency to hold on to one's initial impression of a situation despite subsequent disconfirming evidence. In the investigation of the Lisa Norris Glasgow incident, in which a young woman being treated for a CNS malignancy received an ultimately fatal dose of radiation to her brain, an error was found in her spine plan. The investigators postulated that search-satisfying bias contributed to the miss of the second error, in the whole-brain plan, that caused her demise: those performing the plan check subconsciously stopped looking once they found a single error despite the presence of two. It is important to note that cognitive biases are also referred to as failed heuristics (failed rules of thumb) because these mental shortcuts can be quite useful in everyday life. For instance, hearing hoofbeats, one can use posterior probability to correctly conclude that horses are approaching; however, sometimes zebras appear.

Additional work in radiation medicine has been done using the NASA Task Load Index, which is helpful for understanding the environment in which error occurs. In this schema, each task is given a certain "load" based on its mental and physical demands. In these data, it becomes apparent that mistakes happen at very low and very high task-load indices. One might postulate that in the low-load period one "goes on autopilot" and inattention prevails, leaving one open to a slip. At high workloads, there is cognitive overload, predisposing one to rely on cognitive biases and commit subsequent error. This work has also shown that cross-coverage is associated with higher workloads, another clinically vulnerable situation for error. A multitude of other factors may affect decision-making quality and task performance, including rudeness (from patients, families, or team members), clinician attitudes, group gender composition and collective intelligence, and group dynamics. The clear understanding that all clinicians are vulnerable to error, and recognition of the scenarios that increase its likelihood, are essential foundations of a safety culture and a compassionate department.

The Role of Safety Culture

System weaknesses also tend to be pronounced in systems with complex interactions, for which a normal rate of accidents may be expected. Thus, the opportunity to improve quality and safety in radiation oncology must originate at the system level. This creates a culture that drives high reliability to minimize adverse events despite the intrinsically complex and hazardous work associated with the delivery of high-energy ionizing radiation. The commitment to safety at all levels of the department and system, from frontline providers to managers and executives, establishes a culture of safety that forms the foundation of "Safety is No Accident," issued by the American Society for Radiation Oncology (ASTRO). That cultural foundation acknowledges the risk associated with the treatment of patients. It requires a just culture, in which errors can be reported without fear of recrimination and collaboration is encouraged across the spectrum of job descriptions. The goal is to seek solutions to safety problems; thus, leadership must commit resources to address safety concerns.

Improving the culture of safety serves as a foundation for everyone in health care generally, and in radiation oncology specifically, for preventing or reducing errors and improving overall health care quality. Creating this culture is as important as, or more important than, any specific disease-related treatment covered in this book, because outcomes are directly related to the quality of care that our patients receive. Participating in the safety culture is paramount to everything we do every day to ensure that our patients receive the best care possible.

The Impact of Error

Clearly, the occurrence of medical error is truly devastating for the patients and families affected by it. Medical error is also devastating to the clinicians involved, leading to the use of the term "second victim." It should be noted that this term is controversial in some circles: it can be seen to detract from the patient experience or, alternatively, can be valued for the degree of urgency that it conveys. For the purposes of this chapter, the term second victim will be used for health care providers, with no intention of detracting from the patient experience of error. Involvement in medical error has been linked to physician burnout, suicidal ideation, and loss of physicians from the profession of medicine. There is an increasing movement to provide support to clinicians following error, including removing them from clinical care to process the event if possible, providing support from peers, or even a simple intervention such as a small gift to show concern for the individual. Hospital-wide peer-support programs for clinicians in times of adverse events have been found to be highly valued and highly cost-effective. Readers are directed to learn more about what helps clinicians in times of error; establishing a moral context, teaching others about the error, and becoming an expert are strategies that can help individuals thrive after an error experience.

Systems Engineering

Health care quality is defined by the Institute of Medicine (IOM) as "the degree to which health care services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge." Six goals for defining quality care have been identified: that care is safe, effective, timely, efficient, equitable, and patient centered. Although patient-centered care is somewhat ambiguous, the reader should consider it to be timely, dignity-promoting, respectful of privacy, quality-of-life focused, culturally respectful, and inclusive of shared decision-making.

The conceptual work of Donabedian forms the basis for the IOM framework, which established seven pillars for quality: efficacy, effectiveness, efficiency, optimality, acceptability, legitimacy, and equity (Table 19.2). This framework also requires that the connections and links between the dimensions of structures, processes, and outcomes be understood before quality can be assessed.

TABLE 19.2
The Seven Pillars of Quality Defined
Adapted from Donabedian A. The seven pillars of quality. Arch Pathol Lab Med. 1990;114(11):1115-1118.
Quality Pillar: Definition
Efficacy: The ability of care, at its best, to improve health
Effectiveness: The degree to which attainable health improvements are realized
Efficiency: The ability to obtain the greatest health improvement at the lowest cost
Optimality: The most advantageous balancing of costs and benefits
Acceptability: Conformity to patient preferences regarding accessibility, the patient-practitioner relation, the amenities, the effects of care, and the cost of care
Legitimacy: Conformity to social preferences concerning all of the above
Equity: Fairness in the distribution of care and its effects on health

Radiation medicine standards have been profoundly impacted by Donabedian's work. This framework is the basis for patterns-of-care studies in radiation medicine and for the dimensional triad of structure, process, and outcome associated with 10% of more than 400 global standards in radiation medicine. Structural aspects, which sit upstream of processes, are the easiest to measure, especially by accreditation organizations; examples include physician or physicist board certification, therapist certification, hospital Joint Commission status, and patient volumes treated. Process measures are easier for health care providers to relate to; they are proximal to errors, can be benchmarked relatively easily against policies and medical records, require less follow-up, and provide direct feedback, but they must be linked to outcomes. Process indicators include the appropriate use of chemotherapy and/or radiation for a given disease stage, margin status or completeness of surgical nodal evaluation, pain control, and adequacy of dose prescriptions. Multiple process measures are considered to reflect the multidisciplinary nature of oncology care and the variation in quality that might occur across disciplines but within a given patient's treatment course. Although some might defend such variations as "the art of medicine," deviations from radiotherapy treatment protocols built on firm structural and process bases have been linked with poorer patient outcomes. All dimensions should be considered collectively, along with an understanding of the causes of deviation and variation, to assess quality in an "environment of watchful concern."

The International Council on Systems Engineering (INCOSE) defines systems engineering as "an interdisciplinary approach and means to enable the realization of successful systems," designed to allow for excellent functioning over the lifespan of the system. This science began in the 1930s; it focuses on the system as a whole, with particular emphasis on maintaining communication and managing uncertainty and complexity in the interaction of its components, including the human-machine interface. It facilitates the translation of qualitative customer demands into concrete, quantitative product and process design features through discovery, learning, diagnosis, and iterative conversations.

Six Sigma methods (introduced by Motorola in the 1980s) are statistically driven methods that strive to achieve high-quality process performance relative to client expectations. Quantitatively, the aim is to minimize defect (or error) rates to fewer than 3.4 per million opportunities. The sigma level is a measure of the ability of a process to achieve a desired mean value centered within a tolerance range. Ideally, the variability of the process itself (its standard deviation) is much smaller than the difference between the mean value and the limits of the tolerance range; a defective result is one that falls outside the tolerance range. Thus, a Six Sigma process is one in which the standard deviation is one-sixth of that difference (between the mean value and the limits of the tolerance range), corresponding to a long-term defect-free rate of 99.99966% (Fig. 19.3). Even under repeated testing, such processes are resistant to variation and exceedingly reliable. Reaching this level of nearly defect-free performance requires a dedicated system with an excellent understanding of the factors that affect the process, their variations, and effective strategies for process control.

Fig. 19.3, A schematic of the definition of Six Sigma.
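To make the arithmetic behind the sigma level concrete, the short sketch below (Python, using scipy) computes the expected defects per million opportunities at several sigma levels. The 1.5-sigma long-term shift conventionally assumed in Six Sigma practice, the function name, and the printed values are illustrative assumptions rather than part of the source text.

```python
# A minimal sketch of how the sigma level maps to defect rates.
# Assumes the conventional 1.5-sigma long-term drift of the process mean.
from scipy.stats import norm

def defects_per_million(sigma_level, long_term_shift=1.5):
    """Expected defects per million opportunities for a process whose
    mean has drifted by `long_term_shift` standard deviations."""
    # Probability of exceeding the nearer tolerance limit after the shift,
    # plus the (negligible) contribution from the far tail.
    p_defect = (norm.sf(sigma_level - long_term_shift)
                + norm.cdf(-(sigma_level + long_term_shift)))
    return p_defect * 1_000_000

for k in (3, 4, 5, 6):
    print(f"{k}-sigma process: ~{defects_per_million(k):,.1f} defects per million")
# A 6-sigma process yields roughly 3.4 defects per million opportunities,
# i.e., a long-term defect-free rate of about 99.99966%.
```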

Process Engineering and Radiation Medicine

Quality function deployment (QFD) is a hierarchical, iterative systems engineering approach to quality, also referred to as customer-driven engineering. The success, or quality, of QFD is judged by the customer's satisfaction with the service or product. QFD incorporates the customer's desires along with those of company stakeholders. Input from both sources is sought along the continuum of product design stages, including parts requirements, manufacturing processes, and quality control. Management, technical, and business elements are considered integratively, correlations between key enabling factors are made transparent, and efforts are prioritized to ensure that quality work is directed at key control parameters (KCPs) and key noise parameters (KNPs).

The process starts with obtaining key customer requirements, also known as critical-to-quality (CTQ) characteristics, or "Ys." These are colloquially termed customer "wows, wants, and musts" to reflect the desirability of each item and where it lies within customer expectations. The relative importance of each CTQ characteristic is ranked by the customer, and industry benchmarks are sought by the team. Next, the technical product characteristics, or "Xs," required for each of the CTQ characteristics are established. The magnitude of the correlation (high, medium, low, or numerical ranks) between each X and all Ys, along with its direction (increase, decrease), determines their overall relationships. The Pareto principle maintains that 80% of the output in a given system is produced by 20% of the input. The Xs are Pareto sorted in order of their overall impact on all Ys using the weighted rank sum. For an example of how such charting is useful, see Fig. 19.4.

Fig. 19.4, An example Pareto Chart for radiotherapy patient complaints. Through this Pareto chart, you can see that 80% of the complaints in this department relate to issues with parking and waiting times for treatment. Therefore, quality management efforts would be best focused on these two issues.
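To illustrate how the X-to-Y correlations and customer weights translate into a Pareto ordering, the following sketch (Python) computes a weighted rank sum for each technical characteristic and sorts the Xs by their overall impact on the Ys. The characteristics, weights, and relationship strengths are hypothetical examples, not data from the source.

```python
# A minimal sketch of the QFD "weighted rank sum" used to Pareto-sort
# technical characteristics (Xs) by their impact on customer requirements (Ys).
customer_weights = {                      # CTQ characteristics (Ys) and importance
    "short wait for treatment": 9,
    "accurate setup": 10,
    "clear communication": 6,
}

# Relationship strength of each technical characteristic (X) to each Y
# (9 = high, 3 = medium, 1 = low; omitted = none).
relationships = {
    "on-line scheduling system": {"short wait for treatment": 9, "clear communication": 3},
    "daily image guidance":      {"accurate setup": 9},
    "standardized time-out":     {"accurate setup": 3, "clear communication": 9},
}

def weighted_rank_sum(x):
    return sum(customer_weights[y] * strength
               for y, strength in relationships[x].items())

# Pareto sort: the Xs with the largest weighted rank sums are addressed first.
for x in sorted(relationships, key=weighted_rank_sum, reverse=True):
    print(f"{x}: {weighted_rank_sum(x)}")
```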

In radiation medicine, QFDs aid in the selection and prioritization of Lean Six Sigma projects and in risk mitigation. This approach may also help us achieve a more patient-centered practice by aligning our departments with patients' needs and increasing patient satisfaction.

Retrospective and Prospective Error Analysis

The safety pillar of the IOM framework is part of the imperative that leads us to pursue risk mitigation in radiation medicine. Both predictable errors and prior errors are appropriate foci for analysis; incorporating both analyses is optimal for surveillance. Root cause analysis (RCA, a retrospective tool) and failure mode and effects analysis (FMEA, a prospective tool) are useful systems and safety engineering tools that can also be of value in Six Sigma projects. A key point about these tools is that a single individual cannot perform them successfully: they must be performed with a multidisciplinary team, ideally representing all of the professions involved in a given workflow.

RCA is usually done when errors of significance come to the attention of the multidisciplinary team. The goal is to identify and ultimately improve or eliminate contributory factors that lead to unsafe conditions, near misses, or incidents that reach the patient. The taxonomy of contributing factors deployed in the Radiation Oncology Incident Learning System (RO-ILS) for radiation medicine is based on the principles of RCA.

FMEA is most useful when contemplating a process change or implementing new technology in the clinic. Team members identify potential weak points in the process: the steps in which errors may occur, how they may appear, what causes them, and what existing controls may restrain them. Three risk assessments are assigned on ordinal scales corresponding to the potential severity of errors, their likelihood of occurrence, and their likelihood of detection if they do occur. It may help to think of detectability as a scale of "undetectability," because it ranks how likely a failure is to escape every existing control, with a top score of 10. The other two scales rank logically according to their names (very severe = 10, occurs very often = 10). The product of these three scores is the composite risk, the risk priority number (RPN), which is later used to prioritize the order in which these process weaknesses are addressed. When ranking severity, many advise that the worst possible outcome of a failure be considered. A complete FMEA process then repeats the analysis after the new controls are devised, so that any new inadvertent pathways for error are identified and assessed for control procedures.
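As a concrete illustration of the scoring described above, the sketch below (Python) computes a risk priority number for a few failure modes and ranks them. The failure modes and ordinal scores are invented for illustration and do not represent a validated clinical FMEA.

```python
# A minimal sketch of FMEA risk priority number (RPN) scoring.
failure_modes = [
    # (description, severity, occurrence, detectability), each scored 1-10
    ("wrong isocenter shift entered at simulation", 9, 4, 6),
    ("outdated contour set used for planning",      7, 3, 4),
    ("table tolerance manually overridden",         8, 2, 7),
]

def rpn(severity, occurrence, detectability):
    # A higher detectability score means the failure is HARDER to detect.
    return severity * occurrence * detectability

# Rank the process weaknesses so the highest composite risk is addressed first.
for desc, s, o, d in sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True):
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```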

Six Sigma Define-Measure-Analyze-Improve-Control for Quality Improvement

Define-Measure-Analyze-Improve-Control (DMAIC) is a data-driven Six Sigma approach widely used in quality management. This approach conceptualizes improvement as five sequential phases. The first three phases concentrate on identifying and understanding the problem, while the last two focus on developing and sustaining solutions. DMAIC requires that measurable performance metrics be identified. For the efficient use of a DMAIC process, the scope of the problem must be fairly contained and well understood. Full completion of the cycle is necessary to realize the benefit of a DMAIC process. All six of the IOM framework's domains of quality are amenable to improvement through a DMAIC approach. It has been used productively in many health care settings, as well as to improve safety processes within radiation medicine.
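As a rough illustration only, the sketch below (Python) lays out how the five DMAIC phases might map onto a hypothetical, well-contained radiation oncology improvement project; the project, metric, and activities are assumptions made for illustration, not recommendations from the source.

```python
# A minimal sketch of a DMAIC plan for a hypothetical project:
# reducing the interval from CT simulation to the first treatment.
dmaic_plan = {
    "Define":  "Scope the problem: simulation-to-treatment interval exceeds the department target.",
    "Measure": "Collect baseline interval data for all patients over a defined period.",
    "Analyze": "Identify the process steps and factors that drive the longest delays.",
    "Improve": "Pilot targeted changes and remeasure the same metric.",
    "Control": "Monitor a run chart with defined triggers so the gain is sustained.",
}

for phase, activity in dmaic_plan.items():
    print(f"{phase}: {activity}")
```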
