Children may be harmed by the healthcare that aims to make them better. Such harms include central line–associated bloodstream infections (CLA-BSIs) and medication overdoses. In 1991 the Harvard Medical Practice Study reviewed a large sample of adult medical records from New York State and found that adverse events occurred in an estimated 3.7% of hospitalizations. Most events gave rise to minimal or transient disability, but 13.6% led to death. The Institute of Medicine (IOM) estimated that as many as 98,000 Americans die in the hospital each year from medical errors.
Although fewer data are available for children, it is clear that children experience substantial healthcare-related harm. Nationally, hospitalized children experience approximately 1,700 CLA-BSIs and 84,000 adverse drug events each year. Although the evidence is less robust, and not without controversy, substantial progress has been reported, particularly for healthcare-associated conditions (HACs). Epidemiologic estimates are weaker for adverse events in the ambulatory environment, but these events are likely more common than reported.
The Solutions for Patient Safety (SPS) collaborative started with the 8 children's hospitals in Ohio and has expanded to include >130 hospitals across the United States and Canada (http://www.solutionsforpatientsafety.org). The collaborative uses a learning network model to pursue the aim of eliminating serious harm across all children's hospitals. The American Academy of Pediatrics (AAP), Children's Hospital Association, and The Joint Commission (TJC) have also convened improvement collaboratives in pediatric safety. In addition, the healthcare field has recognized the high rates of healthcare worker injury and the critical role that the safety of healthcare providers plays in outcomes, burnout, and safe patient care.
Clinical leaders, improvers, and researchers often employ measures of error and harm to understand and improve safety, but the differences between these 2 measures can lead to confusion. Errors occur when a physician, nurse, or other member of the healthcare team does the wrong thing (error of commission) or fails to do the right thing (error of omission); errors of omission (e.g., not arriving at the right diagnosis) are considerably more difficult to measure. Harm, as defined by the Institute for Healthcare Improvement, is "unintended physical injury resulting from or contributed to by medical care (including the absence of indicated medical treatment), that requires additional monitoring, treatment, or hospitalization, or that results in death." Most errors in healthcare do not lead to harm, and harm may be either preventable or nonpreventable (Fig. 5.1). A physician may erroneously omit a decimal point in a medication order for an aminoglycoside antibiotic, ordering a gentamicin dose of 25 mg/kg rather than the intended dose of 2.5 mg/kg. If the computerized order entry system or the pharmacist catches this error, it is an error with no resultant harm. If the error were not caught, reached the patient, and the patient suffered acute kidney injury, it would be preventable harm, since evidence shows that pharmacist review can reduce the risk of such errors 10-fold. Alternatively, if a patient received a first lifetime dose of amoxicillin and had anaphylaxis requiring treatment and hospital admission, this harm would be considered nonpreventable, since no valid predictive tests are available for antibiotic allergy. Furthermore, the concept of latent risk, independent of any actual error, is inherent in any system where patients can be harmed. Among errors that do not lead to harm, near misses that do not reach patients, and high-risk situations that do not lead to harm because of good fortune or mitigation, are important learning opportunities about safety threats.
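To make the gentamicin example concrete, the following is a minimal sketch, in Python, of the kind of dose range check a computerized order entry system might apply before an order reaches the pharmacist; the drug limits, data, and function names are hypothetical illustrations, not any vendor's actual logic or clinical dosing guidance.

```python
# Illustrative dose range check, as a computerized order entry system
# might perform. The limits below are hypothetical placeholders,
# not clinical dosing guidance.

# Hypothetical maximum single doses in mg/kg for a few drugs.
MAX_DOSE_MG_PER_KG = {
    "gentamicin": 7.5,    # placeholder upper bound for illustration
    "amoxicillin": 50.0,  # placeholder upper bound for illustration
}

def check_order(drug: str, dose_mg: float, weight_kg: float) -> str:
    """Return 'OK' or a warning if the ordered dose exceeds the limit."""
    per_kg = dose_mg / weight_kg
    limit = MAX_DOSE_MG_PER_KG.get(drug)
    if limit is None:
        return f"{drug}: no limit on file; route to pharmacist review"
    if per_kg > limit:
        return (f"ALERT: {drug} order of {per_kg:.1f} mg/kg exceeds "
                f"the {limit} mg/kg limit; hold for pharmacist review")
    return "OK"

# For a hypothetical 10 kg infant: the intended 2.5 mg/kg dose passes,
# while the decimal-point error (a 250 mg order, i.e., 25 mg/kg) is flagged.
print(check_order("gentamicin", 25.0, 10.0))   # intended dose -> OK
print(check_order("gentamicin", 250.0, 10.0))  # 10-fold overdose -> ALERT
```

Such a check is one "slice" of defense; as the example in the text notes, pharmacist review provides an independent second layer when the automated check is missing or bypassed.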
Several classification systems exist to rate harm severity, including the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index for medication-related harm and severity scales for all-cause harm. Serious safety events (SSEs) are deviations from generally accepted practice that reach the patient and result in severe harm or death. The SPS collaborative has SSE elimination as its primary goal. Sentinel events or never events, such as a wrong-site surgery, are also subject to external reporting and are targets for elimination through quality improvement (QI) initiatives (see Chapter 4). Increasingly, health systems use a composite serious harm index, which combines a variety of preventable HACs (e.g., CLA-BSIs) to examine system safety performance over time across patient populations and sites of care.
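As a toy illustration of how such a composite might be tracked, the sketch below sums counts of preventable HAC events and normalizes them by a volume denominator; the event categories, counts, and denominator are invented for the example and are not benchmarks or any collaborative's actual definition.

```python
# Toy computation of a composite serious harm index: total preventable
# HAC events per 1,000 patient-days. All numbers are invented.

hac_events = {
    "CLA-BSI": 4,
    "adverse drug event with harm": 9,
    "pressure injury": 3,
}

patient_days = 52_000  # hypothetical quarterly census denominator

total_events = sum(hac_events.values())
rate_per_1000 = total_events / patient_days * 1000
print(f"Serious harm index: {rate_per_1000:.2f} events per 1,000 patient-days")
```

Tracking the composite rate, rather than each HAC alone, lets a system see overall safety performance over time even when individual event types are too rare to trend reliably.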
Safety frameworks are conceptual models and tools that help clinicians, improvers, and researchers understand the myriad contributors to safe healthcare and to safety events. Healthcare is delivered in a complex system with many care providers and technologies, such as electronic health records and continuous physiologic monitors. The Donabedian framework, which links structure, process, and outcome, can be a very useful tool. The Systems Engineering Initiative for Patient Safety (SEIPS) model, developed by human factors engineers and cognitive psychologists at the University of Wisconsin–Madison, provides more detailed tools to understand the work system and the complex interactions among people, tasks, technologies, and the environment. The SEIPS 2.0 model more prominently includes the patient and family in co-producing care outcomes. Other available safety frameworks include those from the Institute for Healthcare Improvement. The "Swiss cheese" model illustrates how an organization's layered defenses prevent failures from leading to harm; harm occurs only when the holes in the slices, each representing a weakness in a different component of the system, happen to line up.
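The intuition behind the Swiss cheese model can be expressed quantitatively: under the strong (and in practice unrealistic) assumption that defensive layers fail independently, the probability that an error penetrates every layer is the product of the per-layer failure probabilities, which is why adding even imperfect layers sharply reduces harm. The probabilities in this sketch are invented for illustration only.

```python
# Toy Swiss cheese model: probability that an error slips through every
# defensive layer, assuming (unrealistically) that layers fail
# independently. All probabilities are invented for illustration.
from math import prod

# Hypothetical per-layer probability that the "hole" lines up,
# i.e., that the layer fails to catch the error.
layer_failure_prob = {
    "order entry alert": 0.10,
    "pharmacist review": 0.05,
    "nurse double-check": 0.20,
}

p_harm = prod(layer_failure_prob.values())
print(f"P(error reaches patient) = {p_harm:.4f}")  # 0.0010 in this example
```

Real layers are not independent (a rushed night shift can open holes in several slices at once), which is one reason frameworks such as SEIPS examine the whole work system rather than each defense in isolation.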
Traditionally, safety science and improvement have focused on identifying what went wrong (near misses, errors, and harm) and then on understanding and improving the system of care that led to these events. There is increasing focus on what goes right. This framework, called Safety-II to contrast with Safety-I and its focus on learning from what goes wrong, directs attention to the much greater number of things that go right and to how people act every day to create safety in complex and unpredictable systems. Safety-II seeks to learn from people, the greatest source of system resilience, particularly amid the high levels of risk and stress often seen in healthcare.
Health systems use a toolbox of processes to discover, understand, and mitigate unsafe conditions.
Many health systems and hospitals offer employees access to a system for reporting errors, harms, or near misses. These systems are most often anonymous so that healthcare workers feel safe submitting an event in which they may have been involved, or one that involved someone in a position of authority. Ideally, these systems facilitate smooth and efficient entry of enough information for further review while avoiding excessive burden of time or cognitive load on the reporter. Incident reporting systems likely work best in the presence of a strong safety culture and when employees have some confidence that the event will be reviewed and actions taken. Studies that use more proactive assessment of harm and error make clear that incident reports dramatically underreport safety events. Because such underreporting is the chief limitation of incident reporting, other mechanisms must also be in place to learn about safety. Trigger tool systems have been evaluated in pediatrics with encouraging results. These systems use triggers, such as the need for an antidote to an opioid overdose or the transfer of a patient to higher-level care, to facilitate targeted medical record review by trained nurses and physicians and to elucidate any errors or system risks.
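One way to picture how a trigger tool narrows the review burden: scan structured data for trigger events such as naloxone administration or transfer to higher-level care, and queue only the flagged encounters for manual chart review. The following is a minimal sketch with hypothetical trigger definitions and records, not any published tool's actual specification.

```python
# Minimal sketch of a trigger tool scan over hypothetical encounter
# records. Trigger definitions and data are illustrative only.

TRIGGERS = {
    "naloxone administered",          # possible opioid overdose
    "transfer to higher-level care",  # possible unrecognized deterioration
}

encounters = [
    {"id": "A101", "events": ["amoxicillin administered"]},
    {"id": "A102", "events": ["morphine administered",
                              "naloxone administered"]},
    {"id": "A103", "events": ["transfer to higher-level care"]},
]

# Queue only encounters containing a trigger for nurse/physician review.
for enc in encounters:
    hits = TRIGGERS.intersection(enc["events"])
    if hits:
        print(f"Encounter {enc['id']}: review for {sorted(hits)}")
```

The trigger is deliberately sensitive rather than specific; the subsequent human review by trained nurses and physicians determines whether an error or harm actually occurred.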
Simulation is an excellent tool for better understanding systemic and latent threats. High-fidelity simulation allows clinicians to practice technical skills such as intubation in a safe environment; perhaps more importantly, simulation can help clinical teams improve nontechnical skills such as using closed-loop communication and sharing a mental model (e.g., a team leader states, "I believe this patient has septic shock. We are rapidly infusing fluids and giving antibiotics. Blood pressure is normal for age. What other thoughts does the team have?"). It is often easier and more feasible to give feedback after a simulated scenario than after a real event.
Low-fidelity simulation on the hospital unit or in the clinic does not require costly simulated patients and may have advantages in identifying latent threats in the system. For example, a simulated scenario on a medical-surgical unit might identify that nurses do not know where to find a mask for continuous positive airway pressure (CPAP) to support an infant with respiratory failure. Identifying—and then mitigating—this latent threat in a simulated environment is preferable to doing so in an acutely deteriorating child.