Ensuring Patient Safety in Surgery: First Do No Harm and Applying a Systems-Engineering Approach


Primum non nocere: first do no harm. The Hippocratic statement epitomizes the importance the medical community places on avoiding iatrogenic complications. In the process of providing care, patients, physicians, and the entire clinical team join to use all available medical options to combat disease and avert the natural history of pathologic processes. Iatrogenic injury or, simply, “treatment-related harm” occurs for various reasons and can be difficult to distinguish from deterioration caused by the underlying disease in the very patient one is trying to help. In essence, giving a patient a medication or performing surgery may be associated with short-term or long-term worsening (the “risks” of treatment), which may or may not be due to the treatment itself, and the treatment may or may not ultimately improve the patient’s prognosis (the “benefit” in the “risk vs. benefit ratio”). With advancements in care and a growing emphasis on safety over time, society and the medical community have become increasingly intolerant of adverse medical and surgical outcomes. Negligence is the medico-legal term applied when a patient is harmed by an act or omission that deviates from an accepted standard of care. This chapter reviews the science of human error in medicine and surgery and, in particular, a redesign approach drawn from systems engineering.

The Nature of Safety in Medicine and Surgery

The earliest practitioners of medicine recognized and described iatrogenic injury. Iatrogenic (Greek, iatros = doctor, genic = arising from or developing from) literally translates to “disease or illness caused by doctors.” Famous examples exist of likely iatrogenic deaths, such as that of George Washington, who died while being treated for a severe throat infection with bloodletting. The Royal Medical and Surgical Society, in 1864, documented 123 deaths that “could be positively assigned to the inhalation of chloroform.” Throughout history, physicians have reviewed unexpected outcomes related to the medical care they provided in order to learn and improve that care. The “father” of modern neurosurgery, Harvey Cushing, and his contemporary Sir William Osler modeled the practice of learning from error by publishing their errors openly so as to warn others how to avert future occurrences. However, the magnitude of iatrogenic morbidity and mortality was not quantified across the spectrum of health care until the Harvard Medical Practice Study, published in 1991. This seminal study estimated that iatrogenic failures occur in approximately 4% of all hospitalizations and that such failures rank as the eighth leading cause of death in the United States, responsible for up to 100,000 deaths per year.

A subsequent review of over 14,700 hospitalizations in Colorado and Utah identified 402 surgical adverse events, an incidence of 1.9%. The nature of the surgical adverse events was categorized by type of injury and by preventability. These two studies were designed to characterize iatrogenic complications across health care. Although they were not statistically powered to allow surgical subspecialty analysis, it is likely that the types of failures and subsequent injuries they identified can be generalized to the neurosurgical patient population. More recent literature supports the findings of these landmark studies.

Over time, health care systems have become increasingly complex, with multiple moving parts, ever more team members, increasingly complex equipment in the operating room (OR), and more complex patients. The Institute of Medicine used the Harvard Medical Practice Study as the basis for its report, which endorsed the need to discuss and study errors openly with the goal of improving patient safety. The Institute of Medicine report on medical errors, “To Err Is Human: Building a Safer Health System,” published in 1999 and focused on medical errors and their prevention, must be considered a landmark publication. It was followed by the development of other quality improvement initiatives, such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) Sentinel Events Program.

One might argue that morbidity and mortality reviews already achieve this aim. The “M&M” conference has a long history of reviewing negative outcomes in medicine and surgery to improve the quality, safety, and systems of patient care. The ultimate goal of this traditional conference is to learn how to prevent future patients from suffering similar harm and thus to improve care incrementally. However, frank discussion of error is limited in M&M conferences. The actual review practices also fail to support deep learning about systemic vulnerabilities; indeed, until recently, historical M&M conferences did not explicitly require a systems-design approach to the errors being reviewed. One prospective investigation of four US academic hospitals found that a resident vigilantly attending weekly internal medicine M&M conferences for an entire year would hear a systematic approach to errors discussed only once. Surgical versions of the M&M conference were historically better, with some discussion of surgical or technical error. Although surgeons discussed adverse events associated with error 77% of the time, individual provider error was the focus of the discussion and was cited as the cause of the negative outcome in 8 of 10 conference discussions. Surgical conference discussion rarely identified system defects, such as resource constraints, team communication problems, or other systematic issues, that affected individual cases. Further limiting its utility, the M&M conference is reactive by nature and highly subject to hindsight bias, tending to blame a single individual rather than prompting a systematic, high-level redesign. This focus on individual providers and their decision making is the basis for most clinical outcome reviews.

In their report on “Nine Steps to Move Forward from Error” in medicine, the human factors experts Cook and Woods challenged the medical community to resist the temptation to simplify the complexities practitioners face when reviewing accidents post hoc. Premature closure by blaming the closest clinician hides the deeper patterns and multiple contributors associated with failure and ultimately leads to naïve “solutions” that are weak or even counterproductive. The Institute of Medicine has likewise cautioned against blaming an individual and recommending retraining as the sole outcome of case review. While the culture within medicine is to learn from failure, the M&M conference does not typically achieve this aim. The so-called “Swiss cheese” model is an apt analogy for most medical and surgical errors, because such failures require a constellation of defects to line up, like the holes in slices of Swiss cheese, rather than arising from a single defect alone.

A Human Factors Approach to Improving Patient Safety

Murphy’s Law applied to medicine and surgery implies that whatever can go wrong will go wrong. Murphy’s Law, however, can be “reverse-engineered” through the science of quality and safety. For example, the anticipated defects in the Swiss cheese model (i.e., the things that can go wrong) should be articulated in advance, before a medication is given or during preoperative planning. In the realm of medical errors, medication allergies are a common, high-volume issue. If a patient is exposed to a medication and found to be allergic, identifying that allergy and communicating it broadly in the medical record fills a common “hole” in the Swiss cheese model and helps prevent recurrent medication adverse events.

The field of human factors or systems engineering grew out of a focus on human interaction with physical devices, especially in military or industrial settings. This initial focus on improving human performance addressed the problem of workers at high risk for injury while using tools or machines in high-hazard industries. In the past several decades, the scope of this science has broadened. Human factors engineering is now credited with advancing safety and reliability in aviation, nuclear power, and other high-hazard work settings. The Human Factors and Ergonomics Society in North America alone has grown to over 15,000 members. Human factors engineering and related disciplines are deeply interested in modeling and understanding mechanisms of complex system failure. Furthermore, these applied sciences have developed strategies for designing error prevention and building error tolerance into systems to increase reliability and safety, and these strategies are now being applied to the health care industry.

The specialty of anesthesiology has employed this science to reduce the anesthesia-related mortality rate from approximately 1 in 10,000 in the 1970s to approximately 1 in 250,000 three decades later. In 1978, the bioengineer Jeffrey Cooper, PhD, used critical incident analysis to identify preventable anesthesia mishaps. Dr. Cooper’s seminal work was supplemented by the “closed claim” liability studies, which delineated the most common and severe modes of failure and the factors that contributed to those failures. The specialty of anesthesiology and its leaders endorsed the precept that safety stems more from improved system design than from increased vigilance of individual practitioners. As a direct result, anesthesiology was the first specialty to adopt minimum standards for care and monitoring, preanesthesia equipment “checklists” similar to those used in commercial aviation, standardized medication labels, interlocking hardware to prevent gas mix-ups, international anesthesia machine standards, and high-fidelity human simulation to support crisis team training in the management of rare events. Lucian Leape, MD, a former surgeon, one of the lead authors of the Harvard Medical Practice Study, and a national advocate for patient safety, has stated,

"Anesthesia is the only system in healthcare that begins to approach the vaunted 'six sigma' (a defect rate of 1 in a million) level of clinical safety perfection that other industries strive for. This outstanding achievement is attributable not to any single practice or development of new anesthetic agents or even any type of improvement (such as technological advances) but to application of a broad array of changes in process, equipment, organization, supervision, training, and teamwork. However, no single one of these changes has ever been proven to have a clear-cut impact on mortality. Rather, anesthesia safety was achieved by applying a whole host of changes that made sense, were based on an understanding of human factors principles, and had been demonstrated to be effective in other settings."

The Anesthesia Patient Safety Foundation, which has become the clearinghouse for patient safety successes in anesthesiology, was used as a model by the American Medical Association to form the National Patient Safety Foundation in 1996. Over the subsequent decade, the science of safety has begun to permeate health care.

The human factors psychologist James Reason, PhD, has characterized accidents as evolving over time and as virtually never being the consequence of a single cause. Rather, he describes accidents as the net result of local triggers that initiate and then propagate an incident through a hole in one layer of defense after another until irreversible injury occurs ( Fig. 1.1 ). This model has been referred to as the “Swiss cheese” model of accident causation. Surgical care consists of thousands of tasks and subtasks. Errors in the execution of these tasks need to be prevented, detected and managed, or tolerated. As described above, the layers of the Swiss cheese model represent the system of defenses against such errors. Latent conditions is the term used to describe the “accidents waiting to happen”: the holes in each layer that allow an error to propagate until it ultimately causes injury or death. The goal in human factors systems engineering is to know all the layers of Swiss cheese and to create the best defenses possible (i.e., to make the holes as small as possible). This very approach has been the centerpiece of incremental improvements in anesthesia safety.

FIGURE 1.1, A “Swiss cheese” model of accident causation, originally described by Reason. Accidents (adverse outcomes) require holes in successive layers of defense to line up in sequence in order to occur.
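To make the model concrete, consider an idealized back-of-the-envelope calculation, assuming purely for illustration that the layers of defense fail independently (a property real systems rarely guarantee): the chance that a hazard penetrates every layer is the product of the individual hole probabilities.

$$
P(\text{penetration}) \;=\; \prod_{i=1}^{n} p_i, \qquad \text{e.g., } n = 3,\; p_i = 0.01 \;\Rightarrow\; P = (0.01)^3 = 10^{-6}.
$$

Three modest layers, each trapping 99% of errors, can in principle approach the one-in-a-million reliability cited for anesthesia; shared weaknesses that let the holes line up are precisely what erode this multiplication in practice.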

One structured approach designed to identify all of the holes in the major layers of cheese in medical systems has been described by Vincent. He classifies the major categories of factors that contribute to error as follows (Fig. 1.2):

1. Patient factors: condition, communication, availability and accuracy of test results, and other contextual factors that make a patient challenging

2. Task factors: using an organized approach to reliable task execution, availability and use of protocols, and other aspects of task performance

3. Practitioner factors: deficits and failures by any individual member of the care team in knowledge, attention, strategy, motivation, or physical or mental health, and other factors that undermine individual performance and management of the problem space

4. Team factors: verbal/written communication, supervision/seeking help, team structure and leadership, and other failures in communication and coordination among members of the care team such that management of the problem space is degraded

5. Working conditions: staffing levels, skills mix and workload, availability and maintenance of equipment, administrative and managerial support, and other aspects of the work domain that undermine individual or team performance

6. Organization and management factors: financial resources, goals, policy standards, safety culture and priorities, and other factors that constrain local microsystem performance

7. Societal and political factors: economic and regulatory issues, health policy and politics, and other societal factors that set thresholds for patient safety

FIGURE 1.2, A, Multiple factors that lead to safety and errors in health care, as described by Vincent.28,29 NASA-TLX (Task Load Index)30 workload models are reported to affect both surgeons and teams (e.g., heavier workloads can lead to more errors). Bottom image, an example of cyclical patient safety improvement: identification of defects in an initial Process A, followed by changes and refinement, leads to a new Process B.

A variety of systems-engineering models such as this have been proposed to help mitigate patient care errors. If this schema is used to structure the review of a morbidity or mortality, the review extends beyond myopic attention to the individual practitioner. Furthermore, the array of identified factors that undermine safety can then be countered systematically by tightening each layer of defense, one hole at a time. I have adapted active error management, as described by Reason and others, into a set of steps for making incremental systemic improvements to increase safety and reliability. In this adaptation, a cycle of active error management consists of (a) surveillance to identify potential threats; (b) investigation of all contributory factors; (c) prioritization of failure modes; (d) development of countermeasures to eliminate or mitigate individual threats; and (e) broad implementation of validated countermeasures ( Fig. 1.3 ).

FIGURE 1.3, Sequence of steps for identifying vulnerabilities and then implementing corrective measures. When holes A, B, and C line up, hazards can penetrate all three layers. Countermeasures (blue plugs) can close these holes, but doing so requires vigilance and systems-level awareness of the defects and of how they line up, so that countermeasures can be implemented broadly.

The goal is to move from a reactive approach based on review of actual injuries toward a proactive approach that anticipates threats based on a deep understanding of human capabilities and system design that aids human performance rather than undermines it.
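As a minimal sketch of how the five-step cycle above might be tracked in software (a hypothetical structure, not an actual incident-reporting system; the severity and likelihood scales are assumptions for illustration), each identified threat carries its contributory factors, a simple severity-times-likelihood priority score, and its implementation status:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One hazard moving through the active error management cycle."""
    description: str                                               # (a) surveillance
    contributory_factors: list[str] = field(default_factory=list)  # (b) investigation
    severity: int = 1                                              # 1 (minor) to 5 (catastrophic), assumed scale
    likelihood: int = 1                                            # 1 (rare) to 5 (frequent), assumed scale
    countermeasures: list[str] = field(default_factory=list)       # (d) countermeasure development
    implemented: bool = False                                      # (e) broad implementation

    @property
    def priority(self) -> int:
        # (c) prioritization of failure modes by a simple risk score
        return self.severity * self.likelihood

def next_to_address(threats: list[Threat]) -> Threat:
    """Return the highest-priority threat whose countermeasures are not yet implemented."""
    return max((t for t in threats if not t.implemented), key=lambda t: t.priority)
```

In this toy scoring, a wrong-site-surgery threat rated severity 5 and likelihood 2 (score 10) would be addressed before a minor labeling issue rated severity 2 and likelihood 3 (score 6).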

A comprehensive review of the science of human factors and patient safety is beyond the scope of this chapter; neurosurgical patient safety has been reviewed elsewhere, including its ethical issues and the impact of legal liability. Aviation and nuclear power took over four decades to achieve the cultural shift that supports a robust system of countermeasures and defenses against human error. It is practical, however, to use an example to illustrate some of the human factors principles introduced above. Consider the following case example as a window into the future of managing the most common preventable adverse events associated with surgery.

Illustrating Systems Improvement Over Time: Wrong-Sided Brain Surgery

Wrong-site surgery is an example of an adverse event that seems as though it should “never happen.” However, with more than 40 million surgical procedures performed annually, it remains a recurring target for systems improvement. The news media has diligently reported wrong-site surgical errors, especially when they involve neurosurgery. Headlines such as “Brain Surgery Was Done on the Wrong Side, Reports Say” ( New York Daily News , 2001) and “Doctor Who Operated on the Wrong Side of Brain Under Scrutiny” ( New York Times , 2000) are inevitable when wrong-site brain surgery occurs. Predictably, these are not isolated stories. A report from the state of Minnesota found 13 instances of wrong-site surgery in a single year during which approximately 340,000 surgeries were performed. No hospital appears to be immune to what seems, on the surface, to be such a blatant mistake. Despite the debut of the Universal Protocol at all JCAHO-certified hospitals in 2004, the problem of wrong-site surgery remains. JCAHO data from 2014 through 2017 documented a total of 407 wrong-site, wrong-patient, or wrong-procedure events. In a 2007 national survey, the incidence of wrong-sided surgery for cervical discectomies, craniotomies, and lumbar surgery was 6.8, 2.2, and 4.5 per 10,000 operations, respectively.
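For rough context (simple arithmetic on the figures cited above, not a formal rate estimate), the Minnesota experience corresponds to an event rate on the same order as these survey figures:

$$
\frac{13 \text{ events}}{340{,}000 \text{ operations}} \;\approx\; 0.38 \text{ per } 10{,}000 \;\approx\; 1 \text{ in } 26{,}000 \text{ operations}.
$$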

Sensational “front page” media coverage fails to identify the deeper second story behind these failures or how to prevent future ones through the creation of safer systems. In this example, we provide an analysis of the contributory factors associated with wrong-site surgery to reveal the myriad holes in the defensive layers of “cheese.” These holes will need to be eliminated to truly affect the frequency of this already rare event and to create more reliable care for our patients.

Contributory Factor Analysis

Patient Factors Associated With Wrong-Site Surgery

Patient Condition (Medical Factors That, If Not Known, Increase the Risk for Complications)

Neurosurgical patients are at higher risk for wrong-site surgery than the average patient, and their surgical conditions contribute to error. When patients are asked on the morning of surgery what operation they are having, only 70% can correctly state and point to the location of the planned surgical intervention. Patients are a further source of misinformation about surgical intent when their symptoms are contralateral to the pathology and site of surgery, a common situation in neurosurgical cases. Patients scheduled for brain surgery and carotid surgery often confuse the side of the surgery with the side of the symptoms. Patients with educational or language barriers or cognitive deficits are more vulnerable, since they may be unable to accurately communicate their surgical condition or the planned surgery.

Certain operations in the neurosurgical population pose a higher risk for wrong-site surgery. While left-right symmetry and sidedness define one high-risk class of operations, spinal procedures involving multiple levels are another. The potential need to operate on or shift to the other side, or to extend the surgery, should also be disclosed to the patient up front during the consent process and documented before the patient is under anesthesia.

Patients with anatomy and pathology that disorient the surgical team to side or level are especially at risk. Anterior cervical discectomies, for example, can be approached from either side. The lack of a consistent cue to the rest of the surgical team as to the approach for the same operation makes it unlikely that anyone would trap an error in positioning or draping. Patient position and opaque draping are known to remove external cues to the patient’s left and right orientation and thus predispose surgeons to wrong-sided surgery. When a patient is lateral, fully draped, and the table has been rotated 180 degrees before the attending surgeon enters the operating theater, it is difficult to verify right from left. Furthermore, the language of positioning creates ambiguity, since the terms left lateral decubitus, right side up, and left side down are used interchangeably by the surgical team to specify the position. A patient with bilateral disease, predominantly right-sided symptoms, and left-sided pathology undergoing a left-sided craniotomy in the right lateral decubitus position with the table turned 180 degrees and fully draped obviously creates more confusion than a gallbladder operation in the supine position.

Communication (Factors That Undermine the Patient’s Ability to Be a Source of Information Regarding Conditions That Increase the Risk for Complications and Need to Be Managed)

Obviously, patients with language barriers or cognitive deficits represent a group that may be unable to communicate their understanding of the surgical plan. This can increase the chance of patient identification errors that lead to wrong-site surgery. In a busy practice, patients requiring the same surgery might be scheduled in the same OR; it is not uncommon to perform five carotid endarterectomies in a single day. When one patient is delayed and the order is switched to keep the OR moving, this vulnerability is expressed. Patients with common names are especially at risk. A 500-bed hospital will have approximately 1,000,000 patients in its medical record system. About 10% of patients will share a first and last name with another patient, and 5% will have a first, middle, and last name in common with one other individual. Only by cross-checking the name against at least one other patient identifier (such as birth date or medical record number) can wrong-patient errors be trapped.
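The sketch below illustrates the two-identifier cross-check described above (field names and logic are hypothetical, not an actual registration or electronic health record API): a name match alone is never sufficient, and at least one independent identifier must also agree.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PatientID:
    """Identifiers available at the point of care (hypothetical schema)."""
    full_name: str
    birth_date: date
    mrn: str  # medical record number

def verify_patient(stated: PatientID, scheduled: PatientID) -> bool:
    """Accept only if the name AND at least one independent identifier match.

    A name-only match is rejected, because in a large record system roughly
    1 in 10 patients may share a first and last name with someone else.
    """
    if stated.full_name.casefold() != scheduled.full_name.casefold():
        return False
    return stated.birth_date == scheduled.birth_date or stated.mrn == scheduled.mrn

# Example: same name but different birth date and MRN is trapped.
a = PatientID("John Smith", date(1950, 3, 1), "MRN-0001")
b = PatientID("John Smith", date(1962, 7, 9), "MRN-1234")
assert verify_patient(a, a)       # correct patient accepted
assert not verify_patient(a, b)   # wrong-patient error trapped
```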

Another patient communication problem that increases the risk for wrong-site surgery is patients marking themselves. Marking the skin on the side of the proposed surgery with a pen is now common practice for the surgical team and part of the Universal Protocol. However, some patients have placed an X on the site not to be operated on, and the surgical team has then confused this patient mark with their own, in which an X specifies the side to be operated on. Patients are often not given information about what to expect and will seek it elsewhere. For example, a neurosurgeon on a popular daytime talk show discussing medical mistakes stated incorrectly that patients should mark themselves with an X on the side that should not be operated on. This misinformation reached millions of viewers and was in direct violation of the marking recommendations provided by the Joint Commission on Accreditation of Healthcare Organizations (and endorsed by the American College of Surgeons, the American Society of Anesthesiologists, and the Association of Operating Room Nurses). Patients who watched this show and took the physician’s advice are now at higher than average risk for a wrong-sided surgical error.

Availability and Accuracy of Test Results (Factors That Undermine Awareness of Conditions That Increase the Risk for Complications and Need to Be Managed)

Radiologic imaging studies can serve as independent markers of surgical pathology and anatomy. However, films and/or reports are not always available: films may be lost or misplaced, or they may be unavailable because the studies were performed at another facility. New digital technology has created electronic imaging systems that virtually eliminate lost studies. However, space constraints have led many hospitals to remove old view boxes to make room for digital radiology monitors. Although this is less of an issue given the widespread adoption of electronic viewing systems, the decision to eliminate view boxes prevents effective use of studies brought on film from an outside hospital. Even when available and in digital form, x-rays and diagnostic studies are not correctly labeled with 100% reliability; imaging studies have been mislabeled and/or oriented backward, leading to wrong-sided surgery.

Task Factors Associated With Wrong-Site Surgery

Tasks are the steps that must be executed to accomplish a work goal. It is especially important to structure tasks and task execution procedures when the work domain is complex, the task must be executed under time pressure, and the consequences of errors in task execution are severe. Typical tools for structuring task execution are protocols, checklists, and algorithms. High workload and intensity (e.g., being “on call” for distracting emergency cases while trying to perform elective, high-complexity cases) are also factors examined in systems engineering. Sleep deprivation is another factor inherent to medicine and surgery, and even to hospitalized patients themselves.

Task Design and Clarity of Structure (Consider This to Be an Issue When Work Is Being Performed in a Manner That Is Inefficient and Not Well Thought Out)

In large hospitals, ORs do not execute a consistent set of checks and balances to verify that the right patient is present, that the surgical intent is correct, and that critical equipment and implants are available. Surgical team members who think that announcing the patient’s name and procedure aloud will reliably prevent wrong-site surgery are mistaken. Structuring tasks for reliability such that current failure rates move from approximately 1 in 30,000 to 1 in 1 million will take the kind of task structure and consistency seen on the flight decks of commercial aircraft. For decades, pilots have used well-organized preflight checklists to perform the tasks of starting the engines and verifying that all mission-critical equipment is present and functional.

An example of the mature use of checklists exists in anesthesiology. An anesthesia machine (and other critical equipment) must be present and functional prior to the induction of anesthesia and the initiation of paralysis so that a patient can be given an airway as well as breathing and circulatory support within seconds, to avoid hypoxia and subsequent cardiovascular complications. Until 1990, equipment failures were a significant cause of patient injury in anesthesia, even though anesthesia machines and equipment had been standardized and were being used on thousands of patients in a given facility. At that time, a preanesthesia checklist was established to structure the verification of the mission-critical components required to provide the anesthetic state and to verify that these components function nominally. This checklist includes over 40 items and builds in redundancy for checking critical components. It has been introduced as a standard operating procedure for the discipline of anesthesia and is now mandated by the US Food and Drug Administration ( Fig. 1.4 ).

FIGURE 1.4, Example of computerized safety checklist implementation shown in randomized fashion to reduce errors.

Availability and Use of Protocols: If Standard Protocols Exist, Are They Well Accepted and Are They Being Used Consistently?

The first attempts to establish standardized protocols for patient safety began with JCAHO. The JCAHO “Sentinel Event” system began monitoring major quality issues in the mid-1990s, around the same time the original American Academy of Orthopaedic Surgeons (AAOS) Sign Your Site program launched. A sentinel event was defined as “an unexpected occurrence involving death or serious physical or psychological injury, or the risk thereof.” In addition to reporting, the program triggers a quality review that requires a root cause analysis to determine the factors contributing to the sentinel event. This approach has been successfully adapted and studied specifically in neurosurgery, using a voluntary, confidential database and a subsequent aviation-derived root cause analysis to identify systemic factors and best practices to prevent future events.

The Universal Protocol was a logical extension of the Sentinel Events quality improvement program. Wrong-site surgery is considered a sentinel event, and because reporting of sentinel events is mandatory, some of the best data on the incidence and anatomic location of wrong-site surgeries come from the JCAHO. Before implementation of the Universal Protocol, the JCAHO analyzed 278 reports of wrong-site surgery in the Sentinel Events database up to 2003. This review showed that in 10% of the cases the wrong procedure had been performed, in 12% surgery had been performed on the wrong patient, and a further 19% of the reports were characterized as miscellaneous. Thus it was felt that a protocol addressing this issue must include provisions to avoid wrong-patient and wrong-procedure as well as wrong-site surgery.

In May 2003, the JCAHO convened a “Wrong-Site Surgery Summit” to explore possible quality initiatives in this area. The three most effective measures identified were patient identification, surgical site marking, and calling a “time out” before skin incision to verify factors such as the initial patient identification, patient allergies, completion of preoperative interventions such as intravenous antibiotics, the procedure to be performed, and the availability of medical records, imaging studies, and equipment. When correlated with Sentinel Event data, only 12% of wrong-site surgeries occurred in institutions where two of the three measures were applied. More importantly, no incidents of wrong-sided surgery were detected in institutions with all three measures in place. These three key processes therefore became the Core Elements of the Universal Protocol, which has been a mandatory quality screen in all JCAHO-certified hospitals since July 1, 2004.

The Universal Protocol for preventing wrong-patient and wrong-site errors is based on checklist principles, but there is not yet a validated, comprehensive checklist that traps errors the way aviation checklists do. This is largely due to the lack of consistent execution of the checklists in a challenge-response format that is identical in procedure and practice throughout a single hospital’s ORs. The protocol is a first step, but the barriers to effective implementation are at present extensive and hinder improved safety. Compliance with standardized checklists is paramount to deriving the benefits that may be associated with their use.
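A minimal sketch of the challenge-response idea is shown below (the item wording and data structure are hypothetical illustrations, not the actual Universal Protocol or WHO checklist text): every challenge read aloud requires an explicit matching response, and silence or a mismatch halts the case until the discrepancy is resolved.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChecklistItem:
    challenge: str          # read aloud by the designated "challenger"
    expected_response: str  # explicit confirmation required from the responsible team member

# Hypothetical, abbreviated pre-incision items for illustration only.
PRE_INCISION = [
    ChecklistItem("Patient identity confirmed with two identifiers?", "confirmed"),
    ChecklistItem("Procedure and side match the signed consent?", "confirmed"),
    ChecklistItem("Site mark visible after prep and drape?", "visible"),
    ChecklistItem("Relevant imaging displayed and labeled for this patient?", "displayed"),
]

def run_checklist(items: list[ChecklistItem], responses: dict[str, str]) -> bool:
    """Challenge-response execution: each item needs an explicit, matching answer.

    Anything short of the expected response is treated as a failure and the case
    is held, which is what separates a true challenge-response checklist from
    simply announcing the patient and procedure aloud.
    """
    for item in items:
        answer = responses.get(item.challenge, "").strip().lower()
        if answer != item.expected_response:
            print(f"HOLD: unresolved item -> {item.challenge}")
            return False
    return True
```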

Another hazard is the lack of clarity in marking surgical sites. Marking the surgical site has been endorsed to improve safety and is a major component of the Universal Protocol. However, as described previously, the mark can be a source of error when placed inappropriately by the patient or by any member of the surgical team. Specifics regarding what, when, and how to mark are lacking. Do you mark the incision site or the target of the surgery? What constitutes a unique and definitive mark? What shape and color should be used? What type of pen should be used? Does the ink pose any risk for infection, or is it washed off during preparation? Who should place the mark? Which procedures should be marked and which should not? Are there any patients for whom the mark is dangerous? How do you mark for a left liver lobe resection, or for procedures such as brain surgery in which there is a single organ but sidedness is still critical?

I worked with over 10 surgical specialties to develop specific answers to these questions. Multiple marks and pens were tested; not all symbols and pens were equally effective, and many inks did not withstand preparation and remain visible in the operative field. We now use specific permanent pens (Carter fine and Sharpie very fine) and a green circle to mark only “sided” procedures. We specified that the target is marked rather than the incision, that the mark must be made by the surgeon, and that the mark must be placed so that it is visible during the pre-incision check after positioning, preparation, and draping have been completed. For example, a procedure requiring cystoscopy to inject the right ureteral orifice to treat reflux is now marked on the right thigh so that the green circle is a cue to all members of the surgical team and can be seen even when the patient is prepared and draped. Again, the mark specifies the target, not the incision or body entry point. In addition, every procedure in our booking system is now labeled as “mark required” or “mark not required,” because this was not always clear. Even with this level of specificity, we found marking to be erroneous and inconsistent during our initial implementation. Marking the skin for spine surgery to indicate the level may actually increase the risk of wrong-site errors; a superior method of “marking” to verify the correct spinal level is an intraoperative radiologic study with a radiopaque marker. We expect that many revisions to this set of marking measures will be needed before the procedure is robust and truly adds safety value. Cross-checking procedures in aviation were developed and matured over decades to achieve the reliability and consistency now observed.

Statistics from the first two quarters after implementation of the Universal Protocol were encouraging: reports of wrong-site surgeries appeared to have declined below the rate of approximately 70 cases per year seen over the previous 2 years. However, once a full year’s statistics had accumulated, the number of reported wrong-site surgeries had actually increased, to about 88 for 2005. Overall, wrong-site surgery had climbed to second in frequency among Sentinel Events. Whether these data represent a true increase in the frequency of wrong-site surgery or simply better awareness and reporting is unclear at this time.

Currently the direction in patient safety is toward a more holistic surgical checklist that covers all aspects of a patient’s hospital visit, not only the limited time out before surgery. A number of studies have evaluated the use of checklists in medicine and their effect on behavior modification. To that end, the WHO surgical safety checklist was developed. The features of the Universal Protocol have been integrated into this checklist, with the addition of preprocedural and postprocedural check points. Results from implementation of the WHO checklist are encouraging. These initial attempts have been extended to checklists, such as the SURPASS checklist, that cover the whole surgical pathway from admission to discharge. Implementation of the WHO safety checklist has led to improved communication between surgeons and anesthesiologists and to improvements in length of stay, adverse events, and readmissions on neurosurgical services.

Overall, although aviation-based team training has been shown to elicit initial and sustainable improvements, its effects may take years to become part of the surgical culture.
