Decades ago, accidental delivery of hypoxic gas mixtures was a constant threat during general anesthesia. Many instances of hypoxia were attributed to human error. In some cases, the anesthesiologist mistakenly turned the wrong gas flow control knob or failed to recognize that the oxygen cylinder was empty. In another case, a technician placed the flowmeter tubes in the wrong positions while servicing the anesthesia machine. In each case, these small human errors led to major injury or death of the patient. With modern anesthesia machines, the risk of accidental hypoxia has been dramatically reduced. In effect, the potential for human error has been reduced by redesigning the equipment. The concept that equipment can be designed for optimal performance by the human user is one of the core principles of ergonomics. This chapter reviews the role of ergonomics in the practice of anesthesia, including the design of the anesthesia workspace.
Ergonomics is a discipline that investigates and applies information about human requirements, characteristics, abilities, and limitations to the design, development, engineering, and testing of equipment, tools, systems, and jobs. Most people have probably thought about ergonomic issues, even if they are not familiar with the term. Wondering, after injecting the wrong medication, whether a system might prevent such errors is thinking about safety, which is one area of concern in ergonomics. Ergonomics also involves optimizing the work environment for the benefit of the user, such as positioning the surgical, radiographic, and anesthesia displays in a hybrid operating room so that all workers have a view. Evaluating the potential for user error prior to purchasing equipment is another ergonomic activity.
The objectives of ergonomists are to improve safety, performance, and well-being by optimizing the relationship between people and their work environment. The terms ergonomics, human factors, human engineering, and usability engineering are often used interchangeably; however, the term ergonomics is used exclusively in this chapter.
The Software-Hardware-Environment-Liveware (SHEL) model, first introduced by Edwards in 1972, can be used to illustrate the scope of ergonomics ( Fig. 18.1 ). Within this model, all jobs are performed by three classes of resources. The first class is composed of the physical items, or hardware. This includes the buildings, equipment, and materials used for the job. The second class, the software, consists of the rules, guidelines, policies, procedures, and customs involved in the job. People make up the third class of components, the liveware. These components act together within a larger context, or environment, that is composed of external physical, economic, social, and political factors that affect the job.
Ergonomics is the discipline of designing and testing the human/systems interface with the goal of improving the interactions between the liveware component and the other components. In the broadest terms, ergonomics deals with the study and enhancement of the tools and systems used by humans to interact with the physical world around them.
Ergonomics is both a science and a profession, encompassing both research and application. One goal of ergonomics research is to understand and describe the capabilities and limitations of human performance. Another is to develop principles of interaction between people and machines. Examples of ergonomics research are the investigation of visual perception in relation to a particular task and the measurement and compilation of anthropometric data (e.g., What is the distribution of the lengths of the tibia and femur in all men between 18 and 45 years of age?). Application involves the use of these data in the development of equipment, systems, and jobs. For example, the selection of color coding for displays is based on an understanding of visual perception, information processing, and decision theory, whereas anthropometric data are used in the design of a chair.
Some aspects of ergonomics focus on the worker and the human-to-human interfaces within the system. This may include task and workload analysis, examination of vigilance and fatigue, and analysis of team interactions. The focus of this chapter is on the interface between the liveware and the hardware, that is, the interface between the human and the machine.
The number of ergonomics studies in anesthesiology continues to grow. The focus of these studies has been to identify human/machine interface factors that affect patient safety and the anesthesia caregiver’s job performance. Many recent efforts are concisely summarized by Cobb and Lane-Fall.
Task analysis is a basic ergonomics methodology for evaluating jobs or designing new human/machine systems. Several variants of this methodology are used, such as cognitive task analysis, critical decision method, and time-and-motion studies, depending on the focus of the problem. Task analysis methods typically involve the structured decomposition of work activities and/or decisions and the classification of these activities as a series of tasks, processes, or classes. At least three interacting components can be identified and described for each task: the task's goals, constraints, and behaviors.
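As a minimal illustration of such a decomposition, the sketch below represents tasks as a small hierarchical data structure with goals, constraints, and behaviors. The task names and fields are illustrative only and do not follow any standard task-analysis schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """One node in a hierarchical task decomposition (fields are illustrative)."""
    name: str
    goals: List[str] = field(default_factory=list)        # what the task is meant to accomplish
    constraints: List[str] = field(default_factory=list)  # limits on how it may be performed
    behaviors: List[str] = field(default_factory=list)    # observable actions used to perform it
    subtasks: List["Task"] = field(default_factory=list)  # finer-grained decomposition

# A fragment of a decomposition for anesthetic induction (content is hypothetical)
induction = Task(
    name="Induce general anesthesia",
    goals=["Render patient unconscious", "Secure the airway"],
    constraints=["Maintain oxygenation throughout"],
    subtasks=[
        Task(name="Preoxygenate",
             behaviors=["Apply face mask", "Observe end-tidal oxygen"]),
        Task(name="Administer induction agent",
             behaviors=["Confirm drug label", "Inject", "Record dose"]),
    ],
)

def count_tasks(task: Task) -> int:
    """Walk the decomposition and count tasks at every level."""
    return 1 + sum(count_tasks(t) for t in task.subtasks)

print(count_tasks(induction))  # -> 3
```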
One of the first formal time-and-motion studies ever performed was an analysis of surgeons’ tasks in the operating room (OR). Frank and Lillian Gilbreth conducted time-and-motion studies of surgical teams during the early 1900s and concluded that surgeons spent an inordinate amount of time looking for instruments as they picked them off the tray. Their findings led to the current practice of the surgeon requesting instruments from a nurse, who places the instrument in the surgeon’s hand. Numerous other studies of surgical and nursing activity have helped to optimize time and movement, as well as reduce the possibility of physical injury to operating room personnel.
One of the first time-and-motion studies of anesthesiologists was conducted to identify ways to improve anesthesiologists’ job satisfaction. Drui and colleagues filmed eight operations and organized the anesthesiologists’ activities into 24 categories. They then had anesthesiologists rate each activity’s importance, knowledge demand, and skill requirement. They found that filling out the anesthesia record occupied a large proportion of the anesthesiologists’ time but was rated as relatively unimportant and easy to perform. They also found that blood pressure and pulse were determined faster when the pressure gauge was located at the head of the OR table instead of on the anesthesia machine. An unexpected finding was that the anesthesiologist’s attention was directed away from the patient or surgical field 42% of the time. The authors recommended automating the task of creating an anesthesia record and redesigning the anesthesia machine to increase productivity and decrease the amount of distraction away from the patient and surgical field.
It is interesting that almost 45 years after Drui’s recommendations were published, electronic anesthesia record-keeping systems and integrated anesthesia workstations have attained commercial viability. Kennedy and colleagues recorded three coronary artery bypass procedures on video and coded 13 categories of anesthesiologist activity at 2-second intervals. They found that the two most frequent activities were “observe patient” and “scan entire field” but that attention was directed away from the patient and surgical field 30% of the time. Logging data on the anesthesia record occupied 10% to 15% of the anesthesiologists’ time; this activity was tightly linked with observing instrument displays. These authors also recommended automation of the anesthesia record and a more structured arrangement of equipment around the patient and surgical field.
Neither of these studies directly resulted in a redesign of anesthesia equipment. However, in 1976, Fraser Harlake (Orchard Park, NY) produced a prototype line-of-sight anesthesia machine designed by Goodyear and Rendell-Baker. With this machine, the user could see both the patient and the machine controls with minimal eye movement. Although it was never made commercially available, the machine may have nevertheless influenced the design of the Ohio Modulus Wing anesthesia machine (Ohmeda; GE Healthcare, Waukesha, WI). An important feature of the Modulus Wing machine was that the displays and controls had more ergonomic viewing angles and could be positioned closer to the patient.
Boquet and colleagues collected 16 hours of time-and-motion data during general anesthetic procedures before redesigning an anesthesia system. They recorded and classified 31 manual activities and 26 visual activities. In their study, 40% of the anesthesiologists’ visual attention was directed away from the patient or surgical field, and the anesthesiologists were physically idle 72% of the time. They also found that logging data on the anesthesia record occupied 6% of the anesthesiologists’ time and was frequently linked to measurement of blood pressure. In addition, the patterns of activity were different during the four quarters of the anesthetic procedure. Based on these observations, the authors proposed a new anesthesia machine design.
Studies have confirmed previous findings that anesthesiologists spend significant amounts of time on indirect patient-related tasks and that the distribution of tasks is influenced by the stage of the anesthetic procedure. The similarities of the results in these time-and-motion studies are striking, especially because they were conducted over a period of 20 years in a wide variety of clinical settings.
Weinger and colleagues at the University of California–San Diego Medical Center used a combination of task analysis methods to compare the clinical performance of novice and experienced anesthesia care providers. A trained observer used a computer to record, in real time, 28 anesthesia-related tasks during 22 general anesthesia cases. Clinicians also rated their workload at intervals during the case and performed a vigilance task ( Fig. 18.2 ; see also Chapter 17 ).
Important differences were detected between the novices (residents in their first clinical anesthesia year) and the experts (residents in their third clinical anesthesia year and certified registered nurse anesthetists [CRNAs]). Novices took longer to induce anesthesia, performed fewer tasks per unit of time, and rated their workload as higher ( Fig. 18.3 ). In addition, novices appeared less efficient in their allocation of effort to different tasks. There were, however, many common findings among groups. With few exceptions, task distribution was similar between the novices and experts, although after intubation, experts spent significantly more time observing the surgical field ( Fig. 18.4 ). In both groups, there was a large effect of the stage of the anesthetic on task distribution; during the preintubation period, a more limited set of tasks was performed, and task durations were shorter than during the postintubation phase. Hardly any record keeping was done by novice or expert practitioners during the preintubation period, but record keeping consumed 15% of their postintubation time.
A more detailed analysis of tasks associated with anesthetic care was performed by Davis et al., who separately analyzed drug preparation and patient preparation tasks using hierarchical task analysis, link analysis, and anthropometry. They recommended design changes in the layout of anesthetic rooms (not normally found in U.S. hospitals) to better optimize workflow.
Task analysis has also been used as a methodology to investigate delays and distractions. Trained observers at a teaching hospital recorded tasks during 1559 cases prior to surgical incision to reveal factors that influence the time taken for anesthesia and surgical preparation after entering the operating room. Delays of 5 minutes or more, during which there was no progression of patient care, occurred in 25% of the cases. Most of the delays (67%) were attributed to the surgical team, and 22% were attributed to the anesthesia providers. The overall time taken by anesthesia providers to render the patient ready for surgical preparation was positively correlated with ASA physical status, resident training level, invasive monitoring, case duration, and case sequence in the room.
In another study, trained observers at a different academic medical center recorded activities of the primary anesthesia resident or nurse anesthetist during 319 general anesthesia cases. The anesthesia provider engaged in a non-case-related distracting activity (such as accessing the web or email, or conversation) in more than half (54%) of the cases. In cases with distractions, dwell time on the distracting activity (time before switching to another task) averaged 2.3 seconds, occurred almost exclusively during the maintenance phase of anesthesia, and occupied 3% of the maintenance time. More time was spent on major patient care tasks (manual tasks 34%, observing tasks 21%, conversing tasks 15%, record keeping 12%, and other tasks 10%) than on distracting tasks.
Workload assessments are important both for evaluating the cognitive requirements of new workplace designs and equipment and for predicting the worker's cognitive capacity for additional tasks. Workload can have important effects on clinical performance. For example, recovery from critical events may be impaired during high-workload situations. Workload is multidimensional and complex; multiple cognitive, psychological, and physical factors contribute to overall workload, which has been divided into categories such as perceptual, communicative, mediational, and motor load. Specific workload measurement techniques may be more sensitive and/or specific for different types of workload. From a practical standpoint, workload measures can be divided into psychological, procedural (i.e., task-related), and physiologic metrics.
Psychological metrics include psychologic tests and survey instruments, either retrospective or prospective. A common example is subjective workload assessment in which either an observer or the subjects themselves rate their workload, or some component of it, on a predefined scale. For example, Weinger and colleagues assessed subjective workload by having an observer and the subject rate the subject’s workload every 10 to 15 minutes during general anesthesia cases using an integrated workload scale ranging from 6 (no work) to 20 (extremely hard work). They found a strong correlation between the ratings of the subject and the observer, and subjective workload was significantly higher prior to intubation than during the remainder of the case ( Fig. 18.5 ).
Procedural workload assessment techniques are generally based on alterations in primary or secondary task performance. For example, Gaba and Lee used the ability of anesthesia residents to perform an extra task (paced arithmetic problems) during administration of anesthesia as a measure of the workload of the primary task (administering anesthesia). They found that performance of the secondary task was compromised in 40% of the samples; that is, the problem was skipped, or there was a greater than 30-second excess response time. Workload was highest during the induction and emergence phases of anesthesia. Higher workload occurred during performance of manual tasks, conversations with OR personnel, and interactions with the attending anesthesiologist.
Weinger and colleagues found a correlation between subjective workload and objective workload measured with a different secondary task probe, namely time to respond to the illumination of a light in the anesthesia monitoring array. Not only was the response time slower during induction than during maintenance, but less experienced anesthesia residents had slower response times compared with more experienced residents, especially during induction. This suggests that less experienced clinicians may have less spare capacity to respond to new task demands, particularly during high-workload conditions. Findings to date suggest that during the course of a typical OR procedure, the anesthesiologist’s workload is heavy 20% to 30% of the time and very low 30% to 40% of the time, and the anesthesiologist is physically active but able to respond to additional tasks the remainder of the time. Anesthesia providers are more likely to engage in self-initiated distracting tasks during periods of low workload and in cases with lower overall workload (as rated by themselves or an observer).
When workload increases, the sympathetic nervous system is activated, leading to a variety of physiologic changes, many of which can be measured. For example, increased workload is associated with increases in heart rate or respiratory rate, decreases in heart rate variability or galvanic skin response, and changes in pupil size or vocal patterns. In two older reports, Toung and colleagues reported that the heart rate of anesthesia providers increased significantly while they were administering anesthesia, rising to between 39% and 65% above baseline values at the time of patient intubation, although more experienced individuals manifested less of a heart rate increase. These results have been corroborated by Weinger and colleagues. Experienced anesthesia providers showed significant increases in heart rate, above baseline values, during the induction and emergence phases of general anesthesia in healthy outpatients. In addition, their heart rate variability increased throughout the procedure; this is consistent with diminished stress levels as these experienced providers became more comfortable during the course of anesthesia administration.
Quantitation of the pace and difficulty of the tasks performed in a job may be an alternative type of procedural workload measure. Weinger and colleagues used data from their time-and-motion study to generate what they called “task density,” a continuous measure of the number of tasks performed per unit of time. Although task density correlated well with subjective workload in this study, its value seemed limited by the fact that the demands imposed by different tasks were all weighted equally. Workload density has been proposed as a real-time measure that incorporates both task density and a measure of the subjective workload associated with individual clinical tasks ( Fig. 18.6 ). Workload values for common anesthesia tasks were estimated from the results of a questionnaire on which anesthesia providers rated the difficulty of specific tasks (e.g., “observe monitors” or “laryngoscopy”) in three different dimensions: (1) mental workload , (2) physical workload , and (3) psychological stress. Factor analysis was used to generate a single index of the perceived workload for each task (i.e., workload factor scores; Table 18.1 ). Workload density was calculated by multiplying the amount of time spent on each task by that task’s workload factor score. Workload density correlates with heart rate variability, response latency, and subjective workload.
Task | Value |
---|---|
Procedural | |
Laryngoscopy | 1.519 |
Intubation | 1.463 |
Extubation | 1.426 |
Controlled ventilation by mask | 1.399 |
Teaching | 1.333 |
Airway secure/manipulation | 1.130 |
Position patient | 1.130 |
IV catheter placement | 0.940 |
Spontaneous mask ventilation | 0.935 |
Prep for next case | 0.909 |
Adjust TEE | 0.841 |
Other tasks | 0.700 |
Recording (manual) | 0.596 |
Medication preparation | 0.519 |
Tidying up | 0.475 |
Adjust monitors | 0.441 |
IV medications given | 0.426 |
Anesthesia machine adjustment | 0.404 |
Suctioning | 0.352 |
Adjust IVs | 0.222 |
Conversational | |
Attending conversation | 0.931 |
Surgeon conversation | 0.907 |
Patient conversation | 0.685 |
Converse with others | 0.308 |
Nurse conversation | 0.259 |
Observational | |
Observe TEE | 0.672 |
Observe monitors | 0.593 |
Observe patient | 0.574 |
Observe airway | 0.500 |
Observe anesthesia machine | 0.482 |
Observe surgical field | 0.352 |
Observe IVs/fluids | 0.154 |
Other observation | 0.154 |
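To make the calculation described above concrete, the following minimal sketch computes a workload density for a hypothetical one-minute observation window, using workload factor scores taken from Table 18.1. The task times, the window length, and the per-unit-time normalization are assumptions for illustration, not values from the cited studies.

```python
# Hypothetical one-minute observation window: seconds spent on each task,
# paired with workload factor scores from Table 18.1.
observed = [
    ("Laryngoscopy", 15, 1.519),
    ("Observe monitors", 20, 0.593),
    ("Recording (manual)", 10, 0.596),
    ("Attending conversation", 15, 0.931),
]

window_seconds = sum(seconds for _, seconds, _ in observed)  # 60 s

# Time spent on each task multiplied by that task's workload factor score,
# expressed here per second of observation (the normalization is an assumption).
workload_density = sum(seconds * score for _, seconds, score in observed) / window_seconds

print(f"Workload density for this window: {workload_density:.2f}")  # ≈ 0.91
```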
Vigilance has long been considered important to anesthesiologists, as reflected in the word's inclusion on the official seal of the American Society of Anesthesiologists (ASA). Anesthesiologists understand the need to pay attention to details and subtle signs that could easily be overlooked. Vigilance is discussed in some detail in Chapter 17, although several studies pertinent to the above discussion are presented here. Kay and Neal performed one of the earliest studies of anesthesia vigilance, but their experiment had a number of methodological flaws. Cooper and Cullen subsequently described a better method for investigating auditory vigilance. They used a computer-controlled device to occlude the stethoscope tubing silently at random intervals during routine general anesthesia cases. Study participants were instructed to press a button to restore function whenever they perceived the absence of stethoscope sounds. The elapsed time between the occlusion of the tubing and the press of the button was automatically recorded. Researchers studied 320 stethoscope occlusions in 32 intubated patients; the interval from occlusion to detection ranged from 2 to 457 seconds with a mean of 34 seconds ( Fig. 18.7 ). They concluded that auditory vigilance during general anesthesia was typically high but not infallible. Manual tasks and conversations interfered with auditory vigilance because the subjects were involved in one of these activities in all instances of response times greater than 5 minutes.
In another study, Loeb evaluated visual vigilance in eight anesthesia residents by displaying numbers at random intervals on an OR monitor during operative procedures. The residents were required to detect an “abnormal” value and asked to respond by pressing a button on the anesthesia machine. During 60 minor operative procedures, the average response time was 61 ± 61 seconds (mean ± standard deviation), and 56% of the detections were made within 60 seconds. Compared with Cooper’s study, it appears that response times in the OR are longer for visual than for auditory signals (see Fig. 18.7 ).
Loeb conducted a second vigilance study to investigate why his subjects took longer to detect changes in monitored data during the induction phase of anesthesia than during the maintenance phase. Residents performed the vigilance task described above, and task analysis data were recorded concurrently by a trained observer. Ten residents were studied during 73 surgical procedures, and performance on the vigilance task correlated with monitor-watching activities. Residents spent less total time watching monitors during induction than during maintenance, and the average duration of monitor observations was shorter. These results, combined with the findings of the above workload studies, suggest that anesthesiologists watch the monitors less during high-workload periods, such as during induction, so they may be less aware of electronically monitored data during that time.
Subsequent studies have used video technologies to assess the visual attention of anesthesia providers. In one study, providers were covertly videotaped by a camera mounted above the anesthesia monitor and facing them. A total of 600 minutes of videotape, recorded during the maintenance period of 20 cases (10 with a solo provider and 10 with dual providers in teaching cases), were analyzed. Overall, subjects spent little time observing the monitoring display (only 32 seconds per 10 minutes, or 5% of their time). Monitor observation occurred in frequent, brief glances of 1.5 to 2 seconds' duration, and this behavior did not differ between trainees, supervisors, and solo providers. These results are substantially different from those reported above; providers may watch monitors more when they know they are being observed, or the difference may reflect a sampling artifact. The covert recordings nevertheless confirm the observer-based finding that anesthesia providers take short glances at their monitors, which should be taken into account when designing intraoperative anesthesia displays.
Another study recorded video from a head-mounted eye-tracking system while 15 anesthesiologists induced anesthesia in a high-fidelity simulator, once during a routine induction and once during an eventful induction in which anaphylaxis was triggered. With the head-mounted eye-tracking system, the general region of interest where the clinician was looking could be identified (e.g., patient monitor, patient's thorax, respiratory mask or patient's head, anesthesia chart), but it was not precise enough to determine which monitored variable was being observed. The results showed that the amount of time spent visually attending to the different regions of interest, as well as the visual scan sequences, differed (1) before versus after injection of induction medications, (2) during routine versus eventful inductions, and (3) between experienced and less experienced clinicians. Overall, anesthesiologists looked at the patient monitor 8% of the time prior to injecting induction agents, 20% of the time after injecting induction agents during routine inductions, and 30% of the time after injecting induction agents during eventful inductions. Other research has shown that anesthesiologists perform monitoring tasks as often during simulated cases as they do during real cases.
The topic of situation awareness in anesthesia, of which anesthesia vigilance is one component, has been recently reviewed.
A recurrent application of task analysis, workload, and attention studies has been to investigate the effect of automation and new technologies on anesthesiologist performance. The impetus for these studies may derive from two opposing schools of thought: one espousing technology and the other decrying it. From one side come claims that technology decreases workload, enhances task efficiency, and increases idle time, thereby allowing the anesthesia provider more opportunity to observe and process information from the patient, equipment, and surgical field. The other side claims that technology removes the human from the information loop, thereby distancing the anesthesia provider from the patient and decreasing situation awareness. A more balanced view may be that technology can improve or degrade human performance, depending on how it is implemented. Automation will only prove beneficial if the human was previously overloaded, the automated system is a team player (i.e., responsive, directable, and nonintrusive), and the interface between them supports the human's situation awareness. Systems that do not fulfill these criteria may create new problems and degrade overall system performance.
One study suggests a beneficial effect of automation on the anesthesia provider's task distribution. McDonald and colleagues compared the results of two time-and-motion studies conducted 5 years apart at the Ohio State University Hospitals. In the newer series, automated blood pressure devices, ventilators, and disconnect alarms were used. With these newer technologies, the time that anesthesiologists spent directly observing or monitoring the patient increased from less than 25% to nearly 60% of their total task time. At the same institution, Allard and colleagues examined the effect of an electronic anesthesia record-keeper (EARK) on the time spent keeping records and the anesthesia resident's situation awareness. They videotaped 37 general anesthesia procedures in which record-keeping was done manually and 29 cases that used a commercial EARK. The intraoperative time of the subjects (33 anesthesia residents and 8 CRNAs) was categorized into 15 predefined activities. Situation awareness was assessed by having the subject turn away from the monitors and recall the value of eight patient variables. No difference was reported between the two groups in the time spent keeping records or the ability to recall clinical data accurately.
Loeb also investigated whether intraoperative vigilance differed when residents kept a manual record versus when a human assistant performed the charting. Nine residents were studied during 36 procedures in a within-subjects balanced design. Vigilance was assessed by measuring the subject's response time and detection rate for an experimental signal displayed on the physiologic monitor. No overall difference was reported in vigilance between the two record-keeping conditions, but a tendency was observed toward reduced vigilance (i.e., longer response times or lower detection rates) during high-workload periods in the manual record-keeping group.
Tse et al. presented a study of the effects of automated record-keeping (ARK) on vigilance, situational awareness, and mental workload using various tools. While they found no differences in vigilance and mental workload, there was a measurable decrement in situational awareness (perception) associated with the use of ARK.
Weinger and colleagues studied the effects of modern anesthesia technology during the prebypass period of 20 coronary artery bypass graft procedures. In 10 cases, record-keeping was done manually; in the other 10 cases, a commercially available EARK was employed. Transesophageal echocardiography (TEE) was used in all cases. The investigators collected task analysis data (32 task categories, recorded by an observer in real time), subjective workload ratings (10- to 15-minute random intervals), and response latencies to the illumination of a light in the monitoring array. The EARK group spent less time on record-keeping between intubation and initiation of bypass and more of their time observing the monitors than did the group keeping manual records. However, no difference was found between the record-keeping groups in subjective workload or rapidity of detecting illumination of the light. Subjects spent nearly 8% of their time observing or adjusting the TEE, and it took an average of 7.4 minutes to insert the TEE probe and perform a preliminary assessment. Residents were slower to react to the illuminated light while observing or adjusting the TEE than while performing record-keeping or other monitoring tasks. However, the results from Weinger et al. do demonstrate a high workload imposed by current TEE technology and suggest the potential for impaired vigilance when TEE is used intraoperatively by a solo practitioner. The Tse et al. study also highlights the potential risk of reduced situational awareness when charting is automated.
In the anesthesia workplace, automated record keepers and automated drug dispensing carts are now commonplace. Automated devices that provide closed-loop control, mechanical robotic assistance, and artificial intelligence decision support will soon be introduced. It is important to remember that automated systems can have unintended negative consequences.
A careful analysis of adverse events and “near misses” can lead to productive changes in system structure, equipment design, training procedures, and other interventions to improve safety. Critical incident analysis is an established method for investigating human error that was first used in 1954 to study near misses in aviation. The technique involves structured interviews of people who have either observed or been involved in unsafe acts. Analysis of these interviews often provides evidence of behavior patterns and other recurrent factors that may contribute to accidents.
Cooper and colleagues were the first to apply the critical incident technique to anesthesiology. From 1975 through 1984, they collected descriptions of 1089 critical incidents from 139 anesthesiologists, residents, and nurse anesthetists. The descriptions were obtained through a combination of retrospective interviews and contemporaneous reports. Critical incidents were defined as occurrences of “human error or equipment failure that could have led (if not discovered or corrected in time) or did lead to an undesirable outcome, ranging from increased length of hospital stay to death.” Their data indicated that human error was responsible for 65% to 70% of the incidents. The 67 incidents that resulted in substantive negative outcomes included 28 technical human errors, 23 judgmental errors, and 13 vigilance errors. A number of recurrent technical human errors were related to the design or organization of equipment. Examples of these included syringe and drug ampoule swaps, gas flow control technical errors, vaporizers unintentionally turned off, drug overdoses (technical), misuses of blood pressure monitors, breathing circuit control technical errors, and wrong IV lines used. On the basis of their findings, the authors recommended a standardized system of syringe labels and redesign of the breathing circuit to prevent disconnections.
A detailed description of one of their critical incidents, gas flow control technical error, illustrates the importance of evaluating equipment designs prior to implementation. At one of the hospitals where the studies were conducted, all the anesthesia machines had been modified. On each machine, the oxygen flow control knob had been replaced with a large, square knob in an attempt to distinguish it from the nitrous oxide knob. However, rather than preventing gas flow control errors, the knob was found to be a contributing factor: half the accidental decreases in oxygen flow occurred when the knob was bumped by an object placed on the desktop surface of the anesthesia machine. This example highlights the importance of field-testing new device designs by intended users.
Subsequent critical incident studies have been performed using contemporaneous reporting strategies. In each, human error has been a predominant cause of mishaps, and the patterns of incident types and associated factors have been similar. Kumar and colleagues demonstrated that critical incidents decreased when an anesthesia equipment checklist was used, old anesthesia machines were replaced, and incidents were discussed at department conferences. They recommended critical incident surveillance as a method of identifying specific problems and ensuring quality control.
Since 1989, the Australian Patient Safety Foundation has supported an ongoing multi-institutional collection of anesthesia critical incidents. Anesthesiologists from 90 participating hospitals and practices in Australia and New Zealand anonymously reported unexpected incidents using a structured format. Each report was entered into a computerized database after it had been reviewed and classified using standard keywords. In 1993, an exhaustive analysis of the first 2000 incidents was published. Human error was believed to be involved in 83% of the incidents; only 9% of the incidents involved equipment failure. Equipment design improvement was suggested as an appropriate corrective strategy in 17% of the reports. System failure was a contributory factor in 26% of the incidents and, based on the results, the authors recommended 111 system improvements to increase patient safety.
In September 2009, the formerly paper-based system evolved into webAIRS, a web-based anesthesia incident reporting system. In the first 4000 reported incidents, 26% were associated with patient harm and another 4% resulted in death. More than 50% of incidents were classified as preventable. The four most common incident categories were (1) respiratory or airway (27%), (2) medication (17%), (3) cardiovascular (16%), and (4) medical device or equipment (12%). Interestingly, there was a gradual decline in the percentage of medical device or equipment incidents, which accounted for 17% of the first thousand reports versus 9% of the last thousand.
Reporting bias is a recognized shortcoming of the critical incident methodology. Many incidents are never reported, and those that are may be incomplete or inaccurate for a number of reasons. In studies of adverse drug reporting, only a very small fraction of the total number of events are voluntarily reported. Both the number and accuracy of adverse event reports can be enhanced by scanning automatically collected data for predefined criteria, such as out-of-bounds physiologic parameters. A continuing problem, however, is the collection of adverse events that become apparent only in the postoperative period. Until comprehensive computerized medical records are widely available, painstaking follow-up will remain a cornerstone of accurate adverse event collection and analysis. The recent emergence of large multicenter databases, such as the Multicenter Perioperative Outcomes Group (MPOG), may alleviate this need.
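As a rough illustration of that screening strategy, the sketch below checks automatically recorded vital signs against predefined out-of-bounds criteria and flags records for manual review. The variable names, thresholds, and record format are hypothetical and would need to be defined by an institution's own criteria.

```python
# Screen automatically recorded vital signs against predefined limits;
# thresholds and record format are assumptions for illustration only.
BOUNDS = {
    "spo2": (90, 100),        # oxygen saturation, %
    "heart_rate": (40, 140),  # beats/min
    "map": (55, 140),         # mean arterial pressure, mm Hg
}

records = [
    {"time": "10:02", "spo2": 98, "heart_rate": 72, "map": 80},
    {"time": "10:04", "spo2": 86, "heart_rate": 118, "map": 62},
    {"time": "10:06", "spo2": 97, "heart_rate": 65, "map": 50},
]

def out_of_bounds(record):
    """Return the variables in one record that fall outside the predefined limits."""
    flags = []
    for name, (low, high) in BOUNDS.items():
        value = record.get(name)
        if value is not None and not (low <= value <= high):
            flags.append((name, value))
    return flags

for rec in records:
    for name, value in out_of_bounds(rec):
        print(f"{rec['time']}: {name} = {value} outside predefined limits; flag case for review")
```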
Gaba and DeAnda used a comprehensive anesthesia simulator to investigate factors of accident evolution and techniques used by clinicians to recognize and recover from critical events. The simulator recreated the OR environment with real monitors and equipment, and a patient mannequin was used. In an initial study of behavior of residents in response to planned critical incidents, these researchers noted problems and errors that arose in addition to the planned events. They documented 132 unplanned events during 19 simulated cases; 87 events were attributed to human error, and only 4 were equipment failures. However, many of the human failures involved errors in the use of equipment; for example, failure to switch the ventilator power back on after hand-ventilating the patient’s lungs and neglecting to increase the oxygen flow during preoxygenation. This study indicated that errors occur commonly, that many errors involve interactions with equipment, and that most errors are detected and corrected before they become hazardous to the patient. These findings also apply to experienced clinicians, who averaged five unplanned incidents during each simulated case. Again, many of the experienced practitioners’ errors involved interactions with equipment.
MacKenzie and colleagues took an alternative approach to the study of clinical decision-making in anesthesia. Similar to the intraoperative task analysis studies described above, MacKenzie's group assessed performance during actual trauma cases at the Maryland Shock Trauma Center and developed a sophisticated audio-video and physiologic data-capture methodology that allows offline analysis. They described four components of task complexity that appear to have a significant impact on teamwork during emergency resuscitation after trauma: (1) multiple concurrent tasks, (2) uncertainty, (3) changing plans, and (4) high workload. Their work suggests that video analysis methodology can be a powerful tool in the evaluation of factors leading to deficiencies in airway management.
The successful development of ergonomically sound equipment and systems requires that ergonomics principles and guidelines be adhered to throughout the entire design cycle, beginning in the predesign phase. A number of ergonomics handbooks and guidelines have been published for equipment designers in fields outside medicine. In the late 1980s, the Association for the Advancement of Medical Instrumentation (AAMI), the professional organization for American clinical/hospital engineers, began a national standards-making process to develop guidance for medical device manufacturers to improve the human factors of their products. The result, "Human Factors Engineering Guidelines and Preferred Practices for the Design of Medical Devices," was largely an adaptation of human factors design guidance from other industries (especially for the design of military products). In the early 1990s, the AAMI Human Factors Committee decided to revise this document substantially. The group first developed a process-oriented standard on a structured approach to user interface design for all medical devices. This national standard, ANSI/AAMI HE-74-2001, described design approaches relevant to all aspects of the design of devices, including labeling, documentation, and learning tools. More importantly, the standard and the committee's deliberations drove greater interest in human factors in the national and even international medical device industry and its regulators. For example, HE-74 was the foundation for the international collateral standard 60601-1-6 on medical device usability from the International Electrotechnical Commission (IEC), which applied only to medical devices requiring electricity to operate. This international standard was subsequently replaced by standard 62366, Medical Devices—Application of Usability Engineering to Medical Devices, a joint standard by the IEC and the International Organization for Standardization (ISO) that applies to all medical devices. The content of HE-74 is provided in Appendix G of standard 62366.
Although these standards specify the process of designing medical-device user interfaces, they do not provide any guidance on the design elements of a good medical-device user interface. Thus, the AAMI HF Committee spent 5 years creating a companion standard, HE-75 (Human Factors Engineering—Design of Medical Devices), which is intended to provide comprehensive human factors design principles for medical devices. In parallel with this effort, a number of the HF Committee members published a handbook intended to amplify HE-75 with greater topical detail, figures, and case studies. The Food and Drug Administration (FDA) is responsible for federal oversight of medical devices and has become increasingly interested in ensuring that medical device manufacturers use human factors design principles and adhere to standardized good manufacturing practices (GMP). The FDA recently published guidelines on this subject, which are also available online.
User requirements must be emphasized during the design of equipment and devices. The goal is to produce devices that are easily maintained, have an effective user interface, and are tailored to the user’s abilities. This is best accomplished during the early phases of system and equipment design, when the ergonomics and engineering specialists can work together with end users to produce a safe, reliable, and usable product. Norman eloquently presents this principle of user-centered system design in The Psychology of Everyday Things. This book is recommended for all engineers, programmers, and designers responsible for the development of new medical devices. Some of the key aspects of user-interface design that Norman emphasizes are to (1) make things visible, (2) provide good mapping, (3) create appropriate constraints, (4) simplify tasks, and (5) design for error.
A well-designed interface between human and machine conveys to the user the purpose, operational modes, and controlling actions for the device. If the design of the device or system is based on a good conceptual model, its purpose will be readily apparent to the user. Most devices have several operational modes, and the user must be able to determine rapidly and accurately whether the system is in the desired mode and when the mode changes. With most devices, a number of user actions are possible at any given time; with complex systems, the allowable commands often depend on the current operational mode. The user should be able to tell what actions are possible at any given instant and what the consequences of those actions will be. Feedback that is readily understandable and that matches the user's intentions must be provided after each user action.
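A minimal software sketch of these principles follows: the allowable commands depend on the current operational mode, the set of possible actions can always be queried, and every action, including a rejected one, produces explicit feedback. The mode and command names are illustrative and are not drawn from any actual device.

```python
# Mode-dependent command set with explicit feedback (illustrative only).
ALLOWED = {
    "standby": {"start_monitoring"},
    "monitoring": {"silence_alarm", "show_trend", "enter_standby"},
}

class Monitor:
    def __init__(self):
        self.mode = "standby"

    def available_commands(self):
        """Tell the user what actions are possible right now."""
        return sorted(ALLOWED[self.mode])

    def execute(self, command):
        """Carry out a command if the current mode allows it; always return feedback."""
        if command not in ALLOWED[self.mode]:
            return f"'{command}' is not available in {self.mode} mode"
        if command == "start_monitoring":
            self.mode = "monitoring"
        elif command == "enter_standby":
            self.mode = "standby"
        return f"{command} done; current mode: {self.mode}"

m = Monitor()
print(m.available_commands())        # ['start_monitoring']
print(m.execute("show_trend"))       # rejected with an explanation, not silently ignored
print(m.execute("start_monitoring")) # feedback includes the resulting mode
```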
The user’s understanding of the function and operation of a device is paramount to the effectiveness of the system. The function and operation of many common devices is learned through cultural experience. People also expect certain objects to always function in a particular manner: knobs are for turning, buttons are for pushing, and so on. With other devices, the function can and often should be implied by the device itself. That is, the purpose and operation of a particular control or display should by design be as intuitive as possible for the user; for example, the sturdy horizontal handle on the side of the anesthesia machine is for pulling the device from one location to another. Such intuitive operation may be difficult to attain with complex, microprocessor-controlled multifunction devices. However, when the design requires the user to memorize specialized knowledge to operate the system (e.g., “to see the systolic blood pressure trend plot, I must push a particular sequence of soft buttons in a specific order”), the need for training increases and the chance of system-induced user error increases, especially under stressful, unusual, or high-workload conditions.
Mapping is the relationship between an action and a response and may be natural or artificial. Natural mappings are intuitive; artificial mappings must be learned. Artificial mappings that have been learned so well that the relationship between action and effect is recognized at a subconscious or automatic level are called conventional mappings. On an anesthesia machine, squeezing the bag to inflate the lungs is a natural mapping. Turning the oxygen flow control knob counterclockwise to increase gas flow is an artificial mapping. However, because this design follows the conventional mapping of valves, users typically do not have difficulty adjusting the flow of oxygen on the anesthesia machine. Unfortunately, on many medical machines and systems, the methods for activating alternate modes of action, adjusting alarm limits, or manipulating data rely on artificial, unique, and/or nonstandard mappings.
Any device has three different stages of mapping: (1) between intentions and the required action; (2) between actions and the resulting effects; and (3) between the information provided about the system and the actual state of the system. Inappropriate mapping at any stage leads to delayed learning and poor user performance. If natural or well-known artificial mappings are not used, the designer should seek preexisting standards or perform tests to ascertain optimal mappings, which should be consistent within a single device or system.