Informatics in Perioperative Medicine


Key Points

  • Individual computers are connected via networks to share information across many users.

  • Information security is about ensuring that the correct information is available only to the correct users at the correct time.

  • Healthcare information storage and exchange is regulated to protect patient privacy.

  • Information regarding the provision of anesthesia care is highly structured and organized compared with that of most other healthcare specialties.

  • Anesthesia care documentation systems have evolved in complexity and are now widely adopted in the perioperative care of patients in the United States.

  • Benefits of electronic documentation of anesthesia care typically emerge from integration with monitoring, scheduling, billing, and enterprise electronic health record (EHR) systems.

  • Active and passive decision-support tools may suggest typical courses of action or call attention to patterns that are not apparent to the clinician.

  • Secondary use of EHR data is valuable in understanding the impact of clinical decisions on patient outcomes and the measurement of quality of care.

  • Electronic devices may act as distractions within the operating room (OR) care environment.

Acknowledgment

The editors and publisher would like to thank Dr. C. William Hanson for contributing a chapter on this topic in the prior edition of this work.

Introduction

Computers have become ubiquitous in modern life. Their use has penetrated every medical field, and the practice of perioperative care is no different. Computers have given rise to the academic discipline of informatics, the study of information creation, storage, handling, manipulation, and presentation. Within health care, this is referred to as medical, biomedical, or clinical informatics.

Computer Systems

At their most basic, computer systems are complex electronic circuits that perform mathematical operations (add, subtract, multiply, divide, and compare) on information available to them. Even the most complicated computer systems consist of these simple operations, which collectively generate the activity specified by the user. Every operation performed within the computer begins with the retrieval of information from memory, proceeds through a mathematical operation within the processor, and ends with the storage of the result back to memory. This cycle of retrieval, processing, and storage repeats millions of times per second.
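
As a purely conceptual sketch (the memory locations and two-instruction "program" below are hypothetical, and real processors are vastly more complex), this retrieve-process-store cycle can be illustrated in a few lines of Python:

```python
# Toy model of the retrieve-process-store cycle. The "memory" and the
# two-instruction "program" are hypothetical illustrations only.
memory = {"a": 7, "b": 5}

# Each instruction names an operation, two source locations, and a destination.
program = [
    ("add", "a", "b", "sum"),           # sum = a + b
    ("compare", "a", "b", "a_larger"),  # a_larger = (a > b)
]

for op, src1, src2, dest in program:
    x, y = memory[src1], memory[src2]  # 1. retrieve operands from memory
    if op == "add":                    # 2. perform the operation ("processor")
        value = x + y
    else:  # "compare"
        value = x > y
    memory[dest] = value               # 3. store the result back to memory

print(memory)  # {'a': 7, 'b': 5, 'sum': 12, 'a_larger': True}
```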

Software applications supply the instructions that a computer executes to process information. The operating system is the fundamental software that controls communication among the components of the computer. The operating system controls the order in which the processor completes tasks, allocates memory among different applications, provides a structure for organizing files in long-term storage, controls access to files, determines which applications may run, and manages the interaction between the user and the computer. Modern operating systems provide graphical interfaces that present the organization of information and the actions a user may direct the computer to perform.

A software application is a set of instructions for a computer designed to perform a specific set of tasks. Electronic health record (EHR) software is an example of a software application. Software may (via the operating system) interact with external hardware devices, data held in long-term storage, and the user by way of input devices and display devices.

Because of the proliferation of mobile devices, traditional laptop or desktop computer systems have been supplanted in many environments by tablet or smartphone computers. These devices are structurally similar to traditional computing devices; however, their operating systems and software applications feature user interfaces that have been re-engineered for touch screen or voice control operation. These devices trade off computational power against portability (size and weight) and duration of operation (battery capacity).

Computer Networks

Networks are the means for the exchange of information among computers, enabling the sharing of resources. Networks may be established using wireless (e.g., microwave radio spectrum) or wired connections (Fig. 4.1). Dedicated hardware controls the sending and receiving of information across these links, with specialized devices required to ensure that information is routed correctly to the intended computers on the network. Software ensures that communication is performed according to predefined standards. To be reachable on the network, each computer must be assigned a unique address so that information can be identified as destined for that computer. Obtaining and maintaining network addresses is handled by the local operating system and network hardware; this allows software applications to specify the information to be sent while the operating system and network hardware manage how it is exchanged between computers.

Fig. 4.1, Relationship between a local intranet (within an institution) and the wider Internet.
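
As a brief illustration of network addressing, the following Python sketch asks the operating system to resolve a human-readable name into the unique numeric network address described above; the hostname shown is a hypothetical placeholder:

```python
import socket

# Each computer reachable on a network has a unique numeric (IP) address.
# The operating system resolves a human-readable name to that address.
hostname = "ehr.example-hospital.org"  # hypothetical hostname

try:
    address = socket.gethostbyname(hostname)
    print(f"{hostname} is at network address {address}")
except socket.gaierror:
    print(f"{hostname} could not be resolved to an address")
```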

Wired networks require the computer system and the receiving hardware to be physically connected by electrical or optical cable. This limits flexibility in the connection points, which must be placed in preplanned areas, with any subsequent adjustment requiring rerouting of cables. However, information traveling on the network cannot be intercepted or accessed without physical access to the network cables or connection points.

Wireless network systems offer the advantages of convenience and the ability to move around a work environment without maintaining a physical connection among the computer systems. However, this usually occurs at the expense of speed of information exchange: information exchange via wireless links is an order of magnitude slower than the fastest wired connections. Because wireless systems require a strong radio link between the computer and the network equipment, they are subject to poor reception (possibly because of physical barriers) and interference, which manifest as inaccessible or degraded network performance. It is difficult to control the precise limits of where a wireless network is available (i.e., only within a building and not immediately outside of it); therefore, processes are required to limit wireless network access to authorized users and to encrypt data transmitted across wireless links.

In practice, healthcare facilities use a blend of both wired and wireless networks to ensure that the advantages of each system are available to support the users.

In most settings, the network is organized as a “client-server” model. The computer that hosts the shared resources is referred to as the “server” and the computer accessing the resources is the “client.” The server is responsible for ensuring the client is an authorized user of the shared resource (access control) and ensuring the resource remains available to multiple users, potentially by preventing one client from monopolizing the use of the resource.
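
A minimal sketch of this model is shown below, using a hypothetical local address and port; the "server" offers a trivial shared resource (an uppercase-echo service), and the "client" connects to request it:

```python
import socket
import threading

# Hypothetical client-server sketch: the server listens at a known address
# and shares a trivial resource (uppercasing text) with connecting clients.
server_socket = socket.socket()
server_socket.bind(("127.0.0.1", 5050))  # hypothetical address and port
server_socket.listen()

def serve_one_client():
    conn, _ = server_socket.accept()   # real servers verify authorization here
    with conn:
        request = conn.recv(1024)      # receive the client's request
        conn.sendall(request.upper())  # return the shared "resource"

threading.Thread(target=serve_one_client, daemon=True).start()

# The client connects to the server's known address and makes a request.
with socket.socket() as client_socket:
    client_socket.connect(("127.0.0.1", 5050))
    client_socket.sendall(b"vital signs")
    print(client_socket.recv(1024))    # b'VITAL SIGNS'

server_socket.close()
```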

The client-server concept stands in contrast to peer-to-peer architecture, whereby resources are distributed across systems, with each computer on the network contributing its resources (e.g., files or specialized hardware). All computers are both clients and servers in this arrangement. There is limited ability to control access in a planned and coordinated manner.

Use of a client-server infrastructure may allow a significant amount of the computational work to be outsourced to the central server. When the client has very limited computational resources, it is referred to as a "thin client." Computationally intensive tasks are performed by the server, and the client receives the results of the computation. Fundamentally, the thin client is viewing and interacting with a software application that is running on the server; the client is little more than a means of sending user input to the server and dynamically displaying the application's output. For this arrangement to work, there must be a limited, predictable set of software applications that the client accesses on the server, along with a reliable network connection; without the network connection, the thin client has no functionality. This model may be easier to maintain because changes are made once, centrally, and then become available to every connecting client.

An alternative model is the “thick client,” where the client is capable of significant computational activities, retains a fully functional state when not connected to the network, accesses only the information required across the network, and processes it independently. However, these clients require individual maintenance.

A hybrid solution is "application virtualization," whereby a single software application is hosted centrally, using central computational resources, and client systems access this application regardless of their own configuration. This blends the advantages of a thin client—control of the application's availability, ease of maintenance, and assured compatibility (because the client needs no computational resources beyond those required to maintain the connection to the server)—with users retaining a fully functional computer or device for the remainder of their tasks. Additionally, this hybrid enforces a separation between the information stored on the server and any applications running on the client; information can thus be secured within the server housed inside the institutional network.

The Internet

The Internet is a global network of networks. Best known by two of the ways in which it can be used—websites and email—the Internet is at its simplest a method for transferring electronic information across the world. Internet service providers (ISPs) provide access to the optical and electrical cables that transfer information across the world. Because these cables are all interconnected, multiple paths are available to transfer data at any one time. Routers control the flow of Internet traffic and ensure that it takes the most direct and fastest routes across the multiple paths available to it. Although the delay that a user may experience in accessing information varies widely and depends on many factors, the flow of information around the world can be measured on the order of hundreds of milliseconds or less.

Use of the Internet has led to the development of a series of technologies in which computing resources are offered to multiple clients using an Internet connection as the means of distribution and interaction (see Fig. 4.1). These "cloud" platforms allow on-demand and scalable use of computing resources. Computing resources can be bought and sold based on the amount of time they are used or the amount of information stored, and additional capacity can be added flexibly. These resources are accessible from anywhere with an Internet connection. Furthermore, cloud platforms give organizations the ability to transfer to another party the management of the specialized computer hardware needed to provide these services.

The integration of mobile phone data networks and the proliferation of increasingly powerful handheld devices (such as smartphones and tablets) have further increased the number of potential clients. For healthcare organizations, there is significant pressure from users to be able to access healthcare information systems remotely or from these mobile devices.

The most ubiquitous use of the Internet is in the delivery of "web pages." Information is stored on a "web server," and upon request from an application running on a remote client computer (a web browser), the information and display formatting instructions (i.e., size, shape, and position of text or graphics) are sent to the client. The web browser then interprets these instructions and displays the information accordingly. This process is highly dependent on well-defined and accepted standards for information exchange between client and server and for rendering by the client.
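
A minimal sketch of this request-response exchange is shown below, using Python's standard library against the generic example.com test page; a web browser performs essentially the same request before rendering the result:

```python
from urllib.request import urlopen

# Request a page from a web server, exactly as a browser would; the server
# returns the content together with its formatting instructions (HTML).
with urlopen("http://example.com") as response:
    print(response.status)                   # 200 means the request succeeded
    print(response.headers["Content-Type"])  # e.g., "text/html; charset=UTF-8"
    html = response.read().decode("utf-8")

# A browser would now render these instructions; here we simply inspect them.
print(html[:60])  # begins with markup such as "<!doctype html>..."
```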

Web pages have become increasingly sophisticated, incorporating text, video, audio, complex animations, stylesheets, and hypertext links. These technologies have evolved to support interactive processes that can dispense information specific to a single user (e.g., a record of that user's bank transactions) in a manner that is generalizable to many different users (so that all customers can access their own transactions this way). When these instructions are assembled to implement specific business processes, they function as web-based software applications, referred to as "web applications" or "web apps." Interaction with web pages may set complex business processes in motion in the physical world. For example, the ability to buy a book over the Internet starts with a web page displaying the information and ends with someone delivering the book to the door, with many physical steps in between. Healthcare organizations have embraced these technologies to support the delivery and administration of patient care, including scheduling systems, laboratory result reporting, patient communications, and equipment management systems, all delivered in this manner.

Of note, information traveling across the Internet is not, without additional measures, private. A salient metaphor is the difference between information conveyed in an envelope (where the contents are not visible) and information conveyed on a postcard (where the message is clear to anyone who handles it).

Information Security

Although computing technology has significantly influenced the delivery of medical care, it has also brought a series of challenges that must be addressed. A major consideration is information security. Core to these considerations is ensuring that the correct information is available to the correct users at the correct time.

Threats to information security may come from within or outside an organization. Within an organization, an employee may access information they are not authorized to access, or may transfer and store information in an insecure manner. Employees may also introduce security threats by using applications that transfer information outside the organization or by modifying an existing network with a personal device. External actors may seek to improperly access information ("hacking") by obtaining passwords or identities from legitimate users (via "phishing" attacks) or by introducing applications that degrade computer function in order to extort payment ("ransomware" attacks).

The paradigm used for controlling access to computing resources is users and accounts. Each person who uses the computer is considered a user. Users can be identified and mapped to real-world persons. Users may belong to groups that share common attributes. It should be known in advance which resources should be available to which users or groups of users. A group of users (e.g., anesthesia providers) may have access to particular resources (e.g., a document of anesthesia policies), but each user may also have access based on individual parameters (e.g., an individual anesthesiologist may have sole access to his or her own private files). Granting a defined set of resource privileges to a group of users with a similar functional role is known as "role-based security"; changes in privileges affect all users in that functional group.
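
A minimal sketch of role-based security is shown below; the roles, users, and resources are hypothetical illustrations:

```python
# Privileges are attached to roles, and users gain privileges only through
# role membership. All names below are hypothetical.
ROLE_PRIVILEGES = {
    "anesthesia_provider": {"anesthesia_policies", "anesthesia_record"},
    "pharmacist": {"medication_orders"},
}

USER_ROLES = {
    "dr_smith": {"anesthesia_provider"},
    "rx_jones": {"pharmacist"},
}

def may_access(user: str, resource: str) -> bool:
    """A user may access a resource if any of their roles grants it."""
    return any(resource in ROLE_PRIVILEGES.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(may_access("dr_smith", "anesthesia_policies"))  # True
print(may_access("rx_jones", "anesthesia_record"))    # False

# Changing a role's privileges affects every user holding that role:
ROLE_PRIVILEGES["pharmacist"].add("anesthesia_record")
print(may_access("rx_jones", "anesthesia_record"))    # now True
```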

Users must be able to positively identify themselves; commonly this involves the combination of a username and password, with the password known only to the user and the computer system. However, other methods of authentication, such as biometric information (fingerprint, iris scan, or face scan) or physical access tokens (e.g., identification badges), are now commonplace. Password policies that enforce a mandatory level of complexity (minimum length; a mix of letters, numbers, and special characters), set expiration dates, and prevent password reuse are designed to make passwords harder for an unknown party to guess and to mitigate the risk of passwords being accessed or used externally. However, requirements for increasing complexity or frequent changes may impose burdens that users consider unacceptable and may not decrease risk.
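
As an illustration, a complexity policy of the kind described above might be checked as follows; the specific rules (12-character minimum and required character classes) are hypothetical examples:

```python
import re

# Hypothetical password-complexity policy: minimum length plus at least one
# lowercase letter, uppercase letter, digit, and special character.
def meets_policy(password: str, minimum_length: int = 12) -> bool:
    return (len(password) >= minimum_length
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_policy("propofol"))           # False: short, single character class
print(meets_policy("Sevoflurane-2024!"))  # True: long, mixed character classes
```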

Organizations may also choose to adopt "two-factor authentication" methods, which can be summarized as requiring "something you know and something you have" to gain access to the computer system. The password fulfills the first part of this concept, as it is meant to be known only to the user. Devices such as physical token code generators (which provide a predictable response to be entered alongside the password) or an interactive system (authentication via a smartphone application or phone call) may satisfy the second. Thus, to impersonate the user, someone must have both the password (which may have been taken without the user's knowledge) and a physical device (whose absence the user is more likely to detect). This makes unauthorized remote access less likely: an external actor on the other side of the world may be able to obtain or guess a password but is very unlikely to also obtain the token or smartphone required for access.
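
The code generators described above commonly implement the time-based one-time password (TOTP) algorithm of RFC 6238, sketched below using only the Python standard library; the shared secret is a hypothetical example:

```python
import base64, hashlib, hmac, struct, time

# TOTP (RFC 6238): a secret shared between the server and the user's device
# yields a short numeric code that changes every 30 seconds.
def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval                # current time step
    message = struct.pack(">Q", counter)                  # 8-byte counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = "JBSWY3DPEHPK3PXP"  # hypothetical base32-encoded secret
print(totp(shared_secret))          # e.g., "492039", valid for about 30 seconds
```

Because the server computes the same code from its own copy of the secret and the current time, a matching code demonstrates possession of the enrolled device without the secret itself ever crossing the network.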

Physical security is an integral part of information security. Ensuring that an unauthorized person has neither physical access to computer hardware nor access to the means of connecting to that hardware is an important consideration. This can be accomplished by physical measures (such as locked rooms and doors and devices that prevent movement of computer hardware) and by consideration of where computers containing controlled information are placed (to prevent an unauthorized person from having access to a computer in a public area).

However, as alluded to before, these restrictions are balanced against desires for increased usability and portability of computing devices from computer users and the need to make information available to the provider at the point of clinical interaction.

Therefore, it is necessary to ensure secure access to information across wireless links and across the Internet. One method for doing this is to ensure that the information transferred is not readily visible along its means of transmission. This is performed by a group of processes known as encryption. Encryption is the process of transforming a piece of information from its original and accessible state to one that is not accessible and lacks meaning without an additional piece of information (an encryption key).

The transformation to and from encrypted text is relatively easy to perform with the encryption key but infeasible without it. Encryption processes are based on mathematical problems, such as the multiplication of very large numbers, for which an enormous number of different combinations of factors could have produced the same result; with current technology, it would be computationally infeasible to try all possible solutions.
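
As a simple illustration, the sketch below uses a symmetric cipher (the Fernet construction from the third-party Python cryptography package), in which a single secret key both encrypts and decrypts; this differs from the large-number mathematics described above, but it demonstrates the same principle that the encrypted output is meaningless without the key:

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the encryption key; must be kept secret
cipher = Fernet(key)

# The message content is a hypothetical example.
token = cipher.encrypt(b"Patient 0000000: allergy to penicillin")
print(token)                  # unreadable ciphertext without the key
print(cipher.decrypt(token))  # the original message, recovered with the key
```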

External threats to an organization involve outside entities attempting to access services or applications that are meant for internal use only. Because healthcare organizations must be connected to the Internet to enable many information exchange functions, their data are potentially exposed to every Internet-connected device in the world. "Firewalls" (hardware or software tools) ensure that only legitimate transactions and interactions with the external world reach the internal hospital network by preventing unauthorized connections from outside the organization to internal computing systems. Firewalls can also limit the types of network traffic allowed to exit the internal network; for example, a firewall may restrict the network traffic typically used for file sharing.
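
Conceptually, a firewall applies rules of the following kind to each attempted connection; the specific ports and rules below are hypothetical examples:

```python
# Hypothetical firewall rules. Network services are identified by port
# numbers; only explicitly allowed inbound services are admitted.
ALLOWED_INBOUND_PORTS = {443}    # e.g., the organization's public web portal
BLOCKED_OUTBOUND_PORTS = {445}   # e.g., file-sharing traffic must not exit

def admit_inbound(port: int) -> bool:
    return port in ALLOWED_INBOUND_PORTS

def admit_outbound(port: int) -> bool:
    return port not in BLOCKED_OUTBOUND_PORTS

print(admit_inbound(443))   # True: legitimate external interaction
print(admit_inbound(3389))  # False: an unauthorized remote-desktop attempt
print(admit_outbound(445))  # False: internal file sharing cannot leave
```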

To allow legitimate external access, organizations may permit the creation of virtual private networks (VPNs). After appropriate authentication and verification, a VPN establishes an encrypted path for information between an external Internet-connected computer and the organization's internal network. This allows the external computer to act as if it were physically connected to the internal network and to access resources such as specialized software or shared files. The VPN provides an additional layer of access security and ensures that the communication is secure. A healthcare organization may require use of a VPN to access an EHR from outside the organization's network.

Standards for Healthcare Data Exchange

Although not always obvious, the EHR is typically an amalgamation of multiple computer systems and devices of various complexity. These systems exchange data according to common standards, languages, and processes.

Common connections include monitoring devices that automatically transfer measured parameters into the electronic chart, infusion pumps (recording programmed settings), laboratory instruments (blood gas machines, cell counters, biochemistry analyzers, and point-of-care testing devices), and systems that manage patient admission, identification, and bed occupancy (admission, discharge, and transfer [ADT] systems). All of these devices and systems need methods of communicating with the EHR (Fig. 4.2). Although in some situations it may be possible to use a proprietary standard for communication between two systems, this quickly becomes difficult to manage across an entire institution. As a consequence, a series of commonly used standards has been established to allow the communication of healthcare information.

Fig. 4.2, Information flows from connected devices across the institution into the electronic health record (EHR). Some departments maintain specialized software to manage the needs specific to their workflow—for example, radiology departments use picture archiving and communication systems—that is interfaced with the EHR (e.g., to allow a report to be connected to the original CT scan). Similarly, networked monitor data are made available by the use of a gateway interface device. PACU, postanesthesia care unit.

The Health Level-7 (HL7) standard, originally developed in the late 1980s, is still widely used in the exchange of health information. HL7 allows data to be transmitted in a standardized manner among devices and clinical systems. The information can be identified to a specific patient and organized into different data types, such as laboratory results, monitor data, and billing information. A message can also cause the receiving system to perform an action, such as updating previously obtained data. The HL7 standard and its subsequent derivatives, which address the exchange of clinical documents in a structured and identified manner, support communication among different clinical systems. However, this standard was designed for data exchange among software applications within an institution and did not envisage the proliferation of Internet-connected devices remotely accessing shared resources across many healthcare organizations.
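
The sketch below shows a simplified, hypothetical HL7 version 2 message of the kind a monitor gateway might send (an ORU observation result) and how its pipe-delimited segments and fields can be parsed:

```python
# Hypothetical HL7 v2 message: segments are separated by carriage returns,
# fields by "|". MSH identifies the message (here ORU^R01, an observation
# result); PID identifies the patient; OBX carries one observation.
message = (
    "MSH|^~\\&|MONITOR|OR3|EHR|HOSP|202501011230||ORU^R01|00001|P|2.3\r"
    "PID|1||123456||DOE^JANE\r"
    "OBX|1|NM|8867-4^HEART RATE||72|/min\r"
)

for segment in message.strip().split("\r"):
    fields = segment.split("|")
    if fields[0] == "PID":
        print("Patient:", fields[5])   # PID-5 is the patient name: DOE^JANE
    elif fields[0] == "OBX":
        # OBX-3 identifies the observation; OBX-5 and OBX-6 hold value and units.
        print("Observation:", fields[3], "=", fields[5], fields[6])
```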

This new paradigm led to the development of Fast Healthcare Interoperability Resources (FHIR). This communication standard is analogous to the way modern Internet applications exchange data: simple, standardized requests are made to a central resource. FHIR enables easier integration across different types of software and incorporates the security features made necessary by the proliferation of mobile devices. The standard is designed to facilitate the exchange of data regardless of whether it is a single vital sign or a scanned document from a physical chart.
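
A FHIR "read" interaction is sketched below; the server base URL and patient identifier are hypothetical placeholders, but the URL pattern of [base]/[resource type]/[id] returning a JSON resource is defined by the standard:

```python
import json
from urllib.request import urlopen

# Hypothetical FHIR endpoint; a real server would also require authorization.
base = "https://fhir.example-hospital.org/R4"
url = f"{base}/Patient/12345"   # [base]/[resource type]/[id]

with urlopen(url) as response:  # a plain HTTPS GET returns the resource
    patient = json.load(response)

print(patient["resourceType"])   # "Patient"
print(patient.get("birthDate"))  # e.g., "1980-01-01"
```

The same request pattern retrieves other resource types, such as an Observation (e.g., a single vital sign) or a DocumentReference (e.g., a scanned document), simply by changing the resource type and identifier in the URL.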
