Artificial Intelligence and Big Data in Neurosurgery


This chapter includes an accompanying lecture presentation that has been prepared by the authors.


Key Concepts

  • Artificial intelligence (AI) and Big Data represent one of the great revolutions of our time.

  • AI is the ability of a computer to master skills that are typically human, learning from past experience and flexibly adapting its responses to new situations.

  • Big Data refers to data that are characterized by high Volume, Velocity, Variety, Veracity, and Value. Such a rich source of information is the basis for the development of AI applications.

  • Machine learning (ML) refers to AI algorithms applied to specific tasks, in which the software is expected to extract key features from the data it is presented with. In supervised learning the ML algorithm is provided with both the input and the output of interest, and its goal is to learn the best set of rules that allow it to associate the input with the output. Conversely, in unsupervised learning the ML algorithm is provided with unclassified data, and its task is to unveil potentially meaningful associations and patterns.

  • AI in neurosurgery can impact each step of patient care, from diagnosis to treatment and follow-up. Neurosurgery is particularly well suited to exploit the benefits of AI and Big Data, given the great quantity and variety of data it produces every day and its dependence on these data to provide state-of-the-art patient care.

  • Powerful algorithms are being developed to automatically extract clinically valuable information from radiologic imaging or pathology slides, or to predict the occurrence of medical issues from longitudinal recordings of neurophysiological data.

  • Robotic surgery, augmented reality, and virtual reality applications are designed to improve the accuracy and precision of surgical maneuvers, as well as to push the boundaries of neurosurgical training.

  • The ever-increasing connectivity of personal digital devices and their multiple sensors open the possibility of following patients more effectively, enabling the design of personalized recovery strategies.

  • The use of AI and Big Data presents limitations and risks, mainly related to our ability to correctly interpret an algorithm’s output and to accurately foresee the consequences for patient care.

  • Close collaboration between surgeons and data scientists is the key element that will allow the capabilities of AI and Big Data to be exploited to their full potential.

This chapter provides a basis for neurosurgeons who are developing an interest in artificial intelligence (AI) and Big Data. First, we explore the origin of and the essential concepts behind AI and Big Data; then we look into the most promising applications in neurosurgery. Finally, we provide an overview of limitations and risks, as well as some considerations on what the future may have in store.

Artificial Intelligence and Big Data: A Historical Overview

The history of AI is brief but fascinating, and far from its epilogue. Back in 1950, the mathematician Alan Turing defined the concept of “machine intelligence” in his famous paper “Computing Machinery and Intelligence,” in which he suggested that machines can use available information and reason in order to solve problems and make decisions, just as humans do through knowledge, memory, and brain (here defined as the physical substrate where the reasoning takes place). He developed the theory behind this proposition and even designed a way to test it. While some of the core mathematical theory was already flourishing in the 1950s (the reasoning), the ability of machines to store and retain information (the memory), as well as the computational power essential to process and interpret that information (the brain), were still in early development.

With regard to computational power, it was only a matter of time before more complex and faster calculations could be performed. In 1965, Gordon Moore, cofounder of Intel, theorized that the number of transistors that fit on an integrated circuit would increase following a geometric progression, allowing a constant growth in computational power, roughly doubling every 24 months, that continues to hold even today. Additionally, new discoveries and advancements in materials science and engineering triggered a similar trend in the ability to store information, going from punched cards, to magnetic disks, to the virtually infinite capacity of today’s cloud storage, where hundreds of petabytes of data already reside. Both storage capacity and the amount of data produced are constantly and rapidly increasing.

Over time, and in the last 25 years in particular, the ever-increasing availability of “memory” and “brain” turned researchers’ attention toward applying AI to very specific tasks, rather than solely aiming to produce a so-called artificial general intelligence (AGI), which nevertheless remains an important and active field of research. This approach consists of identifying activities typically performed by human beings and training a machine on those specific activities, through dedicated techniques (see “AI Techniques” later), with the aim of matching or even surpassing human performance. Examples of such activities are playing games, recognizing a picture, sustaining a meaningful conversation, and driving a car.

The success of these algorithms resides not only in their internal hardware structure and software, but to a great extent in the wealth of information to which they have access and are exposed. In fact, just as a human being learns from data and experience, these algorithms are structured in a way that enables them to extract meaningful information from databases and, most importantly, by repeating a specific activity again and again, to become better and better, to the point that they are able to beat human experts. Examples of remarkably successful and well-known algorithms, each very good at one specific job, are Deep Blue, the chess program that defeated world champion Garry Kasparov in 1997, and Google’s AlphaGo, which defeated European Go champion Fan Hui in 2016.

AI algorithms have been studied and developed in all kinds of research fields, from engineering to medicine, from computer vision to communication. The overarching aim is the automation and improvement of multiple aspects of people’s lives, allowing people to focus on more intellectually demanding as well as more rewarding activities.

Because the greater the amount (and quality) of data, the higher the chance of designing a computer program that can outperform a human being at specific tasks, the Big Data explosion opened a whole new chapter in the history of AI.

Big Data generally refers to data characterized by the “3 Vs”: high Volume, meaning the production of a great quantity of data; high Velocity, referring to the high speed with which such data are produced; and high Variety, identifying the high heterogeneity in both content and presentation format, which may be highly structured, semistructured, or totally unstructured. Two additional Vs have been added over time: Veracity, which relates to the quality of the data, which needs to be as high as possible, and Value, which refers to the ability to translate this wealth of data into successful business opportunities.

Humanity has always produced a great quantity of data—let us merely think about human literature—but the ability to efficiently store these data and make them available and effectively usable was reached only in recent decades. The human capability of producing data has increased exponentially with the production and spread of digital devices and platforms (from weather satellites to smartphones and social networks). To picture this trend, just consider that 90% of all data in human history was produced in the last 2 years alone. We are literally surrounded by Big Data: data from weather satellites and probes are used to train models for weather forecasts, data from the global market are used to develop the best financial strategies, and a combination of Global Positioning System (GPS) data and digital camera input is being used to produce self-driving cars. Nevertheless, having a lot of data does not mean that all of it is useful (this is where Veracity comes into play), and this is why close collaboration between computer scientists and experts in the different fields in which AI seeks application is strongly needed.

When it comes to medicine, it is clear why great interest has arisen, and is steadily increasing, in all specialties in bringing AI to the table ( Fig. 75.1 ). Health care is one of the most prolific human activities when it comes to producing data; let us only think about the number of radiologic imaging examinations performed every day worldwide, or the amount of information stored in each patient’s electronic record. The last decade has in fact seen an unprecedented flourishing of AI algorithms with medical applications that are slowly but steadily finding their way into daily clinical practice. Under these conditions, the combination of AI and Big Data has the potential to affect countless aspects of people’s lives in unprecedented ways, with game-changing applications in medicine, economics, social sciences, and the natural environment.

Figure 75.1, Histogram depicting the exponential growth of the publication rate of machine learning papers over the last 20 years (based on PubMed search).

AI Techniques

When it comes to AI applied to specific tasks, the term generally used to identify those algorithms is machine learning (ML), which clearly identifies the key activity the machine is performing. The software is in fact exposed to data from which it is expected to extract key features that enable it to produce a meaningful output. Depending on the type of information the algorithm is provided with, the learning activity falls into two main categories: supervised and unsupervised.

In supervised learning, the algorithm is provided with both the input and the output of interest, and its goal is to learn the best set of rules that allow it to associate the input with the output ( Fig. 75.2A ). This set of rules will then allow the algorithm to produce the best output once it is exposed to new, previously unseen input. Two main types of applications are empowered by supervised learning: classification tasks and regression tasks. In classification tasks the algorithm assigns the input to one class of a specified set of categories, whereas in regression tasks the algorithm predicts continuous variables as output. Classification algorithms are probably the most studied in medicine, where the ability to classify correctly is of paramount importance in specialties such as radiology, pathology, and genetics, with critical consequences for patient care. Examples of supervised learning applications in medicine are algorithms for the classification of tumors’ histologic or molecular features based on preoperative radiologic examination, or the labeling of structures of interest, which can be normal or pathologic in nature, in radiologic examinations.
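
To make the supervised workflow concrete, the following is a minimal sketch using scikit-learn; the synthetic features and binary labels are stand-ins for real clinical data (e.g., radiomic measurements and a tumor-grade label), not an actual published model.

```python
# Minimal supervised-learning sketch (scikit-learn).
# Synthetic data stand in for real clinical features and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labeled data: input features X and known output y.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hold out unseen data to evaluate the learned rules.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Training: the algorithm learns rules mapping input to output.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Prediction on previously unseen input.
y_pred = clf.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, y_pred):.2f}")
```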

Figure 75.2, (A) Layout of the concept of supervised learning. The algorithm is provided with both the input and output data. It is then trained to produce the best set of rules that allows it to effectively connect the input to the output. The algorithm is then tested and used on previously unseen data to make predictions. (B) Layout of the concept of unsupervised learning in the case of a cluster algorithm. The algorithm is provided with unclassified data and it will divide the data into clusters based on the relationships it is able to extract. The clusters are then reviewed by researchers to understand the input-output relationships, evaluate their clinical utility, and validate them for clinical use.

On the other side, we have unsupervised learning. In this case the algorithm is provided with unclassified data. The big difference in unsupervised learning problems is that the output is not known at all, not even by the researcher who is training the algorithm. Here, the aim is to exploit the algorithm’s superior computational power to unveil potentially meaningful and useful associations that will be assessed and further explored by the researcher looking at the algorithm’s output ( Fig. 75.2B ). These programs mainly deal with clustering activities, in which the algorithm is asked to categorize the input into different classes based on the features it learns. In medicine, cluster algorithms find application in trying to unveil complex relationships among genetic, biochemical, and histologic data, or to find hidden patterns in clinically relevant data streams such as electroencephalography (EEG).
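
By way of illustration, the following is a minimal clustering sketch with scikit-learn; the synthetic points are a stand-in for unlabeled feature vectors (e.g., derived from EEG recordings), and k-means is just one of many possible cluster algorithms.

```python
# Minimal unsupervised-learning sketch: clustering unlabeled data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data: the true grouping (if any) is unknown to us.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# The algorithm partitions the data purely from its internal structure.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(X)

# The resulting clusters must then be reviewed by researchers
# to judge whether they are clinically meaningful.
print(cluster_ids[:20])
```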

Finally, a combination of supervised and unsupervised learning, named semisupervised learning, is also being explored. These algorithms are usually fed a great amount of unclassified data, but they exploit a small set of labeled data to increase their pattern recognition ability. Applications of semisupervised learning have been used to select the most meaningful articles for inclusion in systematic reviews and to predict dementia.
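
As a minimal sketch of this idea, scikit-learn’s SelfTrainingClassifier wraps a supervised model so that it can also learn from unlabeled samples (marked with -1, per the scikit-learn convention); the data here are synthetic, with only a small fraction assumed to be labeled.

```python
# Minimal semisupervised sketch: many unlabeled samples, few labeled ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Pretend only ~5% of the samples carry a label; -1 marks "unlabeled".
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) > 0.05] = -1

# Self-training: the base classifier iteratively labels confident samples.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)
print(f"Accuracy against the full labels: {model.score(X, y):.2f}")
```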

Why Neurosurgery Is Suitable for AI and Big Data Applications

In general, the field of neurosurgery is particularly well suited for AI applications for a variety of reasons. Both in the clinical setting and in research, AI has the potential to offer an unprecedented enhancement of speed, accuracy, and consistency. Because the morbidity related to any neurological deficit can be quite extensive, most neurosurgical decisions and procedures have a considerable impact on a patient’s quality of life. Moreover, many of the conditions a neurosurgeon regularly confronts pose an actual threat to patients’ survival. These high stakes are often compounded by open questions regarding optimal patient management and treatment, in a field where much is still unknown regarding the physiology and pathology of the nervous system, and evidence-based guidelines are often lacking.

Neurosurgical practice involves the integration of a wide variety of diagnostic, monitoring, and follow-up data: for example, multimodality imaging, EEG, and neurophysiology may be used preoperatively, intraoperatively, and postoperatively; intracranial pressure data combined with regular intensive care unit monitoring are essential in critical care; and histopathology is the gold standard for the diagnosis of neoplasms. If adequately collected, such types of Big Data provide the ideal foundation for AI applications. Although many specialties will benefit from the advances brought about by AI, neurosurgery can play a leading role because it has always paid close attention to innovation and technological progress, as demonstrated, for example, by the adoption and refinement of the operating microscope, nowadays a crucial instrument in most neurosurgical procedures.

Diagnosis

Radiologic Imaging

As a specialty, neurosurgery relies heavily on radiologic imaging to reach the right diagnosis in both brain and spine cases, in elective and emergency settings. Unlike what happens when a patient walks into the emergency department or is seen by a general practitioner, a neurosurgeon is usually presented from the beginning with some kind of radiologic imaging, be it magnetic resonance imaging (MRI) of the brain or an x-ray of the spine, from which meaningful information is extracted and used in combination with the patient’s reported history and physical examination.

The type of information radiologic imaging provides can be broadly distinguished as anatomic or functional. Anatomic information, by far the most exploited, relates to the location, dimensions, and texture of a normal structure (cerebral vessels, nerve roots, bone structures) or a pathologic entity (tumors, malformations, degenerative changes) and contributes to the understanding of a patient’s symptoms and the stage of the disease, guiding the planning of the most suitable treatment. Functional information also has an anatomic basis, but with the superimposition of additional elements that inform on the actual physiology of that specific system. Examples are functional MRI, in which information regarding metabolic activation is superimposed on the corresponding cerebral areas; the angles and lines drawn on a scoliosis x-ray, which inform on the biomechanical stability and balance of a patient’s spine; and the delineation of white matter tracts in the brain, used to identify eloquent structures and guide the surgeon.

Given these premises and the great number of radiologic examinations available, ML algorithms in neurosurgery are proving particularly suited for the extraction of anatomic information, with the double aim of providing “traditional” information in a faster and more accurate way—as, for example, in tumor segmentation—and extracting otherwise undetectable but useful information, as in the detection of histologic grade or clinically relevant molecular patterns from preoperative brain MRI.

Neuroimaging in neuro-oncology is the branch in which AI has so far found some of its most successful applications, in particular in the form of classification algorithms. Segmentation algorithms, by far the most studied and applied in neurosurgery, are a special type of classification algorithm that works in a pixel-wise manner. They accept images as input, which can derive from any type of radiologic examination, and produce a mask as output, outlining the boundaries of the structure the algorithm has been trained to recognize ( Fig. 75.3A ). Artificial neural networks are a specific type of software architecture that has turned out to be particularly good at segmentation tasks, working with both two-dimensional images and three-dimensional (3D) volumes, the latter with better results on average (but greater needs in terms of storage and computational power). Examples of successful algorithms are those developed for the labeling of normal structures (e.g., brain extraction, delineation of lumbar disks and cervical vertebrae), segmentation of pathologic tissue (e.g., tumors such as glioblastomas, low-grade gliomas, and meningiomas), and detection of epileptic foci and arteriovenous malformations, reaching accuracies that often meet or exceed 90%. The implementation of such algorithms in the clinical workflow would allow the automated and almost instantaneous extraction of important and reliable information to be used for surgical planning and patient follow-up, with considerable resource savings, reduced variability, and eventually better patient care.
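
To illustrate what “pixel-wise classification” means in code, the following is a toy fully convolutional network in PyTorch; real segmentation systems typically use much deeper encoder-decoder architectures (e.g., U-Net) trained on expert-annotated scans, so this is a structural sketch only.

```python
# Toy pixel-wise (segmentation) network in PyTorch.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            # 1x1 convolution: one score per class at every pixel.
            nn.Conv2d(16, n_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)  # shape: (batch, n_classes, H, W)

model = TinySegNet()
scan = torch.randn(1, 1, 128, 128)  # one single-channel 2D slice
logits = model(scan)
mask = logits.argmax(dim=1)         # predicted class label per pixel
print(mask.shape)                   # torch.Size([1, 128, 128])
```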

Figure 75.3, Examples of supervised machine learning algorithms applied in neurosurgery.

In contrast, more “traditional” classification algorithms also accept images as input but, instead of detecting a specific structure, assign the whole image to a category as output ( Fig. 75.3B ). Successful examples are algorithms able to predict isocitrate dehydrogenase and 1p/19q status in high-grade and diffuse low-grade gliomas from preoperative MRI, differentiate between different neoplasms (e.g., glioblastoma, lymphoma, meningioma), or predict the histologic grade of meningiomas. An ambitious but realistic goal that these algorithms pursue is the so-called virtual biopsy, in which the algorithm would ideally identify specific features of a lesion traditionally detectable only through pathologic examination or molecular analysis. Obtaining such information reliably, with no need to directly analyze a tissue sample, would allow a more tailored and informed decision-making process, assessing the actual need for a surgical operation; unnecessary risks would thus be reduced to a minimum and the probability of success of a therapeutic plan maximized.
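
The structural difference from segmentation can again be sketched in PyTorch: global pooling collapses the spatial map so the network outputs a single category for the whole image (here a hypothetical binary label, not any validated clinical marker).

```python
# Toy whole-image classifier in PyTorch (contrast with the pixel-wise net).
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse H x W to one vector
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)           # shape: (batch, n_classes)

model = TinyImageClassifier()
scan = torch.randn(1, 1, 128, 128)
print(model(scan).argmax(dim=1))      # one class label for the whole image
```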

Tissue Analysis

The analysis of pathologic tissue to extract clinically relevant information about the histologic, cellular, molecular, and genetic nature of a disease is key in neurosurgery, and in neuro-oncology in particular. The utility of this information derives from the correlation originally established between a particular histologic and molecular pattern and a specific clinical condition, its associated prognosis, and the therapeutic options. As a consequence, the pathologist’s ability to recognize those different patterns and assign them to the correct category is of paramount importance.

By its nature, AI has great potential to aid in both these tasks through supervised learning algorithms. Such algorithms would be able to analyze a pathology slide and correctly classify the type of tissue it contains (normal versus pathologic, grade/stage X versus grade/stage Y, pathology 1 versus pathology 2), automating and speeding up the process—which can be of critical importance, for example, to reduce the waiting time for a frozen section to be assessed—as well as reducing errors and interobserver variability. Numerous models have been developed for the automatic detection of brain tumor cells and microvascular proliferation in glioblastoma, the labeling of nuclei and grading of gliomas, and the classification of meningiomas into histologic subtypes.

While such algorithms are inextricably linked to human-acquired knowledge and would help optimize already existing human tasks, unsupervised learning systems (e.g., cluster algorithms) could help unveil new clinically relevant patterns. Such systems could work with one or multiple inputs, allowing different types of information—morphologic, molecular, genetic, behavioral—to be taken into account. Although this is an exciting perspective, such results require additional validation efforts, since the clusters identified by the algorithm need to be further studied in order to be adequately understood and to assess their potential clinical utility, in which case rigorous, prospective clinical trials should follow.

The ever-increasing amount of data and interest in the development of classification algorithms will ease the translation of these results from pure research to useful clinical tools, improving both patient care and the pathologist’s workflow.
