Noninvasive Brain–Computer Interfaces


Acknowledgments

This work was supported by the NIH (EB00856, EB006356, and EB018783), the US Army Research Office (W911NF-08-1-0216, W911NF-12-1-0109, W911NF-14-1-0440), and Fondazione Neurone.

Introduction

Overview of This Chapter

Brain–computer interfaces (BCIs) measure brain activity, extract features from that activity, and convert those features into outputs that replace, restore, enhance, supplement, or improve human functions.

BCIs may replace lost functions, such as speaking or moving. They may restore the ability to control the body, such as by stimulating nerves or muscles that move the hand. BCIs have also been used to improve functions, such as by training users to improve the remaining function of damaged pathways needed for grasping. BCIs can also enhance function, such as by warning a sleepy driver to wake up. Finally, a BCI might supplement the body’s natural outputs, such as through a third hand.

Different techniques are used to measure brain activity for BCIs. Most BCIs have used electrical signals that are detected using electrodes placed invasively within or on the surface of the cortex, or noninvasively on the surface of the scalp [electroencephalography (EEG)]. Some BCIs have been based on metabolic activity that is measured noninvasively, such as through functional magnetic resonance imaging (fMRI).

This chapter is focused on providing an overview of noninvasive BCIs. After a brief review of the relevant aspects of EEG and fMRI, each of the subsequent sections is dedicated to one of the four different purposes that a BCI may serve and that have been realized as of this writing.

Electroencephalography

EEG sensors detect the coordinated activity of large groups of neurons; the electrical signature of an individual neuron or of only a few neurons is not detectable by electrodes outside the skull. EEG sensors are usually placed in an electrode cap that is designed to position the electrodes over specific brain regions. Some work has embedded EEG electrodes in headbands, headphones, glasses, or other less obtrusive headwear. For many years, EEG electrodes were usually composed of silver/silver chloride rings housed in a plastic disk. Electrode gel was needed to establish an electrical connection between the scalp’s surface and each electrode. Work has validated dry electrodes that eliminate the time and inconvenience of gel ( ), but the extent to which dry electrodes provide stable EEG, particularly in uncontrolled environments and when used by nonexperts, remains unclear.

Different types of features can be detected in the EEG and may serve as the basis for BCIs. One of the most important of these features is oscillatory activity in different frequency bands: delta (less than 4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (18–25 Hz), and gamma (greater than 30 Hz). While the origin of oscillatory activity is still debated, oscillations probably reflect interactions between the cortex and the thalamus or other subcortical structures. Delta activity is most prominent during deep sleep when high-amplitude delta waves can be prevalent over many areas. Theta activity is prevalent during light sleep and meditation. Alpha activity increases over occipital areas when people rest with their eyes closed and during light sleep, and (along with theta and beta) may be used in BCIs to indicate workload or concentration. The phenomenon of “alpha blocking” refers to the decrease in alpha activity that occurs when a person is asked to open the eyes and perform a complex task. Because this is one of the most obvious changes in the EEG that people can easily produce, users are often asked to alternate between eyes-closed relaxation and eyes-open concentration to confirm that their EEG system is working properly. The changes in EEG activity during sleep are driven largely by activity in the pons, thalamus, and occipital regions. Activity in the same alpha frequency range, but detected over sensorimotor instead of visual areas, is called the mu rhythm. The mu rhythm is modulated by expected, actual, observed, or imagined motor movements or associated sensations. These changes in mu activity have been called event-related (de-)synchronization or ERD/S (see Fig. 26.1), and have been widely used in BCIs.

Figure 26.1, (A and B) The changes in mu activity centered around 12 Hz for (A) actual and (B) imagined right-hand movements. The colors reflect the proportion of the signal variance accounted for by the task. These two images show that imagined movements produce changes that are less pronounced than those resulting from actual movements but have a similar topographical distribution. (C) EEG power over site C3 for a different subject who rested (dashed line) or performed right-hand movement (solid line). The mu activity at about 12 Hz and its harmonic around 24 Hz are both greatly reduced by movement. (D) The resulting r² values for rest versus movement. This image also shows that movement primarily affects power in the mu frequency band and its harmonics.
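The ERD/ERS shown in Fig. 26.1 is commonly quantified as the percentage change in mu-band power between a rest (reference) interval and a movement or imagery interval. The following minimal sketch in Python (NumPy/SciPy) illustrates one such computation; the function names, the 8–12 Hz band, and the trial arrays are illustrative assumptions rather than any specific published pipeline.

import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Integrated spectral power of `signal` within `band` (Hz), via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def erd_percent(rest_trials, task_trials, fs, band=(8, 12)):
    """Event-related desynchronization as the percentage power change from rest to task."""
    p_rest = np.mean([band_power(trial, fs, band) for trial in rest_trials])
    p_task = np.mean([band_power(trial, fs, band) for trial in task_trials])
    return 100.0 * (p_task - p_rest) / p_rest  # negative values indicate ERD

Negative values correspond to the power decrease (ERD) over sensorimotor areas during actual or imagined movement, and positive values to ERS.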

Beta and gamma activity is most apparent during concentration and can also include harmonics of mu activity ( ). These frequency bands have been used in BCIs to detect concentration or information overload. Both bands are often subdivided into low and high ranges, which can reflect finer details of the brain dynamics underlying cognition and emotion. While the source and purpose of the brain’s different oscillatory activities are not fully understood, they seem to generally reflect thalamocortical interactions (primarily through layers 4 and 5 of the cortex) that coordinate activity across different regions and neural populations ( ).

In addition to oscillatory activity that is detected in the frequency domain, electrophysiological activity in the time domain also reveals useful information. When activity is time locked to a stimulus, activity changes following the stimulus are called event-related potentials (ERPs). Because ERPs that result from only a single stimulus are usually too noisy to be detected, both researchers and BCI systems typically repeat the task and associated stimulus several times to acquire several ERPs that can be averaged together, resulting in a clearer signal. ERPs are often named according to their electrical valence (positive or negative) and their latency in milliseconds from the relevant event. For example, the P300 ERP is a positive change in voltage about 300 ms after an event, and reflects cognitive processing of that event. The P300 has different subcomponents, notably the P3a and P3b, that each reflect different aspects of task processing. The frontally prominent P3a is largest when processing novel stimuli, and reflects attentional alerting and the need to update working memory. The more parietal P3b reflects memory updating and planning a response, such as pressing a button or counting. Concordantly, the sizes of a person’s frontal and parietal areas are correlated with the amplitude of the P3a and P3b, respectively. The P300 reflects contributions from other cortical and subcortical regions as well, including the hippocampus, anterior cingulate, and medial temporal lobes. Earlier components, such as the P100 and N170, instead reflect early perceptual processing, and show activity in earlier processing areas such as V1 (primary visual cortex) ( ).
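Because single-trial ERPs are usually buried in ongoing EEG, both research and BCI pipelines average many stimulus-locked epochs before interpreting components such as the P300. A minimal averaging sketch follows (Python/NumPy); the channel choice, epoch window, and baseline interval are illustrative assumptions.

import numpy as np

def average_erp(eeg, event_samples, fs, tmin=-0.1, tmax=0.6):
    """Average stimulus-locked epochs from one channel (e.g., Pz) to expose an ERP.

    eeg           : 1-D array containing a single EEG channel
    event_samples : sample indices of stimulus onsets
    tmin, tmax    : epoch window in seconds relative to each stimulus
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in event_samples:
        if onset - pre < 0 or onset + post > len(eeg):
            continue  # skip epochs that run past the edges of the recording
        epoch = eeg[onset - pre:onset + post]
        epoch = epoch - epoch[:pre].mean()  # baseline-correct using the prestimulus interval
        epochs.append(epoch)
    return np.mean(epochs, axis=0)  # averaging attenuates activity not time locked to the stimulus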

Time domain activity may also be detected prior to an anticipated event, such as a button press. Before a voluntary movement, the readiness potential (RP; also called the Bereitschaftspotential or BP, from German) develops across two stages (see Fig. 26.2). About 1.5 s prior to the movement, the supplementary motor area (SMA) and related motor preparation areas exhibit a slow bilateral negative change in voltage. About half a second prior to movement, a much sharper change appears contralateral to the movement in the SMA and the primary motor cortex (M1). These two stages seem to reflect movement planning and execution, respectively. The RP is one type of movement-related cortical potential (MRCP), a family of signals that can index movement speed, force, effort, precision, training, complexity, concentration, and other factors ( ).

Figure 26.2, Different components of the readiness potential, also called the Bereitschaftspotential (BP), are shown. The rightmost vertical line reflects the onset of a voluntary movement. In this image, the voluntary movement was self-paced tapping of the right index finger. The BP phase shows a slowly developing negativity from about 1.5 to 0.5 s prior to the voluntary movement, which becomes more pronounced during the period 0.5 s prior to the movement (Castermans et al., 2013). MRCP, movement-related cortical potential; NS, negative slope.

Another time domain EEG phenomenon is the contingent negative variation (CNV). The CNV is a bilateral negative change that is prominent over the top of the scalp, and primarily reflects activity from frontal areas. The CNV reflects slow changes, on the order of a few seconds, that can occur between a warning stimulus (which informs someone that a relevant stimulus will soon be shown) and an imperative stimulus (reflecting that someone needs to take action). The CNV was discovered over 50 years ago ( ) and has been extensively studied. It can reflect a variety of factors, including emotional changes, focused attention, general arousal, and the stimuli’s expectancy and perceived relevance, probability, intensity, and timing. However, it has not been widely used in BCIs because other types of signals described here are generally more reliable, require less training, and allow higher bandwidth communication.

Rapid presentation of visual stimuli (such as flickering LEDs or objects on a computer screen) can result in steady-state visual evoked potentials (SSVEPs). SSVEPs reflect the rapid firing of visual cortical areas, primarily V1. If the user focuses attention on one stimulus, EEG signals over visual areas increase in power at that stimulus’s flicker frequency and its harmonics. This allows BCIs to detect which stimulus the user chose to attend to (see Fig. 26.3).

Figure 26.3, Steady-state visual evoked potential (SSVEP) activity elicited during selective attention to two oscillating checkerboards, which flickered at 6 and 15 Hz. (A and C) Spectral power for one subject over site O1 (A) or O2 (C). The solid and dotted lines show activity elicited while the subject focused on the 15- or 6-Hz checkerboard, respectively. (B and D) The r² values that reflect the correlation between different frequencies and the instruction to focus on either target stimulus. (E) A topographic map of these differences. These panels show that selective attention to a flickering stimulus increases power at the eliciting frequency and, to a lesser extent, at the harmonics of that frequency. The SSVEP activity is much more pronounced over occipital areas than over other sites.
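One simple way to identify the attended SSVEP target, in the spirit of Fig. 26.3, is to compare narrow-band power at each candidate flicker frequency and its harmonics; many current systems use canonical correlation analysis instead. The sketch below (Python/NumPy/SciPy) is illustrative only; the 0.25-Hz tolerance, the number of harmonics, and the single-channel input are assumptions.

import numpy as np
from scipy.signal import welch

def classify_ssvep(occipital_eeg, fs, stim_freqs, n_harmonics=2):
    """Return the stimulus frequency whose fundamental and harmonics carry the most power."""
    freqs, psd = welch(occipital_eeg, fs=fs, nperseg=int(fs * 4))
    scores = []
    for f0 in stim_freqs:
        score = 0.0
        for h in range(1, n_harmonics + 1):
            in_band = np.abs(freqs - h * f0) <= 0.25  # +/- 0.25 Hz window around each harmonic
            score += psd[in_band].sum()
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]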

If different stimuli are presented at the same frequency but different phases, a BCI may also infer the attended stimulus from phase measurements in the EEG ( ), or from the correlation of the EEG with an m-sequence in a variant of SSVEPs called code-based VEPs or c-VEPs ( ).
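In c-VEP systems, each target is tagged with a time-shifted version of the same pseudorandom (m-sequence) code, and decoding reduces to finding which shifted template best matches the recorded response. The sketch below illustrates that template-matching step under the assumption that per-target templates have already been estimated from training data.

import numpy as np

def decode_cvep(epoch, templates):
    """Return the target whose template correlates best with the EEG epoch.

    epoch     : 1-D EEG segment spanning one stimulation cycle
    templates : dict mapping target id -> expected response of the same length,
                typically learned from training data (assumed to exist here)
    """
    def ncorr(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    return max(templates, key=lambda target: ncorr(epoch, templates[target]))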

Vibrotactile stimuli can elicit steady-state somatosensory evoked potentials (SSSEPs), and thereby may provide the basis for BCIs for persons without vision ( ). Steady-state auditory evoked potentials (SSAEPs) have also been studied. Consistent with other sensory evoked potentials (SEPs), SSSEPs and SSAEPs involve activity in the corresponding primary cortical sensory area in tandem with higher sensory areas and the relevant thalamic nuclei (lateral geniculate, visual; medial geniculate, auditory; ventral posterolateral, somatosensory signals from the body). SEPs have many clinical and research applications, primarily exploring lower-level sensory processes. SEP research has also been used to study schizophrenia, depression, attentional deficits, epilepsy, and other conditions ( ).

Metabolic Activity

Techniques that measure metabolic activity detect changes in blood oxygenation or other indirect measurements of neuronal activity. Unlike electrical changes that immediately reflect the activity of neuronal populations, metabolic changes typically occur a few seconds after neuronal activity changes. Despite this inherent lag, some BCIs have used metabolic changes in successful demonstrations.

The two most common imaging techniques that can detect metabolic activity are fMRI and positron emission tomography (PET). Both are volumetric imaging techniques, i.e., they can detect changes deep in the brain that are invisible to most electrical methods (see Fig. 26.4). At the same time, they require expensive and heavy equipment, and each incurs other practical challenges: fMRI requires a very powerful magnetic field that is unsafe for some patients, and PET requires the injection of radioactive tracers. Functional near-infrared spectroscopy (fNIRS) also detects changes in blood flow but does not have these disadvantages. It is safe, portable, and relatively inexpensive, although, like EEG, it is limited to the detection of activity near the brain’s surface. fNIRS requires placing a device on the surface of the scalp that includes an emitter and several detectors. The emitter shines light through the scalp; this light is scattered back from the cortex, and its properties change depending on local cortical activity.

Figure 26.4, These fMRI images show how a person with attention deficit hyperactivity disorder (ADHD) exhibits different activity compared to a healthy control. Moreover, they show how fMRI can reveal correlates of brain function well below the surface of the cortex, which are difficult or impossible to detect with most other methods. However, as of this writing, fMRI systems are practical only in hospital settings.
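fNIRS processing typically converts the detected light intensities at two wavelengths into changes in oxy- and deoxyhemoglobin concentration using the modified Beer-Lambert law. The outline below shows that conversion in Python/NumPy; the extinction coefficients, differential pathlength factors, and source-detector distance are left as inputs because their values depend on the instrument and wavelengths, so nothing here should be read as fixed constants.

import numpy as np

def mbll_concentration_change(intensity, baseline, ext_coeffs, distance_cm, dpf):
    """Estimate changes in HbO and HbR concentration from two-wavelength fNIRS data.

    intensity, baseline : detected light intensity at each wavelength (length-2 arrays)
    ext_coeffs          : 2 x 2 extinction-coefficient matrix
                          [[eps_HbO(l1), eps_HbR(l1)], [eps_HbO(l2), eps_HbR(l2)]]
    distance_cm         : source-detector separation on the scalp
    dpf                 : differential pathlength factor per wavelength (length-2)
    """
    delta_od = -np.log10(np.asarray(intensity) / np.asarray(baseline))  # optical density change
    path = np.asarray(dpf) * distance_cm  # effective optical path length per wavelength
    # Solve delta_od = (ext_coeffs scaled by path) @ [dHbO, dHbR] for the concentration changes
    d_hbo, d_hbr = np.linalg.solve(np.asarray(ext_coeffs) * path[:, None], delta_od)
    return d_hbo, d_hbr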

Brain–Computer Interfaces to Replace Function

Introduction

BCIs for replacing lost functions have been explored primarily to help persons with conditions that impair most or all voluntary movements, including persons with late-stage amyotrophic lateral sclerosis (ALS) or tetraplegia. For individuals struck by these conditions, BCIs may replace lost functions (such as communication or movement control) by using brain activity to control an artificial effector (such as a robotic arm or a communication system). The following sections give an overview of BCIs for communication or control that have been developed as of this writing.

Communication Functions

Simple Communication Functions

The simplest type of communication system entails binary communication, such as answering “yes” or “no” or switching a device on or off. One early BCI system provided control of a switch or a ball on a monitor using EEG signals associated with right-finger movement. Data were acquired from six electrode pairs over frontal and central sites, and the system provided asynchronous operation ( ). Another system allowed users to modulate motor imagery to direct a cursor to answer questions ( ). In another early study, a group from the US Air Force trained subjects to use SSVEP activity to bank an aircraft or to perform other tasks ( ). BCIs for switch control have continued to develop, with switches based on MRCPs or hybrid fNIRS–EEG activity for wheelchair control ( ).

BCIs for very basic yes/no communication have gained more attention as tools for persons diagnosed with a disorder of consciousness (DOC). For these patients, even basic communication can confirm conscious awareness ( ). For example, if they can reliably answer yes or no to questions regarding their city of birth or a parent’s name, then doctors and family members have objective proof of the ability and will to communicate. BCIs for patients with DOC are designed to interact with patients through auditory and/or tactile stimuli, since these patients may be unable to use visual stimuli. These systems often use EEG-based measures of the P300 or motor imagery ( ), though an fNIRS-based system was also demonstrated in 2016 ( ).

Complex Communication Functions

BCIs for spelling often rely on the P300, a positive deflection in the ERP that is dominant over parietal areas and develops about 300 ms after stimuli that convey relevant information and are relatively rare ( ). In the first P300 speller ( ), healthy users observed a 6 × 6 matrix with letters and other characters, and were asked to silently count each time a target character flashed. Each row or column of the matrix then flashed in sequence. Because the users counted only the row flash and the column flash that contained the target character, only those two flashes generated a P300. The BCI system could thus identify the target character by analyzing the ERPs generated by each flash. Alternatives to the row/column flashing method include the single-character, checkerboard, and splotch spellers ( ). P300 BCIs work reliably for nearly all healthy people and even for ALS patients ( ).
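After every row and column has flashed several times, the speller can identify the attended character as the intersection of the row and the column whose flashes evoked the strongest P300-like responses. The sketch below (Python/NumPy) shows only this final decoding step; the per-flash scores (for example, from a trained classifier), the 6 × 6 layout, and the row/column coding are illustrative assumptions.

import numpy as np

def decode_p300_target(flash_scores, flash_codes, matrix):
    """Pick the matrix character whose row and column flashes produced the strongest P300 response.

    flash_scores : one P300 classifier score (or amplitude) per flash
    flash_codes  : which row (0-5) or column (6-11) flashed, one code per flash
    matrix       : 6 x 6 nested list of characters
    Assumes every row and column code flashed at least once.
    """
    mean_scores = np.zeros(12)
    for code in range(12):
        idx = [i for i, c in enumerate(flash_codes) if c == code]
        mean_scores[code] = np.mean([flash_scores[i] for i in idx])
    row = int(np.argmax(mean_scores[:6]))   # strongest of the six row codes
    col = int(np.argmax(mean_scores[6:]))   # strongest of the six column codes
    return matrix[row][col]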

P300 BCIs were validated with ALS patients in 2006 ( ). Since then, noteworthy advances include Brain Painting, described below; the face speller, in which characters change to faces instead of flashing ( ); and nonvisual implementations of similar P300-based systems that can use auditory or tactile stimuli for patients without adequate vision ( ).

One group ( ) presented an SSVEP BCI system with 12 boxes that each flickered at a different frequency. The numbers 1 through 10 and two special characters were overlaid on the boxes. By focusing on one box at a time, the user could enter a cell phone number and call that phone. Later work from the same group demonstrated improved performance using a c-VEP approach ( ), and other work showed that phase information can also improve performance ( ). SSVEP BCIs work for nearly all healthy adults ( ), but have not been well explored with patients (but see ).

One of the most prominent BCI research directions in the late 20th century relied on slow cortical potentials (SCPs). These are very slow drifts in the EEG, prominent over central sites, that patients can learn to increase or decrease over months of training. Patients with no residual movement learned to modulate their SCPs to move a cursor and thereby iteratively select letters or letter groups ( ). SCPs have not been widely used in BCIs for several years because of the long training time and low communication bandwidth they entail.

BCIs for spelling based on motor imagery gained attention after work showed that patients with ALS can use motor imagery to control a BCI ( ). Several people, including a patient with tetraplegia, were able to use motor imagery to direct a cursor up or down toward different letters or letter groups on the right side of a monitor while the cursor moved steadily from left to right ( ). In the Hex-O-Spell approach ( ), the user views a monitor with a hexagon surrounded by six other hexagons. The central hexagon contains an arrow, while the other hexagons each contain six letters or other characters. At the start of each trial, the arrow begins moving in a clockwise direction. When the arrow points to a hexagon containing the desired group of characters, the user can perform motor imagery (such as left hand grasping) to make the arrow longer until it reaches the desired hexagon. Next, the arrow returns to its starting point, while the other six hexagons’ contents each change to one of the six characters that the user just chose. The arrow begins moving again, and the user can choose one of the six characters in the same fashion. Thus, Hex-O-Spell provides an intuitive two-level spelling interface, with clear trial timing and goals, based on simple binary motor control ( ).
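The two-level Hex-O-Spell logic can be pictured as two consecutive selection sweeps driven by a single binary motor-imagery command. The sketch below is only an illustration of that control flow; the character groups, the timing, and the detector interface are hypothetical.

# Hypothetical sketch of the Hex-O-Spell two-stage selection loop.
GROUPS = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ.,?!", "_12345"]  # assumed layout

def hex_o_spell_select(imagery_detected, advance_arrow):
    """Two-stage selection: first pick a hexagon (character group), then a character.

    imagery_detected : placeholder for the BCI's binary motor-imagery detector
    advance_arrow    : placeholder that rotates the on-screen arrow to the next hexagon
    """
    def pick(options):
        while True:
            for choice in options:        # the arrow sweeps clockwise over the hexagons
                advance_arrow()
                if imagery_detected():    # sustained imagery extends the arrow and selects
                    return choice

    group = pick(GROUPS)                  # stage 1: choose a group of characters
    return pick(list(group))              # stage 2: choose one character from that group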

Control Functions

Computer Functions

The first noninvasive BCI publication described an SSVEP-based BCI in which the user could direct a cursor up, left, down, or right by focusing on one of four oscillating boxes on a monitor ( ). Several groups have described noninvasive BCIs for one-, two-, or three-dimensional cursor control ( ).

Cursor movement has been extended to a variety of tasks with noninvasive BCIs, including web browsing ( ) and gaming/virtual navigation ( ). BCI-based control of smart homes can also implement virtual navigation through a home environment ( ).

Another way that BCIs can replace lost functions is through providing a mechanism for creative expression. BCIs have been used to compose music based on EEG measures of emotion ( ). The Brain Painting system allows users to create paintings on a monitor through motor imagery or P300 activity ( ). Several ALS patients have posted their paintings online, and reported significant enjoyment using the system.

Worn Robotic Devices

BCIs have been validated for control of wearable robotic devices such as orthoses, prostheses, and exoskeletons. In two studies ( ), subjects used SSVEP activity to control a hand orthosis. In one of these studies ( ), the system also allowed subjects to use mu activity to activate or deactivate the LEDs generating the SSVEP. This hybrid approach allowed users to reduce the annoyance caused by flickering stimuli. Related work with BCIs to control functional electrical stimulation, prostheses, and exoskeletons shows potential to both replace natural mobility and facilitate therapy ( ).
