Getting Better Performance from Brains and Computers

Tufts engineers image hotspots in brains to deal with cognitive overload and in computer chips to make computers more efficient

The newest generation of integrated circuits is able to process calculations with blazing speed, but that performance can come at a cost.

Every push of an electron to move 1s and 0s creates a thermal footprint. If the paths of those moving electrons begin to concentrate in one part of the chip, that hotspot can cause a variety of unwanted effects, from slower performance to a shorter lifespan for the chip.

Finding ways to detect those hotspots and cool them down by adjusting either the design of the chip or the algorithms that run through it could lead to improved chip performance.

We humans have a similar problem. Learning a new skill like playing the cello or computer programming can be fun, but it’s also inevitable that we stumble along the way, because the information may be coming too fast or we don’t have enough time to process what we have learned.

Cognitive overload can show up as hotspots in the brain, marked by changes in blood flow and oxygenation. If there were a way to detect that overload in real time and adjust the task to be more manageable, we could work more efficiently and learn more quickly.

In recently published research, Tufts engineering faculty highlight two very different but analogous methods of detecting hotspots: one in electronic circuits, to ease the burden of computation, and one in the brain, to improve the efficiency of cognition and learning.

Optimizing Wetware

In the Tufts Human-Computer Interaction lab, a student is given the task to learn a Bach piano chorale for the first time. She puts on a cloth headband that presses glowing optical fibers against her forehead, and although she is a beginner, she is steadily taken through each step of the lesson at a pace that follows her natural ability to learn and retain information.

She finishes the lesson in a short period of time—shorter than she expects. But it doesn’t surprise the researchers who designed the headband device she’s wearing. It has been reading her brain activity to automatically tailor the lesson to her abilities.

In another experiment, an air traffic controller is provided a simulation of a task that he has done countless times—managing multiple incoming and outgoing flights. The task, though, still causes stress and potential errors when the rate and complexity of needed decisions come on too fast.

But this time, as he handles the tasks, some of the traffic may be automatically off-loaded to a colleague who has capacity for more, or if his workload is low, he may be assigned additional traffic, to steer between boredom and overload. Again, the headband device is reading his cognitive load and adjusting the flow of work.

The brain scanning device was created by Tufts computer science and biomedical engineering faculty Robert Jacob and Sergio Fantini. It’s based on a technology called functional near-infrared spectroscopy (fNIRS). Near-infrared light, which sits just below visible red in frequency and above the rest of the infrared band, is passed through optical fibers in the headband and penetrates harmlessly through the skull to reach the cerebral cortex.

That’s where much of cognitive processing—or thinking—takes place. Looking at the red to infrared glow that radiates back provides information on how much blood flow, and thus brain activity, is occurring in the tissue below.
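
For readers who want a sense of the underlying arithmetic, the sketch below applies the modified Beer-Lambert law that fNIRS systems commonly use to turn dimming of the returned light into changes in oxy- and deoxy-hemoglobin concentration. The wavelengths, extinction coefficients, and path lengths here are rough illustrative placeholders, not the values used by the Tufts instrument.

```python
import numpy as np

# Illustrative modified Beer-Lambert calculation, as commonly used in fNIRS.
# All numbers are rough placeholders, not values from the Tufts instrument.

# Approximate extinction coefficients [1/(mM*cm)] of oxy- (HbO) and
# deoxy-hemoglobin (HbR) at two near-infrared wavelengths (~760 nm, ~850 nm).
EXTINCTION = np.array([
    [0.38, 1.55],   # ~760 nm: [HbO, HbR]
    [1.06, 0.78],   # ~850 nm: [HbO, HbR]
])

def hemoglobin_changes(intensity, baseline, separation_cm=3.0, dpf=6.0):
    """Turn measured light intensities at the two wavelengths into changes in
    HbO and HbR concentration (mM) relative to a baseline reading."""
    # Change in optical density at each wavelength.
    delta_od = -np.log10(np.asarray(intensity) / np.asarray(baseline))
    # Effective path length: source-detector separation times a differential
    # pathlength factor, since scattered light travels farther than a straight line.
    path = separation_cm * dpf
    # Solve the 2x2 system  delta_od = EXTINCTION @ delta_conc * path.
    delta_conc = np.linalg.solve(EXTINCTION, delta_od / path)
    return {"HbO": delta_conc[0], "HbR": delta_conc[1]}

# Example: both wavelengths dim slightly as blood flow to the region rises.
print(hemoglobin_changes(intensity=[0.95, 0.93], baseline=[1.0, 1.0]))
```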

The method can potentially provide a map of blood flow activity in the brain to a resolution of about one centimeter. Of course, other imaging methods such as functional MRI provide much higher resolution—to about half a millimeter—but the infrared light does not require large equipment or immobilization of the person, and readings can be made in real time and processed on a laptop computer or phone.

This trade-off means that there are applications available to fNIRS that cannot be achieved with other methods—it can provide information on brain activity while the subject is at home or in an office.

“We are developing what is called an implicit user interface,” says Jacob. “We want to get information from you, from your body’s response, without you pressing a button or turning a knob, to help a computer or device respond so that you can interact more effectively with it.”

Another potential application would be as a diagnostic tool for attention deficit disorder (ADD), says Fantini. Individuals with the condition tend to self-regulate their cognitive overload by shifting from a task that requires intense focus to other tasks or distractions that are not as demanding of their concentration.

Patterns of cognitive load that show frequent shifts from higher to lower cognitive effort, or an abrupt disengagement from high cognitive load when a difficult task is presented, could be indicative of ADD.
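
As a purely hypothetical illustration of what such a pattern measure might look like, the snippet below counts how often a classified workload trace drops from high to low effort; the labels, window length, and the idea of a simple per-minute rate are invented for illustration and are not part of any validated diagnostic.

```python
from typing import Sequence

def high_to_low_rate(workload: Sequence[int], window_seconds: float) -> float:
    """Count high-to-low transitions per minute in a binary workload trace.

    `workload` is a hypothetical stream of per-window labels (1 = high
    cognitive load, 0 = low), e.g. the output of an fNIRS workload classifier.
    """
    drops = sum(1 for prev, cur in zip(workload, workload[1:])
                if prev == 1 and cur == 0)
    minutes = len(workload) * window_seconds / 60.0
    return drops / minutes if minutes > 0 else 0.0

# Example: a trace that repeatedly disengages from demanding stretches.
trace = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
print(f"{high_to_low_rate(trace, window_seconds=10):.1f} high-to-low shifts per minute")
```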

Improving the precision of the measurements will be critical to making the device a reliable tool for other researchers or even a consumer market. Getting consistent results has required an hour of recording known high and low workloads to calibrate the instrument before measurement of a new task can begin.

In their most recent publications, the researchers collaborated with Michael Hughes, an assistant professor of computer science, to develop machine learning methods that pre-train the brain workload detector on dozens of past subjects. That makes calibration on a new subject much faster. They have made the pre-training set public, so it will be easier for others to set up their own workload detectors.
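
The published methods are more sophisticated, but the generic sketch below conveys the pre-training idea: fit a workload classifier on pooled data from many past subjects, then fold in a short calibration block from the new subject. The synthetic features, logistic-regression model, and sample weighting are assumptions made for illustration, not the approach taken in the papers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_session(n_trials, n_features=16, shift=0.0):
    """Stand-in for fNIRS features (e.g., hemoglobin changes per channel)
    with binary high/low workload labels."""
    labels = rng.integers(0, 2, size=n_trials)
    features = (rng.normal(size=(n_trials, n_features))
                + 0.8 * labels[:, None] + shift)
    return features, labels

# "Pre-training" set pooled from many past subjects.
X_pool, y_pool = fake_session(2000)

# A new subject contributes only a short calibration block.
X_cal, y_cal = fake_session(30, shift=0.3)

# Fit one model on the pooled data plus the upweighted calibration trials,
# so the new subject needs minutes of calibration instead of an hour.
X = np.vstack([X_pool, X_cal])
y = np.concatenate([y_pool, y_cal])
weights = np.concatenate([np.ones(len(y_pool)), np.full(len(y_cal), 20.0)])

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y, sample_weight=weights)

X_test, y_test = fake_session(200, shift=0.3)
print("accuracy on the new subject:", clf.score(X_test, y_test))
```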

“We’re pushing for portable, wearable, and wireless instrumentation,” says Fantini. “Initial prototypes that are wireless or linked to a smartphone have been developed. Ideas of including these devices within glasses or hats have also been explored, but this is still something for the future.”

Optimizing Hardware

Hotspots on computer microprocessors also indicate an overload of activity that can lead to inefficiencies and even errors. In a recent study, Mark Hempstead, an associate professor of electrical and computer engineering, describes a new computer model called HotGauge, which simulates the computational activity of a CPU chip and the effect of workloads that generate heat throughout the chip.

The simulation can help find ways to redesign the chips, or the programs that run on them, to minimize overheating.

Since the mid-1970s, improvements in microchip capabilities were driven by packing transistors ever more densely, allowing higher rates of operation while keeping the power drawn, and thus the heat generated, per unit of chip area roughly constant. This was known as Dennard scaling.
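
The textbook version of that scaling argument, included here for reference rather than drawn from the Tufts study, runs as follows:

```latex
% Classic Dennard-scaling arithmetic (textbook form).
% Shrink linear dimensions, voltage, and capacitance by a factor k > 1
% while the clock frequency rises by k:
\begin{align*}
P_{\text{transistor}} \propto C V^2 f
  \;\longrightarrow\; \frac{C}{k}\cdot\frac{V^2}{k^2}\cdot kf
  = \frac{C V^2 f}{k^2},
\qquad
A_{\text{transistor}} \longrightarrow \frac{A_{\text{transistor}}}{k^2},
\end{align*}
so power per unit area, $P/A$, stays constant even as transistors shrink.
```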

But that era has passed. More recently, power density has increased exponentially with each generation of chip design. Making the situation worse is the trend of cramming more functionality into smaller and smaller chips that fit into everything from supercomputers to laptops to phones.

The problem of hotspots has grown into a significant challenge in the design of each new generation of central processing units (CPUs). The hotspots are not just hotter; they also appear and disappear within as little as 200 microseconds, which is brief but still long enough to compromise efficiency and degrade the life expectancy of the chip.

For a long time, temperature was regulated with physical techniques: heat sinks, fans, and, in large-scale or supercomputing applications, liquid cooling. Chips also use power-gating, which shuts off parts that are not in use, and dynamic scaling of supply voltage and clock frequency to conserve energy. These approaches are no longer sufficient.
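
As a rough illustration of how dynamic voltage and frequency scaling behaves, the toy control loop below throttles a core when it runs hot and ramps it back up as it cools; the thresholds, step sizes, and operating points are invented, not taken from any real governor.

```python
from dataclasses import dataclass

# Toy dynamic voltage/frequency scaling (DVFS) loop: throttle a core when it
# runs hot, ramp back up when it cools. All thresholds and steps are made up.

@dataclass
class CoreState:
    freq_ghz: float = 3.5
    volts: float = 1.2

def dvfs_step(core: CoreState, temp_c: float,
              hot_c: float = 90.0, cool_c: float = 70.0) -> CoreState:
    """Pick the next operating point for one core given its temperature."""
    if temp_c > hot_c:
        # Step frequency and voltage down together; dynamic power scales
        # roughly with C * V^2 * f, so a small voltage drop saves a lot of heat.
        core.freq_ghz = max(0.8, core.freq_ghz - 0.4)
        core.volts = max(0.7, core.volts - 0.05)
    elif temp_c < cool_c:
        core.freq_ghz = min(3.5, core.freq_ghz + 0.2)
        core.volts = min(1.2, core.volts + 0.025)
    return core

core = CoreState()
for temp in [95, 93, 88, 75, 65, 60]:
    core = dvfs_step(core, temp)
    print(f"temp={temp} C -> {core.freq_ghz:.1f} GHz @ {core.volts:.3f} V")
```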

What’s needed today, says Hempstead, is a more active role in the design of chip architecture and applications to minimize the frequency and severity of the hotspots. To do that, you first need to quantify the hotspot effects of the various architecture and application designs. That’s where HotGauge comes in.

HotGauge, created by Hempstead, allows one to build a virtual chip, run a simulated application on it, and see the likely heat signatures that will result.
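
The general shape of such a simulation can be sketched in a few lines: start from a map of how much power each region of the chip dissipates, then let heat spread sideways and drain to the heat sink over many small time steps. The toy below does that on a coarse grid; it is only a cartoon of the idea, not HotGauge's thermal model, and every constant in it is made up.

```python
import numpy as np

# Cartoon of thermal simulation on a chip floorplan: a per-tile power map plus
# heat diffusion and cooling over time. Not HotGauge's model; constants are made up.

GRID = 32
power = np.zeros((GRID, GRID))      # watts dissipated per tile by the workload
power[8:12, 8:12] = 2.0             # a busy functional unit (say, a vector ALU)
power[20:22, 5:25] = 0.8            # a moderately active cache slice

temp = np.full((GRID, GRID), 45.0)  # start everything at 45 C
ALPHA, HEATING, COOLING, AMBIENT = 0.2, 0.5, 0.01, 45.0

for _ in range(500):
    # Lateral conduction: each tile relaxes toward the mean of its neighbors.
    neighbors = (np.roll(temp, 1, 0) + np.roll(temp, -1, 0) +
                 np.roll(temp, 1, 1) + np.roll(temp, -1, 1)) / 4.0
    temp += ALPHA * (neighbors - temp)   # spread heat sideways
    temp += HEATING * power              # heat injected by activity
    temp -= COOLING * (temp - AMBIENT)   # loss to the heat sink and ambient air

hot = np.unravel_index(temp.argmax(), temp.shape)
print(f"peak temperature {temp.max():.1f} C at tile {hot}")
```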

In the recent study, Hempstead demonstrated that HotGauge can pinpoint trouble spots and allow one to adjust the design of the chip or the application to help the chip run cooler, or at least avoid local regions of spiking temperature—proof in principle that it can make a valuable contribution to the design of future efficient CPUs.

Mike Silver can be reached at mike.silver@tufts.edu.
