Thin-film microelectrode arrays produced at Lawrence Livermore National Laboratory (LLNL) have enabled development of an automated system that sorts brain activity by individual neuron. The technology could open the door to recording and analyzing unprecedented amounts of neural signals over time and, ultimately, provide scientists with new clues about how the brain learns and communicates.
The work, part of a joint project between the University of California, San Francisco (UCSF), LLNL and the Flatiron Institute, featured contributions from several current and former LLNL researchers over the past six years. Using a flexible neural probe initially developed at the Lab for an artificial retina and since improved and adapted for several brain-related projects, a team led by UCSF neuroscience professor Loren Frank created an algorithm and open-source software suite called “MountainSort,” capable of automatically sorting the “spiking” signatures of neural activity picked up by the implanted electrodes. The work was recently featured in the journal Neuron.
“Fundamentally, we want to answer the question, ‘How does the brain work?’ ” said LLNL scientist Angela Tooker, a co-author on the paper. “In order to find out how the brain learns or forms memories, you need to be able to see changes in the brain and have the ability to record activity over long periods of time. To really understand all the communication pathways in different regions of the brain, five-minute recordings aren’t going to be able to cut it. We want to get a lot of data on a small area and see how the neurons are communicating. If you had to manually sort it, it would really limit the number of electrodes and how long you can look at changes in the brain over time. It’s just not something anyone can do by hand.”
Typically, researchers manually assign recordings of brain activity to individual neurons by determining which “spikes” (brief deflections in a neuron’s electrical potential, known as action potentials) are coming from which cells, a tedious and time-consuming process. Microelectrodes are also increasingly sensitive to background noise from hundreds of distant neurons, and when nearby neurons fire simultaneously, their spikes can overlap. For these reasons, most laboratories rely on manual sorting; even when sorting algorithms are used, a human must still select which clusters to reject, merge or split. As electrode arrays reach ever higher densities, up to hundreds of channels of information, manually spike-sorting these massive datasets is becoming an impossible task for humans.
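The two-step process described above, detecting threshold-crossing spikes and then grouping them into clusters that correspond to individual neurons, can be sketched in a few lines of code. The snippet below is an illustrative toy only, not MountainSort itself: it builds a synthetic single-channel recording containing two simulated neurons, detects spikes with a robust noise-based threshold, and separates them into two clusters using a single waveform feature (trough depth). All names, amplitudes and rates here are made-up assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-channel recording: two simulated "units" with different
# spike amplitudes embedded in Gaussian background noise.
fs = 30000                       # sample rate in Hz (assumed)
n = fs * 2                       # 2 seconds of data
signal = rng.normal(0.0, 1.0, n)  # background noise, sigma = 1

template = -np.sin(np.linspace(0, np.pi, 30))  # crude 1 ms spike shape
for amp in (8.0, 15.0):                         # one amplitude per unit
    times = rng.choice(np.arange(100, n - 100), size=40, replace=False)
    for t in times:
        signal[t:t + 30] += amp * template

# Step 1: detect spikes as negative crossings of a threshold set at
# 5x a robust noise estimate (median absolute deviation).
sigma = np.median(np.abs(signal)) / 0.6745
threshold = -5.0 * sigma
crossings = np.flatnonzero(
    (signal[1:] < threshold) & (signal[:-1] >= threshold)
)
# Enforce a refractory gap so one spike yields one detection.
keep = np.insert(np.diff(crossings) > 30, 0, True)
crossings = crossings[keep]

# Step 2: extract one feature per spike (trough depth) and split the
# spikes into two clusters at the midpoint between the two groups.
troughs = np.array([signal[c:c + 30].min() for c in crossings])
cut = troughs.mean()
labels = (troughs < cut).astype(int)  # deeper trough -> second unit

print("detected spikes:", len(crossings))
print("cluster sizes:", np.bincount(labels))
```

Real sorters face the complications the article describes: overlapping spikes from simultaneously firing neurons, drifting waveforms over long recordings, and hundreds of channels rather than one, which is exactly why automated, validated clustering is needed.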