What Do Manufacturing and Neuroscience Have in Common? A Data Scientist Explains.
by Ilya Prokin

Modern machine learning is transforming many businesses (see this report from MIT Sloan Management Review).
At Dataswati, we are contributing to this transformation by working towards AI-enhanced manufacturing that depends less on a specialized engineering workforce. What follows is a story about applying data science and machine learning to neuroscience (the topic of my Ph.D.), with some lessons for manufacturing at the end. Read on.

Long story short

Neurons in the brain communicate with sequences of rapid activation events called spikes. My co-authors at the Group for Neural Theory at École Normale Supérieure (ENS), Ivan Lazarevich and Boris Gutkin, and I were surprised that the problem of classifying these spike sequences (the spiking code) of single neurons had not been approached as a pure data science problem. Not until now.

In our recent paper, we pioneered several approaches to single-neuron activity data mining for various classification problems.

Background

To be fair, data science and machine learning methods are no strangers to neuroscience. They are actively used for the analysis of whole-brain-level recordings (e.g. fMRI, EEG, MEG data). This type of data has been in the spotlight because the coordinated activity of neural populations is considered to orchestrate global states of the brain, for instance different phases of learning, phases of sleep, awake resting states, and disease-induced vs. normal states. However, the activity of a single neuron had never been considered as a predictor of these global states.

Why is the classification of global brain states based on single-neuron activity an important problem? For at least two reasons: i) if solved, it could drastically reduce the amount of data needed for these classification tasks (single-neuron data vs. population data); ii) it lets us quantify how much predictive information is contained in the spiking code of an individual neuron.

What follows is our solution to the problem of single neuron activity decoding for brain state classification.

A baseline approach

For our baseline approach, we first developed several efficient representations of neuronal spiking-activity time series and used k-nearest neighbors (kNN) methods with a range of distance metrics, including ones that are not standard for neuroscientific data analysis.
For some problems, such as classification of neuron types based on their activity, we found that spike-sequence similarity measures sometimes used in neuroscience were outperformed by non-trivial metrics that we applied, such as the Kolmogorov–Smirnov or Wasserstein distances. You can learn more about these distance metrics at https://statweb.stanford.edu/~souravc/Lecture2.pdf.
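To illustrate the idea (not our actual pipeline), here is a minimal sketch of kNN classification of spike trains using the Wasserstein distance between their inter-spike-interval (ISI) distributions. The spike trains, firing rates, and labels below are synthetic, purely for illustration; `scipy` and `scikit-learn` support this pattern via a precomputed distance matrix.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def make_spike_train(rate, n=200):
    # Poisson-like spike train: spike times are cumulative sums of
    # exponentially distributed inter-spike intervals at the given rate
    return np.cumsum(rng.exponential(1.0 / rate, size=n))

# Two synthetic "neuron classes" that differ only in firing rate
trains = [make_spike_train(5.0) for _ in range(20)] + \
         [make_spike_train(20.0) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

def isi(train):
    # Represent each spike train by its inter-spike intervals
    return np.diff(train)

# Pairwise Wasserstein distances between ISI distributions
n = len(trains)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = wasserstein_distance(isi(trains[i]), isi(trains[j]))
        D[i, j] = D[j, i] = d

# kNN over the precomputed distance matrix
knn = KNeighborsClassifier(n_neighbors=3, metric="precomputed")
knn.fit(D, labels)
print(knn.score(D, labels))  # training accuracy on this toy data
```

The same skeleton works for any distance: swap `wasserstein_distance` for a Kolmogorov–Smirnov statistic (`scipy.stats.ks_2samp(...).statistic`) or any spike-train similarity measure that returns a scalar.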

What if, instead of naive baseline methods, we use more advanced ones?

The great advantage of modern machine learning over classical data analysis methods is its greater flexibility: we can greatly reduce (if not eliminate) manual feature/metric engineering and get rid of human bias.

We therefore extracted a variety of features (properties) from neuronal spiking time series automatically and used various machine learning models, each capable of learning different kinds of input-output dependencies and extracting different types of information. We tried kNN, logistic regression with different types of regularization, Random Forests and Extremely Randomized Trees, Gradient Boosted Decision Trees (GBM), SAX-VSM, and BOSS VS, and finally we used model ensembling (stacking and blending) to get the best out of a bunch of different models.
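The ensembling step can be sketched as follows. This is a minimal illustration with scikit-learn, assuming features have already been extracted from the spike trains (here replaced by a synthetic dataset); it is not the configuration used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features extracted from spiking time series
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base models of different families, combined by a logistic-regression
# meta-learner trained on their out-of-fold predictions (stacking)
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=100, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))  # held-out accuracy of the ensemble
```

Stacking tends to help precisely because the base models make different kinds of errors: the meta-learner learns which model to trust on which part of the feature space.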

We found that contemporary machine learning approaches, such as Gradient Boosted Decision Trees (the xgboost implementation) trained on a large set of features, outperformed our baseline. Moreover, by combining different methods, we were able to make these results even stronger. We are currently applying state-of-the-art deep learning approaches and further quantifying the predictive information contained in the activity of single neurons.

Conclusion

In short, what works for the neural code works for a range of other systems as well, for instance complex industrial processes. In our day-to-day work at Dataswati, we see the same pattern time and time again: classical naive approaches to time-series analysis are often outperformed by state-of-the-art machine learning that effectively captures complex temporal patterns in the data.

To read the full story please click here.

About the author

Ilya Prokin defended his Ph.D. in Computational Neuroscience at INRIA Rhône-Alpes in Lyon under the supervision of Hugues Berry within the BEAGLE team. In his Ph.D. research, Ilya used mathematical modeling and computer simulation of subcellular signal transduction pathways to study the basis of learning in the basal ganglia: the synaptic plasticity of basal ganglia neurons. A year ago, he attended the PhDTalent Career Fair, where he found his current job as a Data Scientist at a French start-up, Dataswati.