
Rxivist combines preprints from bioRxiv with data from Twitter to help you find the papers being discussed in your field. Currently indexing 67,594 bioRxiv papers from 298,144 authors.

Modern machine learning outperforms GLMs at predicting spikes

By Ari S. Benjamin, Hugo L. Fernandes, Tucker Tomlinson, Pavan Ramkumar, Chris VerSteeg, Raeed Chowdhury, Lee Miller, Konrad Paul Kording

Posted 24 Feb 2017
bioRxiv DOI: 10.1101/111450 (published DOI: 10.3389/fncom.2018.00056)

Neuroscience has long focused on finding encoding models that effectively ask "what predicts neural spiking?" and generalized linear models (GLMs) are a typical approach. It is often unknown how much of explainable neural activity is captured, or missed, when fitting a GLM. Here we compared the predictive performance of GLMs to three leading machine learning methods: feedforward neural networks, gradient boosted trees (using XGBoost), and stacked ensembles that combine the predictions of several methods. We predicted spike counts in macaque motor (M1) and somatosensory (S1) cortices from standard representations of reaching kinematics, and in rat hippocampal cells from open field location and orientation. In general, the modern methods (particularly XGBoost and the ensemble) produced more accurate spike predictions and were less sensitive to the preprocessing of features. This discrepancy in performance suggests that standard feature sets may often relate to neural activity in a nonlinear manner not captured by GLMs. Encoding models built with machine learning techniques, which can be largely automated, more accurately predict spikes and can offer meaningful benchmarks for simpler models.

Download data

  • Downloaded 7,477 times
  • Download rankings, all-time:
    • Site-wide: 213 out of 67,594
    • In neuroscience: 35 out of 12,116
  • Year to date:
    • Site-wide: 488 out of 67,594
  • Since beginning of last month:
    • Site-wide: 1,575 out of 67,594

