Individual differences among deep neural network models

By Johannes Mehrer, Courtney J Spoerer, Nikolaus Kriegeskorte, Tim C Kietzmann

Posted 09 Jan 2020
bioRxiv DOI: 10.1101/2020.01.08.898288

Deep neural networks (DNNs) excel at visual recognition tasks and are increasingly used as a modelling framework for neural computations in the primate brain. However, each DNN instance, just like each individual brain, has a unique connectivity and representational profile. Here, we investigate individual differences among DNN instances that arise from varying only the random initialization of the network weights. Using representational similarity analysis, we demonstrate that this minimal change in initial conditions prior to training leads to substantial differences in intermediate and higher-level network representations, despite achieving indistinguishable network-level classification performance. We trace the origin of these effects to an under-constrained alignment of category exemplars, rather than a misalignment of category centroids. Furthermore, while network regularization can increase the consistency of learned representations, considerable differences remain. These results suggest that computational neuroscientists working with DNNs should base their inferences on multiple network instances instead of single off-the-shelf networks.
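The representational similarity analysis mentioned in the abstract can be sketched in a few lines: build a representational dissimilarity matrix (RDM) per network instance, then correlate the RDMs' upper triangles (the standard second-order comparison). The simulated activations below are stand-ins for real trained DNNs; all variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activation patterns for each pair of stimuli.
    activations has shape (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(activations)

def rdm_similarity(rdm_a, rdm_b):
    """Compare two RDMs by correlating their upper triangles
    (the second-order similarity used in RSA)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Two simulated network "instances" that see the same stimuli but
# project them through different random weights, mimicking networks
# that differ only in their random initialization.
rng_a, rng_b = np.random.default_rng(0), np.random.default_rng(1)
stimuli = rng_a.normal(size=(50, 20))            # shared input structure
acts_a = stimuli @ rng_a.normal(size=(20, 100))  # instance A's layer activations
acts_b = stimuli @ rng_b.normal(size=(20, 100))  # instance B's layer activations

consistency = rdm_similarity(rdm(acts_a), rdm(acts_b))
print(f"between-instance RDM correlation: {consistency:.2f}")
```

A between-instance RDM correlation well below 1 for trained networks with matched accuracy is the kind of individual difference the paper quantifies.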

Download data

  • Downloaded 1,522 times
  • Download rankings, all-time:
    • Site-wide: 5,710 out of 89,036
    • In neuroscience: 843 out of 15,842
  • Year to date:
    • Site-wide: 775 out of 89,036
  • Since beginning of last month:
    • Site-wide: 8,658 out of 89,036

