
Diverse deep neural networks all predict human IT well, after training and fitting

By Katherine R. Storrs, Tim C. Kietzmann, Alexander Walther, Johannes Mehrer, Nikolaus Kriegeskorte

Posted 08 May 2020
bioRxiv DOI: 10.1101/2020.05.07.082743

Deep neural networks (DNNs) trained on object recognition provide the best current models of high-level visual areas in the brain. What remains unclear is how strongly network design choices, such as architecture, task training, and subsequent fitting to brain data, contribute to the observed similarities. Here we compare a diverse set of nine DNN architectures on their ability to explain the representational geometry of 62 isolated object images in human inferior temporal (hIT) cortex, as measured with functional magnetic resonance imaging. We compare untrained networks to their task-trained counterparts, and assess the effect of fitting them to hIT using a cross-validation procedure. To best explain hIT, we fit a weighted combination of the principal components of the features within each layer, and subsequently a weighted combination of layers. We test all models across all stages of training and fitting for their correlation with the hIT representational dissimilarity matrix (RDM) using an independent set of images and subjects. We find that trained models significantly outperform untrained models (accounting for 57% more of the explainable variance), suggesting that features representing natural images are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the particular ImageNet object-recognition task used to train the networks. Finally, all DNN architectures tested achieved equivalent high performance once trained and fitted. Similar ability to explain hIT representations appears to be shared among deep feedforward hierarchies of nonlinear features with spatially restricted receptive fields.

Competing Interest Statement

The authors have declared no competing interest.
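The evaluation pipeline the abstract describes (build an RDM from each candidate model's features, fit a weighted combination of layer RDMs to the brain RDM, then score the fit by correlating RDMs) can be illustrated with a minimal sketch. This is not the authors' code: the helper names (`rdm`, `upper_tri`, `fit_layer_weights`) are hypothetical, Pearson correlation distance stands in for whatever dissimilarity measure was used, a simple projected-gradient non-negative least squares stands in for the paper's cross-validated fitting procedure, and the per-layer principal-component reweighting step is omitted entirely.

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix for a (n_images, n_features)
    array: 1 minus the Pearson correlation between image feature vectors."""
    return 1.0 - np.corrcoef(features)

def upper_tri(m):
    """Off-diagonal upper triangle of a symmetric RDM, as a flat vector."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def fit_layer_weights(layer_rdms, target_rdm, n_iter=2000):
    """Fit non-negative weights over layer RDMs to approximate the target
    RDM, via projected gradient descent on the least-squares objective."""
    X = np.stack([upper_tri(r) for r in layer_rdms], axis=1)
    y = upper_tri(target_rdm)
    w = np.full(X.shape[1], 1.0 / X.shape[1])
    lr = 1.0 / (np.linalg.norm(X, ord=2) ** 2 + 1e-9)  # 1 / sigma_max(X)^2
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = np.maximum(w - lr * grad, 0.0)  # project onto w >= 0
    return w

# Toy example: 62 "images", two fake layers, and a target RDM that is
# by construction a 0.7 / 0.3 mixture of the two layer RDMs.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((62, 100)) for _ in range(2)]
layer_rdms = [rdm(f) for f in layers]
target = 0.7 * layer_rdms[0] + 0.3 * layer_rdms[1]

w = fit_layer_weights(layer_rdms, target)
pred = sum(wi * r for wi, r in zip(w, layer_rdms))
r_val = np.corrcoef(upper_tri(pred), upper_tri(target))[0, 1]
```

In the paper the fit is cross-validated: weights are estimated on one set of images and subjects and the RDM correlation is evaluated on an independent set, so the score reflects generalization rather than in-sample fit.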

Download data

  • Downloaded 609 times
  • Download rankings, all-time:
    • Site-wide: 24,149 out of 89,036
    • In neuroscience: 4,043 out of 15,842
  • Year to date:
    • Site-wide: 2,902 out of 89,036
  • Since beginning of last month:
    • Site-wide: 4,009 out of 89,036
