Most downloaded biology preprints, all time
in category neuroscience
18,148 results found.
8,550 downloads bioRxiv neuroscience
Chethan Pandarinath, Daniel J. O’Shea, Jasmine Collins, Rafal Jozefowicz, Sergey D. Stavisky, Jonathan C Kao, Eric M. Trautmann, Matthew T. Kaufman, Stephen I. Ryu, Leigh R. Hochberg, Jaimie M. Henderson, Krishna V. Shenoy, L. F. Abbott, David Sussillo
Neuroscience is experiencing a data revolution in which simultaneous recording of many hundreds or thousands of neurons is revealing structure in population activity that is not apparent from single-neuron responses. This structure is typically extracted from trial-averaged data. Single-trial analyses are challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. Here we introduce Latent Factor Analysis via Dynamical Systems (LFADS), a deep learning method to infer latent dynamics from single-trial neural spiking data. LFADS uses a nonlinear dynamical system (a recurrent neural network) to infer the dynamics underlying observed population activity and to extract ‘de-noised’ single-trial firing rates from neural spiking data. We apply LFADS to a variety of monkey and human motor cortical datasets, demonstrating its ability to predict observed behavioral variables with unprecedented accuracy, extract precise estimates of neural dynamics on single trials, infer perturbations to those dynamics that correlate with behavioral choices, and combine data from non-overlapping recording sessions (spanning months) to improve inference of underlying dynamics. In summary, LFADS leverages all observations of a neural population's activity to accurately model its dynamics on single trials, opening the door to a detailed understanding of the role of dynamics in performing computation and ultimately driving behavior.
8,485 downloads bioRxiv neuroscience
We have empirically assessed the distribution of published effect sizes and estimated power by extracting more than 100,000 statistical records from about 10,000 cognitive neuroscience and psychology papers published during the past 5 years. The reported median effect size was d=0.93 (inter-quartile range: 0.64-1.46) for nominally statistically significant results and d=0.24 (0.11-0.42) for non-significant results. Median power to detect small, medium and large effects was 0.12, 0.44 and 0.73, reflecting no improvement through the past half-century. Power was lowest for cognitive neuroscience journals. 14% of papers reported some statistically significant results, although the respective F statistic and degrees of freedom proved that these were non-significant; p value errors positively correlated with journal impact factors. False report probability is likely to exceed 50% for the whole literature. In light of our findings the recently reported low replication success in psychology is realistic and worse performance may be expected for cognitive neuroscience.
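The power figures quoted above (0.12, 0.44, 0.73 for small, medium, and large effects) can be approximated from first principles. The sketch below uses the normal approximation to a two-sided two-sample t-test; the sample size of 20 per group is an assumption chosen for illustration, not a figure from the paper:

```python
from statistics import NormalDist
from math import sqrt

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test for
    effect size d (Cohen's d), via the normal approximation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality under H1
    # probability of landing beyond either critical value under H1
    return (1 - nd.cdf(z_crit - ncp)) + nd.cdf(-z_crit - ncp)

# conventional small / medium / large effects, 20 subjects per group
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(label, round(power_two_sample(d, 20), 2))
```

With 20 subjects per group, power for a medium effect is only about 0.35, which makes the low median power the abstract reports plausible for typical sample sizes in the field.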
8,312 downloads bioRxiv neuroscience
Over the last decade, artificial neural networks (ANNs) have undergone a revolution, catalyzed in large part by better tools for supervised learning. However, training such networks requires enormous data sets of labeled examples, whereas young animals (including humans) typically learn with few or no labeled examples. This stark contrast with biological learning has led many in the ANN community to posit that, instead of supervised paradigms, animals must rely primarily on unsupervised learning, leading to the search for better unsupervised algorithms. Here we argue that much of an animal's behavioral repertoire is not the result of clever learning algorithms--supervised or unsupervised--but arises instead from behavior programs already present at birth. These programs arise through evolution, are encoded in the genome, and emerge as a consequence of wiring up the brain. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Recognizing the importance of this highly structured connectivity suggests a path toward building ANNs capable of rapid learning.
8,036 downloads bioRxiv neuroscience
Zhihao Zheng, J. Scott Lauritzen, Eric Perlman, Camenzind G. Robinson, Matthew Nichols, Daniel Milkie, Omar Torrens, John Price, Corey B. Fisher, Nadiya Sharifi, Steven A. Calle-Schuler, Lucia Kmecova, Iqbal J. Ali, Bill Karsh, Eric T. Trautman, John Bogovic, Philipp Hanslovsky, Gregory S. X. E. Jefferis, Michael Kazhdan, Khaled Khairy, Stephan Saalfeld, Richard D Fetter, Davi D. Bock
Drosophila melanogaster has a rich repertoire of innate and learned behaviors. Its 100,000-neuron brain is a large but tractable target for comprehensive neural circuit mapping. Only electron microscopy (EM) enables complete, unbiased mapping of synaptic connectivity; however, the fly brain is too large for conventional EM. We developed a custom high-throughput EM platform and imaged the entire brain of an adult female fly. We validated the dataset by tracing brain-spanning circuitry involving the mushroom body (MB), intensively studied for its role in learning. Here we describe the complete set of olfactory inputs to the MB; find a new cell type providing driving input to Kenyon cells (the intrinsic MB neurons); identify neurons postsynaptic to Kenyon cell dendrites; and find that axonal arbors providing input to the MB calyx are more tightly clustered than previously indicated by light-level data. This freely available EM dataset will significantly accelerate Drosophila neuroscience.
7,899 downloads bioRxiv neuroscience
Ruixuan Gao, Shoh M Asano, Srigokul Upadhyayula, Igor Pisarev, Daniel E. Milkie, Tsung-Li Liu, Ved Singh, Austin Graves, Grace H Huynh, Yongxin Zhao, John Bogovic, Jennifer Colonell, Jennifer Lippincott-Schwartz, Christopher Zugates, Susan Tappan, Alfredo Rodriguez, Kishore R. Mosaliganti, Sean G. Megason, Adam Hantman, Gerald M. Rubin, Tom Kirchhausen, Stephan Saalfeld, Yoshinori Aso, Edward S. Boyden, Eric Betzig
Optical and electron microscopy have made tremendous inroads in understanding the complexity of the brain, but the former offers insufficient resolution to reveal subcellular details and the latter lacks the throughput and molecular contrast to visualize specific molecular constituents over mm-scale or larger dimensions. We combined expansion microscopy and lattice light sheet microscopy to image the nanoscale spatial relationships between proteins across the thickness of the mouse cortex or the entire Drosophila brain, including synaptic proteins at dendritic spines, myelination along axons, and presynaptic densities at dopaminergic neurons in every fly neuropil domain. The technology should enable statistically rich, large scale studies of neural development, sexual dimorphism, degree of stereotypy, and structural correlations to behavior or neural activity, all with molecular contrast.
7,533 downloads bioRxiv neuroscience
When experts are immersed in a task, do their brains prioritize task-related activity? Most efforts to understand neural activity during well-learned tasks focus on cognitive computations and specific task-related movements. We wondered whether task-performing animals explore a broader movement landscape, and how this impacts neural activity. We characterized movements using video and other sensors and measured neural activity using widefield and two-photon imaging. Cortex-wide activity was dominated by movements, especially uninstructed movements, reflecting unknown priorities of the animal. Some uninstructed movements were aligned to trial events. Accounting for them revealed that neurons with similar trial-averaged activity often reflected utterly different combinations of cognitive and movement variables. Other movements occurred idiosyncratically, accounting for trial-by-trial fluctuations that are often considered “noise”. This held true for extracellular Neuropixels recordings in cortical and subcortical areas. Our observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity.
7,468 downloads bioRxiv neuroscience
Substantial research has investigated the association between intelligence and psychopathic traits. The findings to date have been inconsistent and have not always considered the multi-dimensional nature of psychopathic traits. Moreover, there has been a tendency to confuse psychopathy with other closely related, clinically significant disorders. The current study represents a meta-analysis conducted to evaluate the direction and magnitude of the association of intelligence with global psychopathy, as well as its factors and facets, and related disorders (Antisocial Personality Disorder, Conduct Disorder, and Oppositional Defiant Disorder). Our analyses revealed a small, significant, negative relationship between intelligence and total psychopathy ( r = -.07, p = .001). Analysis of factors and facets found differential associations, including both significant positive (e.g., interpersonal facet) and negative (e.g., affective facet) associations, further affirming that psychopathy is a multi-dimensional construct. Additionally, intelligence was negatively associated with Antisocial Personality Disorder ( r = -.13, p = .001) and Conduct Disorder ( r = -.11, p = .001), but positively with Oppositional Defiant Disorder ( r = .06, p = .001). There was significant heterogeneity across studies for most effects, but the results of moderator analyses were inconsistent. Finally, bias analyses did not find significant evidence for publication bias or outsized effects of outliers.
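The pooled correlations the abstract reports come from aggregating study-level r values. A minimal sketch of the standard fixed-effect approach via Fisher's z-transformation follows; the study-level correlations and sample sizes below are hypothetical, chosen only to illustrate the mechanics, not data from the meta-analysis:

```python
from math import atanh, tanh

def pooled_r(studies):
    """Fixed-effect meta-analytic correlation via Fisher's z.
    studies: list of (r, n) pairs; each study is weighted by
    n - 3, the inverse variance of z under the usual approximation."""
    num = sum((n - 3) * atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return tanh(num / den)

# hypothetical study-level correlations between intelligence
# and total psychopathy scores
studies = [(-0.10, 120), (-0.05, 300), (-0.08, 80)]
print(round(pooled_r(studies), 3))
```

The z-transformation stabilizes the variance of r, so studies can be averaged on the z scale and the result transformed back; larger studies dominate the pooled estimate.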
7,307 downloads bioRxiv neuroscience
We demonstrate a two-photon imaging system with corrected optics including a custom objective that provides cellular resolution across a 3.5 mm field of view (9.6 mm^2). Temporally multiplexed excitation pathways can be independently repositioned in XY and Z to simultaneously image regions within the expanded field of view. We used this new imaging system to measure activity correlations between neurons in different cortical areas in awake mice.
7,112 downloads bioRxiv neuroscience
Tamara B. Franklin, Bianca A. Silva, Zina Perova, Livia Marrone, Maria E. Masferrer, Yang Zhan, Angie Kaplan, Louise Greetham, Violaine Verrechia, Andreas Halman, Sara Pagella, Alexei L Vyssotski, Anna Illarionova, Valery Grinevich, Tiago Branco, Cornelius T. Gross
The prefrontal cortex plays a critical role in adjusting an organism's behavior to its environment. In particular, numerous studies have implicated the prefrontal cortex in the control of social behavior, but the neural circuits that mediate these effects remain unknown. Here we investigated behavioral adaptation to social defeat in mice and uncovered a critical contribution of neural projections from the medial prefrontal cortex to the dorsal periaqueductal grey, a brainstem area vital for defensive responses. Social defeat caused a weakening of functional connectivity between these two areas and selective inhibition of these projections mimicked the behavioral effects of social defeat. These findings define a specific neural projection by which the prefrontal cortex can control and adapt social behavior.
6,904 downloads bioRxiv neuroscience
Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Franziska Geiger, Kailyn Schmidt, Daniel L. K. Yamins, James J. DiCarlo
The internal representations of early deep artificial neural networks (ANNs) were found to be remarkably similar to the internal neural representations measured experimentally in the primate brain. Here we ask, as deep ANNs have continued to evolve, are they becoming more or less brain-like? ANNs that are most functionally similar to the brain will contain mechanisms that are most like those used by the brain. We therefore developed Brain-Score – a composite of multiple neural and behavioral benchmarks that score any ANN on how similar it is to the brain’s mechanisms for core object recognition – and we deployed it to evaluate a wide range of state-of-the-art deep ANNs. Using this scoring system, we here report that: (1) DenseNet-169, CORnet-S and ResNet-101 are the most brain-like ANNs. (2) There remains considerable variability in neural and behavioral responses that is not predicted by any ANN, suggesting that no ANN model has yet captured all the relevant mechanisms. (3) Extending prior work, we found that gains in ANN ImageNet performance led to gains on Brain-Score. However, correlation weakened at ≥ 70% top-1 ImageNet performance, suggesting that additional guidance from neuroscience is needed to make further advances in capturing brain mechanisms. (4) We uncovered smaller (i.e. less complex) ANNs that are more brain-like than many of the best-performing ImageNet models, which suggests the opportunity to simplify ANNs to better understand the ventral stream. The scoring system used here is far from complete. However, we propose that evaluating and tracking model-benchmark correspondences through a Brain-Score that is regularly updated with new brain data is an exciting opportunity: experimental benchmarks can be used to guide machine network evolution, and machine networks are mechanistic hypotheses of the brain’s network and thus drive next experiments. 
To facilitate both of these, we release Brain-Score.org (http://Brain-Score.org): a platform that hosts the neural and behavioral benchmarks, where ANNs for visual processing can be submitted to receive a Brain-Score and their rank relative to other models, and where new experimental data can be naturally incorporated.
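As an illustration of what a composite benchmark score might look like, here is a minimal sketch that averages normalized per-benchmark similarity scores. The benchmark names and values are hypothetical, and this is not the official Brain-Score pipeline, only the general shape of a composite metric:

```python
def composite_score(benchmark_scores):
    """Unweighted mean of per-benchmark similarity scores,
    each assumed to be normalized to [0, 1]. Illustrative only."""
    return sum(benchmark_scores.values()) / len(benchmark_scores)

# hypothetical per-benchmark scores for one candidate model
scores = {"V4_neural": 0.62, "IT_neural": 0.58, "behavior": 0.55}
print(round(composite_score(scores), 3))
```

A composite like this makes models comparable on a single axis while the per-benchmark breakdown remains available to diagnose where a model falls short.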
6,878 downloads bioRxiv neuroscience
Bosiljka Tasic, Zizhen Yao, Kimberly A Smith, Lucas T Graybuck, Thuc Nghi Nguyen, Darren Bertagnolli, Jeff Goldy, Emma Garren, Michael N Economo, Sarada Viswanathan, Osnat Penn, Trygve E. Bakken, Vilas Menon, Jeremy Andrew Miller, Olivia Fong, Karla E. Hirokawa, Kanan Lathia, Christine Rimorin, Michael Tieu, Rachael Larsen, Tamara Casper, Eliza Barkan, Matthew Kroll, Seana Parry, Nadiya V Shapovalova, Daniel Hirchstein, Julie Pendergraft, Tae Kyung Kim, Aaron Szafer, Nick Dee, Peter Groblewski, Ian Wickersham, Ali Cetin, Julie A. Harris, Boaz P. Levi, Susan M. Sunkin, Linda Madisen, Tanya L. Daigle, Loren Looger, Amy Bernard, John Phillips, Ed Lein, Michael Hawrylycz, Karel Svoboda, Allan R. Jones, Christof Koch, Hongkui Zeng
Neocortex contains a multitude of cell types segregated into layers and functionally distinct regions. To investigate the diversity of cell types across the mouse neocortex, we analyzed 12,714 cells from the primary visual cortex (VISp), and 9,035 cells from the anterior lateral motor cortex (ALM) by deep single-cell RNA-sequencing (scRNA-seq), identifying 116 transcriptomic cell types. These two regions represent distant poles of the neocortex and perform distinct functions. We define 50 inhibitory transcriptomic cell types, all of which are shared across both cortical regions. In contrast, 49 of 52 excitatory transcriptomic types were found in either VISp or ALM, with only three present in both. By combining single cell RNA-seq and retrograde labeling, we demonstrate correspondence between excitatory transcriptomic types and their region-specific long-range target specificity. This study establishes a combined transcriptomic and projectional taxonomy of cortical cell types from functionally distinct regions of the mouse cortex.
6,695 downloads bioRxiv neuroscience
Thomas Nichols, Samir Das, Simon B. Eickhoff, Alan C. Evans, Tristan Glatard, Michael Hanke, Nikolaus Kriegeskorte, Michael P. Milham, Russell A. Poldrack, Jean-Baptiste Poline, Erika Proal, Bertrand Thirion, David C. Van Essen, Tonya White, B. T. Thomas Yeo
Neuroimaging enables rich noninvasive measurements of human brain activity, but translating such data into neuroscientific insights and clinical applications requires complex analyses and collaboration among a diverse array of researchers. The open science movement is reshaping scientific culture and addressing the challenges of transparency and reproducibility of research. To advance open science in neuroimaging, the Organization for Human Brain Mapping created the Committee on Best Practice in Data Analysis and Sharing (COBIDAS), charged with creating a report that collects best practice recommendations from experts and the entire brain imaging community. The purpose of this work is to elaborate the principles of open and reproducible research for neuroimaging using Magnetic Resonance Imaging (MRI), and then distill these principles to specific research practices. Many elements of a study are so varied that practice cannot be prescribed, but for these areas we detail the information that must be reported to fully understand and potentially replicate a study. For other elements of a study, like statistical modelling, where specific poor practices can be identified, and for the emerging areas of data sharing and reproducibility, we detail both good practice and reporting standards. For each of seven areas of a study we provide a tabular listing of over 100 items to help plan, execute, report and share research in the most transparent fashion. Whether for individual scientists, or for editors and reviewers, we hope these guidelines serve as a benchmark to raise the standards of practice and reporting in neuroimaging using MRI.
6,583 downloads bioRxiv neuroscience
The P3a is an event-related potential comprising an early fronto-central phase and a late fronto-parietal phase. It is observed after novel events and has classically been considered to reflect the attentional processing of distracting stimuli. However, novel sounds can lead to behavioral facilitation as much as to behavioral distraction. This illustrates the duality of the orienting response, which includes both an attentional and an arousal component. Using a paradigm with visual or auditory targets to detect and irrelevant unexpected distracting sounds to ignore, we showed that the facilitation effect of distracting sounds is independent of the target modality and endures for more than 1500 ms. These results confirm that the behavioral facilitation observed after distracting sounds is related to an increase in unspecific phasic arousal on top of the attentional capture. Moreover, the amplitude of the early phase of the P3a to distracting sounds positively correlated with subjective arousal ratings, contrary to other event-related potentials. We propose that the fronto-central early phase of the P3a indexes the arousing properties of distracting sounds and is linked to the arousal component of the orienting response. Finally, we discuss the relevance of the P3a as a marker of distraction.
6,544 downloads bioRxiv neuroscience
Studies of amnesic patients and animal models support a systems consolidation model, which posits that explicit memories formed in the hippocampus are transferred to cortex over time [1-6]. Prelimbic cortex (PL), a subregion of the medial prefrontal cortex, is required for the expression of learned fear memories from hours after learning until weeks later [7-12]. While some studies suggested that prefrontal cortical neurons active during learning are required for memory retrieval [13-15], others provided evidence for ongoing cortical circuit reorganization during memory consolidation [10,16,17]. It has been difficult to causally relate the activity of cortical neurons during learning or recent memory retrieval to their function in remote memory, in part due to a lack of tools [18]. Here we show that a new version of 'targeted recombination in active populations', TRAP2, has enhanced efficiency over the previous version, providing brain-wide access to neurons activated by a particular experience. Using TRAP2, we accessed PL neurons activated during fear conditioning or 1-, 7-, or 14-day memory retrieval, and assessed their contributions to 28-day remote memory. We found that PL neurons TRAPed at later retrieval times were more likely to be reactivated during remote memory retrieval, and more effectively promoted remote memory retrieval. Furthermore, reducing PL activity during learning blunted the ability of TRAPed PL neurons to promote remote memory retrieval. Finally, a series of whole-brain analyses identified a set of cortical regions that were densely innervated by memory-TRAPed PL neurons and preferentially activated by PL neurons TRAPed during 14-day retrieval, and whose activity co-varied with PL and correlated with memory specificity. These findings support a model in which PL ensembles underlying remote memory undergo dynamic changes during the first two weeks after learning, which manifest as increased functional recruitment of cortical targets.
6,532 downloads bioRxiv neuroscience
The hippocampal-entorhinal system is important for spatial and relational memory tasks. We formally link these domains; provide a mechanistic understanding of the hippocampal role in generalisation; and offer unifying principles underlying many entorhinal and hippocampal cell-types. We propose medial entorhinal cells form a basis describing structural knowledge, and hippocampal cells link this basis with sensory representations. Adopting these principles, we introduce the Tolman-Eichenbaum machine (TEM). After learning, TEM entorhinal cells include grid, band, border and object-vector cells. Hippocampal cells include place and landmark cells, remapping between environments. Crucially, TEM also predicts empirically recorded representations in complex non-spatial tasks. TEM predicts hippocampal remapping is not random as previously believed. Rather, structural knowledge is preserved across environments. We confirm this in simultaneously recorded place and grid cells. One-sentence summary: Simple principles of representation and generalisation unify spatial and non-spatial accounts of hippocampus and explain many cell representations.
6,496 downloads bioRxiv neuroscience
It is proposed that a cognitive map encoding the relationships between entities in the world supports flexible behaviour, but the majority of the neural evidence for such a system comes from studies of spatial navigation. Recent work describing neuronal parallels between spatial and non-spatial behaviours has rekindled the notion of a systematic organisation of knowledge across multiple domains. We review experimental evidence and theoretical frameworks that point to principles unifying these apparently disparate functions. These principles describe how to learn and use abstract, generalisable knowledge and suggest map-like representations observed in a spatial context may be an instance of general coding mechanisms capable of organising knowledge of all kinds. We highlight how artificial agents endowed with such principles exhibit flexible behaviour and learn map-like representations observed in the brain. Finally, we speculate on how these principles may offer insight into the extreme generalisations, abstractions and inferences that characterise human cognition.
6,432 downloads bioRxiv neuroscience
Adam H Marblestone, Evan R Daugharthy, Reza Kalhor, Ian D Peikon, Justus M. Kebschull, Seth L Shipman, Yuriy Mishchenko, Jehyuk Lee, David A Dalrymple, Bradley M Zamft, Konrad P. Kording, Edward S. Boyden, Anthony Zador, George Church
We analyze the scaling and cost-performance characteristics of current and projected connectomics approaches, with reference to the potential implications of recent advances in diverse contributing fields. Three generalized strategies for dense connectivity mapping at the scale of whole mammalian brains are considered: electron microscopic axon tracing, optical imaging of combinatorial molecular markers at synapses, and bulk DNA sequencing of trans-synaptically exchanged nucleic acid barcode pairs. Due to advances in parallel-beam instrumentation, whole mouse brain electron microscopic image acquisition could cost less than $100 million, with total costs presently limited by image analysis to trace axons through large image stacks. Optical microscopy at 50 to 100 nm isotropic resolution could potentially read combinatorially multiplexed molecular information from individual synapses, which could indicate the identities of the pre-synaptic and post-synaptic cells without relying on axon tracing. An optical approach to whole mouse brain connectomics may be achievable for less than $10 million and could be enabled by emerging technologies to sequence nucleic acids in-situ in fixed tissue via fluorescence microscopy. Novel strategies relying on bulk DNA sequencing, which would extract the connectome without direct imaging of the tissue, could produce a whole mouse brain connectome for $100k to $1 million or a mouse cortical connectome for $10k to $100k. Anticipated further reductions in the cost of DNA sequencing could lead to a $1000 mouse cortical connectome.
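The cost estimates above are driven largely by how raw data volume scales with imaging resolution. A back-of-envelope sketch of that scaling, assuming isotropic voxels, one byte per voxel, and a ~450 mm^3 mouse brain (all simplifications for illustration; real EM pipelines use anisotropic sections and compression):

```python
def em_dataset_bytes(volume_mm3, voxel_nm, bytes_per_voxel=1):
    """Rough raw data volume for imaging a tissue volume at a
    given isotropic voxel size. Back-of-envelope only."""
    voxel_um3 = (voxel_nm * 1e-3) ** 3   # voxel volume in um^3
    volume_um3 = volume_mm3 * 1e9        # tissue volume in um^3
    return volume_um3 / voxel_um3 * bytes_per_voxel

# whole mouse brain (~450 mm^3) at 10 nm isotropic resolution
pb = em_dataset_bytes(450, 10) / 1e15
print(f"~{pb:.0f} PB")
```

Because data volume grows with the inverse cube of voxel size, halving the resolution requirement cuts storage and image-analysis load eightfold, which is why the lower-resolution optical and sequencing-based strategies in the abstract come with much smaller projected costs.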
6,361 downloads bioRxiv neuroscience
Julie A. Harris, Stefan Mihalas, Karla E. Hirokawa, Jennifer D. Whitesell, Joseph E. Knox, Amy Bernard, Phillip Bohn, Shiella Caldejon, Linzy Casal, Andrew Cho, David Feng, Nathalie Gaudreault, Charles R. Gerfen, Nile Graddis, Peter Groblewski, Alex Henry, Anh Ho, Robert Howard, Leonard Kuan, Jerome Lecoq, Jennifer Luviano, Stephen McConoghy, Marty T. Mortrud, Maitham Naeemi, Lydia Ng, Seung W Oh, Benjamin Ouellette, Staci Sorensen, Wayne Wakeman, Quanxin Wang, Ali Williford, John W. Phillips, Allan Jones, Christof Koch, Hongkui Zeng
The mammalian cortex is a laminar structure composed of many cell types densely interconnected in complex ways. Recent systematic efforts to map the mouse mesoscale connectome provide comprehensive projection data on interareal connections, but not at the level of specific cell classes or layers within cortical areas. We present here a significant expansion of the Allen Mouse Brain Connectivity Atlas, with ~1,000 new axonal projection mapping experiments across nearly all isocortical areas in 49 Cre driver lines. Using 13 lines selective for cortical layer-specific projection neuron classes, we identify the differential contribution of each layer/class to the overall intracortical connectivity patterns. We find layer 5 (L5) projection neurons account for essentially all intracortical outputs. L2/3, L4, and L6 neurons contact a subset of the L5 cortical targets. We also describe the most common axon lamination patterns in cortical targets. Most patterns are consistent with previous anatomical rules used to determine hierarchical position between cortical areas (feedforward, feedback), with notable exceptions. While diverse target lamination patterns arise from every source layer/class, L2/3 and L4 neurons are primarily associated with feedforward type projection patterns and L6 with feedback. L5 has both feedforward and feedback projection patterns. Finally, network analyses revealed a modular organization of the intracortical connectome. By labeling interareal and intermodule connections as feedforward or feedback, we present an integrated view of the intracortical connectome as a hierarchical network.
6,301 downloads bioRxiv neuroscience
Rotem Botvinik-Nezer, Felix Holzmeister, Colin F. Camerer, Anna Dreber, Juergen Huber, Magnus Johannesson, Michael Kirchler, Roni Iwanir, Jeanette A. Mumford, Alison Adcock, Paolo Avesani, Blazej Baczkowski, Aahana Bajracharya, Leah Bakst, Sheryl Ball, Marco Barilari, Nadège Bault, Derek Beaton, Julia Beitner, Roland Benoit, Ruud Berkers, Jamil Bhanji, Bharat Biswal, Sebastian Bobadilla-Suarez, Tiago Bortolini, Katherine Bottenhorn, Alexander Bowring, Senne Braem, Hayley Brooks, Emily Brudner, Cristian Calderon, Julia Camilleri, Jaime Castrellon, Luca Cecchetti, Edna Cieslik, Zachary Cole, Olivier Collignon, Robert Cox, William Cunningham, Stefan Czoschke, Kamalaker Dadi, Charles Davis, Alberto De Luca, Mauricio Delgado, Lysia Demetriou, Jeffrey Dennison, Xin Di, Erin Dickie, Ekaterina Dobryakova, Claire Donnat, Juergen Dukart, Niall W. Duncan, Joke Durnez, Amr Eed, Simon Eickhoff, Andrew Erhart, Laura Fontanesi, G. Matthew Fricke, Adriana Galvan, Remi Gau, Sarah Genon, Tristan Glatard, Enrico Glerean, Jelle Goeman, Sergej Golowin, Carlos González-García, Krzysztof Gorgolewski, Cheryl Grady, Mikella Green, João Guassi Moreira, Olivia Guest, Shabnam Hakimi, J. Paul Hamilton, Roeland Hancock, Giacomo Handjaras, Bronson Harry, Colin Hawco, Peer Herholz, Gabrielle Herman, Stephan Heunis, Felix Hoffstaedter, Jeremy Hogeveen, S. Holmes, Chuan-Peng Hu, Scott Huettel, Matthew Hughes, Vittorio Iacovella, Alexandru Iordan, Peder Isager, Ayse Ilkay Isik, Andrew Jahn, Matthew Johnson, Tom Johnstone, Michael Joseph, Anthony Juliano, Joseph Kable, Michalis Kassinopoulos, Cemal Koba, Xiang-Zhen Kong, Timothy Koscik, Nuri Erkut Kucukboyaci, Brice Kuhl, Sebastian Kupek, Angela Laird, Claus Lamm, Robert Langner, Nina Lauharatanahirun, Hongmi Lee, Sangil Lee, Alexander Leemans, Andrea Leo, Elise Lesage, Flora Li, Monica Li, Phui Cheng Lim, Evan Lintz, Schuyler Liphardt, Annabel Losecaat Vermeer, Bradley Love, Michael Mack, Norberto Malpica, Theo Marins, Vanessa Sochat, Kelsey McDonald, Joseph McGuire, Helena Melero, Adriana Méndez Leal, Benjamin Meyer, Kristin Meyer, Paul Mihai, Georgios Mitsis, Jorge Moll, Dylan Nielson, Gustav Nilsonne, Michael Notter, Emanuele Olivetti, Adrian Onicas, Paolo Papale, Kaustubh Patil, Jonathan E. Peelle, Alexandre Pérez, Doris Pischedda, Jean-Baptiste Poline, Yanina Prystauka, Shruti Ray, Patricia Reuter-Lorenz, Richard Reynolds, Emiliano Ricciardi, Jenny Rieck, Anais Rodriguez-Thompson, Anthony Romyn, Taylor Salo, Gregory Samanez-Larkin, Emilio Sanz-Morales, Margaret Schlichting, Douglas Schultz, Qiang Shen, Margaret Sheridan, Fu Shiguang, Jennifer Silvers, Kenny Skagerlund, Alec Smith, David Smith, Peter Sokol-Hessner, Simon Steinkamp, Sarah Tashjian, Bertrand Thirion, John Thorp, Gustav Tinghög, Loreen Tisdall, Steven Tompson, Claudio Toro-Serey, Juan Torre, Leonardo Tozzi, Vuong Truong, Luca Turella, Anna E. van’t Veer, Tom Verguts, Jean Vettel, Sagana Vijayarajah, Khoi Vo, Matthew Wall, Wouter D. Weeda, Susanne Weis, David White, David Wisniewski, Alba Xifra-Porxas, Emily Yearling, Sangsuk Yoon, Rui Yuan, Kenneth Yuen, Lei Zhang, Xu Zhang, Joshua Zosky, Thomas Nichols, Russell A. Poldrack, Tom Schonberg
Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, meta-analytic approaches that aggregated information across teams yielded significant consensus in activated regions across teams. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors related to variability in fMRI. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.
6,253 downloads bioRxiv neuroscience
In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing billions of behaviorally relevant neurons. Dimensionality reduction methods reveal a striking simplicity underlying such multi-neuronal data: they can be reduced to a low-dimensional space, and the resulting neural trajectories in this space yield a remarkably insightful dynamical portrait of circuit computation. This simplicity raises profound and timely conceptual questions. What are its origins and its implications for the complexity of neural dynamics? How would the situation change if we recorded more neurons? When, if at all, can we trust dynamical portraits obtained from measuring an infinitesimal fraction of task relevant neurons? We present a theory that answers these questions, and test it using physiological recordings from reaching monkeys. This theory reveals conceptual insights into how task complexity governs both neural dimensionality and accurate recovery of dynamic portraits, thereby providing quantitative guidelines for future large-scale experimental design.
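The low-dimensional structure described above can be demonstrated on synthetic data: when population activity is driven by a few shared latent signals, PCA recovers nearly all of the variance in a handful of components. A minimal sketch on simulated data (not recordings from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated trial-averaged rates: 100 neurons whose activity is
# driven by 3 shared latent signals over 200 time points, plus noise.
latents = rng.standard_normal((3, 200))
weights = rng.standard_normal((100, 3))
rates = weights @ latents + 0.1 * rng.standard_normal((100, 200))

# PCA via SVD of the mean-centered data matrix
centered = rates - rates.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)
var_explained = (s**2) / (s**2).sum()
print(f"top 3 PCs capture {var_explained[:3].sum():.0%} of variance")
```

The theory in the abstract addresses when such a portrait, estimated from a small subsample of neurons, faithfully reflects the dynamics of the full circuit; the simulation only illustrates why a few components can dominate.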
- 27 Nov 2020: The website and API now include results pulled from medRxiv as well as bioRxiv.
- 18 Dec 2019: We're pleased to announce PanLingua, a new tool that enables you to search for machine-translated bioRxiv preprints using more than 100 different languages.
- 21 May 2019: PLOS Biology has published a community page about Rxivist.org and its design.
- 10 May 2019: The paper analyzing the Rxivist dataset has been published in eLife.
- 1 Mar 2019: We now have summary statistics about bioRxiv downloads and submissions.
- 8 Feb 2019: Data from Altmetric is now available on the Rxivist details page for every preprint. Look for the "donut" under the download metrics.
- 30 Jan 2019: preLights has featured the Rxivist preprint and written about our findings.
- 22 Jan 2019: Nature just published an article about Rxivist and our data.
- 13 Jan 2019: The Rxivist preprint is live!