Rxivist combines preprints from bioRxiv with data from Twitter to help you find the papers being discussed in your field. Currently indexing 84,908 bioRxiv papers from 365,213 authors.
Most downloaded bioRxiv papers, all time
in category neuroscience
14,763 results found.
7,703 downloads neuroscience
Ruixuan Gao, Shoh M Asano, Srigokul Upadhyayula, Igor Pisarev, Daniel E. Milkie, Tsung-Li Liu, Ved Singh, Austin Graves, Grace H Huynh, Yongxin Zhao, John Bogovic, Jennifer Colonell, Jennifer Lippincott-Schwartz, Christopher Zugates, Susan Tappan, Alfredo Rodriguez, Kishore R. Mosaliganti, Sean G. Megason, Adam W. Hantman, Gerald M. Rubin, Tom Kirchhausen, Stephan Saalfeld, Yoshinori Aso, Edward S. Boyden, Eric Betzig
Optical and electron microscopy have made tremendous inroads in understanding the complexity of the brain, but the former offers insufficient resolution to reveal subcellular details and the latter lacks the throughput and molecular contrast to visualize specific molecular constituents over mm-scale or larger dimensions. We combined expansion microscopy and lattice light sheet microscopy to image the nanoscale spatial relationships between proteins across the thickness of the mouse cortex or the entire Drosophila brain, including synaptic proteins at dendritic spines, myelination along axons, and presynaptic densities at dopaminergic neurons in every fly neuropil domain. The technology should enable statistically rich, large scale studies of neural development, sexual dimorphism, degree of stereotypy, and structural correlations to behavior or neural activity, all with molecular contrast.
7,695 downloads neuroscience
Serial and parallel processing in visual search have long been debated in psychology, but the processing mechanism remains an open issue. Serial processing allows only one object at a time to be processed, whereas parallel processing assumes that various objects are processed simultaneously. Here we present novel neural models for the two types of processing mechanisms based on analysis of simultaneously recorded spike trains, using electrophysiological data from the prefrontal cortex of rhesus monkeys processing task-relevant visual displays. We combine mathematical models describing neuronal attention and point process models for spike trains. The same model can explain both serial and parallel processing by adopting different parameter regimes. We present statistical methods to distinguish between serial and parallel processing based on both maximum likelihood estimates and decoding analysis of the attention when two stimuli are presented simultaneously. Results show that both processing mechanisms are in play for the simultaneously recorded neurons, but neurons tend toward parallel processing in the beginning after the onset of the stimulus pair, whereas they tend toward serial processing later on. This could be explained by parallel processing being related to sensory bottom-up signals or feedforward processing, which typically occur shortly after stimulus onset, whereas top-down signals related to cognitive modulatory influences guiding attentional effects in recurrent feedback connections occur after a small delay and are related to serial processing, in which all processing capacity is directed towards the attended object.
7,233 downloads neuroscience
We demonstrate a two-photon imaging system with corrected optics including a custom objective that provides cellular resolution across a 3.5 mm field of view (9.6 mm^2). Temporally multiplexed excitation pathways can be independently repositioned in XY and Z to simultaneously image regions within the expanded field of view. We used this new imaging system to measure activity correlations between neurons in different cortical areas in awake mice.
7,178 downloads neuroscience
Electrophysiological signals across species and recording scales exhibit both periodic and aperiodic features. Periodic oscillations have been widely studied and linked to numerous physiological, cognitive, behavioral, and disease states, while the aperiodic "background" 1/f component of neural power spectra has received far less attention. Most analyses of oscillations are conducted on a priori, canonically-defined frequency bands without consideration of the underlying aperiodic structure, or verification that a periodic signal even exists in addition to the aperiodic signal. This is problematic, as recent evidence shows that the aperiodic signal is dynamic, changing with age, task demands, and cognitive state. It has also been linked to the relative excitation/inhibition of the underlying neuronal population. This means that standard analytic approaches easily conflate changes in the periodic and aperiodic signals with one another because the aperiodic parameters--along with oscillation center frequency, power, and bandwidth--are all dynamic in physiologically meaningful, but likely different, ways. In order to overcome the limitations of traditional narrowband analyses and to reduce the potentially deleterious effects of conflating these features, we introduce a novel algorithm for automatic parameterization of neural power spectral densities (PSDs) as a combination of the aperiodic signal and putative periodic oscillations. Notably, this algorithm requires no a priori specification of band limits and accounts for potentially-overlapping oscillations while minimizing the degree to which they are confounded with one another. This algorithm is amenable to large-scale data exploration and analysis, providing researchers with a tool to quickly and accurately parameterize neural power spectra.
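The core idea of the parameterization can be illustrated with a minimal sketch (ours, not the authors' released algorithm; all values are simulated): in log-log space the aperiodic 1/f component is approximately a straight line, and residuals above that line flag putative oscillatory peaks without any a priori band limits.

```python
import math

def aperiodic(f, offset, exponent):
    # 1/f aperiodic component, expressed in log10 power
    return offset - exponent * math.log10(f)

def peak(f, center, height, width):
    # putative oscillation: a Gaussian bump in log10 power
    return height * math.exp(-((f - center) ** 2) / (2 * width ** 2))

# simulate a PSD with a 1/f background plus one 10 Hz alpha peak
freqs = [1.5 + 0.5 * i for i in range(98)]            # 1.5 .. 50 Hz
psd = [aperiodic(f, 1.0, 1.5) + peak(f, 10.0, 0.6, 1.5) for f in freqs]

# 1) approximate the aperiodic component: least-squares line in log-log space
xs = [math.log10(f) for f in freqs]
mx, my = sum(xs) / len(xs), sum(psd) / len(psd)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, psd)) / \
        sum((x - mx) ** 2 for x in xs)
exponent_hat = -slope

# 2) residuals above the aperiodic fit mark candidate oscillations,
#    with no pre-specified frequency bands
resid = [y - (my + slope * (x - mx)) for x, y in zip(xs, psd)]
peak_freq = freqs[resid.index(max(resid))]
```

A full implementation would iterate (re-fit the aperiodic component after removing detected peaks) so overlapping peaks and the background are not conflated, which is the problem the abstract highlights.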
7,091 downloads neuroscience
When experts are immersed in a task, do their brains prioritize task-related activity? Most efforts to understand neural activity during well-learned tasks focus on cognitive computations and specific task-related movements. We wondered whether task-performing animals explore a broader movement landscape, and how this impacts neural activity. We characterized movements using video and other sensors and measured neural activity using widefield and two-photon imaging. Cortex-wide activity was dominated by movements, especially uninstructed movements, reflecting unknown priorities of the animal. Some uninstructed movements were aligned to trial events. Accounting for them revealed that neurons with similar trial-averaged activity often reflected utterly different combinations of cognitive and movement variables. Other movements occurred idiosyncratically, accounting for trial-by-trial fluctuations that are often considered “noise”. This held true for extracellular Neuropixels recordings in cortical and subcortical areas. Our observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity.
7,035 downloads neuroscience
Tamara B. Franklin, Bianca A. Silva, Zina Perova, Livia Marrone, Maria E. Masferrer, Yang Zhan, Angie Kaplan, Louise Greetham, Violaine Verrechia, Andreas Halman, Sara Pagella, Alexei L. Vyssotski, Anna Illarionova, Valery Grinevich, Tiago Branco, Cornelius T. Gross
The prefrontal cortex plays a critical role in adjusting an organism's behavior to its environment. In particular, numerous studies have implicated the prefrontal cortex in the control of social behavior, but the neural circuits that mediate these effects remain unknown. Here we investigated behavioral adaptation to social defeat in mice and uncovered a critical contribution of neural projections from the medial prefrontal cortex to the dorsal periaqueductal grey, a brainstem area vital for defensive responses. Social defeat caused a weakening of functional connectivity between these two areas and selective inhibition of these projections mimicked the behavioral effects of social defeat. These findings define a specific neural projection by which the prefrontal cortex can control and adapt social behavior.
6,878 downloads neuroscience
Morlet wavelets are frequently used for time-frequency analysis of non-stationary time series data, such as neuroelectrical signals recorded from the brain. The crucial parameter of Morlet wavelets is the width of the Gaussian that tapers the sine wave. This width parameter controls the trade-off between temporal precision and frequency precision. It is typically defined as the "number of cycles," but this parameter is opaque, often leading to uncertainty, suboptimal analysis choices, and results that are difficult to interpret and evaluate. The purpose of this paper is to present alternative formulations of Morlet wavelets in time and in frequency that allow parameterizing the wavelets directly in terms of the desired temporal and spectral smoothing (as full-width at half-maximum). This formulation provides clarity on an important data analysis parameter, and should facilitate proper analyses, reporting, and interpretation of results. MATLAB code is provided.
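The FWHM parameterization described above can be sketched in a few lines (the paper ships MATLAB code; this Python version is our illustration): tapering the complex sine wave with exp(-4 ln 2 t^2 / h^2) puts the envelope's half-maximum exactly at t = ±h/2, so h is directly the temporal full-width at half-maximum in seconds.

```python
import math, cmath

def morlet_fwhm(t, freq, fwhm):
    """Complex Morlet wavelet parameterized by its temporal FWHM (seconds)
    rather than the usual "number of cycles". The Gaussian taper
    exp(-4 ln 2 t^2 / h^2) equals 0.5 exactly at t = +/- h/2."""
    gauss = math.exp(-4 * math.log(2) * t ** 2 / fwhm ** 2)
    return cmath.exp(2j * math.pi * freq * t) * gauss

# a 10 Hz wavelet with 300 ms temporal smoothing, sampled at 1 kHz
fs, freq, fwhm = 1000, 10.0, 0.3
wavelet = [morlet_fwhm(n / fs, freq, fwhm) for n in range(-1000, 1001)]
```

Because the envelope's FWHM is a direct input, the temporal smoothing applied to the data is explicit and immediately reportable, which is the paper's point.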
6,648 downloads neuroscience
Bosiljka Tasic, Zizhen Yao, Kimberly A Smith, Lucas Graybuck, Thuc Nghi Nguyen, Darren Bertagnolli, Jeff Goldy, Emma Garren, Michael N Economo, Sarada Viswanathan, Osnat Penn, Trygve E. Bakken, Vilas Menon, Jeremy Miller, Olivia Fong, Karla E Hirokawa, Kanan Lathia, Christine Rimorin, Michael Tieu, Rachael Larsen, Tamara Casper, Eliza Barkan, Matthew Kroll, Seana Parry, Nadiya V Shapovalova, Daniel Hirchstein, Julie Pendergraft, Tae Kyung Kim, Aaron Szafer, Nick Dee, Peter Groblewski, Ian Wickersham, Ali Cetin, Julie A Harris, Boaz P. Levi, Susan M. Sunkin, Linda Madisen, Tanya L. Daigle, Loren Looger, Amy Bernard, John Phillips, Ed S. Lein, Michael Hawrylycz, Karel Svoboda, Allan R. Jones, Christof Koch, Hongkui Zeng
Neocortex contains a multitude of cell types segregated into layers and functionally distinct regions. To investigate the diversity of cell types across the mouse neocortex, we analyzed 12,714 cells from the primary visual cortex (VISp), and 9,035 cells from the anterior lateral motor cortex (ALM) by deep single-cell RNA-sequencing (scRNA-seq), identifying 116 transcriptomic cell types. These two regions represent distant poles of the neocortex and perform distinct functions. We define 50 inhibitory transcriptomic cell types, all of which are shared across both cortical regions. In contrast, 49 of 52 excitatory transcriptomic types were found in either VISp or ALM, with only three present in both. By combining single cell RNA-seq and retrograde labeling, we demonstrate correspondence between excitatory transcriptomic types and their region-specific long-range target specificity. This study establishes a combined transcriptomic and projectional taxonomy of cortical cell types from functionally distinct regions of the mouse cortex.
6,578 downloads neuroscience
Substantial research has investigated the association between intelligence and psychopathic traits. The findings to date have been inconsistent and have not always considered the multi-dimensional nature of psychopathic traits. Moreover, there has been a tendency to confuse psychopathy with other closely related, clinically significant disorders. The current study represents a meta-analysis conducted to evaluate the direction and magnitude of the association of intelligence with global psychopathy, as well as its factors and facets, and related disorders (Antisocial Personality Disorder, Conduct Disorder, and Oppositional Defiant Disorder). Our analyses revealed a small, significant, negative relationship between intelligence and total psychopathy (r = -.07, p = .001). Analysis of factors and facets found differential associations, including both significant positive (e.g., interpersonal facet) and negative (e.g., affective facet) associations, further affirming that psychopathy is a multi-dimensional construct. Additionally, intelligence was negatively associated with Antisocial Personality Disorder (r = -.13, p = .001) and Conduct Disorder (r = -.11, p = .001), but positively associated with Oppositional Defiant Disorder (r = .06, p = .001). There was significant heterogeneity across studies for most effects, but the results of moderator analyses were inconsistent. Finally, bias analyses did not find significant evidence for publication bias or outsized effects of outliers.
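A standard way such pooled correlations are computed is Fisher's z-transformation with inverse-variance weights; the sketch below shows the mechanics with made-up study values (these are not the studies or numbers from this meta-analysis).

```python
import math

# Hypothetical per-study (correlation, sample size) pairs, illustrative only
studies = [(-0.10, 120), (-0.05, 300), (-0.09, 80), (-0.02, 500)]

# Fisher z-transform stabilizes the variance of r; each study's z is
# weighted by n - 3, the inverse of the sampling variance of z
zs = [(math.atanh(r), n - 3) for r, n in studies]
z_bar = sum(z * w for z, w in zs) / sum(w for _, w in zs)

# back-transform the weighted mean to the pooled correlation
r_pooled = math.tanh(z_bar)
```

A random-effects meta-analysis additionally estimates between-study heterogeneity (the abstract reports significant heterogeneity), which widens the weights; the fixed-effect arithmetic above is the simplest case.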
6,472 downloads neuroscience
Thomas E. Nichols, Samir Das, Simon B. Eickhoff, Alan C. Evans, Tristan Glatard, Michael Hanke, Nikolaus Kriegeskorte, Michael P. Milham, Russell A. Poldrack, Jean-Baptiste Poline, Erika Proal, Bertrand Thirion, David C. Van Essen, Tonya White, B. T. Thomas Yeo
Neuroimaging enables rich noninvasive measurements of human brain activity, but translating such data into neuroscientific insights and clinical applications requires complex analyses and collaboration among a diverse array of researchers. The open science movement is reshaping scientific culture and addressing the challenges of transparency and reproducibility of research. To advance open science in neuroimaging the Organization for Human Brain Mapping created the Committee on Best Practice in Data Analysis and Sharing (COBIDAS), charged with creating a report that collects best practice recommendations from experts and the entire brain imaging community. The purpose of this work is to elaborate the principles of open and reproducible research for neuroimaging using Magnetic Resonance Imaging (MRI), and then distill these principles to specific research practices. Many elements of a study are so varied that practice cannot be prescribed, but for these areas we detail the information that must be reported to fully understand and potentially replicate a study. For other elements of a study, like statistical modelling where specific poor practices can be identified, and the emerging areas of data sharing and reproducibility, we detail both good practice and reporting standards. For each of seven areas of a study we provide tabular listing of over 100 items to help plan, execute, report and share research in the most transparent fashion. Whether for individual scientists, or for editors and reviewers, we hope these guidelines serve as a benchmark, to raise the standards of practice and reporting in neuroimaging using MRI.
6,294 downloads neuroscience
Adam H Marblestone, Evan R Daugharthy, Reza Kalhor, Ian D Peikon, Justus M. Kebschull, Seth L Shipman, Yuriy Mishchenko, Jehyuk Lee, David A Dalrymple, Bradley M Zamft, Konrad Paul Kording, Edward S. Boyden, Anthony M. Zador, George M. Church
We analyze the scaling and cost-performance characteristics of current and projected connectomics approaches, with reference to the potential implications of recent advances in diverse contributing fields. Three generalized strategies for dense connectivity mapping at the scale of whole mammalian brains are considered: electron microscopic axon tracing, optical imaging of combinatorial molecular markers at synapses, and bulk DNA sequencing of trans-synaptically exchanged nucleic acid barcode pairs. Due to advances in parallel-beam instrumentation, whole mouse brain electron microscopic image acquisition could cost less than $100 million, with total costs presently limited by image analysis to trace axons through large image stacks. Optical microscopy at 50 to 100 nm isotropic resolution could potentially read combinatorially multiplexed molecular information from individual synapses, which could indicate the identities of the pre-synaptic and post-synaptic cells without relying on axon tracing. An optical approach to whole mouse brain connectomics may be achievable for less than $10 million and could be enabled by emerging technologies to sequence nucleic acids in-situ in fixed tissue via fluorescence microscopy. Novel strategies relying on bulk DNA sequencing, which would extract the connectome without direct imaging of the tissue, could produce a whole mouse brain connectome for $100k to $1 million or a mouse cortical connectome for $10k to $100k. Anticipated further reductions in the cost of DNA sequencing could lead to a $1000 mouse cortical connectome.
6,145 downloads neuroscience
It is proposed that a cognitive map encoding the relationships between entities in the world supports flexible behaviour, but the majority of the neural evidence for such a system comes from studies of spatial navigation. Recent work describing neuronal parallels between spatial and non-spatial behaviours has rekindled the notion of a systematic organisation of knowledge across multiple domains. We review experimental evidence and theoretical frameworks that point to principles unifying these apparently disparate functions. These principles describe how to learn and use abstract, generalisable knowledge and suggest map-like representations observed in a spatial context may be an instance of general coding mechanisms capable of organising knowledge of all kinds. We highlight how artificial agents endowed with such principles exhibit flexible behaviour and learn map-like representations observed in the brain. Finally, we speculate on how these principles may offer insight into the extreme generalisations, abstractions and inferences that characterise human cognition.
5,932 downloads neuroscience
Julie A Harris, Stefan Mihalas, Karla E Hirokawa, Jennifer D. Whitesell, Joseph E. Knox, Amy Bernard, Phillip Bohn, Shiella Caldejon, Linzy Casal, Andrew Cho, David Feng, Nathalie Gaudreault, Charles R. Gerfen, Nile Graddis, Peter A. Groblewski, Alex Henry, Anh Ho, Robert Howard, Leonard Kuan, Jerome Lecoq, Jennifer Luviano, Stephen McConoghy, Marty T. Mortrud, Maitham Naeemi, Lydia Ng, Seung W Oh, Benjamin Ouellette, Staci Sorensen, Wayne Wakeman, Quanxin Wang, Ali Williford, John W. Phillips, Allan Jones, Christof Koch, Hongkui Zeng
The mammalian cortex is a laminar structure composed of many cell types densely interconnected in complex ways. Recent systematic efforts to map the mouse mesoscale connectome provide comprehensive projection data on interareal connections, but not at the level of specific cell classes or layers within cortical areas. We present here a significant expansion of the Allen Mouse Brain Connectivity Atlas, with ~1,000 new axonal projection mapping experiments across nearly all isocortical areas in 49 Cre driver lines. Using 13 lines selective for cortical layer-specific projection neuron classes, we identify the differential contribution of each layer/class to the overall intracortical connectivity patterns. We find layer 5 (L5) projection neurons account for essentially all intracortical outputs. L2/3, L4, and L6 neurons contact a subset of the L5 cortical targets. We also describe the most common axon lamination patterns in cortical targets. Most patterns are consistent with previous anatomical rules used to determine hierarchical position between cortical areas (feedforward, feedback), with notable exceptions. While diverse target lamination patterns arise from every source layer/class, L2/3 and L4 neurons are primarily associated with feedforward type projection patterns and L6 with feedback. L5 has both feedforward and feedback projection patterns. Finally, network analyses revealed a modular organization of the intracortical connectome. By labeling interareal and intermodule connections as feedforward or feedback, we present an integrated view of the intracortical connectome as a hierarchical network.
5,750 downloads neuroscience
Deep neural networks (DNNs) have recently been applied successfully to brain decoding and image reconstruction from functional magnetic resonance imaging (fMRI) activity. However, direct training of a DNN with fMRI data is often avoided because the size of available data is thought to be insufficient to train a complex network with numerous parameters. Instead, a pre-trained DNN has served as a proxy for hierarchical visual representations, and fMRI data were used to decode individual DNN features of a stimulus image using a simple linear model, which were then passed to a reconstruction module. Here, we present our attempt to directly train a DNN model with fMRI data and the corresponding stimulus images to build an end-to-end reconstruction model. We trained a generative adversarial network with an additional loss term defined in a high-level feature space (feature loss) using up to 6,000 training data points (natural images and the fMRI responses). The trained deep generator network was tested on an independent dataset, directly producing a reconstructed image given an fMRI pattern as the input. The reconstructions obtained from the proposed method showed resemblance with both natural and artificial test stimuli. The accuracy increased as a function of the training data size, though not outperforming the decoded feature-based method with the available data size. Ablation analyses indicated that the feature loss played a critical role to achieve accurate reconstruction. Our results suggest a potential for the end-to-end framework to learn a direct mapping between brain activity and perception given even larger datasets.
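The combined objective described above (pixel reconstruction plus a loss in a high-level feature space) can be sketched conceptually as follows; here a fixed random projection stands in for the pre-trained DNN's feature extractor, and all names and values are our illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extractor": a fixed random nonlinear projection playing
# the role of the pre-trained network's high-level feature space
W = rng.standard_normal((64, 256))

def features(img_flat):
    return np.tanh(W @ img_flat)

def reconstruction_loss(target, recon, weight=1.0):
    """Pixel-space MSE plus a feature-space MSE ("feature loss"), the
    combined objective form described in the abstract (sketch only)."""
    pixel = np.mean((target - recon) ** 2)
    feat = np.mean((features(target) - features(recon)) ** 2)
    return pixel + weight * feat

target = rng.standard_normal(256)          # stand-in for a stimulus image
noisy = target + 0.5 * rng.standard_normal(256)
```

The point of the feature term is that reconstructions are rewarded for matching the target in a perceptually meaningful space, not just pixel by pixel, which the ablation analyses found critical.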
5,733 downloads neuroscience
In response to reports of inflated false positive rate (FPR) in FMRI group analysis tools, a series of replications, investigations, and software modifications were made to address this issue. While these investigations continue, significant progress has been made to adapt AFNI to fix such problems. Two separate lines of changes have been made. First, a long-tailed model for the spatial correlation of the FMRI noise, characterized by its autocorrelation function (ACF), was developed and implemented into the 3dClustSim tool for determining the cluster-size threshold to use for a given voxel-wise threshold. Second, the 3dttest++ program was modified to do randomization of the voxel-wise t-tests and then to feed those randomized t-statistic maps into 3dClustSim directly for cluster-size threshold determination, without any spatial model for the ACF. These approaches were tested with the Beijing subset of the FCON-1000 data collection. The first approach shows markedly improved (reduced) FPR, but in many cases is still above the nominal 5%. The second approach shows FPRs clustered tightly about 5% across all per-voxel p-value thresholds ≤ 0.01. If t-tests from a univariate GLM are adequate for the group analysis in question, the second approach is what the AFNI group currently recommends for thresholding. If more complex per-voxel statistical analyses are required (where permutation/randomization is impracticable), then our current recommendation is to use the new ACF modeling approach coupled with a per-voxel p-threshold of 0.001 or below. Simulations were also repeated with the now-infamous "buggy" version of 3dClustSim: the effect of the bug on FPRs was minimal (of order a few percent).
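The long-tailed ACF model AFNI adopted is a mixture of a Gaussian core and a mono-exponential tail, ACF(r) = a·exp(-r²/(2b²)) + (1-a)·exp(-r/c); the sketch below (with illustrative, made-up parameter values) shows why the tail matters: at large distances it dominates a pure Gaussian by orders of magnitude, which is what the older Gaussian-only smoothness model missed.

```python
import math

def acf_gaussian(r, b):
    # classic Gaussian-shaped spatial autocorrelation model
    return math.exp(-r ** 2 / (2 * b ** 2))

def acf_mixed(r, a, b, c):
    # long-tailed ACF: Gaussian core plus mono-exponential tail
    # (the functional form of AFNI's ACF model; parameters here are made up)
    return a * math.exp(-r ** 2 / (2 * b ** 2)) + (1 - a) * math.exp(-r / c)

a, b, c = 0.5, 3.0, 10.0   # illustrative values, distances in mm
near = (acf_gaussian(1.0, b), acf_mixed(1.0, a, b, c))    # similar at short range
far = (acf_gaussian(15.0, b), acf_mixed(15.0, a, b, c))   # tail dominates far out
```

Underestimating that tail makes large clusters look less likely under the null than they really are, inflating cluster-wise false positives.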
5,636 downloads neuroscience
Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Franziska Geiger, Kailyn Schmidt, Daniel L. K. Yamins, James J. DiCarlo
The internal representations of early deep artificial neural networks (ANNs) were found to be remarkably similar to the internal neural representations measured experimentally in the primate brain. Here we ask, as deep ANNs have continued to evolve, are they becoming more or less brain-like? ANNs that are most functionally similar to the brain will contain mechanisms that are most like those used by the brain. We therefore developed Brain-Score – a composite of multiple neural and behavioral benchmarks that score any ANN on how similar it is to the brain’s mechanisms for core object recognition – and we deployed it to evaluate a wide range of state-of-the-art deep ANNs. Using this scoring system, we here report that: (1) DenseNet-169, CORnet-S and ResNet-101 are the most brain-like ANNs. (2) There remains considerable variability in neural and behavioral responses that is not predicted by any ANN, suggesting that no ANN model has yet captured all the relevant mechanisms. (3) Extending prior work, we found that gains in ANN ImageNet performance led to gains on Brain-Score. However, correlation weakened at ≥ 70% top-1 ImageNet performance, suggesting that additional guidance from neuroscience is needed to make further advances in capturing brain mechanisms. (4) We uncovered smaller (i.e. less complex) ANNs that are more brain-like than many of the best-performing ImageNet models, which suggests the opportunity to simplify ANNs to better understand the ventral stream. The scoring system used here is far from complete. However, we propose that evaluating and tracking model-benchmark correspondences through a Brain-Score that is regularly updated with new brain data is an exciting opportunity: experimental benchmarks can be used to guide machine network evolution, and machine networks are mechanistic hypotheses of the brain’s network and thus drive next experiments. 
To facilitate both of these, we release Brain-Score.org (http://Brain-Score.org): a platform that hosts the neural and behavioral benchmarks, where ANNs for visual processing can be submitted to receive a Brain-Score and their rank relative to other models, and where new experimental data can be naturally incorporated.
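The composite-score idea can be sketched in a few lines (our illustration with made-up numbers, not actual Brain-Score values or the official pipeline): each model gets one similarity score per benchmark, and the composite averages across benchmarks so no single dataset dominates the ranking.

```python
from statistics import mean

# Hypothetical per-benchmark similarity scores in [0, 1] for two models
# (illustrative numbers only)
benchmarks = ["V4-neural", "IT-neural", "behavior"]
scores = {
    "densenet169": [0.62, 0.58, 0.54],
    "alexnet":     [0.51, 0.45, 0.38],
}

def brain_score(per_benchmark):
    # composite = unweighted mean over neural and behavioral benchmarks
    return mean(per_benchmark)

ranking = sorted(scores, key=lambda m: brain_score(scores[m]), reverse=True)
```

New benchmarks slot in by adding a column, which is how a regularly updated score can track model-brain correspondence as new data arrive.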
5,589 downloads neuroscience
In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing billions of behaviorally relevant neurons. Dimensionality reduction methods reveal a striking simplicity underlying such multi-neuronal data: they can be reduced to a low-dimensional space, and the resulting neural trajectories in this space yield a remarkably insightful dynamical portrait of circuit computation. This simplicity raises profound and timely conceptual questions. What are its origins and its implications for the complexity of neural dynamics? How would the situation change if we recorded more neurons? When, if at all, can we trust dynamical portraits obtained from measuring an infinitesimal fraction of task relevant neurons? We present a theory that answers these questions, and test it using physiological recordings from reaching monkeys. This theory reveals conceptual insights into how task complexity governs both neural dimensionality and accurate recovery of dynamic portraits, thereby providing quantitative guidelines for future large-scale experimental design.
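The dimensionality-reduction observation the theory addresses can be reproduced in a toy simulation (ours, not the paper's analysis): when many neurons are driven by a few shared latent signals, PCA on the trial-averaged rates recovers a space whose dimension matches the latents, not the neuron count.

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, N = 200, 3, 100          # time points, latent dimensions, neurons

# Simulated trial-averaged "firing rates": each neuron is a random mixture
# of K shared latent signals plus a little private noise
t = np.linspace(0, 1, T)
latents = np.stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(K)], axis=1)
mixing = rng.standard_normal((K, N))
rates = latents @ mixing + 0.05 * rng.standard_normal((T, N))

# PCA via SVD of the mean-centered data matrix
X = rates - rates.mean(axis=0)
svals = np.linalg.svd(X, compute_uv=False)
var_explained = svals ** 2 / np.sum(svals ** 2)

# despite 100 recorded neurons, the top K components capture almost
# all of the variance
top_k = float(np.sum(var_explained[:K]))
```

The paper's question is when this low-dimensional portrait can be trusted given that the recorded neurons are a tiny sample of the circuit; the simulation only illustrates why the portrait looks simple in the first place.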
5,587 downloads neuroscience
Rotem Botvinik-Nezer, Felix Holzmeister, Colin F. Camerer, Anna Dreber, Juergen Huber, Magnus Johannesson, Michael Kirchler, Roni Iwanir, Jeanette A. Mumford, Alison Adcock, Paolo Avesani, Blazej Baczkowski, Aahana Bajracharya, Leah Bakst, Sheryl Ball, Marco Barilari, Nadège Bault, Derek Beaton, Julia Beitner, Roland Benoit, Ruud Berkers, Jamil Bhanji, Bharat Biswal, Sebastian Bobadilla-Suarez, Tiago Bortolini, Katherine Bottenhorn, Alexander Bowring, Senne Braem, Hayley Brooks, Emily Brudner, Cristian Calderon, Julia Camilleri, Jaime Castrellon, Luca Cecchetti, Edna Cieslik, Zachary Cole, Olivier Collignon, Robert Cox, William Cunningham, Stefan Czoschke, Kamalaker Dadi, Charles Davis, Alberto De Luca, Mauricio Delgado, Lysia Demetriou, Jeffrey Dennison, Xin Di, Erin Dickie, Ekaterina Dobryakova, Claire Donnat, Juergen Dukart, Niall W. Duncan, Joke Durnez, Amr Eed, Simon Eickhoff, Andrew Erhart, Laura Fontanesi, G. Matthew Fricke, Adriana Galvan, Remi Gau, Sarah Genon, Tristan Glatard, Enrico Glerean, Jelle Goeman, Sergej Golowin, Carlos González-García, Krzysztof Gorgolewski, Cheryl Grady, Mikella Green, João Guassi Moreira, Olivia Guest, Shabnam Hakimi, J. 
Paul Hamilton, Roeland Hancock, Giacomo Handjaras, Bronson Harry, Colin Hawco, Peer Herholz, Gabrielle Herman, Stephan Heunis, Felix Hoffstaedter, Jeremy Hogeveen, Susan Holmes, Chuan-Peng Hu, Scott Huettel, Matthew Hughes, Vittorio Iacovella, Alexandru Iordan, Peder Isager, Ayse Ilkay Isik, Andrew Jahn, Matthew Johnson, Tom Johnstone, Michael Joseph, Anthony Juliano, Joseph Kable, Michalis Kassinopoulos, Cemal Koba, Xiang-Zhen Kong, Timothy Koscik, Nuri Erkut Kucukboyaci, Brice Kuhl, Sebastian Kupek, Angela Laird, Claus Lamm, Robert Langner, Nina Lauharatanahirun, Hongmi Lee, Sangil Lee, Alexander Leemans, Andrea Leo, Elise Lesage, Flora Li, Monica Li, Phui Cheng Lim, Evan Lintz, Schuyler Liphardt, Annabel Losecaat Vermeer, Bradley Love, Michael Mack, Norberto Malpica, Theo Marins, Camille Maumet, Kelsey McDonald, Joseph McGuire, Helena Melero, Adriana Méndez Leal, Benjamin Meyer, Kristin Meyer, Paul Mihai, Georgios Mitsis, Jorge Moll, Dylan Nielson, Gustav Nilsonne, Michael Notter, Emanuele Olivetti, Adrian Onicas, Paolo Papale, Kaustubh Patil, Jonathan E. Peelle, Alexandre Pérez, Doris Pischedda, Jean-Baptiste Poline, Yanina Prystauka, Shruti Ray, Patricia Reuter-Lorenz, Richard Reynolds, Emiliano Ricciardi, Jenny Rieck, Anais Rodriguez-Thompson, Anthony Romyn, Taylor Salo, Gregory Samanez-Larkin, Emilio Sanz-Morales, Margaret Schlichting, Douglas Schultz, Qiang Shen, Margaret Sheridan, Fu Shiguang, Jennifer Silvers, Kenny Skagerlund, Alec Smith, David Smith, Peter Sokol-Hessner, Simon Steinkamp, Sarah Tashjian, Bertrand Thirion, John Thorp, Gustav Tinghög, Loreen Tisdall, Steven Tompson, Claudio Toro-Serey, Juan Torre, Leonardo Tozzi, Vuong Truong, Luca Turella, Anna E. van’t Veer, Tom Verguts, Jean Vettel, Sagana Vijayarajah, Khoi Vo, Matthew Wall, Wouter D. Weeda, Susanne Weis, David White, David Wisniewski, Alba Xifra-Porxas, Emily Yearling, Sangsuk Yoon, Rui Yuan, Kenneth Yuen, Lei Zhang, Xu Zhang, Joshua Zosky, Thomas E. Nichols, Russell A. 
Poldrack, Tom Schonberg
Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, meta-analytic approaches that aggregated information across teams yielded significant consensus in activated regions across teams. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors related to variability in fMRI. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.
5,515 downloads neuroscience
Johan Winnubst, Erhan Bas, Tiago A. Ferreira, Zhuhao Wu, Michael N Economo, Patrick Edson, Ben J. Arthur, Christopher Bruns, Konrad Rokicki, David Schauder, Donald J. Olbris, Sean D. Murphy, David G. Ackerman, Cameron Arshadi, Perry Baldwin, Regina Blake, Ahmad Elsayed, Mashtura Hasan, Daniel Ramirez, Bruno Dos Santos, Monet Weldon, Amina Zafar, Joshua T. Dudman, Charles R. Gerfen, Adam W Hantman, Wyatt Korff, Scott M. Sternson, Nelson Spruston, Karel Svoboda, Jayaram Chandrashekar
Neuronal cell types are the nodes of neural circuits that determine the flow of information within the brain. Neuronal morphology, especially the shape of the axonal arbor, provides an essential descriptor of cell type and reveals how individual neurons route their output across the brain. Despite the importance of morphology, few projection neurons in the mouse brain have been reconstructed in their entirety. Here we present a robust and efficient platform for imaging and reconstructing complete neuronal morphologies, including axonal arbors that span substantial portions of the brain. We used this platform to reconstruct more than 1,000 projection neurons in the motor cortex, thalamus, subiculum, and hypothalamus. Together, the reconstructed neurons comprise more than 75 meters of axonal length and are available in a searchable online database. Axonal shapes revealed previously unknown subtypes of projection neurons and suggest organizational principles of long-range connectivity.
5,495 downloads neuroscience
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the spatial pooler, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the spatial pooler outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the spatial pooler in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
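The spatial pooler's core loop can be sketched compactly (a simplified illustration of the mechanism named in the abstract, not Numenta's implementation; boosting/homeostatic excitability control is omitted): compute feedforward overlap through connected synapses, let the k best-matching columns win, and apply a Hebbian permanence update to the winners.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_cols, k = 100, 64, 4          # input bits, columns, winners per step

# one permanence value per (column, input) pair; a synapse counts as
# connected once its permanence crosses a threshold
perm = rng.uniform(0.0, 1.0, size=(n_cols, n_in))
CONNECTED, INC, DEC = 0.5, 0.05, 0.02

def spatial_pooler_step(x, learn=True):
    """One simplified SP step: overlap, k-winners-take-all, Hebbian update."""
    connected = (perm >= CONNECTED).astype(int)
    overlap = connected @ x                    # feedforward overlap per column
    winners = np.argsort(overlap)[-k:]         # k best-matching columns fire
    if learn:
        # Hebbian rule: winners strengthen synapses onto active input bits
        # and weaken synapses onto inactive ones
        perm[winners] += np.where(x > 0, INC, -DEC)
        np.clip(perm, 0.0, 1.0, out=perm)
    sdr = np.zeros(n_cols, dtype=int)
    sdr[winners] = 1
    return sdr                                 # sparse distributed representation

x = (rng.random(n_in) < 0.1).astype(int)       # sparse binary input pattern
for _ in range(10):
    spatial_pooler_step(x)                     # repeated exposure stabilizes the code
sdr = spatial_pooler_step(x, learn=False)
```

Because only k of 64 columns are ever active, the output is a fixed-sparsity SDR regardless of how many input bits are on, which underlies the noise-robustness properties the paper quantifies.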
- 18 Dec 2019: We're pleased to announce PanLingua, a new tool that enables you to search for machine-translated bioRxiv preprints using more than 100 different languages.
- 21 May 2019: PLOS Biology has published a community page about Rxivist.org and its design.
- 10 May 2019: The paper analyzing the Rxivist dataset has been published at eLife.
- 1 Mar 2019: We now have summary statistics about bioRxiv downloads and submissions.
- 8 Feb 2019: Data from Altmetric is now available on the Rxivist details page for every preprint. Look for the "donut" under the download metrics.
- 30 Jan 2019: preLights has featured the Rxivist preprint and written about our findings.
- 22 Jan 2019: Nature just published an article about Rxivist and our data.
- 13 Jan 2019: The Rxivist preprint is live!