Most downloaded biology preprints, all time
in category neuroscience
19,274 results found.
6,439 downloads bioRxiv neuroscience
Rotem Botvinik-Nezer, Felix Holzmeister, Colin F. Camerer, Anna Dreber, Juergen Huber, Magnus Johannesson, Michael Kirchler, Roni Iwanir, Jeanette A. Mumford, Alison Adcock, Paolo Avesani, Blazej Baczkowski, Aahana Bajracharya, Leah Bakst, Sheryl Ball, Marco Barilari, Nadège Bault, Derek Beaton, Julia Beitner, Roland Benoit, Ruud Berkers, Jamil Bhanji, Bharat Biswal, Sebastian Bobadilla-Suarez, Tiago Bortolini, Katherine Bottenhorn, Alexander Bowring, Senne Braem, Hayley Brooks, Emily Brudner, Cristian Calderon, Julia Camilleri, Jaime Castrellon, Luca Cecchetti, Edna Cieslik, Zachary Cole, Olivier Collignon, Robert Cox, William Cunningham, Stefan Czoschke, Kamalaker Dadi, Charles Davis, Alberto De Luca, Mauricio Delgado, Lysia Demetriou, Jeffrey Dennison, Xin Di, Erin Dickie, Ekaterina Dobryakova, Claire Donnat, Juergen Dukart, Niall W Duncan, Joke Durnez, Amr Eed, Simon Eickhoff, Andrew Erhart, Laura Fontanesi, G. Matthew Fricke, Adriana Galvan, Remi Gau, Sarah Genon, Tristan Glatard, Enrico Glerean, Jelle Goeman, Sergej Golowin, Carlos González-García, Krzysztof Gorgolewski, Cheryl Grady, Mikella Green, João Guassi Moreira, Olivia Guest, Shabnam Hakimi, J. Paul Hamilton, Roeland Hancock, Giacomo Handjaras, Bronson Harry, Colin Hawco, Peer Herholz, Gabrielle Herman, Stephan Heunis, Felix Hoffstaedter, Jeremy Hogeveen, Susan P Holmes, Chuan-Peng Hu, Scott Huettel, Matthew Hughes, Vittorio Iacovella, Alexandru Iordan, Peder Isager, Ayse Ilkay Isik, Andrew Jahn, Matthew Johnson, Tom Johnstone, Michael Joseph, Anthony Juliano, Joseph Kable, Michalis Kassinopoulos, Cemal Koba, Xiang-Zhen Kong, Timothy Koscik, Nuri Erkut Kucukboyaci, Brice Kuhl, Sebastian Kupek, Angela Laird, Claus Lamm, Robert Langner, Nina Lauharatanahirun, Hongmi Lee, Sangil Lee, Alexander Leemans, Andrea Leo, Elise Lesage, Flora Li, Monica Li, Phui Cheng Lim, Evan Lintz, Schuyler Liphardt, Annabel Losecaat Vermeer, Bradley Love, Michael Mack, Norberto Malpica, Theo Marins, Vanessa Sochat, Kelsey McDonald, Joseph McGuire, Helena Melero, Adriana Méndez Leal, Benjamin Meyer, Kristin Meyer, Paul Mihai, Georgios Mitsis, Jorge Moll, Dylan Nielson, Gustav Nilsonne, Michael Notter, Emanuele Olivetti, Adrian Onicas, Paolo Papale, Kaustubh Patil, Jonathan E Peelle, Alexandre Pérez, Doris Pischedda, Jean-Baptiste Poline, Yanina Prystauka, Shruti Ray, Patricia Reuter-Lorenz, Richard Reynolds, Emiliano Ricciardi, Jenny Rieck, Anais Rodriguez-Thompson, Anthony Romyn, Taylor Salo, Gregory Samanez-Larkin, Emilio Sanz-Morales, Margaret Schlichting, Douglas Schultz, Qiang Shen, Margaret Sheridan, Fu Shiguang, Jennifer Silvers, Kenny Skagerlund, Alec Smith, David Smith, Peter Sokol-Hessner, Simon Steinkamp, Sarah Tashjian, Bertrand Thirion, John Thorp, Gustav Tinghög, Loreen Tisdall, Steven Tompson, Claudio Toro-Serey, Juan Torre, Leonardo Tozzi, Vuong Truong, Luca Turella, Anna E. van’t Veer, Tom Verguts, Jean Vettel, Sagana Vijayarajah, Khoi Vo, Matthew Wall, Wouter D. Weeda, Susanne Weis, David White, David Wisniewski, Alba Xifra-Porxas, Emily Yearling, Sangsuk Yoon, Rui Yuan, Kenneth Yuen, Lei Zhang, Xu Zhang, Joshua Zosky, Thomas E. Nichols, Russell A. Poldrack, Tom Schonberg
Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, meta-analytic approaches that aggregated information across teams yielded significant consensus in activated regions across teams. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors related to variability in fMRI. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.
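The consensus-building idea in the abstract can be illustrated with a simplified sketch: combining each team's z-score for a hypothesis into one group-level statistic (Stouffer's method). The actual study aggregated full statistical maps with image-based meta-analysis, so the function and numbers below are purely illustrative.

```python
import math

def stouffer_z(z_scores):
    """Combine independent per-team z-scores into one consensus z-score."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# Five hypothetical teams; individually several fall short of the 1.96
# threshold, yet the aggregate is clearly significant.
team_z = [1.2, 2.1, 1.7, 2.4, 1.5]
consensus = stouffer_z(team_z)  # ≈ 3.98 > 1.96
```

This is the simplest instance of the general point: aggregation across analysis teams can yield a stable consensus even when individual hypothesis tests disagree.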
6,394 downloads bioRxiv neuroscience
We introduce a novel approach to study neurons as sophisticated I/O information processing units by utilizing recent advances in the field of machine learning. We trained deep neural networks (DNNs) to mimic the I/O behavior of a detailed nonlinear model of a layer 5 cortical pyramidal cell, receiving rich spatio-temporal patterns of input synapse activations. A Temporally Convolutional DNN (TCN) with seven layers was required to accurately, and very efficiently, capture the I/O of this neuron at the millisecond resolution. This complexity primarily arises from local NMDA-based nonlinear dendritic conductances. The weight matrices of the DNN provide new insights into the I/O function of cortical pyramidal neurons, and the approach presented can provide a systematic characterization of the functional complexity of different neuron types. Our results demonstrate that cortical neurons can be conceptualized as multi-layered “deep” processing units, implying that the cortical networks they form have a non-classical architecture and are potentially more computationally powerful than previously assumed.
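The stacked temporally convolutional architecture can be sketched in miniature: each layer applies a causal 1-D convolution, so the output at time t never depends on future input. Kernel sizes, depths, and values below are toy assumptions, not the paper's seven-layer network.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1-D convolution: y[t] = sum_j kernel[j] * x[t - j]."""
    k = len(kernel)
    xp = np.concatenate([np.zeros(k - 1), x])  # left-pad: no future leakage
    return np.array([xp[t:t + k] @ kernel[::-1] for t in range(len(x))])

def relu(v):
    return np.maximum(v, 0.0)

# Two stacked causal layers -- a toy "temporally convolutional network".
x = np.array([0.0, 1.0, 0.0, 0.0, 1.0])   # input spike train
h = relu(causal_conv1d(x, np.array([1.0, -0.5])))  # hidden layer
y = causal_conv1d(h, np.array([0.5, 0.5]))         # output layer
```

Stacking such layers grows the temporal receptive field, which is how a deep network can capture millisecond-resolution dependencies in the neuron's input history.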
6,298 downloads bioRxiv neuroscience
The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. Here we characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
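The operational definition of abstraction above (generalization of a decoder to conditions not used for training, sometimes called cross-condition generalization performance) can be sketched on synthetic data: train a linear decoder for variable A only in conditions where variable B takes one value, then test where B takes the other. All dimensions, noise levels, and trial counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_decoder(X, y):
    """Least-squares linear readout with a bias term."""
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def decode(X, w):
    return (np.c_[X, np.ones(len(X))] @ w) > 0.5

def make_trials(a, b, n=50):
    """Abstract geometry: variable A on axis 0, variable B on axis 1."""
    return rng.normal([a, b, 0.0], 0.2, size=(n, 3))

# Train an A-decoder only where B = 0; test where B = 1 (never seen).
X_train = np.vstack([make_trials(0, 0), make_trials(1, 0)])
y_train = np.r_[np.zeros(50), np.ones(50)]
X_test = np.vstack([make_trials(0, 1), make_trials(1, 1)])
y_test = np.r_[np.zeros(50), np.ones(50)]

w = fit_linear_decoder(X_train, y_train)
ccgp = (decode(X_test, w) == y_test).mean()  # high only for abstract formats
```

If the two variables were instead encoded in an entangled, condition-specific geometry, the decoder trained at B = 0 would fail at B = 1, and this score would drop toward chance.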
6,017 downloads bioRxiv neuroscience
A decade after the first successful attempt to decode speech directly from human brain signals, accuracy and speed remain far below those of natural speech or typing. Here we show how to achieve high accuracy from the electrocorticogram at natural-speech rates, even with limited data (on the order of half an hour of spoken speech). Taking a cue from recent advances in machine translation and automatic speech recognition, we train a recurrent neural network to map neural signals directly to word sequences (sentences). In particular, the network first encodes a sentence-length sequence of neural activity into an abstract representation, and then decodes this representation, word by word, into an English sentence. For each participant, training data consist of several spoken repeats of a set of some 30-50 sentences, along with the corresponding neural signals at each of about 250 electrodes distributed over peri-Sylvian speech cortices. Average word error rates across a validation (held-out) sentence set are as low as 7% for some participants, compared with the previous state of the art of greater than 60%. Finally, we show how to use transfer learning to overcome limitations on data availability: training certain components of the network on multiple participants' data, while keeping other components (e.g., the first hidden layer) "proprietary," can improve decoding performance, despite very different electrode coverage across participants.
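The headline metric here, word error rate, is the word-level edit distance between the reference sentence and the decoded sentence, normalized by reference length. A minimal sketch (the study's decoder itself is a recurrent encoder-decoder network, not shown):

```python
def word_error_rate(ref, hyp):
    """Levenshtein distance over words, divided by reference word count."""
    r, h = ref.split(), hyp.split()
    # d[i][j]: edit distance between first i reference and j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

wer = word_error_rate("the quick brown fox", "the quick brown box")  # 0.25
```

One substitution in a four-word reference gives 25%; the 7% figure quoted above thus corresponds to roughly one word error per fourteen reference words.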
6,001 downloads bioRxiv neuroscience
Despite the central role of sleep in our lives and the high prevalence of sleep disorders, sleep is still poorly understood. The development of ambulatory technologies capable of monitoring brain activity during sleep longitudinally is critical to advancing sleep science and facilitating the diagnosis of sleep disorders. We introduced the Dreem headband (DH) as an affordable, comfortable, and user-friendly alternative to polysomnography (PSG). The purpose of this study was to assess the signal acquisition of the DH and the performance of its embedded automatic sleep staging algorithms compared to the gold-standard clinical PSG scored by 5 sleep experts. Thirty-one subjects completed an overnight sleep study at a sleep center while wearing both the DH and a PSG simultaneously. We assessed 1) the EEG signal quality between the DH and the PSG, 2) the heart rate, breathing frequency, and respiration rate variability (RRV) agreement between the DH and the PSG, and 3) the performance of the DH's automatic sleep staging according to AASM guidelines vs. PSG sleep experts' manual scoring. Results demonstrate a strong correlation between the EEG signals acquired by the DH and those from the PSG, and the signals acquired by the DH enable monitoring of alpha (r = 0.71 ± 0.13), beta (r = 0.71 ± 0.18), delta (r = 0.76 ± 0.14), and theta (r = 0.61 ± 0.12) frequencies during sleep. The mean absolute error for heart rate, breathing frequency and RRV was 1.2 ± 0.5 bpm, 0.3 ± 0.2 cpm and 3.2 ± 0.6%, respectively. Automatic sleep staging reached an overall accuracy of 83.5 ± 6.4% (F1 score: 83.8 ± 6.3) for the DH, compared with an average of 86.4 ± 8.0% (F1 score: 86.3 ± 7.4) for the five sleep experts. These results demonstrate the capacity of the DH to both precisely monitor sleep-related physiological signals and process them accurately into sleep stages. This device paves the way for high-quality, large-scale, longitudinal sleep studies.
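The accuracy and F1 figures above amount to epoch-by-epoch agreement between two hypnograms (sequences of AASM stage labels). A minimal sketch of both metrics; the example sequences are made up, and the study's consensus-of-five-scorers procedure is not reproduced here.

```python
def stage_accuracy_f1(scorer_a, scorer_b, stages=("W", "N1", "N2", "N3", "REM")):
    """Epoch-wise accuracy and mean per-stage F1 between two hypnograms,
    treating scorer_a as the reference."""
    n = len(scorer_a)
    acc = sum(a == b for a, b in zip(scorer_a, scorer_b)) / n
    f1s = []
    for s in stages:
        tp = sum(a == s and b == s for a, b in zip(scorer_a, scorer_b))
        fp = sum(a != s and b == s for a, b in zip(scorer_a, scorer_b))
        fn = sum(a == s and b != s for a, b in zip(scorer_a, scorer_b))
        if tp + fp + fn:  # skip stages absent from both hypnograms
            f1s.append(2 * tp / (2 * tp + fp + fn))
    return acc, sum(f1s) / len(f1s)

acc, f1 = stage_accuracy_f1(["W", "N2", "N2", "REM"],
                            ["W", "N2", "N3", "REM"])
```

Macro-averaging the F1 over stages, as sketched here, keeps rare stages such as N1 from being swamped by the abundant N2 epochs.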
5,990 downloads bioRxiv neuroscience
Johan Winnubst, Erhan Bas, Tiago A Ferreira, Zhuhao Wu, Michael N Economo, Patrick Edson, Ben J. Arthur, Christopher Bruns, Konrad Rokicki, David Schauder, Donald J. Olbris, Sean D. Murphy, David G. Ackerman, Cameron Arshadi, Perry Baldwin, Regina Blake, Ahmad Elsayed, Mashtura Hasan, Daniel Ramirez, Bruno Dos Santos, Monet Weldon, Amina Zafar, Joshua T. Dudmann, Charles R. Gerfen, Adam W Hantman, Wyatt Korff, Scott M. Sternson, Nelson Spruston, Karel Svoboda, Jayaram Chandrashekar
Neuronal cell types are the nodes of neural circuits that determine the flow of information within the brain. Neuronal morphology, especially the shape of the axonal arbor, provides an essential descriptor of cell type and reveals how individual neurons route their output across the brain. Despite the importance of morphology, few projection neurons in the mouse brain have been reconstructed in their entirety. Here we present a robust and efficient platform for imaging and reconstructing complete neuronal morphologies, including axonal arbors that span substantial portions of the brain. We used this platform to reconstruct more than 1,000 projection neurons in the motor cortex, thalamus, subiculum, and hypothalamus. Together, the reconstructed neurons comprise more than 75 meters of axonal length and are available in a searchable online database. Axonal shapes revealed previously unknown subtypes of projection neurons and suggest organizational principles of long-range connectivity.
5,951 downloads bioRxiv neuroscience
Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviorally in mice. The discrepancy between neural and behavioral discrimination could not be explained by the types of stimuli we used, by behavioral states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.
5,937 downloads bioRxiv neuroscience
CLARITY is a tissue clearing method that enables immunostaining and imaging of large volumes for 3D reconstruction. The method was initially time-consuming, expensive, and relied on electrophoresis to remove lipids to make the tissue transparent. Since then, several improvements and simplifications have emerged, such as passive clearing (PACT) and methods to improve tissue staining. Here, we review these advances and compare current applications with the aim of highlighting needed improvements as well as aiding selection of the specific protocol for use in future investigations.
5,897 downloads bioRxiv neuroscience
Kristen R. Maynard, Leonardo Collado-Torres, Lukas M Weber, Cedric Uytingco, Brianna K. Barry, Stephen R. Williams, Joseph L. Catallini, Matthew N. Tran, Zachary Besich, Madhavi Tippani, Jennifer Chew, Yifeng Yin, Joel E Kleinman, Thomas M. Hyde, Nikhil Rao, Stephanie C Hicks, Keri Martinowich, Andrew E. Jaffe
We used the 10x Genomics Visium platform to define the spatial topography of gene expression in the six-layered human dorsolateral prefrontal cortex (DLPFC). We identified extensive layer-enriched expression signatures and refined associations to previous laminar markers. We overlaid our laminar expression signatures onto large-scale single nuclei RNA sequencing data, enhancing spatial annotation of expression-driven clusters. By integrating neuropsychiatric disorder gene sets, we showed differential layer-enriched expression of genes associated with schizophrenia and autism spectrum disorder, highlighting the clinical relevance of spatially-defined expression. We then developed a data-driven framework to define unsupervised clusters in spatial transcriptomics data, which can be applied to other tissues or brain regions where morphological architecture is not as well-defined as cortical laminae. Lastly, we created a web application for the scientific community to explore these raw and summarized data to augment ongoing neuroscience and spatial transcriptomics research (http://research.libd.org/spatialLIBD).
5,890 downloads bioRxiv neuroscience
EEG microstate analysis offers a sparse characterisation of the spatio-temporal features of large-scale brain network activity. However, although the concept of microstates is straightforward and offers various quantifications of the EEG signal with a relatively clear neurophysiological interpretation, some important aspects of the currently applied methods are not readily comprehensible. Here we aim to increase the transparency of these methods to facilitate widespread application and reproducibility of EEG microstate analysis by introducing a new EEGlab toolbox for Matlab. EEGlab and the Microstate toolbox are open source, allowing the user to keep track of all details in every analysis step. The toolbox is specifically designed to facilitate the development of new methods. While the toolbox can be controlled with a graphical user interface (GUI), making it easier for newcomers to take their first steps in exploring the possibilities of microstate analysis, the Matlab framework allows advanced users to create scripts that automate the analysis for multiple subjects, avoiding the tedious repetition of steps for every subject. This manuscript provides an overview of the most commonly applied microstate methods as well as a tutorial consisting of a comprehensive walk-through of the analysis of a small, publicly available dataset.
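At its core, microstate backfitting assigns each EEG time sample to the template topography with the highest absolute spatial correlation (polarity is conventionally ignored). A minimal, toolbox-independent sketch; the channel count and template maps below are toy values.

```python
import numpy as np

def assign_microstates(eeg, templates):
    """Label each time sample (eeg is channels x time) with the index of the
    template map of highest absolute spatial correlation."""
    labels = []
    for sample in eeg.T:
        corrs = [abs(np.corrcoef(sample, t)[0, 1]) for t in templates]
        labels.append(int(np.argmax(corrs)))
    return np.array(labels)

templates = [np.array([1.0, -1.0, 0.0, 0.0]),
             np.array([0.0, 0.0, 1.0, -1.0])]
eeg = np.array([[2.0, 0.0],
                [-2.0, 0.0],
                [0.0, -3.0],
                [0.0, 3.0]])  # 4 channels x 2 samples
labels = assign_microstates(eeg, templates)
```

Note that the second sample is a polarity-flipped, rescaled copy of the second template and is still assigned to it, which is exactly the invariance microstate analysis assumes.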
5,870 downloads bioRxiv neuroscience
Deep neural networks (DNNs) have recently been applied successfully to brain decoding and image reconstruction from functional magnetic resonance imaging (fMRI) activity. However, direct training of a DNN with fMRI data is often avoided because the size of available data is thought to be insufficient to train a complex network with numerous parameters. Instead, a pre-trained DNN has served as a proxy for hierarchical visual representations, and fMRI data were used to decode individual DNN features of a stimulus image using a simple linear model, which were then passed to a reconstruction module. Here, we present our attempt to directly train a DNN model with fMRI data and the corresponding stimulus images to build an end-to-end reconstruction model. We trained a generative adversarial network with an additional loss term defined in a high-level feature space (feature loss) using up to 6,000 training data points (natural images and the fMRI responses). The trained deep generator network was tested on an independent dataset, directly producing a reconstructed image given an fMRI pattern as the input. The reconstructions obtained from the proposed method showed resemblance with both natural and artificial test stimuli. The accuracy increased as a function of the training data size, though not outperforming the decoded feature-based method with the available data size. Ablation analyses indicated that the feature loss played a critical role to achieve accurate reconstruction. Our results suggest a potential for the end-to-end framework to learn a direct mapping between brain activity and perception given even larger datasets.
5,856 downloads bioRxiv neuroscience
In response to reports of inflated false positive rate (FPR) in FMRI group analysis tools, a series of replications, investigations, and software modifications were made to address this issue. While these investigations continue, significant progress has been made to adapt AFNI to fix such problems. Two separate lines of changes have been made. First, a long-tailed model for the spatial correlation of the FMRI noise, characterized by its autocorrelation function (ACF), was developed and implemented into the 3dClustSim tool for determining the cluster-size threshold to use for a given voxel-wise threshold. Second, the 3dttest++ program was modified to do randomization of the voxel-wise t-tests and then to feed those randomized t-statistic maps into 3dClustSim directly for cluster-size threshold determination, without any spatial model for the ACF. These approaches were tested with the Beijing subset of the FCON-1000 data collection. The first approach shows markedly improved (reduced) FPR, but in many cases is still above the nominal 5%. The second approach shows FPRs clustered tightly about 5% across all per-voxel p-value thresholds ≤ 0.01. If t-tests from a univariate GLM are adequate for the group analysis in question, the second approach is what the AFNI group currently recommends for thresholding. If more complex per-voxel statistical analyses are required (where permutation/randomization is impracticable), then our current recommendation is to use the new ACF modeling approach coupled with a per-voxel p-threshold of 0.001 or below. Simulations were also repeated with the now infamously "buggy" version of 3dClustSim: the effect of the bug on FPRs was minimal (of order a few percent).
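The long-tailed ACF model referred to above is a Gaussian core plus a mono-exponential tail, ACF(r) = a·exp(−r²/(2b²)) + (1−a)·exp(−r/c). A sketch of evaluating it; the a, b, c values here are placeholders, not parameters fitted to any dataset.

```python
import math

def mixed_acf(r, a, b, c):
    """Long-tailed spatial autocorrelation: Gaussian core plus
    mono-exponential tail. Reduces to a pure Gaussian when a = 1."""
    return a * math.exp(-r * r / (2 * b * b)) + (1 - a) * math.exp(-r / c)

# At large radii the exponential tail dominates the Gaussian core -- the
# heavy tail a pure-Gaussian smoothness model misses.
gauss_only = mixed_acf(10.0, 1.0, 2.0, 3.0)   # pure Gaussian, tiny
with_tail = mixed_acf(10.0, 0.5, 2.0, 3.0)    # tail keeps correlation alive
```

It is this residual long-range correlation that makes cluster-size thresholds computed under a pure-Gaussian assumption too lenient, inflating the FPR.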
5,838 downloads bioRxiv neuroscience
When a neuron is driven beyond its threshold it spikes, and the fact that it does not communicate its continuous membrane potential is usually seen as a computational liability. Here we show that this spiking mechanism allows neurons to produce an unbiased estimate of their causal influence, and a way of approximating gradient descent learning. Importantly, neither activity of upstream neurons, which act as confounders, nor downstream non-linearities bias the results. By introducing a local discontinuity with respect to their input drive, we show how spiking enables neurons to solve causal estimation and learning problems.
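Using the spiking threshold as a local discontinuity mirrors a regression-discontinuity design: compare downstream outcomes on trials where the input drive barely crossed threshold against trials where it barely missed. A toy sketch with made-up numbers; the confound here stands in for correlated upstream activity.

```python
def spike_discontinuity_estimate(drives, outcomes, threshold, window):
    """Mean outcome for trials just above vs. just below the spiking
    threshold: a local estimate of the spike's causal effect that is not
    biased by inputs correlated with the drive."""
    above = [o for d, o in zip(drives, outcomes)
             if threshold <= d < threshold + window]
    below = [o for d, o in zip(drives, outcomes)
             if threshold - window <= d < threshold]
    return sum(above) / len(above) - sum(below) / len(below)

# Outcome = 1.0 per spike (the true causal effect) + 0.5 * drive (confound).
drives = [0.9, 0.95, 1.05, 1.1]
outcomes = [(d >= 1.0) + 0.5 * d for d in drives]
effect = spike_discontinuity_estimate(drives, outcomes, 1.0, 0.11)
```

A naive regression of outcome on spiking over all trials would absorb the confound; restricting to a narrow window around threshold isolates the jump caused by the spike itself, and shrinking the window drives the estimate toward the true effect of 1.0.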
5,798 downloads bioRxiv neuroscience
Reconstruction of neural circuits from volume electron microscopy data requires the tracing of complete cells including all their neurites. Automated approaches have been developed to perform the tracing, but without costly human proofreading their error rates are too high to obtain reliable circuit diagrams. We present a method for automated segmentation that, like the majority of previous efforts, employs convolutional neural networks, but contains in addition a recurrent pathway that allows the iterative optimization and extension of the reconstructed shape of individual neural processes. We used this technique, which we call flood-filling networks, to trace neurons in a data set obtained by serial block-face electron microscopy from a male zebra finch brain. Our method achieved a mean error-free neurite path length of 1.1 mm, an order of magnitude better than previously published approaches applied to the same dataset. Only 4 mergers were observed in a neurite test set of 97 mm path length.
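The one-object-at-a-time segmentation strategy can be caricatured as a flood fill over a per-voxel object-probability map; the actual method replaces the fixed threshold test below with a convolutional network that iteratively re-predicts and extends the object mask. A 2-D toy sketch with an assumed probability map and seed.

```python
from collections import deque

def flood_fill_segment(prob_map, seed, threshold=0.5):
    """Grow one segment from a seed (assumed inside the object), visiting
    4-connected pixels whose object probability exceeds the threshold."""
    h, w = len(prob_map), len(prob_map[0])
    segment, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in segment
                    and prob_map[ny][nx] > threshold):
                segment.add((ny, nx))
                queue.append((ny, nx))
    return segment

prob = [[0.9, 0.9, 0.1],
        [0.1, 0.9, 0.1],
        [0.1, 0.1, 0.9]]
segment = flood_fill_segment(prob, (0, 0))
```

Growing a single object at a time is what keeps merge errors rare: a high-probability pixel that is not connected to the seed, like the lower-right corner here, is never absorbed into the segment.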
5,790 downloads bioRxiv neuroscience
"Neural coding" is a popular metaphor in neuroscience, where objective properties of the world are communicated to the brain in the form of spikes. Here I argue that this metaphor is often inappropriate and misleading. First, when neurons are said to encode experimental parameters, the neural code depends on experimental details that are not carried by the coding variable. Thus, the representational power of neural codes is much more limited than generally implied. Second, neural codes carry information only by reference to things with known meaning. In contrast, perceptual systems must build information from relations between sensory signals and actions, forming a structured internal model. Neural codes are inadequate for this purpose because they are unstructured. Third, coding variables are observables tied to the temporality of experiments, while spikes are timed actions that mediate coupling in a distributed dynamical system. The coding metaphor tries to fit the dynamic, circular and distributed causal structure of the brain into a linear chain of transformations between observables, but the two causal structures are incongruent. I conclude that the neural coding metaphor cannot provide a basis for theories of brain function, because it is incompatible with both the causal structure of the brain and the informational requirements of cognition.
5,705 downloads bioRxiv neuroscience
Rebecca D Hodge, Trygve E. Bakken, Jeremy A. Miller, Kimberly A Smith, Eliza R Barkan, Lucas T. Graybuck, Jennie L. Close, Brian Long, Osnat Penn, Zizhen Yao, Jeroen Eggermont, Thomas Hollt, Boaz P. Levi, Soraya I Shehata, Brian Aevermann, Allison Beller, Darren Bertagnolli, Krissy Brouner, Tamara Casper, Charles Cobbs, Rachel Dalley, Nick Dee, Song-Lin Ding, Richard G. Ellenbogen, Olivia Fong, Emma Garren, Jeff Goldy, Ryder P Gwinn, Daniel Hirschstein, C. Dirk Keene, Mohamed Keshk, Andrew L. Ko, Kanan Lathia, Ahmed Mahfouz, Zoe Maltzer, Medea McGraw, Thuc Nghi Nguyen, Julie Nyhus, Jeffrey G Ojemann, Aaron Oldre, Sheana Parry, Shannon Reynolds, Christine Rimorin, Nadiya V Shapovalova, Saroja Somasundaram, Aaron Szafer, Elliot R. Thomsen, Michael Tieu, Richard H. Scheuermann, Rafael Yuste, Susan M. Sunkin, Boudewijn Lelieveldt, David Feng, Lydia Ng, Amy Bernard, Michael Hawrylycz, John W. Phillips, Bosiljka Tasic, Hongkui Zeng, Allan R. Jones, Christof Koch, Ed S Lein
Elucidating the cellular architecture of the human neocortex is central to understanding our cognitive abilities and susceptibility to disease. Here we applied single nucleus RNA-sequencing to perform a comprehensive analysis of cell types in the middle temporal gyrus of human cerebral cortex. We identify a highly diverse set of excitatory and inhibitory neuronal types that are mostly sparse, with excitatory types being less layer-restricted than expected. Comparison to a similar mouse cortex single cell RNA-sequencing dataset revealed a surprisingly well-conserved cellular architecture that enables matching of homologous types and predictions of human cell type properties. Despite this general conservation, we also find extensive differences between homologous human and mouse cell types, including dramatic alterations in proportions, laminar distributions, gene expression, and morphology. These species-specific features emphasize the importance of directly studying the human brain.
5,692 downloads bioRxiv neuroscience
To understand brain functions, it is important to observe directly how multiple neural circuits perform in living brains. However, due to tissue opaqueness, observable depth and spatiotemporal resolution are severely degraded in vivo. Here, we propose an optical brain clearing method for in vivo fluorescence microscopy, termed MAGICAL (Magical Additive Glycerol Improves Clear Alive Luminance). MAGICAL enabled two-photon microscopy to capture vivid images at high speed, at cortical layer V and hippocampal CA1 in vivo. Moreover, MAGICAL enabled conventional confocal microscopy to visualize finer neuronal structures, including synaptic boutons and spines, in unprecedentedly deep regions, without the intensive illumination that leads to phototoxic effects. Fluorescence Emission Spectrum Transmissive Analysis (FESTA) showed that MAGICAL improved in vivo transmittance of shorter-wavelength light, which is vulnerable to optical scattering and thus ill-suited for in vivo microscopy. These results suggest that MAGICAL renders living brains transparent by reducing scattering.
5,664 downloads bioRxiv neuroscience
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the spatial pooler, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the spatial pooler outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the spatial pooler in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
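A minimal sketch of the spatial pooler's core loop: random feedforward connections, k-winners-take-all competition to enforce a fixed output sparsity, and Hebbian-style updates that reinforce synapses from active inputs. Column counts, thresholds, and learning rates below are toy values, not Numenta's published parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

class ToySpatialPooler:
    """HTM-flavored spatial pooler sketch: converts binary inputs into
    sparse distributed representations (SDRs) with a fixed number of
    active columns."""

    def __init__(self, n_inputs, n_columns, k_active):
        self.w = rng.random((n_columns, n_inputs))  # synapse "permanences"
        self.k = k_active

    def compute(self, x, learn=True):
        # Overlap counts only "connected" synapses (permanence > 0.5).
        overlap = (self.w > 0.5).astype(float) @ x
        winners = np.argsort(overlap)[-self.k:]   # k-winners-take-all
        if learn:  # Hebbian-like: grow active-input synapses, shrink others
            self.w[winners] += np.where(x > 0, 0.05, -0.02)
            self.w = np.clip(self.w, 0.0, 1.0)
        sdr = np.zeros(len(self.w))
        sdr[winners] = 1.0
        return sdr

sp = ToySpatialPooler(n_inputs=64, n_columns=128, k_active=5)
x = (rng.random(64) > 0.7).astype(float)  # arbitrary binary input
sdr = sp.compute(x)                        # exactly 5 of 128 columns active
```

The fixed winner count is what guarantees a constant-sparsity SDR regardless of input density, and the opposing permanence updates implement the competitive Hebbian learning with homeostatic pressure described above.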
5,609 downloads bioRxiv neuroscience
The correct subcellular distribution of protein complexes establishes the complex morphology of neurons and is fundamental to their functioning. Thus, determining the dynamic distribution of proteins is essential to understand neuronal processes. Fluorescence imaging, in particular super-resolution microscopy, has become invaluable to investigate subcellular protein distribution. However, these approaches suffer from the limited ability to efficiently and reliably label endogenous proteins. We developed ORANGE: an Open Resource for the Application of Neuronal Genome Editing, that mediates targeted genomic integration of fluorescent tags in neurons. This toolbox includes a knock-in library for in-depth investigation of endogenous protein distribution, and a detailed protocol explaining how knock-in can be developed for novel targets. In combination with super-resolution microscopy, ORANGE revealed the dynamic nanoscale organization of endogenous neuronal signaling molecules, synaptic scaffolding proteins, and neurotransmitter receptors. Thus, ORANGE enables quantitation of expression and distribution for virtually any protein in neurons at high resolution and will significantly further our understanding of neuronal cell biology.
5,574 downloads bioRxiv neuroscience
The International Brain Laboratory, Valeria Aguillon-Rodriguez, Dora E. Angelaki, Hannah M. Bayer, Niccolò Bonacchi, Matteo Carandini, Fanny Cazettes, Gaelle A. Chapuis, Anne K Churchland, Yang Dan, Eric E. J. Dewitt, Mayo Faulkner, Hamish Forrest, Laura M. Haetzel, Michael Hausser, Sonja B. Hofer, Fei Hu, Anup Khanal, Christopher S. Krasniak, Inês Laranjeira, Zachary Mainen, Guido T. Meijer, Nathaniel J. Miska, Thomas Mrsic-Flogel, Masayoshi Murakami, Jean-Paul G Noel, Alejandro Pan-Vazquez, Cyrille Rossant, Joshua I. Sanders, Karolina Z. Socha, Rebecca Terry, Anne E Urai, Hernando M. Vergara, Miles J. Wells, Christian J. Wilson, Ilana B. Witten, Lauren E. Wool, Anthony Zador
Progress in science requires standardized assays whose results can be readily shared, compared, and reproduced across laboratories. Reproducibility, however, has been a concern in neuroscience, particularly for measurements of mouse behavior. Here we show that a standardized task to probe decision-making in mice produces reproducible results across multiple laboratories. We designed a task for head-fixed mice that combines established assays of perceptual and value-based decision making, and we standardized the training protocol, experimental hardware, software, and procedures. We trained 140 mice across seven laboratories in three countries, and we collected 5 million mouse choices into a publicly available database. Learning speed was variable across mice and laboratories, but once training was complete there were no significant differences in behavior across laboratories. Mice in different laboratories adopted similar reliance on visual stimuli, on past successes and failures, and on estimates of stimulus prior probability to guide their choices. These results reveal that a complex mouse behavior can be successfully reproduced across multiple laboratories. They establish a standard for reproducible rodent behavior, and provide an unprecedented dataset and open-access tools to study decision-making in mice. More generally, they indicate a path towards achieving reproducibility in neuroscience through collaborative open-science approaches.
Competing Interest Statement: J.I.S. is the owner of Sanworks LLC, which provides hardware and consulting for the experimental set-up described in this work.
- 27 Nov 2020: The website and API now include results pulled from medRxiv as well as bioRxiv.
- 18 Dec 2019: We're pleased to announce PanLingua, a new tool that enables you to search for machine-translated bioRxiv preprints using more than 100 different languages.
- 21 May 2019: PLOS Biology has published a community page about Rxivist.org and its design.
- 10 May 2019: The paper analyzing the Rxivist dataset has been published at eLife.
- 1 Mar 2019: We now have summary statistics about bioRxiv downloads and submissions.
- 8 Feb 2019: Data from Altmetric is now available on the Rxivist details page for every preprint. Look for the "donut" under the download metrics.
- 30 Jan 2019: preLights has featured the Rxivist preprint and written about our findings.
- 22 Jan 2019: Nature just published an article about Rxivist and our data.
- 13 Jan 2019: The Rxivist preprint is live!