Rxivist

Rxivist combines biology preprints from bioRxiv and medRxiv with data from Twitter to help you find the papers being discussed in your field. Currently indexing 193,167 papers from 783,228 authors.

Most downloaded biology preprints, all time

in category neuroscience

27,848 results found.

1: An integrated brain-machine interface platform with thousands of channels
Posted 17 Jul 2019 · 208,910 downloads · bioRxiv · neuroscience

Elon Musk, Neuralink

Brain-machine interfaces (BMIs) hold promise for the restoration of sensory and motor function and the treatment of neurological disorders, but clinical BMIs have not yet been widely adopted, in part because modest channel counts have limited their potential. In this white paper, we describe Neuralink’s first steps toward a scalable high-bandwidth BMI system. We have built arrays of small and flexible electrode “threads”, with as many as 3,072 electrodes per array distributed across 96 threads. We have also built a neurosurgical robot capable of inserting six threads (192 electrodes) per minute. Each thread can be individually inserted into the brain with micron precision to avoid surface vasculature and to target specific brain regions. The electrode array is packaged into a small implantable device that contains custom chips for low-power on-board amplification and digitization: the package for 3,072 channels occupies less than (23 × 18.5 × 2) mm³. A single USB-C cable provides full-bandwidth data streaming from the device, recording from all channels simultaneously. This system has achieved a spiking yield of up to 70% in chronically implanted electrodes. Neuralink’s approach to BMI has unprecedented packaging density and scalability in a clinically relevant package.

2: Deep image reconstruction from human brain activity
Posted 28 Dec 2017 · 139,306 downloads · bioRxiv · neuroscience

Guohua Shen, Tomoyasu Horikawa, Kei Majima, Yukiyasu Kamitani

Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to reconstruction from low-level image bases or to matching against exemplars. Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features. Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed reconstructs or generates images from brain activity, rather than simply matching them to exemplars. A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details in the reconstructions by constraining reconstructed images to be similar to natural images. Furthermore, human judgments of the reconstructions suggest that combining multiple DNN layers enhances the visual quality of the generated images. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
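
As a rough illustration of the optimization at the heart of this approach, the sketch below adjusts an image's pixels so that its DNN features approach a target feature map. It is a minimal sketch, not the authors' pipeline: the VGG-19 backbone, the layer index, and the random stand-in for fMRI-decoded features are assumptions, and the paper's natural-image prior is omitted.

```python
# Minimal sketch (not the authors' code): optimize an image so that its DNN
# features match features decoded from brain activity. VGG-19, the layer
# index, and the random "decoded" features are placeholder assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

cnn = models.vgg19(weights=None).features.eval()  # pretrained weights would be used in practice

def feats(x, upto=12):
    """Forward pass through the convolutional stack up to an intermediate layer."""
    for i, layer in enumerate(cnn):
        x = layer(x)
        if i == upto:
            return x

# Stand-in for the feature map decoded from fMRI (shape matches layer 12 of VGG-19 at 224x224 input).
decoded_feats = torch.randn(1, 256, 56, 56)

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    loss = F.mse_loss(feats(img), decoded_feats)  # distance between current and target features
    loss.backward()
    opt.step()
```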

3: Could a neuroscientist understand a microprocessor?
Posted 26 May 2016 · 106,742 downloads · bioRxiv · neuroscience

Eric Jonas, Konrad Paul Kording

There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and understood at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests that current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue that scientists should use complex non-linear dynamical systems with known ground truth, such as the microprocessor, as validation platforms for time-series and structure-discovery methods.
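
To make concrete the kind of analysis the authors evaluate, the hypothetical sketch below applies two standard neural-data tools (pairwise correlation and PCA) to simulated binary "transistor" switching traces; the traces are random stand-ins, not processor recordings, and this is not the authors' code.

```python
# Hypothetical illustration: apply standard neural-data analyses (correlation,
# dimensionality reduction) to simulated binary switching traces, as one might
# to spike trains. The traces here are random stand-ins, not processor data.
import numpy as np

rng = np.random.default_rng(0)
n_transistors, n_timepoints = 100, 5000
traces = (rng.random((n_transistors, n_timepoints)) < 0.1).astype(float)

# Pairwise "functional connectivity" between elements
corr = np.corrcoef(traces)

# Dimensionality reduction: eigen-spectrum of the trace covariance (PCA)
centered = traces - traces.mean(axis=1, keepdims=True)
cov = centered @ centered.T / n_timepoints
eigvals = np.linalg.eigvalsh(cov)[::-1]
explained = eigvals / eigvals.sum()
print("variance explained by top 5 components:", explained[:5].round(3))
```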

4: Prefrontal cortex as a meta-reinforcement learning system
Posted 06 Apr 2018 · 37,316 downloads · bioRxiv · neuroscience

Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Demis Hassabis, Matthew Botvinick

Over the past twenty years, neuroscience research on reward-based learning has converged on a canonical model, under which the neurotransmitter dopamine 'stamps in' associations between situations, actions and rewards by modulating the strength of synaptic connections between neurons. However, a growing number of recent findings have placed this standard model under strain. In the present work, we draw on recent advances in artificial intelligence to introduce a new theory of reward-based learning. Here, the dopamine system trains another part of the brain, the prefrontal cortex, to operate as its own free-standing learning system. This new perspective accommodates the findings that motivated the standard model, but also deals gracefully with a wider range of observations, providing a fresh foundation for future research.

5: Non-neuronal expression of SARS-CoV-2 entry genes in the olfactory system suggests mechanisms underlying COVID-19-associated anosmia
Posted 27 Mar 2020 · 35,615 downloads · bioRxiv · neuroscience

David H. Brann, Tatsuya Tsukahara, Caleb Weinreb, Marcela Lipovsek, Koen Van den Berge, Boying Gong, Rebecca Chance, Iain C Macaulay, Hsin-jung Chou, Russell Fletcher, Diya Das, Kelly Street, Hector Roux de Bezieux, Yoon-Gi Choi, Davide Risso, Sandrine Dudoit, Elizabeth Purdom, Jonathan S Mill, Ralph Abi Hachem, Hiroaki Matsunami, Darren W. Logan, Bradley J Goldstein, Matthew S Grubb, John Ngai, Sandeep Robert Datta

Altered olfactory function is a common symptom of COVID-19, but its etiology is unknown. A key question is whether SARS-CoV-2 (CoV-2) - the causal agent in COVID-19 - affects olfaction directly by infecting olfactory sensory neurons or their targets in the olfactory bulb, or indirectly, through perturbation of supporting cells. Here we identify cell types in the olfactory epithelium and olfactory bulb that express SARS-CoV-2 cell entry molecules. Bulk sequencing revealed that mouse, non-human primate and human olfactory mucosa expresses two key genes involved in CoV-2 entry, ACE2 and TMPRSS2. However, single cell sequencing and immunostaining demonstrated ACE2 expression in support cells, stem cells, and perivascular cells; in contrast, neurons in both the olfactory epithelium and bulb did not express ACE2 message or protein. These findings suggest that CoV-2 infection of non-neuronal cell types leads to anosmia and related disturbances in odor perception in COVID-19 patients.

Competing Interest Statement: DL is an employee of Mars, Inc. None of the other authors have competing interests to declare.

6: Towards an integration of deep learning and neuroscience
Posted 13 Jun 2016 · 30,864 downloads · bioRxiv · neuroscience

Adam H Marblestone, Greg Wayne, Konrad P. Kording

Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) these cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.

7: Using DeepLabCut for 3D markerless pose estimation across species and behaviors
Posted 24 Nov 2018 · 29,331 downloads · bioRxiv · neuroscience

Tanmay Nath, Mackenzie Weygandt Mathis, An Chi Chen, Amir Patel, Andreas S. Tolias

Noninvasive behavioral tracking of animals during experiments is crucial to many scientific pursuits. Extracting the poses of animals without using markers is often essential for measuring behavioral effects in biomechanics, genetics, ethology, and neuroscience. Yet, extracting detailed poses without markers in dynamically changing backgrounds has been challenging. We recently introduced an open source toolbox called DeepLabCut that builds on a state-of-the-art human pose estimation algorithm to allow a user to train a deep neural network with limited training data to precisely track user-defined features, matching human labeling accuracy. Here we provide an updated toolbox, self-contained within a Python package, that includes new features such as graphical user interfaces and active-learning-based network refinement. Lastly, we provide a step-by-step guide for using DeepLabCut.
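
For orientation, the typical DeepLabCut workflow described here looks roughly like the following sketch. Function names follow the DeepLabCut Python package, but exact signatures and defaults vary by version, and the project name and video paths are invented.

```python
# Rough sketch of a DeepLabCut project workflow; paths and project names are
# placeholders, and options differ between package versions.
import deeplabcut

# Create a project and extract frames to label
config = deeplabcut.create_new_project("reach-task", "lab",
                                       ["videos/mouse1.mp4"], copy_videos=True)
deeplabcut.extract_frames(config, mode="automatic")
deeplabcut.label_frames(config)                  # opens the labeling GUI

# Train, evaluate, and then analyze new videos
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)
deeplabcut.analyze_videos(config, ["videos/mouse2.mp4"])
```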

8: Sex Differences In The Adult Human Brain: Evidence From 5,216 UK Biobank Participants
Posted 04 Apr 2017 · 28,022 downloads · bioRxiv · neuroscience

Stuart J. Ritchie, Simon R Cox, Xueyi Shen, Michael V. Lombardo, Lianne Reus, Clara Alloza, Mathew A Harris, Helen L Alderson, Stuart Hunter, Emma Neilson, David C. M. Liewald, Bonnie Auyeung, Heather C Whalley, Stephen M Lawrie, Catharine Gale, Mark E. Bastin, Andrew M McIntosh, Ian J Deary

Sex differences in the human brain are of interest, for example because of sex differences in the observed prevalence of psychiatric disorders and in some psychological traits. We report the largest single-sample study of structural and functional sex differences in the human brain (2,750 female, 2,466 male participants; 44-77 years). Males had higher volumes, surface areas, and white matter fractional anisotropy; females had thicker cortices and higher white matter tract complexity. There was considerable distributional overlap between the sexes. Subregional differences were not fully attributable to differences in total volume or height. There was generally greater male variance across structural measures. Functional connectome organization showed stronger connectivity for males in unimodal sensorimotor cortices, and stronger connectivity for females in the default mode network. This large-scale study provides a foundation for attempts to understand the causes and consequences of sex differences in adult brain structure and function.

9: Moonstruck sleep: Synchronization of Human Sleep with the Moon Cycle under Natural Conditions
Posted 02 Jun 2020 · 22,268 downloads · bioRxiv · neuroscience

Leandro Casiraghi, Ignacio Spiousas, Gideon Dunster, Kaitlyn McGlothlen, Eduardo Fernández-Duque, Claudia Valeggia, Horacio O. de la Iglesia

As humans transitioned from hunter-gatherer to agricultural to highly urbanized post-industrial communities, they progressively created environments that isolated sleep from its ancestral regulators, including the natural light-dark cycle. A prominent feature of this isolation is the availability of artificial light during the night, which delays the onset of sleep and shortens its duration. Before artificial light, moonlight was the only source of natural light sufficient to stimulate activity during the night; still, evidence for the modulation of sleep timing by lunar phases under natural conditions is controversial. Here we use data collected with wrist actimeters that measure daily sleep to show a clear synchronization of nocturnal sleep timing with the lunar cycle in participants who live in environments that range from a rural setting without access to electricity to a highly urbanized post-industrial one. The onset of sleep is delayed and sleep duration shortened by as much as 1.5 hours on nights that precede the full moon night. Our data suggest that moonlight may have exerted selective pressure for nocturnal activity and sleep inhibition, and that access to artificial evening light may emulate the ancestral effect of early-night moonlight.

Competing Interest Statement: The authors have declared no competing interest.

10: Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World
Posted 12 Jul 2017 · 22,143 downloads · bioRxiv · neuroscience

Jeff Hawkins, Subutai Ahmad, Yuwei Cui

Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.

11: Natural image reconstruction from brain waves: a novel visual BCI system with native feedback
Posted 01 Oct 2019 · 21,077 downloads · bioRxiv · neuroscience

Grigory Rashkov, Anatoly Bobe, Dmitry Fastovets, Maria Komarova

Here we hypothesize that observing visual stimuli of different categories triggers distinct brain states that can be decoded from noninvasive EEG recordings. We introduce an effective closed-loop BCI system that reconstructs the observed or imagined stimulus images from the co-occurring brain wave parameters. The reconstructed images are presented to the subject as visual feedback. The developed system is applicable to training BCI-naïve subjects because of the user-friendly and intuitive way the visual patterns are employed to modify brain states.

12: SARS-CoV-2 invades cognitive centers of the brain and induces Alzheimer's-like neuropathology
Posted 01 Feb 2022 · 20,513 downloads · bioRxiv · neuroscience

Wei-Bin Shen, James Logue, Penghua Yang, Lauren Baracco, Montasir Elahi, E. Albert Reece, BingBing Wang, Ling Li, Thomas Blanchard, Zhe Han, Matthew Frieman, Robert A. Rissman, Peixin Yang

Major cell entry factors of SARS-CoV-2 are present in neurons; however, the neurotropism of SARS-CoV-2 and the phenotypes of infected neurons are still unclear. Acute neurological disorders occur in many patients, and one-third of COVID-19 survivors suffer from brain diseases. Here, we show that SARS-CoV-2 invades the brains of five patients with COVID-19 and Alzheimer's, autism, frontotemporal dementia, or no underlying condition by infecting neurons and other cells in the cortex. SARS-CoV-2 induces or enhances Alzheimer's-like neuropathology with manifestations of beta-amyloid aggregation and plaque formation, tauopathy, neuroinflammation and cell death. SARS-CoV-2 infects mature but not immature neurons derived from inducible pluripotent stem cells from healthy and Alzheimer's individuals through its receptor ACE2 and facilitator neuropilin-1. SARS-CoV-2 triggers Alzheimer's-like gene programs in healthy neurons and exacerbates Alzheimer's neuropathology. A gene signature defined as an Alzheimer's infectious etiology is identified through SARS-CoV-2 infection, and silencing the top three downregulated genes in human primary neurons recapitulates the neurodegenerative phenotypes of SARS-CoV-2. Thus, SARS-CoV-2 invades the brain and activates an Alzheimer's-like program.

13: The hippocampus as a predictive map
Posted 28 Dec 2016 · 18,797 downloads · bioRxiv · neuroscience

Kimberly Stachenfeld, Matthew M. Botvinick, Samuel J Gershman

A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
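
The predictive representation the authors propose is closely related to the successor representation from reinforcement learning. Below is a minimal sketch, assuming a small discrete state space with a known transition matrix T and discount factor gamma; the paper itself works with learned, policy-dependent versions.

```python
# Minimal successor-representation sketch: M[s, s'] is the expected discounted
# future occupancy of state s' when starting from state s. The three-state
# random-walk transition matrix below is a toy example, not data from the paper.
import numpy as np

def successor_representation(T, gamma=0.95):
    """Closed form M = (I - gamma * T)^-1 for a fixed transition matrix T."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

T = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
M = successor_representation(T)
print(M.round(2))
```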

14: A connectomic study of a petascale fragment of human cerebral cortex
Posted 30 May 2021 · 18,368 downloads · bioRxiv · neuroscience

Alexander Shapson-Coe, Michal Januszewski, Daniel R Berger, Art Pope, Yuelong Wu, Tim Blakely, Richard L. Schalek, Peter Li, Shuohong Wang, Jeremy Maitin-Shepard, Neha Karlupia, Sven Dorkenwald, Evelina Sjostedt, Laramie Leavitt, Dongil Lee, Luke Bailey, Angerica Fitzmaurice, Rohin Kar, Benjamin Field, Hank Wu, Julian Wagner-Carena, David Aley, Joanna Lau, Zudi Lin, Donglai Wei, Hanspeter Pfister, Adi Peleg, Viren Jain, Jeff W Lichtman

We acquired a rapidly preserved human surgical sample from the temporal lobe of the cerebral cortex. We stained a 1 mm³ volume with heavy metals, embedded it in resin, cut more than 5000 slices at ~30 nm and imaged these sections using a high-speed multibeam scanning electron microscope. We used computational methods to render the three-dimensional structure containing 57,216 cells, hundreds of millions of neurites and 133.7 million synaptic connections. The 1.4 petabyte electron microscopy volume, the segmented cells, cell parts, blood vessels, myelin, inhibitory and excitatory synapses, and 104 manually proofread cells are available to peruse online. Many interesting and unusual features were evident in this dataset. Glia outnumbered neurons 2:1 and oligodendrocytes were the most common cell type in the volume. Excitatory spiny neurons comprised 69% of the neuronal population, and excitatory synapses also were in the majority (76%). The synaptic drive onto spiny neurons was biased more strongly toward excitation (70%) than was the case for inhibitory interneurons (48%). Despite incompleteness of the automated segmentation caused by split and merge errors, we could automatically generate (and then validate) connections between most of the excitatory and inhibitory neuron types both within and between layers. In studying these neurons we found that deep layer excitatory cell types can be classified into new subsets, based on structural and connectivity differences, and that chandelier interneurons not only innervate excitatory neuron initial segments as previously described, but also each other's initial segments. Furthermore, among the thousands of weak connections established on each neuron, there exist rarer highly powerful axonal inputs that establish multi-synaptic contacts (up to ~20 synapses) with target neurons. Our analysis indicates that these strong inputs are specific, and allow small numbers of axons to have an outsized role in the activity of some of their postsynaptic partners.

15: Suite2p: beyond 10,000 neurons with standard two-photon microscopy
Posted 30 Jun 2016 · 17,852 downloads · bioRxiv · neuroscience

Marius Pachitariu, Carsen Stringer, Mario Dipoppa, Sylvia Schröder, L. Federico Rossi, Henry Dalgleish, Matteo Carandini, Kenneth D. Harris

Two-photon microscopy of calcium-dependent sensors has enabled unprecedented recordings from vast populations of neurons. While the sensors and microscopes have matured over several generations of development, computational methods to process the resulting movies remain inefficient and can give results that are hard to interpret. Here we introduce Suite2p: a fast, accurate and complete pipeline that registers raw movies, detects active cells, extracts their calcium traces and infers their spike times. Suite2p runs on standard workstations, operates faster than real time, and recovers ~2 times more cells than the previous state-of-the-art method. Its low computational load allows routine detection of ~10,000 cells simultaneously with standard two-photon resonant-scanning microscopes. Recordings at this scale promise to reveal the fine structure of activity in large populations of neurons or large populations of subcellular structures such as synaptic boutons.
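
For reference, running the pipeline from Python looks roughly like the sketch below; the data path and parameter values are placeholders, and option names may differ across Suite2p versions.

```python
# Rough sketch of running Suite2p on a folder of raw TIFFs; the path, frame
# rate, and sensor decay time are placeholder assumptions.
import suite2p

ops = suite2p.default_ops()              # registration/detection/extraction defaults
ops["fs"] = 30.0                         # imaging frame rate (Hz)
ops["tau"] = 1.0                         # sensor decay time used for spike deconvolution

db = {"data_path": ["/data/session1"]}   # folder(s) containing the recording
output_ops = suite2p.run_s2p(ops=ops, db=db)  # writes F.npy, spks.npy, stat.npy, etc.
```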

16: 7 Tesla MRI of the ex vivo human brain at 100 micron resolution
Posted 31 May 2019 · 16,293 downloads · bioRxiv · neuroscience

Brian L Edlow, Azma Mareyam, Andreas Horn, Jonathan Polimeni, Thomas Witzel, M. Dylan Tisdall, Jean Augustinack, Jason P. Stockmann, Bram R. Diamond, Allison Stevens, Lee S. Tirrell, Rebecca D Folkerth, Lawrence L Wald, Bruce Fischl, Andre van der Kouwe

We present an ultra-high resolution MRI dataset of an ex vivo human brain specimen. The brain specimen was donated by a 58-year-old woman who had no history of neurological disease and died of non-neurological causes. After fixation in 10% formalin, the specimen was imaged on a 7 Tesla MRI scanner at 100 μm isotropic resolution using a custom-built 31-channel receive array coil. Single-echo multi-flip Fast Low-Angle SHot (FLASH) data were acquired over 100 hours of scan time (25 hours per flip angle), allowing derivation of a T1 parameter map and synthesized FLASH volumes. This dataset provides an unprecedented view of the three-dimensional neuroanatomy of the human brain. To optimize the utility of this resource, we warped the dataset into standard stereotactic space. We now distribute the dataset in both native space and stereotactic space to the academic community via multiple platforms. We envision that this dataset will have a broad range of investigational, educational, and clinical applications that will advance understanding of human brain anatomy in health and disease.

17: A better way to define and describe Morlet wavelets for time-frequency analysis
Posted 21 Aug 2018 · 16,184 downloads · bioRxiv · neuroscience

Mike X Cohen

Morlet wavelets are frequently used for time-frequency analysis of non-stationary time series data, such as neuroelectrical signals recorded from the brain. The crucial parameter of Morlet wavelets is the width of the Gaussian that tapers the sine wave. This width parameter controls the trade-off between temporal precision and frequency precision. It is typically defined as the "number of cycles," but this parameter is opaque, and often leads to uncertainty and suboptimal analysis choices, as well as being difficult to interpret and evaluate. The purpose of this paper is to present alternative formulations of Morlet wavelets in time and in frequency that allow parameterizing the wavelets directly in terms of the desired temporal and spectral smoothing (as full-width at half-maximum). This formulation provides clarity on an important data analysis parameter, and should facilitate proper analyses, reporting, and interpretation of results. MATLAB code is provided.
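
The proposed formulation amounts to specifying the Gaussian taper directly by its full-width at half-maximum rather than by a number of cycles. Below is a short Python paraphrase of that idea (the paper itself provides MATLAB code); the specific parameter values are arbitrary.

```python
# Sketch of a Morlet wavelet parameterized by temporal FWHM (a paraphrase of
# the FWHM formulation described above, not the paper's MATLAB code).
import numpy as np

def morlet_fwhm(f, h, srate, dur=4.0):
    """Complex Morlet wavelet at frequency f (Hz) with temporal FWHM h (s)."""
    t = np.arange(-dur / 2, dur / 2, 1 / srate)
    sine = np.exp(2j * np.pi * f * t)                 # complex sine at f Hz
    gauss = np.exp(-4 * np.log(2) * t**2 / h**2)      # Gaussian whose FWHM is exactly h
    return sine * gauss

# 10 Hz wavelet with 300 ms temporal smoothing, sampled at 1 kHz
w = morlet_fwhm(f=10, h=0.3, srate=1000)
```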

18: Deep Neural Networks In Computational Neuroscience
Posted 04 May 2017 · 15,419 downloads · bioRxiv · neuroscience

Tim C. Kietzmann, Patrick McClure, Nikolaus Kriegeskorte

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behaviour. At the heart of the field are its models, i.e. mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural to behavioural responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term 'neural network' suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g. visual object and auditory speech recognition) to cognitive tasks (e.g. machine translation), and on to motor control (e.g. playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviours, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.

19: Automated analysis of whole brain vasculature using machine learning
Posted 18 Apr 2019 · 15,361 downloads · bioRxiv · neuroscience

Mihail Ivilinov Todorov, Johannes C. Paetzold, Oliver Schoppe, Giles Tetteh, Velizar Efremov, Katalin Völgyi, Marco Düring, Martin Dichgans, Marie Piraud, Bjoern Menze, Ali Ertürk

Tissue clearing methods enable imaging of intact biological specimens without sectioning. However, reliable and scalable analysis of such large imaging data in 3D remains a challenge. Towards this goal, we developed a deep learning-based framework to quantify and analyze the brain vasculature, named Vessel Segmentation & Analysis Pipeline (VesSAP). Our pipeline uses a fully convolutional network with a transfer learning approach for segmentation. We systematically analyzed vascular features of whole brains, including vessel length, bifurcation points and radius at the micrometer scale, by registering them to the Allen mouse brain atlas. We report the first evidence of secondary intracranial collateral vascularization in CD1-Elite mice and find reduced vascularization in the brainstem compared to the cerebrum. VesSAP thus enables unbiased and scalable quantifications of the angioarchitecture of the cleared intact mouse brain and yields new biological insights related to vascular brain function.

20: Deep neural networks: a new framework for modelling biological vision and brain information processing
Posted 26 Oct 2015 · 15,232 downloads · bioRxiv · neuroscience

Nikolaus Kriegeskorte

Recent advances in neural network modelling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals and not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build neurobiologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
