Most downloaded biology preprints, all time
in category scientific communication and education
743 results found.
75,634 downloads bioRxiv scientific communication and education
Good scientific writing is essential to career development and to the progress of science. A well-structured manuscript allows readers and reviewers to get excited about the subject matter, to understand and verify the paper's contributions, and to integrate these contributions into a broader context. However, many scientists struggle with producing high-quality manuscripts and typically get little training in paper writing. Focusing on how readers consume information, we present a set of 10 simple rules to help you get across the main idea of your paper. These rules are designed to make your paper more influential and the process of writing more efficient and pleasurable.
32,238 downloads bioRxiv scientific communication and education
Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs. Application of this straightforward protocol reveals the full extent of the skew of these distributions and the variation in citations received by published papers that is characteristic of all scientific journals. Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF. We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
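For illustration, a minimal sketch in Python of the kind of summary the protocol produces, using invented per-article citation counts (the real method works from full citation databases); it shows why a mean-based metric like the JIF says little about individual papers.

```python
from statistics import mean, median

# Invented per-article citation counts for one journal over a two-year
# window (real distributions are far larger and more skewed).
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 15, 40, 95, 210]

# The JIF is essentially the mean of this distribution, so a handful of
# highly cited papers can dominate it.
jif = mean(citations)
med = median(citations)
below_mean = sum(c < jif for c in citations) / len(citations)

print(f"mean (JIF-like): {jif:.1f}")
print(f"median:          {med:.1f}")
print(f"share of papers cited less than the mean: {below_mean:.0%}")
```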
21,747 downloads bioRxiv scientific communication and education
We wish to answer this question: if you observe a “significant” P value after doing a single unbiased experiment, what is the probability that your result is a false positive? The weak evidence provided by P values between 0.01 and 0.05 is explored by exact calculations of false positive risks. When you observe P = 0.05, the odds in favour of there being a real effect (given by the likelihood ratio) are about 3:1. This is far weaker evidence than the odds of 19 to 1 that might, wrongly, be inferred from the P value. And if you want to limit the false positive risk to 5%, you would have to assume that you were 87% sure that there was a real effect before the experiment was done. If you observe P = 0.001 in a well-powered experiment, it gives a likelihood ratio of almost 100:1 odds on there being a real effect. That would usually be regarded as conclusive, but the false positive risk would still be 8% if the prior probability of a real effect was only 0.1. And, in this case, if you wanted to achieve a false positive risk of 5% you would need to observe P = 0.00045. It is recommended that the terms “significant” and “non-significant” should never be used. Rather, P values should be supplemented by specifying the prior probability that would be needed to produce a specified (e.g. 5%) false positive risk. It may also be helpful to specify the minimum false positive risk associated with the observed P value. Despite decades of warnings, many areas of science still insist on labelling a result of P < 0.05 as “statistically significant”. This practice must account for a substantial part of the lack of reproducibility in some areas of science. And this is before you get to the many other well-known problems, like multiple comparisons, lack of randomisation and P-hacking. Science is endangered by statistical misunderstanding, and by university presidents and research funders who impose perverse incentives on scientists.
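The arithmetic behind a false positive risk can be sketched in a few lines. Note this uses the simple “p-less-than” approximation, not the exact “p-equals” calculation behind the abstract's figures, and the power and prior values below are assumptions for illustration.

```python
def false_positive_risk(alpha, power, prior):
    """Crude 'p-less-than' approximation of the false positive risk:
    the probability that a significant result is a false positive,
    given the significance threshold, the power of the test, and the
    prior probability that a real effect exists."""
    false_pos = alpha * (1 - prior)   # significant result, no real effect
    true_pos = power * prior          # significant result, real effect
    return false_pos / (false_pos + true_pos)

# With a 50:50 prior and 80% power (both assumptions), P < 0.05 gives:
print(f"{false_positive_risk(0.05, 0.80, 0.5):.1%}")  # ~5.9%
# With a sceptical prior of 0.1, the risk rises sharply:
print(f"{false_positive_risk(0.05, 0.80, 0.1):.1%}")  # ~36%
```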
16,632 downloads bioRxiv scientific communication and education
Jason D. Fernandes, Sarvenaz Sarabipour, Christopher T. Smith, Natalie M. Niemi, Nafisa M. Jadavji, Ariangela J. Kozik, Alex S. Holehouse, Vikas Pejaver, Orsolya Symmons, Alexandre W. Bisson Filho, Amanda Haage
Applying for a faculty position is a critical phase of many postdoctoral careers, but most postdoctoral researchers in STEM fields enter the academic job market with little knowledge of the process and expectations. A lack of data has made it difficult for applicants to assess their qualifications relative to the general applicant pool and for institutions to develop effective hiring policies. We analyzed responses to a survey of faculty job applicants between May 2018 and May 2019. We establish various background scholarly metrics for a typical faculty applicant and present an analysis of the interplay between those metrics and hiring outcomes. Traditional benchmarks of a positive research track record above a certain threshold of qualifications were unable to completely differentiate applicants with and without offers. Our findings suggest that there is no single clear path to a faculty job offer and that metrics such as career transition awards and publications in high impact factor journals were neither necessary nor sufficient for landing a faculty position. The applicants perceived the process as unnecessarily stressful, time-consuming, and largely lacking in feedback, irrespective of a successful outcome. Our findings emphasize the need to improve the transparency of the faculty job application process. In addition, we hope these and future data will help empower trainees to enter the academic job market with clearer expectations and improved confidence.
10,666 downloads bioRxiv scientific communication and education
This report summarises responses gathered from 365 principal investigators of academic laboratories who started their independent positions in the UK within the six years up to 2018. We find that too many new investigators express frustration and little optimism about the future. These data also reveal that many of these individuals lack the support required to make a successful transition to independence, and that simple measures could be put in place by both funders and universities to better support these early career researchers. We use these data to make recommendations both for good practice and for policy changes that would significantly improve conditions for those currently finding independence challenging. We find that some new investigators face significant obstacles when building momentum and hiring a research team, particularly in securing access to PhD students. We also find that in some important areas, such as starting salaries, significant gender differences persist that cannot be explained by seniority. Our data also underline the importance of support networks, within and outside the department, and the positive influence of good mentorship through this difficult career stage.
10,403 downloads bioRxiv scientific communication and education
Scientific publications enable results and ideas to be transmitted throughout the scientific community. The number and type of journal publications also have become the primary criteria used in evaluating career advancement. Our analysis suggests that publication practices have changed considerably in the life sciences over the past thirty years. More experimental data is now required for publication, and the average time required for graduate students to publish their first paper has increased and is approaching the desirable duration of Ph.D. training. Since publication is generally a requirement for career progression, schemes to reduce the time of graduate student and postdoctoral training may be difficult to implement without also considering new mechanisms for accelerating communication of their work. The increasing time to publication also delays potential catalytic effects that ensue when many scientists have access to new information. The time has come for life scientists, funding agencies, and publishers to discuss how to communicate new findings in a way that best serves the interests of the public and the scientific community.
9,910 downloads bioRxiv scientific communication and education
Despite their recognized limitations, bibliometric assessments of scientific productivity have been widely adopted. We describe here an improved method that makes novel use of the co-citation network of each article to field-normalize the number of citations it has received. The resulting Relative Citation Ratio is article-level and field-independent, and provides an alternative to the invalid practice of using Journal Impact Factors to identify influential papers. To illustrate one application of our method, we analyzed 88,835 articles published between 2003 and 2010, and found that the National Institutes of Health awardees who authored those papers occupy relatively stable positions of influence across all disciplines. We demonstrate that the values generated by this method strongly correlate with the opinions of subject matter experts in biomedical research, and suggest that the same approach should be generally applicable to articles published in all areas of science. A beta version of iCite, our web tool for calculating Relative Citation Ratios of articles listed in PubMed, is available at https://icite.od.nih.gov .
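The core ratio is simple once the field normalization is in hand. A minimal sketch follows, assuming the hard part, deriving an expected citation rate from the article's co-citation network, has already been done (the paper also benchmarks this against NIH-funded articles, which is omitted here); all values are invented.

```python
from statistics import mean

def relative_citation_ratio(article_citation_rate, cocitation_network_rates):
    """RCR: an article's citations per year divided by the expected
    citation rate of its field, where the field is approximated by the
    articles co-cited alongside it rather than by journal."""
    field_citation_rate = mean(cocitation_network_rates)
    return article_citation_rate / field_citation_rate

# An article cited 12 times/year whose co-citation neighbourhood
# averages 4 citations/year sits well above its field average.
print(relative_citation_ratio(12.0, [3.5, 4.0, 4.5, 4.0]))  # 3.0
```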
9,212 downloads bioRxiv scientific communication and education
Clarity and accuracy of reporting are fundamental to the scientific process. The understandability of written language can be estimated using readability formulae. Here, in a corpus consisting of 707,452 scientific abstracts published between 1881 and 2015 from 122 influential biomedical journals, we show that the readability of science is steadily decreasing. Further, we demonstrate that this trend is indicative of a growing usage of general scientific jargon. These results are concerning for scientists and for the wider public, as they impact both the reproducibility and accessibility of research findings.
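One of the standard readability formulae is the Flesch Reading Ease score; the sketch below implements it with a crude syllable heuristic (the study's exact implementation may differ).

```python
import re

def count_syllables(word):
    """Very rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text.
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("The cat sat on the mat."))  # easy: ~116
print(flesch_reading_ease(
    "Intracellular phosphorylation cascades modulate transcriptional "
    "heterogeneity in differentiated lymphocyte subpopulations."))  # hard
```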
8,934 downloads bioRxiv scientific communication and education
Background: Evidence-based clinical practice relies on unbiased reporting of negative results. Meta-analysis of drug safety and efficacy across many clinical trials is difficult given the unconstrained nature of reasons that are provided to ClinicalTrials.gov to explain clinical trial terminations. Methods and Findings: We scanned all trials in ClinicalTrials.gov marked with the “terminated” status (N=3122), meaning the trial had been stopped before the scheduled end date. Under the current reporting framework, any number of reasons may be given for termination, and these need not conform to a controlled vocabulary. Here we develop a controlled vocabulary for trial termination, and map each terminated trial to as many as three vocabulary terms. Mapping to this “ontology of termination” allows further analysis and conclusions. First, we identify the subset of terminated trials that ended citing safety concerns (6.2%) or failure to establish efficacy (10.8%), and were further able to stratify these rates across trials of different phases. Second, we examine termination reasons where a stricter data model could have preserved more evidentiary value, either because the data model was misused (7.6%) or because the given reason left unclear whether the decision to terminate was based on analysis of the data (74.9%, with 20.4% mentioning a decision-maker that may have had access to the data). Third, we show that imposing a controlled vocabulary of reasons for termination would avoid ambiguity and improve the evidentiary value of clinical trials. Conclusions: We encourage wider use of an “ontology of termination” and propose four questions that should be posed on trial termination. These simple steps would promote transparency and enable ready access to negative trial results for meta-analysis.
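A controlled-vocabulary mapping of this kind can be prototyped as simple keyword rules. The terms and patterns below are invented for illustration and are not the paper's actual ontology.

```python
# Invented example terms; the paper's actual ontology is richer.
TERMINATION_VOCAB = {
    "Safety": ["safety", "adverse event", "toxicity"],
    "Efficacy": ["efficacy", "futility", "no benefit"],
    "Accrual": ["enrollment", "accrual", "recruitment"],
    "Business": ["funding", "sponsor decision", "business"],
}

def map_termination_reason(free_text, max_terms=3):
    """Map a free-text termination reason to up to three vocabulary terms."""
    text = free_text.lower()
    hits = [term for term, keywords in TERMINATION_VOCAB.items()
            if any(k in text for k in keywords)]
    return hits[:max_terms] or ["Unclassifiable"]

print(map_termination_reason("Terminated due to slow accrual and loss of funding"))
# ['Accrual', 'Business']
```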
8,597 downloads bioRxiv scientific communication and education
Researchers in the life sciences are posting work to preprint servers at an unprecedented and increasing rate, sharing papers online before (or instead of) publication in peer-reviewed journals. Though the increasing acceptance of preprints is driving policy changes for journals and funders, there is little information about their usage. Here, we collected and analyzed data on all 37,648 preprints uploaded to bioRxiv.org, the largest biology-focused preprint server, in its first five years. We find preprints are being downloaded more than ever before (1.1 million tallied in October 2018 alone) and that the rate of preprints being posted has increased to a recent high of 2,100 per month. We also find that two-thirds of preprints posted before 2017 were later published in peer-reviewed journals, and find a relationship between journal impact factor and preprint downloads. Lastly, we developed Rxivist.org, a web application providing multiple ways of interacting with preprint metadata.
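The preprint metadata behind listings like this one can be fetched programmatically. A sketch follows; the endpoint, parameter names, and response keys assume the Rxivist v1 API as documented around 2019 and should be checked against the current docs.

```python
import requests

# Assumed v1 endpoint and parameters; category slug is also an assumption.
resp = requests.get(
    "https://api.rxivist.org/v1/papers",
    params={
        "category": "scientific-communication-and-education",
        "metric": "downloads",
        "timeframe": "alltime",
    },
    timeout=30,
)
resp.raise_for_status()

# Print the top five entries, defensively, since field names may vary.
for paper in resp.json().get("results", [])[:5]:
    print(paper.get("downloads"), paper.get("title"))
```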
7,911 downloads bioRxiv scientific communication and education
The fairness of scholarly peer review has been challenged by evidence of disparities in publication outcomes based on author demographic characteristics. To assess this, we conducted an exploratory analysis of peer review outcomes of 23,876 initial submissions and 7,192 full submissions that were submitted to the biosciences journal eLife between 2012 and 2017. Women and authors from nations outside of North America and Europe were underrepresented both as gatekeepers (editors and peer reviewers) and authors. We found evidence of a homophilic relationship between the demographics of the gatekeepers and authors and the outcome of peer review; that is, there were higher rates of acceptance in the case of gender and country homophily. The acceptance rate for manuscripts with male last authors was seven percent, or 3.5 percentage points, greater than for female last authors (95% CI = [0.5, 6.4]); this gender inequity was greatest, at nine percent or about 4.8 percentage points (95% CI = [0.3, 9.1]), when the team of reviewers was all male; this difference was smaller and not significantly different for mixed-gender reviewer teams. Homogeny between the countries of the gatekeeper and the corresponding author was also associated with higher acceptance rates for many countries. To test for the persistence of these effects after controlling for potentially confounding variables, we conducted a logistic regression including document and author metadata. Disparities in acceptance rates associated with gender and country of affiliation and the homophilic associations remained. We conclude with a discussion of mechanisms that could contribute to this effect, directions for future research, and policy implications. Code and anonymized data have been made available at <https://github.com/murrayds/elife-analysis>.

Author summary: Peer review, the primary method by which scientific work is evaluated, is ideally a fair and equitable process in which scientific work is judged solely on its own merit. However, the integrity of peer review has been called into question based on evidence that outcomes often differ between male and female authors, and for authors in different countries. We investigated such disparities at the biosciences journal eLife by analyzing the demographics of authors and gatekeepers (editors and peer reviewers), and peer review outcomes of all submissions between 2012 and 2017. Outcomes were more favorable for male authors and those affiliated with institutions in North America and Europe; these groups were also over-represented among gatekeepers. There was evidence that peer review outcomes were influenced by homophily, a preference of gatekeepers for manuscripts from authors with shared characteristics. We discuss mechanisms that could contribute to this effect, directions for future research, and policy implications.
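The kind of regression described can be sketched with standard tools. The column names and toy data below are invented stand-ins, not the study's anonymized dataset, and the model is reduced to the homophily interaction alone.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the study's anonymized submissions.
df = pd.DataFrame({
    "accepted":           [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1],
    "male_last_author":   [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0],
    "all_male_reviewers": [1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1],
})

# Logistic regression of acceptance on author gender, reviewer-team
# composition, and their interaction (the homophily term).
model = smf.logit(
    "accepted ~ male_last_author * all_male_reviewers", data=df
).fit(disp=False)
print(model.summary())
```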
7,818 downloads bioRxiv scientific communication and education
Inaccurate data in scientific papers can result from honest error or intentional falsification. This study attempted to determine the percentage of published papers containing inappropriate image duplication, a specific type of inaccurate data. The images from a total of 20,621 papers in 40 scientific journals from 1995 to 2014 were visually screened. Overall, 3.8% of published papers contained problematic figures, with at least half exhibiting features suggestive of deliberate manipulation. The prevalence of papers with problematic images rose markedly during the past decade. Additional papers written by authors of papers with problematic images had an increased likelihood of containing problematic images as well. As this analysis focused only on one type of data, it is likely that the actual prevalence of inaccurate data in the published literature is higher. The marked variation in the frequency of problematic images among journals suggests that journal practices, such as pre-publication image screening, influence the quality of the scientific literature.
7,201 downloads bioRxiv scientific communication and education
The quality of evidence in meta-analysis of randomized controlled trials is the degree to which the estimated effect represents the "truth." Current approaches to assessing the quality of evidence focus on trial design and methods. I describe a new quality of evidence index composed of four sub-indexes that measure pre-registration, independent replication, data availability, and trial design and methods, respectively. This index is systematic, objective, and quantitative. I illustrate the index with an empirical example and provide a spreadsheet for easy calculation.
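The composite lends itself to a simple weighted sum. In the sketch below the four sub-index names come from the abstract, but the 0-to-1 scoring scale and equal weighting are assumptions for illustration, not the author's actual spreadsheet.

```python
def quality_of_evidence_index(pre_registration, replication,
                              data_availability, design_and_methods):
    """Average four sub-indexes, each scored on a 0-1 scale
    (scale and equal weights are illustrative assumptions)."""
    subs = [pre_registration, replication, data_availability,
            design_and_methods]
    if not all(0.0 <= s <= 1.0 for s in subs):
        raise ValueError("sub-indexes must be in [0, 1]")
    return sum(subs) / len(subs)

# A trial that is pre-registered and shares data, but has no
# independent replication and middling methods:
print(quality_of_evidence_index(1.0, 0.0, 1.0, 0.5))  # 0.625
```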
7,176 downloads bioRxiv scientific communication and education
Background: Insufficient research is a major impediment to growth, development and advancement of health in Africa. Africa produces less than 1% of global research output. Meanwhile, African countries face some of the toughest challenges worldwide, most of which can only be tackled through robust and efficient research. Addressing the barriers to conducting research in Africa is a step towards improving research capacity and output. This study aimed to identify the key challenges affecting research practice and output in Africa and to highlight priority areas for improvement. Methods: A cross-sectional survey was administered through an online questionnaire, including participants from six countries in Sub-Saharan Africa. Participants included research professionals, research students, research groups and academics. Results: A total of 424 participants responded to this survey. The ability to conduct and produce high-quality research was seen to be influenced by multiple factors, most of which were related to the research environment in African countries. Priority areas for improvement included providing more training, raising awareness of the importance of research in Africa, encouraging governments to commit to research and increasing collaboration between researchers in Africa. Conclusion: The conditions under which research is done in Africa are severely flawed and do not encourage engagement in research, or continuity of research activity. African governments need to develop initiatives that accelerate and support research and research-based education in Africa, in order to build a solid foundation for research, increase research capacity, and enable institutions to provide valuable training and develop sustainable research opportunities in Africa.
6,683 downloads bioRxiv scientific communication and education
This article presents a practical roadmap for scholarly data repositories to implement data citation in accordance with the Joint Declaration of Data Citation Principles, a synopsis and harmonization of the recommendations of major science policy bodies. The roadmap was developed by the Repositories Expert Group, as part of the Data Citation Implementation Pilot (DCIP) project, an initiative of FORCE11.org and the NIH BioCADDIE (https://biocaddie.org) program. The roadmap makes 11 specific recommendations, grouped into three phases of implementation: a) required steps needed to support the Joint Declaration of Data Citation Principles, b) recommended steps that facilitate article/data publication workflows, and c) optional steps that further improve data citation support provided by data repositories.
6,506 downloads bioRxiv scientific communication and education
Background: Previous research shows that men often receive more research funding than women, but does not provide empirical evidence as to why this occurs. In 2014, the Canadian Institutes of Health Research (CIHR) created a natural experiment by dividing all investigator-initiated funding into two new grant programs: one with and one without an explicit review focus on the caliber of the principal investigator. Methods: We analyzed application success among 23,918 grant applications from 7,093 unique principal investigators in a 5-year natural experiment across all investigator-initiated CIHR grant programs in 2011-2016. We used Generalized Estimating Equations to account for multiple applications by the same applicant and an interaction term between each principal investigator's self-reported sex and grant programs to compare success rates between male and female applicants under different review criteria. Results: The overall grant success rate across all competitions was 15.8%. After adjusting for age and research domain, the predicted probability of funding success in traditional programs was 0.9 percentage points higher for male than for female principal investigators (OR 0.934, 95% CI 0.854-1.022). In the new program focused on the proposed science, the gap was 0.9 percentage points in favour of male principal investigators (OR 0.998, 95% CI 0.794-1.229). In the new program with an explicit review focus on the caliber of the principal investigator, the gap was 4.0 percentage points in favour of male principal investigators (OR 0.705, 95% CI 0.519-0.960). Interpretation: This study suggests gender gaps in grant funding are attributable to less favourable assessments of women as principal investigators, not differences in assessments of the quality of science led by women. We propose ways for funders to avoid allowing gender bias to influence research funding. Funding: This study was unfunded.
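The modelling approach described can be sketched with standard tools. The toy data and column names below are invented stand-ins for the CIHR applications, and the model is reduced to the sex-by-program interaction alone.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data standing in for the grant applications; columns are invented.
df = pd.DataFrame({
    "funded":   [1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0],
    "female":   [0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "pi_focus": [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    "pi_id":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8],
})

# GEE logistic model with an exchangeable working correlation to account
# for repeated applications by the same principal investigator, plus a
# sex-by-program interaction term.
model = smf.gee(
    "funded ~ female * pi_focus",
    groups="pi_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(model.summary())
```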
6,503 downloads bioRxiv scientific communication and education
Functional neuroimaging techniques have transformed our ability to probe the neurobiological basis of behaviour and are increasingly being applied by the wider neuroscience community. However, concerns have recently been raised that the conclusions drawn from some human neuroimaging studies are either spurious or not generalizable. Problems such as low statistical power, flexibility in data analysis, software errors, and lack of direct replication apply to many fields, but perhaps particularly to fMRI. Here we discuss these problems, outline current and suggested best practices, and describe how we think the field should evolve to produce the most meaningful answers to neuroscientific questions.
5,514 downloads bioRxiv scientific communication and education
Understanding the growth of open access (OA) is important for deciding funder policy, subscription allocation, and infrastructure planning. This study analyses the number of papers available as OA over time. The model includes both OA embargo data and the relative growth rates of different OA types over time, based on the OA status of 70 million journal articles published between 1950 and 2019. The study also looks at article usage data, analyzing the proportion of views to OA articles versus views to closed-access articles. Signal processing techniques are used to model how these viewership patterns change over time. Viewership data is based on 2.8 million uses of the Unpaywall browser extension in July 2019. We found that Green, Gold, and Hybrid papers receive more views than their Closed or Bronze counterparts, particularly Green papers made available within a year of publication. We also found that the proportion of Green, Gold, and Hybrid articles is growing most quickly. In 2019:
- 31% of all journal articles are available as OA.
- 52% of article views are to OA articles.
Given existing trends, we estimate that by 2025:
- 44% of all journal articles will be available as OA.
- 70% of article views will be to OA articles.
The declining relevance of closed access articles is likely to change the landscape of scholarly communication in the years to come.
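A toy version of this kind of extrapolation is shown below. Only the 2019 endpoint (31%) comes from the abstract; the earlier yearly values are invented, and a straight-line trend stands in for the paper's far richer model, which folds in embargoes and per-type growth rates.

```python
import numpy as np

# Invented yearly OA proportions ending at the abstract's 2019 figure.
years = np.array([2010, 2013, 2016, 2019], dtype=float)
oa_share = np.array([0.20, 0.24, 0.28, 0.31])

# Fit a straight-line trend and extrapolate to 2025.
slope, intercept = np.polyfit(years, oa_share, 1)
print(f"projected OA share in 2025: {slope * 2025 + intercept:.0%}")
```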
5,386 downloads bioRxiv scientific communication and education
Current attempts at methodological reform in the sciences come in response to an overall lack of rigor in methodological and scientific practices in the experimental sciences. However, most methodological reform attempts suffer from mistakes and over-generalizations similar to the ones they aim to address. We argue that this can be attributed in part to a lack of formalism and first principles. Considering the costs of allowing false claims to become canonized, we argue for formal statistical rigor and scientific nuance in methodological reform. To attain this rigor and nuance, we propose a five-step formal approach for solving methodological problems. To illustrate the use and benefits of such formalism, we present a formal statistical analysis of three popular claims in the metascientific literature: (a) that reproducibility is the cornerstone of science; (b) that data must not be used twice in any analysis; and (c) that exploratory projects imply poor statistical practice. We show how our formal approach can inform and shape debates about such methodological claims.
5,219 downloads bioRxiv scientific communication and education
The present study analyzed 960 papers published in Molecular and Cellular Biology (MCB) from 2009-2016 and found 59 (6.1%) to contain inappropriately duplicated images. The 59 instances of inappropriate image duplication led to 42 corrections, 5 retractions and 12 instances in which no action was taken. Our experience suggests that the majority of inappropriate image duplications result from errors during figure preparation that can be remedied by correction. Nevertheless, ~10% of papers with inappropriate image duplications in MCB were retracted. If this proportion is representative, then as many as 35,000 papers in the literature are candidates for retraction due to image duplication. The resolution of inappropriate image duplication concerns after publication required an average of 6 h of journal staff time per published paper. MCB instituted a pilot program to screen images of accepted papers prior to publication that identified 12 manuscripts (14.5% out of 83) with image concerns in two months. The screening and correction of papers before publication required an average of 30 min of staff time per problematic paper. Image screening can identify papers with problematic images prior to publication, reduces post-publication problems and requires significantly less staff time than the correction of problems after publication.
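Automated pre-screening along these lines is often prototyped with perceptual hashing. The sketch below (using the third-party imagehash library) is one such approach, offered as an illustration rather than the visual screening method the study actually used.

```python
from pathlib import Path
from PIL import Image
import imagehash  # pip install imagehash pillow

def find_near_duplicates(figure_dir, max_distance=5):
    """Flag figure pairs whose perceptual hashes are within a small
    Hamming distance, as a cheap first pass before human review."""
    hashes = {p: imagehash.phash(Image.open(p))
              for p in Path(figure_dir).glob("*.png")}
    paths = sorted(hashes)
    return [(a.name, b.name)
            for i, a in enumerate(paths)
            for b in paths[i + 1:]
            if hashes[a] - hashes[b] <= max_distance]

# "figures/" is a hypothetical directory of extracted figure panels.
for pair in find_near_duplicates("figures/"):
    print("possible duplication:", *pair)
```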
- 27 Nov 2020: The website and API now include results pulled from medRxiv as well as bioRxiv.
- 18 Dec 2019: We're pleased to announce PanLingua, a new tool that enables you to search for machine-translated bioRxiv preprints using more than 100 different languages.
- 21 May 2019: PLOS Biology has published a community page about Rxivist.org and its design.
- 10 May 2019: The paper analyzing the Rxivist dataset has been published at eLife.
- 1 Mar 2019: We now have summary statistics about bioRxiv downloads and submissions.
- 8 Feb 2019: Data from Altmetric is now available on the Rxivist details page for every preprint. Look for the "donut" under the download metrics.
- 30 Jan 2019: preLights has featured the Rxivist preprint and written about our findings.
- 22 Jan 2019: Nature just published an article about Rxivist and our data.
- 13 Jan 2019: The Rxivist preprint is live!