Measuring error rates in genomic perturbation screens: gold standards for human functional genomics

By Traver Hart, Kevin R. Brown, Fabrice Sircoulomb, Robert Rottapel, Jason Moffat

Posted 17 Mar 2014
bioRxiv DOI: 10.1101/003327 (published DOI: 10.15252/msb.20145216)

Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi, and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods.
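The Bayesian classifier described above is, at its core, a likelihood-ratio test: each gene's screen data are scored against distributions fit to the gold-standard essential and nonessential reference sets. A minimal sketch of that idea, assuming Gaussian fits to log2 fold-change data (the reference values, parameter choices, and function names here are illustrative, not the authors' implementation):

```python
# Hypothetical sketch: score genes by the log Bayes factor of their
# fold-change data under distributions fit to essential vs. nonessential
# reference genes. All data and names below are illustrative.
import math

def fit_gaussian(values):
    """Fit a simple Gaussian (mean, std) to reference fold changes."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, max(var, 1e-6) ** 0.5

def log_pdf(x, mean, std):
    """Log density of a Gaussian at x."""
    return -0.5 * math.log(2 * math.pi * std * std) - (x - mean) ** 2 / (2 * std * std)

def log_bayes_factor(gene_fold_changes, ess_params, non_params):
    """Sum per-observation log-likelihood ratios; positive => likely essential."""
    return sum(
        log_pdf(x, *ess_params) - log_pdf(x, *non_params)
        for x in gene_fold_changes
    )

# Toy reference fold changes (log2): essential genes drop out of the
# population when perturbed; nonessential genes do not.
essential_ref = [-3.1, -2.8, -3.5, -2.6, -3.0]
nonessential_ref = [0.1, -0.2, 0.3, 0.0, -0.1]
ess = fit_gaussian(essential_ref)
non = fit_gaussian(nonessential_ref)

print(log_bayes_factor([-2.9, -3.2], ess, non) > 0)  # essential-like gene
print(log_bayes_factor([0.2, -0.1], ess, non) > 0)   # nonessential-like gene
```

Thresholding the log Bayes factor then yields a classification with a controllable false discovery rate, which is how reference sets like these help quantify the error rates the abstract discusses.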
