Dataset columns:
  title: string, 6 to 470 characters
  doi: string, 14 to 25 characters
  authors: string, 5 to 600 characters
  abstract: string, 0 to 65.5k characters
  date: string, 10 characters
  journal: string, 1 distinct value
U2AF1 mutations alter splice site recognition in hematological malignancies
10.1101/001107
Janine O Ilagan;Aravind Ramakrishnan;Brian Hayes;Michele E Murphy;Ahmad S Zebari;Philip Bradley;Robert K Bradley;
Whole-exome sequencing studies have identified common mutations affecting genes encoding components of the RNA splicing machinery in hematological malignancies. Here, we sought to determine how mutations affecting the 3' splice site recognition factor U2AF1 alter its normal role in RNA splicing. We find that U2AF1 mutations influence the similarity of splicing programs in leukemias, but do not give rise to widespread splicing failure. U2AF1 mutations cause differential splicing of hundreds of genes, affecting biological pathways such as DNA methylation (DNMT3B), X chromosome inactivation (H2AFY), the DNA damage response (ATR, FANCA), and apoptosis (CASP8). We show that U2AF1 mutations alter the preferred 3' splice site motif in patients, in cell culture, and in vitro. Mutations affecting the first and second zinc fingers give rise to different alterations in splice site preference and largely distinct downstream splicing programs. These allele-specific effects are consistent with a computationally predicted model of U2AF1 in complex with RNA. Our findings suggest that U2AF1 mutations contribute to pathogenesis by causing quantitative changes in splicing that affect diverse cellular pathways, and give insight into the normal function of U2AF1's zinc finger domains.
2013-12-03
U2AF1 mutations alter splice site recognition in hematological malignancies
10.1101/001107
Janine O Ilagan;Aravind Ramakrishnan;Brian Hayes;Michele E Murphy;Ahmad S Zebari;Philip Bradley;Robert K Bradley;
Whole-exome sequencing studies have identified common mutations affecting genes encoding components of the RNA splicing machinery in hematological malignancies. Here, we sought to determine how mutations affecting the 3' splice site recognition factor U2AF1 alter its normal role in RNA splicing. We find that U2AF1 mutations influence the similarity of splicing programs in leukemias, but do not give rise to widespread splicing failure. U2AF1 mutations cause differential splicing of hundreds of genes, affecting biological pathways such as DNA methylation (DNMT3B), X chromosome inactivation (H2AFY), the DNA damage response (ATR, FANCA), and apoptosis (CASP8). We show that U2AF1 mutations alter the preferred 3' splice site motif in patients, in cell culture, and in vitro. Mutations affecting the first and second zinc fingers give rise to different alterations in splice site preference and largely distinct downstream splicing programs. These allele-specific effects are consistent with a computationally predicted model of U2AF1 in complex with RNA. Our findings suggest that U2AF1 mutations contribute to pathogenesis by causing quantitative changes in splicing that affect diverse cellular pathways, and give insight into the normal function of U2AF1's zinc finger domains.
2014-06-28
U2AF1 mutations alter splice site recognition in hematological malignancies
10.1101/001107
Janine O Ilagan;Aravind Ramakrishnan;Brian Hayes;Michele E Murphy;Ahmad S Zebari;Philip Bradley;Robert K Bradley;
Whole-exome sequencing studies have identified common mutations affecting genes encoding components of the RNA splicing machinery in hematological malignancies. Here, we sought to determine how mutations affecting the 3' splice site recognition factor U2AF1 alter its normal role in RNA splicing. We find that U2AF1 mutations influence the similarity of splicing programs in leukemias, but do not give rise to widespread splicing failure. U2AF1 mutations cause differential splicing of hundreds of genes, affecting biological pathways such as DNA methylation (DNMT3B), X chromosome inactivation (H2AFY), the DNA damage response (ATR, FANCA), and apoptosis (CASP8). We show that U2AF1 mutations alter the preferred 3' splice site motif in patients, in cell culture, and in vitro. Mutations affecting the first and second zinc fingers give rise to different alterations in splice site preference and largely distinct downstream splicing programs. These allele-specific effects are consistent with a computationally predicted model of U2AF1 in complex with RNA. Our findings suggest that U2AF1 mutations contribute to pathogenesis by causing quantitative changes in splicing that affect diverse cellular pathways, and give insight into the normal function of U2AF1's zinc finger domains.
2014-09-29
Effect of glycogen synthase kinase-3 inactivation on mouse mammary gland development and oncogenesis
10.1101/001321
Joanna Dembowy;Hibret A Adissu;Jeff C Liu;Eldad Zacksenhaus;James Robert Woodgett;
Many components of the Wnt/β-catenin signaling pathway have critical functions in mammary gland development and tumor formation, yet the contribution of glycogen synthase kinase-3 (GSK-3α and GSK-3β) to mammopoiesis and oncogenesis is unclear. Here, we report that WAP-Cre-mediated deletion of GSK-3 in the mammary epithelium results in activation of Wnt/β-catenin signaling and induces mammary intraepithelial neoplasia that progresses to squamous transdifferentiation and development of adenosquamous carcinomas at 6 months. To uncover possible β-catenin-independent activities of GSK-3, we generated mammary-specific knock-outs of GSK-3 and β-catenin. Squamous transdifferentiation of the mammary epithelium was largely attenuated; however, mammary epithelial cells lost the ability to form mammospheres, suggesting perturbation of stem cell properties unrelated to loss of β-catenin alone. At 10 months, adenocarcinomas that developed in glands lacking GSK-3 and β-catenin displayed elevated levels of γ-catenin/plakoglobin as well as activation of the Hedgehog and Notch pathways. Collectively, these results establish the two isoforms of GSK-3 as essential integrators of multiple developmental signals that act to maintain normal mammary gland function and suppress tumorigenesis.
2013-12-10
Effect of glycogen synthase kinase-3 inactivation on mouse mammary gland development and oncogenesis
10.1101/001321
Joanna Dembowy;Hibret A Adissu;Jeff C Liu;Eldad Zacksenhaus;James Robert Woodgett;
Many components of the Wnt/β-catenin signaling pathway have critical functions in mammary gland development and tumor formation, yet the contribution of glycogen synthase kinase-3 (GSK-3α and GSK-3β) to mammopoiesis and oncogenesis is unclear. Here, we report that WAP-Cre-mediated deletion of GSK-3 in the mammary epithelium results in activation of Wnt/β-catenin signaling and induces mammary intraepithelial neoplasia that progresses to squamous transdifferentiation and development of adenosquamous carcinomas at 6 months. To uncover possible β-catenin-independent activities of GSK-3, we generated mammary-specific knock-outs of GSK-3 and β-catenin. Squamous transdifferentiation of the mammary epithelium was largely attenuated; however, mammary epithelial cells lost the ability to form mammospheres, suggesting perturbation of stem cell properties unrelated to loss of β-catenin alone. At 10 months, adenocarcinomas that developed in glands lacking GSK-3 and β-catenin displayed elevated levels of γ-catenin/plakoglobin as well as activation of the Hedgehog and Notch pathways. Collectively, these results establish the two isoforms of GSK-3 as essential integrators of multiple developmental signals that act to maintain normal mammary gland function and suppress tumorigenesis.
2014-07-26
Power-law Null Model for Bystander Mutations in Cancer
10.1101/001651
Loes Olde Loohuis;Andreas Witzel;Bud Mishra;
In this paper we study Copy Number Variation (CNV) data. The underlying process generating CNV segments is generally assumed to be memory-less, giving rise to an exponential distribution of segment lengths. We provide evidence from cancer patient data which suggests that this generative model is too simplistic, and that segment lengths instead follow a power-law distribution. We conjecture a simple preferential attachment generative model that provides the basis for the observed power-law distribution. We then show how an existing statistical method for detecting cancer driver genes can be improved by incorporating the power-law distribution in the null model.
2014-01-02
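As an illustration of the comparison the abstract above describes, the sketch below fits an exponential model and a continuous power-law (Pareto) model to a vector of segment lengths by maximum likelihood and reports which fits better. It is a minimal sketch under generic assumptions (continuous Pareto form, xmin taken as the smallest segment, synthetic data), not the authors' statistical method or their driver-gene test.

```python
import numpy as np

def loglik_exponential(lengths):
    """Maximum-likelihood log-likelihood of an exponential model for segment lengths."""
    x = np.asarray(lengths, dtype=float)
    lam = 1.0 / x.mean()
    return np.sum(np.log(lam) - lam * x)

def loglik_powerlaw(lengths, xmin=None):
    """Maximum-likelihood log-likelihood of a continuous power law (Pareto) above xmin."""
    x = np.asarray(lengths, dtype=float)
    if xmin is None:
        xmin = x.min()
    x = x[x >= xmin]
    alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))  # Hill-type MLE of the exponent
    loglik = np.sum(np.log((alpha - 1.0) / xmin) - alpha * np.log(x / xmin))
    return loglik, alpha

# Heavier-tailed synthetic segment lengths; a positive difference favours the power law.
rng = np.random.default_rng(0)
segment_lengths = 1.0 + rng.pareto(1.5, size=10_000)
ll_pl, alpha_hat = loglik_powerlaw(segment_lengths)
ll_exp = loglik_exponential(segment_lengths)
print(f"alpha = {alpha_hat:.2f}, log-likelihood difference = {ll_pl - ll_exp:.1f}")
```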
Cycling Physicochemical Gradients as ‘Evolutionary Drivers’: From Complex Matter to Complex Living States
10.1101/000786
Jan Spitzer;
Highlights
- Biological complexity cannot be reduced to chemistry and physics
- Complex living states are: multicomponent, multiphase, crowded, and re-emergent
- Living states arise naturally only by the action of cycling physicochemical gradients
- Bacterial cells can be modeled as viscoelastic capacitors with sol-gel transitions
- Evolving living states can be investigated via biotic soup experimentation
- Darwinian evolution arises from the process errors of the cell cycle
- Synthetic biology heralds the transition from unintentional Darwinian evolution to intentional anthropic evolution

Abstract
Within the overlap of physics, chemistry and biology, complex matter becomes more deeply understood when high-level mathematics converts regularities of experimental data into scientific laws, theories, and models (Krakauer et al., 2011. The challenges and scope of theoretical biology. J. Theoret. Biol. 276: 269-276). The simplest kinds of complex biological matter are bacterial cells; they appear complex, from a physicochemical standpoint, because they are multicomponent, multiphase, biomacromolecularly crowded, and re-emergent; the property of re-emergence differentiates biological matter from complex chemical and physical matter.

Bacterial cells cannot self-reassemble spontaneously from their biomolecules and biomacromolecules (via non-covalent molecular forces) without the action of external drivers; on Earth, such drivers have been diurnal (cycling) physicochemical gradients, i.e. temperature, water activity, etc., brought about by solar radiation striking the Earth's rotating surface. About 3.5 billion years ago, these cycling gradients drove complex chemical prebiotic soups toward progenotic living states from which extant bacteria evolved (Spitzer and Poolman, 2009. The role of biomacromolecular crowding, ionic strength and physicochemical gradients in the complexities of life's emergence. Microbiol. Mol. Biol. Revs. 73:371-388). Thus there is historical non-equilibrium continuity between complex dead chemical matter and complex living states of bacterial cells. This historical continuity becomes accessible to present-day experimentation when cycling physicochemical gradients act on dead biomacromolecules obtained from (suitably) killed bacterial populations - on a biotic soup of chemicals (Harold, 2005. Molecules into cells: specifying spatial architecture. Microbiol. Mol. Biol. Rev. 69:544-564). The making of biotic soups and recovering living states from them is briefly discussed in terms of novel concepts and experimental possibilities.

In principle, emergent living states contingently arise and evolve when cycling physicochemical gradients continuously act on complex chemical mass; once living states become dynamically stabilized, the inevitable process errors of primitive cell cycles become the roots of Darwinian evolution.
2013-11-20
Aerodynamic characteristics of a feathered dinosaur measured using physical models. Effects of form on static stability and control effectiveness.
10.1101/001297
Dennis Evangelista;Griselda Cardona;Eric Guenther-Gleason;Tony Huynh;Austin Kwong;Dylan Marks;Neil Ray;Adrian Tisbe;Kyle Tse;Mimi Kohl;
We report the effects of posture and morphology on the static aerodynamic stability and control effectiveness of physical models based on the feathered dinosaur, †Microraptor gui, from the Cretaceous of China. Postures had similar lift and drag coefficients and were broadly similar when simplified metrics of gliding were considered, but they exhibited different stability characteristics depending on the position of the legs and the presence of feathers on the legs and the tail. Both stability and the function of appendages in generating maneuvering forces and torques changed as the glide angle or angle of attack were changed. These are significant because they represent an aerial environment that may have shifted during the evolution of directed aerial descent and other aerial behaviors. Certain movements were particularly effective (symmetric movements of the wings and tail in pitch, asymmetric wing movements, some tail movements). Other appendages altered their function from creating yaws at high angle of attack to rolls at low angle of attack, or reversed their function entirely. While †M. gui lived after †Archaeopteryx and likely represents a side experiment with feathered morphology, the general patterns of stability and control effectiveness suggested from the manipulations of forelimb, hindlimb and tail morphology here may help understand the evolution of flight control aerodynamics in vertebrates. Though these results rest on a single specimen, as further fossils with different morphologies are tested, the findings here could be applied in a phylogenetic context to reveal biomechanical constraints on extinct flyers arising from the need to maneuver.
2013-12-10
Aerodynamic characteristics of a feathered dinosaur measured using physical models. Effects of form on static stability and control effectiveness.
10.1101/001297
Dennis Evangelista;Griselda Cardona;Eric Guenther-Gleason;Tony Huynh;Austin Kwong;Dylan Marks;Neil Ray;Adrian Tisbe;Kyle Tse;Mimi Kohl;
We report the effects of posture and morphology on the static aerodynamic stability and control effectiveness of physical models based on the feathered dinosaur, †Microraptor gui, from the Cretaceous of China. Postures had similar lift and drag coefficients and were broadly similar when simplified metrics of gliding were considered, but they exhibited different stability characteristics depending on the position of the legs and the presence of feathers on the legs and the tail. Both stability and the function of appendages in generating maneuvering forces and torques changed as the glide angle or angle of attack were changed. These are significant because they represent an aerial environment that may have shifted during the evolution of directed aerial descent and other aerial behaviors. Certain movements were particularly effective (symmetric movements of the wings and tail in pitch, asymmetric wing movements, some tail movements). Other appendages altered their function from creating yaws at high angle of attack to rolls at low angle of attack, or reversed their function entirely. While †M. gui lived after †Archaeopteryx and likely represents a side experiment with feathered morphology, the general patterns of stability and control effectiveness suggested from the manipulations of forelimb, hindlimb and tail morphology here may help understand the evolution of flight control aerodynamics in vertebrates. Though these results rest on a single specimen, as further fossils with different morphologies are tested, the findings here could be applied in a phylogenetic context to reveal biomechanical constraints on extinct flyers arising from the need to maneuver.
2014-01-16
Parametric inference in the large data limit using maximally informative models
10.1101/001396
Justin B. Kinney;Gurinder S. Atwal;
Motivated by data-rich experiments in transcriptional regulation and sensory neuroscience, we consider the following general problem in statistical inference. When exposed to a high-dimensional signal S, a system of interest computes a representation R of that signal which is then observed through a noisy measurement M. From a large number of signals and measurements, we wish to infer the "filter" that maps S to R. However, the standard method for solving such problems, likelihood-based inference, requires perfect a priori knowledge of the "noise function" mapping R to M. In practice such noise functions are usually known only approximately, if at all, and using an incorrect noise function will typically bias the inferred filter. Here we show that, in the large data limit, this need for a pre-characterized noise function can be circumvented by searching for filters that instead maximize the mutual information I[M; R] between observed measurements and predicted representations. Moreover, if the correct filter lies within the space of filters being explored, maximizing mutual information becomes equivalent to simultaneously maximizing every dependence measure that satisfies the Data Processing Inequality. It is important to note that maximizing mutual information will typically leave a small number of directions in parameter space unconstrained. We term these directions "diffeomorphic modes" and present an equation that allows these modes to be derived systematically. The presence of diffeomorphic modes reflects a fundamental and nontrivial substructure within parameter space, one that is obscured by standard likelihood-based inference.
2013-12-13
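A minimal illustration of the central quantity in the abstract above, the mutual information I[M; R] between measurements and predicted representations, estimated here with a simple 2-D histogram plug-in estimator. The binning, sample size, and the toy linear filter with additive noise are assumptions for illustration; the paper's maximally informative inference procedure is not reproduced here.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Plug-in estimate of I[X; Y] in nats from paired samples, via a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X (column vector)
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y (row vector)
    indep = p_x @ p_y                       # product of marginals
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / indep[nz])))

# Toy "filter": a representation R observed through additive Gaussian noise as M.
rng = np.random.default_rng(1)
r = rng.normal(size=5000)
m = r + 0.5 * rng.normal(size=5000)
print(f"I[M; R] ~ {mutual_information(m, r):.2f} nats")
```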
RNA Structure Refinement using the ERRASER-Phenix pipeline
10.1101/001461
Fang-Chieh Chou;Nathaniel Echols;Thomas C. Terwilliger;Rhiju Das;
The final step of RNA crystallography involves the fitting of coordinates into electron density maps. The large number of backbone atoms in RNA presents a difficult and tedious challenge, particularly when experimental density is poor. The ERRASER-Phenix pipeline can improve an initial set of RNA coordinates automatically based on a physically realistic model of atomic-level RNA interactions. The pipeline couples diffraction-based refinement in Phenix with the Rosetta-based real-space refinement protocol ERRASER (Enumerative Real-Space Refinement ASsisted by Electron density under Rosetta). The combination of ERRASER and Phenix can improve the geometrical quality of RNA crystallographic models while maintaining or improving the fit to the diffraction data (as measured by Rfree). Here we present a complete tutorial for running ERRASER-Phenix through the Phenix GUI, from the command-line, and via an application in the Rosetta On-line Server that Includes Everyone (ROSIE).
2013-12-19
A null model for Pearson coexpression networks
10.1101/001065
Andrea Gobbi;Giuseppe Jurman;
Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent a simple but effective technique for discovering and interpreting linear gene relationships. In recent years several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred from randomly generated data is expected to be empty. The theoretical derivation of the new bound by geometrical methods is shown together with applications in onco- and neurogenomics.
2013-12-02
A null model for Pearson coexpression networks
10.1101/001065
Andrea Gobbi;Giuseppe Jurman;
Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent a simple but effective technique for discovering and interpreting linear gene relationships. In recent years several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred from randomly generated data is expected to be empty. The theoretical derivation of the new bound by geometrical methods is shown together with applications in onco- and neurogenomics.
2013-12-03
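To illustrate why a hard correlation threshold must grow as the sample count shrinks, the sketch below computes the smallest |r| that remains significant over all gene pairs, using the classical t-transform null for Pearson's r with a Bonferroni correction. This is a generic stand-in chosen for illustration, not the geometric bound derived in the paper; the sample and gene counts are arbitrary.

```python
import numpy as np
from scipy import stats

def pearson_threshold(n_samples, n_genes, alpha=0.05):
    """Smallest |r| still significant after Bonferroni correction over all gene pairs,
    using the exact null of Pearson's r via the t-transform with n-2 degrees of freedom."""
    n_pairs = n_genes * (n_genes - 1) // 2
    t_crit = stats.t.ppf(1.0 - alpha / (2.0 * n_pairs), df=n_samples - 2)
    return t_crit / np.sqrt(n_samples - 2 + t_crit ** 2)

# With only ten samples, the cut-off for a 2000-gene network is close to 1.
print(pearson_threshold(n_samples=10, n_genes=2000))
```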
PyRAD: assembly of de novo RADseq loci for phylogenetic analyses
10.1101/001081
Deren A. R. Eaton;
Restriction-site associated genomic markers are a powerful tool for investigating evolutionary questions at the population level, but are limited in their utility at deeper phylogenetic scales where fewer orthologous loci are typically recovered across disparate taxa. While this limitation stems in part from mutations to restriction recognition sites that disrupt data generation, an alternative source of data loss comes from the failure to identify homology during bioinformatic analyses. Clustering methods that allow for lower similarity thresholds and the inclusion of indel variation will perform better at assembling RADseq loci at the phylogenetic scale.

PyRAD is a pipeline to assemble de novo RADseq loci with the aim of optimizing coverage across phylogenetic data sets. It utilizes a wrapper around an alignment-clustering algorithm which allows for indel variation within and between samples, as well as for incomplete overlap among reads (e.g., paired-end). Here I compare PyRAD with the program Stacks in their performance analyzing a simulated RADseq data set that includes indel variation. Indels disrupt clustering of homologous loci in Stacks but not in PyRAD, such that the latter recovers more shared loci across disparate taxa. I show through re-analysis of an empirical RADseq data set that indels are a common feature of such data, even at shallow phylogenetic scales. PyRAD utilizes parallel processing as well as an optional hierarchical clustering method which allow it to rapidly assemble phylogenetic data sets with hundreds of sampled individuals.

Availability: Software is written in Python and freely available at http://www.dereneaton.com/software/

Supplement: Scripts to completely reproduce all simulated and empirical analyses are available in the Supplementary Materials.
2013-12-03
A Bayesian Method to Incorporate Hundreds of Functional Characteristics with Association Evidence to Improve Variant Prioritization
10.1101/000984
Sarah A Gagliano;Michael R Barnes;Michael E Weale;Jo Knight;
The increasing quantity and quality of functional genomic information motivate the assessment and integration of these data with association data, including data originating from genome-wide association studies (GWAS). We used previously described GWAS signals ("hits") to train a regularized logistic model in order to predict SNP causality on the basis of a large multivariate functional dataset. We show how this model can be used to derive Bayes factors for integrating functional and association data into a combined Bayesian analysis. Functional characteristics were obtained from the Encyclopedia of DNA Elements (ENCODE), from published expression quantitative trait loci (eQTL), and from other sources of genome-wide characteristics. We trained the model using all GWAS signals combined, and also using phenotype-specific signals for autoimmune, brain-related, cancer, and cardiovascular disorders. The non-phenotype-specific and the autoimmune GWAS signals gave the most reliable results. We found that SNPs with higher probabilities of causality from functional characteristics showed an enrichment of more significant p-values compared to all GWAS SNPs in three large GWAS studies of complex traits. We investigated the ability of our Bayesian method to improve the identification of true causal signals in a psoriasis GWAS dataset and found that combining functional data with association data improves the ability to prioritise novel hits. We used the predictions from the penalized logistic regression model to calculate Bayes factors relating to functional characteristics and supply these online alongside resources to integrate these data with association data.

Author Summary: Large-scale genetic studies have had success identifying genes that play a role in complex traits. Advanced statistical procedures suggest that there are still genetic variants to be discovered, but these variants are difficult to detect. Incorporating biological information that affects the amount of protein or other product produced can be used to prioritise the genetic variants in order to identify which are likely to be causal. The method proposed here uses such biological characteristics to predict which genetic variants are most likely to be causal for complex traits.
2013-12-04
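A schematic of how Bayes factors derived from functional characteristics can be combined with association evidence, assuming independence between the two sources and a per-SNP prior probability of causality. The function name and the example numbers are illustrative assumptions and are not taken from the paper's regularized logistic model.

```python
import numpy as np

def combined_posterior(prior_prob, log_bf_functional, log_bf_association):
    """Combine a prior probability of causality with independent Bayes factors
    (given on the log scale) from functional annotation and association evidence."""
    log_prior_odds = np.log(prior_prob) - np.log1p(-prior_prob)
    log_post_odds = log_prior_odds + log_bf_functional + log_bf_association
    return 1.0 / (1.0 + np.exp(-log_post_odds))   # back-transform odds to probability

# A SNP with a small prior, strong functional support and strong association support.
print(combined_posterior(prior_prob=1e-4,
                         log_bf_functional=np.log(50.0),
                         log_bf_association=np.log(200.0)))
```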
libRoadRunner: A High Performance SBML Compliant Simulator
10.1101/001230
Herbert M Sauro;Totte T Karlsson;Maciej Swat;Michal Galdzicki;Andy Somogyi;
Summary: We describe libRoadRunner, a cross-platform, open-source, high-performance C++ library for running and analyzing SBML-compliant models. libRoadRunner was created primarily to achieve high performance, ease of use, portability and an extensible architecture. libRoadRunner includes a comprehensive API, plugin support, Python scripting and additional functionality such as stoichiometric and metabolic control analysis.

Accessibility and Implementation: To maximize collaboration, we made libRoadRunner open source and released it under the Apache License, Version 2.0. To facilitate reuse, we have developed comprehensive Python bindings using SWIG (swig.org) and a C API. libRoadRunner uses a number of statically linked third-party libraries including LLVM [4], libSBML [1], CVODE, NLEQ2, LAPACK and Poco. libRoadRunner is supported on Windows, Mac OS X and Linux.

Supplementary information: Online documentation, build instructions and the git source repository are available at http://www.libroadrunner.org
2013-12-12
Accurate detection of de novo and transmitted INDELs within exome-capture data using micro-assembly
10.1101/001370
Giuseppe Narzisi;Jason A O'Rawe;Ivan Iossifov;Han Fang;Yoon-ha Lee;Zihua Wang;Yiyang Wu;Gholson J Lyon;Michael Wigler;Michael C Schatz;
We present a new open-source algorithm, Scalpel, for sensitive and specific discovery of INDELs in exome-capture data. By combining the power of mapping and assembly, Scalpel searches the de Bruijn graph for sequence paths (contigs) that span each exon. The algorithm creates a single path for exons with no INDEL, two paths for an exon with a heterozygous mutation, and multiple paths for more exotic variations. A detailed repeat composition analysis coupled with a self-tuning k-mer strategy allows Scalpel to outperform other state-of-the-art approaches for INDEL discovery. Using a battery of >10,000 simulated and >1,000 experimentally validated INDELs between 1 and 100 bp, we extensively compared Scalpel against two recent algorithms for INDEL discovery: GATK HaplotypeCaller and SOAPindel. We report anomalies for these tools in their ability to detect INDELs, especially in regions containing near-perfect repeats which contribute to high false positive rates. In contrast, Scalpel demonstrates superior specificity while maintaining high sensitivity. We also present a large-scale application of Scalpel for detecting de novo and transmitted INDELs in 593 families with autistic children from the Simons Simplex Collection. Scalpel demonstrates enhanced power to detect long (≥20 bp) transmitted events, and strengthens previous reports of enrichment for de novo likely gene-disrupting INDEL mutations in children with autism, with many new candidate genes. The source code and documentation for the algorithm are available at http://scalpel.sourceforge.net.
2013-12-13
Accurate detection of de novo and transmitted INDELs within exome-capture data using micro-assembly
10.1101/001370
Giuseppe Narzisi;Jason A O'Rawe;Ivan Iossifov;Han Fang;Yoon-ha Lee;Zihua Wang;Yiyang Wu;Gholson J Lyon;Michael Wigler;Michael C Schatz;
We present a new open-source algorithm, Scalpel, for sensitive and specific discovery of INDELs in exome-capture data. By combining the power of mapping and assembly, Scalpel searches the de Bruijn graph for sequence paths (contigs) that span each exon. The algorithm creates a single path for exons with no INDEL, two paths for an exon with a heterozygous mutation, and multiple paths for more exotic variations. A detailed repeat composition analysis coupled with a self-tuning k-mer strategy allows Scalpel to outperform other state-of-the-art approaches for INDEL discovery. Using a battery of >10,000 simulated and >1,000 experimentally validated INDELs between 1 and 100 bp, we extensively compared Scalpel against two recent algorithms for INDEL discovery: GATK HaplotypeCaller and SOAPindel. We report anomalies for these tools in their ability to detect INDELs, especially in regions containing near-perfect repeats which contribute to high false positive rates. In contrast, Scalpel demonstrates superior specificity while maintaining high sensitivity. We also present a large-scale application of Scalpel for detecting de novo and transmitted INDELs in 593 families with autistic children from the Simons Simplex Collection. Scalpel demonstrates enhanced power to detect long (≥20 bp) transmitted events, and strengthens previous reports of enrichment for de novo likely gene-disrupting INDEL mutations in children with autism, with many new candidate genes. The source code and documentation for the algorithm are available at http://scalpel.sourceforge.net.
2014-04-15
Accurate detection of de novo and transmitted INDELs within exome-capture data using micro-assembly
10.1101/001370
Giuseppe Narzisi;Jason A O'Rawe;Ivan Iossifov;Han Fang;Yoon-ha Lee;Zihua Wang;Yiyang Wu;Gholson J Lyon;Michael Wigler;Michael C Schatz;
We present a new open-source algorithm, Scalpel, for sensitive and specific discovery of INDELs in exome-capture data. By combining the power of mapping and assembly, Scalpel searches the de Bruijn graph for sequence paths (contigs) that span each exon. The algorithm creates a single path for exons with no INDEL, two paths for an exon with a heterozygous mutation, and multiple paths for more exotic variations. A detailed repeat composition analysis coupled with a self-tuning k-mer strategy allows Scalpel to outperform other state-of-the-art approaches for INDEL discovery. Using a battery of >10,000 simulated and >1,000 experimentally validated INDELs between 1 and 100 bp, we extensively compared Scalpel against two recent algorithms for INDEL discovery: GATK HaplotypeCaller and SOAPindel. We report anomalies for these tools in their ability to detect INDELs, especially in regions containing near-perfect repeats which contribute to high false positive rates. In contrast, Scalpel demonstrates superior specificity while maintaining high sensitivity. We also present a large-scale application of Scalpel for detecting de novo and transmitted INDELs in 593 families with autistic children from the Simons Simplex Collection. Scalpel demonstrates enhanced power to detect long (≥20 bp) transmitted events, and strengthens previous reports of enrichment for de novo likely gene-disrupting INDEL mutations in children with autism, with many new candidate genes. The source code and documentation for the algorithm are available at http://scalpel.sourceforge.net.
2014-06-18
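For readers unfamiliar with the data structure named above, this is a generic k-mer de Bruijn graph builder in a few lines of Python. It is not Scalpel's implementation; the reads, the k value, and the function name are illustrative assumptions, chosen so that two reads differing at one position produce a small bubble in the graph, analogous to the multiple paths described in the abstract.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a (k-1)-mer de Bruijn graph: nodes are (k-1)-mers, edges come from k-mers."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

# Two short reads that differ internally produce a small bubble in the graph.
reads = ["ACGTTGCA", "ACGATGCA"]
for node, neighbours in sorted(de_bruijn(reads, k=4).items()):
    print(node, "->", sorted(neighbours))
```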
A coarse-grained elastic network atom contact model and its use in the simulation of protein dynamics and the prediction of the effect of mutations
10.1101/001495
Vincent Frappier;Rafael Najmanovich;
Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form. There is a trade-off between speed and accuracy in different choices. At one extreme one finds accurate but slow molecular-dynamics-based methods with all-atom representations and detailed atomic potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function that includes a pairwise atom-type non-bonded interaction term and thus makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms such methods in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased at predicting stabilizing mutations.
2013-12-20
On the optimal trimming of high-throughput mRNAseq data
10.1101/000422
Matthew D MacManes;
The widespread and rapid adoption of high-throughput sequencing technologies has afforded researchers the opportunity to gain a deep understanding of genome-level processes that underlie evolutionary change, and perhaps more importantly, the links between genotype and phenotype. In particular, researchers interested in functional biology and adaptation have used these technologies to sequence mRNA transcriptomes of specific tissues, which in turn are often compared to other tissues, or other individuals with different phenotypes. While these techniques are extremely powerful, careful attention to data quality is required. In particular, because high-throughput sequencing is more error-prone than traditional Sanger sequencing, quality trimming of sequence reads should be an important step in all data processing pipelines. While several software packages for quality trimming exist, no general guidelines for the specifics of trimming have been developed. Here, using empirically derived sequence data, I provide general recommendations regarding the optimal strength of trimming, specifically in mRNA-Seq studies. Although very aggressive quality trimming is common, this study suggests that a more gentle trimming, specifically of those nucleotides whose PHRED score is <2 or <5, is optimal for most studies across a wide variety of metrics.
2013-11-14
On the optimal trimming of high-throughput mRNAseq data
10.1101/000422
Matthew D MacManes;
The widespread and rapid adoption of high-throughput sequencing technologies has afforded researchers the opportunity to gain a deep understanding of genome-level processes that underlie evolutionary change, and perhaps more importantly, the links between genotype and phenotype. In particular, researchers interested in functional biology and adaptation have used these technologies to sequence mRNA transcriptomes of specific tissues, which in turn are often compared to other tissues, or other individuals with different phenotypes. While these techniques are extremely powerful, careful attention to data quality is required. In particular, because high-throughput sequencing is more error-prone than traditional Sanger sequencing, quality trimming of sequence reads should be an important step in all data processing pipelines. While several software packages for quality trimming exist, no general guidelines for the specifics of trimming have been developed. Here, using empirically derived sequence data, I provide general recommendations regarding the optimal strength of trimming, specifically in mRNA-Seq studies. Although very aggressive quality trimming is common, this study suggests that a more gentle trimming, specifically of those nucleotides whose PHRED score is <2 or <5, is optimal for most studies across a wide variety of metrics.
2013-12-23
On the optimal trimming of high-throughput mRNAseq data
10.1101/000422
Matthew D MacManes;
The widespread and rapid adoption of high-throughput sequencing technologies has afforded researchers the opportunity to gain a deep understanding of genome-level processes that underlie evolutionary change, and perhaps more importantly, the links between genotype and phenotype. In particular, researchers interested in functional biology and adaptation have used these technologies to sequence mRNA transcriptomes of specific tissues, which in turn are often compared to other tissues, or other individuals with different phenotypes. While these techniques are extremely powerful, careful attention to data quality is required. In particular, because high-throughput sequencing is more error-prone than traditional Sanger sequencing, quality trimming of sequence reads should be an important step in all data processing pipelines. While several software packages for quality trimming exist, no general guidelines for the specifics of trimming have been developed. Here, using empirically derived sequence data, I provide general recommendations regarding the optimal strength of trimming, specifically in mRNA-Seq studies. Although very aggressive quality trimming is common, this study suggests that a more gentle trimming, specifically of those nucleotides whose PHRED score is <2 or <5, is optimal for most studies across a wide variety of metrics.
2014-01-14
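A minimal sketch of what a gentle PHRED threshold means in practice: a 3'-end trimmer that removes bases whose quality falls below 2 or 5. This is an illustrative helper written for this note, not code from the study, which instead benchmarked existing trimming packages; the read, quality string, and function name are assumptions.

```python
def trim_3prime(seq, qual, min_phred=5, offset=33):
    """Trim low-quality bases from the 3' end of a read.
    qual is the FASTQ quality string; offset=33 for Sanger / Illumina 1.8+ encoding."""
    keep = len(seq)
    while keep > 0 and ord(qual[keep - 1]) - offset < min_phred:
        keep -= 1
    return seq[:keep], qual[:keep]

# '#' encodes PHRED 2, so the last two bases are removed at a threshold of 5 but kept at 2.
read, quals = "ACGTACGTTT", "IIIIIIII##"
print(trim_3prime(read, quals, min_phred=5))
print(trim_3prime(read, quals, min_phred=2))
```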
Exploring community structure in biological networks with random graphs
10.1101/001545
Pratha Sah;Lisa O. Singh;Aaron Clauset;Shweta Bansal;
Background: Community structure is ubiquitous in biological networks. There has been an increased interest in unraveling the community structure of biological systems as it may provide important insights into a system's functional components and the impact of local structures on dynamics at a global scale. Choosing an appropriate community detection algorithm to identify the community structure in an empirical network can be difficult, however, as the many algorithms available are based on a variety of cost functions and are difficult to validate. Even when community structure is identified in an empirical system, disentangling the effect of community structure from other network properties such as clustering coefficient and assortativity can be a challenge.

Results: Here, we develop a generative model to produce undirected, simple, connected graphs with specified degrees and pattern of communities, while maintaining a graph structure that is as random as possible. Additionally, we demonstrate two important applications of our model: (a) to generate networks that can be used to benchmark existing and new algorithms for detecting communities in biological networks; and (b) to generate null models to serve as random controls when investigating the impact of complex network features beyond the byproduct of degree and modularity in empirical biological networks.

Conclusion: Our model allows for the systematic study of the presence of community structure and its impact on network function and dynamics. This process is a crucial step in unraveling the functional consequences of the structural properties of biological systems and uncovering the mechanisms that drive these systems.
2013-12-22
Exploring community structure in biological networks with random graphs
10.1101/001545
Pratha Sah;Lisa O. Singh;Aaron Clauset;Shweta Bansal;
Background: Community structure is ubiquitous in biological networks. There has been an increased interest in unraveling the community structure of biological systems as it may provide important insights into a system's functional components and the impact of local structures on dynamics at a global scale. Choosing an appropriate community detection algorithm to identify the community structure in an empirical network can be difficult, however, as the many algorithms available are based on a variety of cost functions and are difficult to validate. Even when community structure is identified in an empirical system, disentangling the effect of community structure from other network properties such as clustering coefficient and assortativity can be a challenge.

Results: Here, we develop a generative model to produce undirected, simple, connected graphs with specified degrees and pattern of communities, while maintaining a graph structure that is as random as possible. Additionally, we demonstrate two important applications of our model: (a) to generate networks that can be used to benchmark existing and new algorithms for detecting communities in biological networks; and (b) to generate null models to serve as random controls when investigating the impact of complex network features beyond the byproduct of degree and modularity in empirical biological networks.

Conclusion: Our model allows for the systematic study of the presence of community structure and its impact on network function and dynamics. This process is a crucial step in unraveling the functional consequences of the structural properties of biological systems and uncovering the mechanisms that drive these systems.
2013-12-24
Exploring community structure in biological networks with random graphs
10.1101/001545
Pratha Sah;Lisa O. Singh;Aaron Clauset;Shweta Bansal;
Background: Community structure is ubiquitous in biological networks. There has been an increased interest in unraveling the community structure of biological systems as it may provide important insights into a system's functional components and the impact of local structures on dynamics at a global scale. Choosing an appropriate community detection algorithm to identify the community structure in an empirical network can be difficult, however, as the many algorithms available are based on a variety of cost functions and are difficult to validate. Even when community structure is identified in an empirical system, disentangling the effect of community structure from other network properties such as clustering coefficient and assortativity can be a challenge.

Results: Here, we develop a generative model to produce undirected, simple, connected graphs with specified degrees and pattern of communities, while maintaining a graph structure that is as random as possible. Additionally, we demonstrate two important applications of our model: (a) to generate networks that can be used to benchmark existing and new algorithms for detecting communities in biological networks; and (b) to generate null models to serve as random controls when investigating the impact of complex network features beyond the byproduct of degree and modularity in empirical biological networks.

Conclusion: Our model allows for the systematic study of the presence of community structure and its impact on network function and dynamics. This process is a crucial step in unraveling the functional consequences of the structural properties of biological systems and uncovering the mechanisms that drive these systems.
2014-06-02
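Related in spirit to the null models discussed above, the sketch below builds a degree-preserving randomization of a network with double-edge swaps, assuming the networkx library. The authors' generative model additionally fixes the pattern of communities, which this simple null does not; the function, the number of swaps, and the example graph are assumptions for illustration only.

```python
import networkx as nx

def degree_preserving_null(graph, swaps_per_edge=10, seed=0):
    """Randomize a simple undirected graph with double-edge swaps, preserving every
    node's degree while scrambling higher-order structure such as communities."""
    null = graph.copy()
    n_edges = null.number_of_edges()
    nx.double_edge_swap(null, nswap=swaps_per_edge * n_edges,
                        max_tries=100 * n_edges, seed=seed)
    return null

g = nx.karate_club_graph()
null = degree_preserving_null(g)
print(sorted(d for _, d in g.degree()) == sorted(d for _, d in null.degree()))  # True
```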
Algorithms in Stringomics (I): Pattern-Matching against "Stringomes"
10.1101/001669
Paolo Ferragina;Bud (Bhubaneswar) Mishra;
This paper reports an initial design of new data-structures that generalizes the idea of pattern-matching in stringology, from its traditional usage in an (unstructured) set of strings to the arena of a well-structured family of strings. In particular, the object of interest is a family of strings composed of blocks/classes of highly similar "stringlets," and thus mimics a population of genomes made by concatenating haplotype-blocks, further constrained by haplotype-phasing. Such a family of strings, which we dub "stringomes," is formalized in terms of a multi-partite directed acyclic graph with a source and a sink. The most interesting property of stringomes is probably the fact that they can be represented efficiently with compression up to their k-th order empirical entropy, while ensuring that the compression does not hinder the pattern-matching counting and reporting queries - either internal to a block or spanning two (or a few constant) adjacent blocks. The solutions proposed here have immediate applications to next-generation sequencing technologies, base-calling, expression profiling, variant-calling, population studies, onco-genomics, cyber security trace analysis and text retrieval.
2014-01-02
Ordered, Random, Monotonic, and Non-Monotonic Digital Nanodot Gradients
10.1101/001305
Grant Ongo;Sebastien G Ricoult;Timothy E Kennedy;David Juncker;
Cell navigation is directed by inhomogeneous distributions of extracellular cues. It is well known that noise plays a key role in biology and is present in naturally occurring gradients at the micro- and nanoscale, yet it has not been studied with gradients in vitro. Here, we introduce novel algorithms to produce ordered and random gradients of discrete nanodots - called digital nanodot gradients (DNGs) - according to monotonic and non-monotonic density functions. The algorithms generate continuous DNGs, with dot spacing changing in two dimensions along the gradient direction according to arbitrary mathematical functions, with densities ranging from 0.02% to 44.44%. The random gradient algorithm compensates for random nanodot overlap, and the randomness and spatial homogeneity of the DNGs were confirmed with Ripley's K function. An array of 100 DNGs, each 400 × 400 µm², comprising a total of 57 million 200 × 200 nm² dots was designed and patterned into silicon using electron-beam lithography, then patterned as fluorescently labeled IgGs on glass using lift-off nanocontact printing. DNGs will facilitate the study of the effects of noise and randomness at the micro- and nanoscales on cell migration and growth.
2013-12-10
Ordered, Random, Monotonic, and Non-Monotonic Digital Nanodot Gradients
10.1101/001305
Grant Ongo;Sebastien G Ricoult;Timothy E Kennedy;David Juncker;
Cell navigation is directed by inhomogeneous distributions of extracellular cues. It is well known that noise plays a key role in biology and is present in naturally occurring gradients at the micro- and nanoscale, yet it has not been studied with gradients in vitro. Here, we introduce novel algorithms to produce ordered and random gradients of discrete nanodots - called digital nanodot gradients (DNGs) - according to monotonic and non-monotonic density functions. The algorithms generate continuous DNGs, with dot spacing changing in two dimensions along the gradient direction according to arbitrary mathematical functions, with densities ranging from 0.02% to 44.44%. The random gradient algorithm compensates for random nanodot overlap, and the randomness and spatial homogeneity of the DNGs were confirmed with Ripley's K function. An array of 100 DNGs, each 400 × 400 µm², comprising a total of 57 million 200 × 200 nm² dots was designed and patterned into silicon using electron-beam lithography, then patterned as fluorescently labeled IgGs on glass using lift-off nanocontact printing. DNGs will facilitate the study of the effects of noise and randomness at the micro- and nanoscales on cell migration and growth.
2013-12-13
Ordered, Random, Monotonic, and Non-Monotonic Digital Nanodot Gradients
10.1101/001305
Grant Ongo;Sebastien G Ricoult;Timothy E Kennedy;David Juncker;
Cell navigation is directed by inhomogeneous distributions of extracellular cues. It is well known that noise plays a key role in biology and is present in naturally occurring gradients at the micro- and nanoscale, yet it has not been studied with gradients in vitro. Here, we introduce novel algorithms to produce ordered and random gradients of discrete nanodots - called digital nanodot gradients (DNGs) - according to monotonic and non-monotonic density functions. The algorithms generate continuous DNGs, with dot spacing changing in two dimensions along the gradient direction according to arbitrary mathematical functions, with densities ranging from 0.02% to 44.44%. The random gradient algorithm compensates for random nanodot overlap, and the randomness and spatial homogeneity of the DNGs were confirmed with Ripley's K function. An array of 100 DNGs, each 400 × 400 µm², comprising a total of 57 million 200 × 200 nm² dots was designed and patterned into silicon using electron-beam lithography, then patterned as fluorescently labeled IgGs on glass using lift-off nanocontact printing. DNGs will facilitate the study of the effects of noise and randomness at the micro- and nanoscales on cell migration and growth.
2014-03-28
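A naive version of the Ripley's K statistic used above to assess spatial randomness, written without edge correction so the idea stays visible. The window size, point count, and radii are illustrative assumptions rather than values from the paper; for complete spatial randomness the estimate should track pi * r**2.

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley's K estimator (no edge correction) for a 2-D point pattern."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # ignore self-pairs
    intensity = n / area
    return np.array([(dists <= r).sum() / (intensity * n) for r in radii])

rng = np.random.default_rng(2)
dots = rng.uniform(0, 400, size=(500, 2))      # points in a 400 x 400 window
radii = np.array([10.0, 20.0, 40.0])
print(ripley_k(dots, radii, area=400 * 400))
print(np.pi * radii ** 2)                      # reference curve for a random pattern
```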
Varying chemical equilibrium gives kinetic parameters
10.1101/000547
Edward Flach;Santiago Schnell;
We are interested in finding the kinetic parameters of a chemical reaction. Previous methods for finding these parameters rely on the dynamic behaviour of the system. This means that the methods are time-sensitive and often rely on non-linear curve fitting.
2013-11-16
Correcting a SHAPE-directed RNA structure by a mutate-map-rescue approach
10.1101/001966
Siqi Tian;Pablo Cordero;Wipapat Kladwang;Rhiju Das;
The three-dimensional conformations of non-coding RNAs underpin their biochemical functions but have largely eluded experimental characterization. Here, we report that integrating a classic mutation/rescue strategy with high-throughput chemical mapping enables rapid RNA structure inference with unusually strong validation. We revisit a paradigmatic 16S rRNA domain for which SHAPE (selective 2'-hydroxyl acylation with primer extension) suggested a conformational change between apo- and holo-ribosome conformations. Computational support estimates, data from alternative chemical probes, and mutate-and-map (M2) experiments expose limitations of prior methodology and instead give a near-crystallographic secondary structure. Systematic interrogation of single base pairs via a high-throughput mutation/rescue approach then permits incisive validation and refinement of the M2-based secondary structure and further uncovers the functional conformation as an excited state (25 ± 5% population) accessible via a single-nucleotide register shift. These results correct an erroneous SHAPE inference of a ribosomal conformational change and suggest a general mutate-map-rescue approach for dissecting RNA dynamic structure landscapes.
2014-01-22
ATOMIC STRUCTURES OF GLUCOSE, FRUCTOSE AND SUCROSE AND EXPLANATION OF ANOMERIC CARBON
10.1101/002022
Raji Heyrovska;
Presented here are the structures of three biologically important sweet sugars, based on the additivity of covalent atomic radii in bond lengths. The observed smaller carbon-oxygen distances involving the anomeric carbons of the open chain hexoses are explained here, for the first time, as due to the smaller covalent double bond radii of carbon and oxygen than their single bond radii in the cyclic forms and in sucrose. The atomic structures of all the three carbohydrates, drawn to scale in colour, have been presented here also for the first time.
2014-01-23
Estimate of Within Population Incremental Selection Through Branch Imbalance in Lineage Trees
10.1101/002014
Gilad Liberman;Jennifer Benichou;Lea Tsaban;Yaakov Maman;Jacob Glanville;Yoram Louzoun;
Incremental selection within a population, defined as a limited fitness change following a mutation, is an important aspect of many evolutionary processes and can significantly affect a large number of mutations through the genome. Strongly advantageous or deleterious mutations are detected through the fixation of mutations in the population, using the synonymous to non-synonymous mutations ratio in sequences. There are currently no precise methods to estimate incremental selection occurring over limited periods. We here provide for the first time such a detailed method and show its precision and its applicability to the genomic analysis of selection. A special case of evolution is rapid, short-term micro-evolution, where organisms are under constant adaptation, occurring for example in viruses infecting a new host, B cells mutating during germinal center reactions, or mitochondria evolving within a given host. The proposed method is a novel mixed lineage tree/sequence based method to detect within-population selection as defined by the effect of mutations on the average number of offspring. Specifically, we propose to measure the log of the ratio between the number of leaves in lineage tree branches following synonymous and non-synonymous mutations. This method does not suffer from the need for a baseline model and is practically not affected by sampling biases. In order to show the wide applicability of this method, we apply it to multiple cases of micro-evolution, and show that it can detect genes and inter-genic regions using the selection rate and detect selection pressures in viral proteins and in the immune response to pathogens.
2014-01-23
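A toy version of the proposed measure, the log ratio of the number of leaves found below branches carrying non-synonymous versus synonymous mutations, applied to a small hand-built lineage tree. The nested-dict tree encoding and the function names are assumptions made for illustration, not the authors' implementation.

```python
import math

def count_leaves(node):
    """Number of leaves below (or at) a node in a nested-dict lineage tree."""
    return 1 if not node["children"] else sum(count_leaves(c) for c in node["children"])

def selection_score(root):
    """Log ratio of leaves descending from non-synonymous vs synonymous branches."""
    syn = nonsyn = 0
    stack = [root]
    while stack:
        node = stack.pop()
        if node.get("mutation") == "syn":
            syn += count_leaves(node)
        elif node.get("mutation") == "nonsyn":
            nonsyn += count_leaves(node)
        stack.extend(node["children"])
    return math.log(nonsyn / syn) if syn and nonsyn else float("nan")

# Tiny hand-built tree: the non-synonymous branch leaves twice as many descendants.
tree = {"mutation": None, "children": [
    {"mutation": "nonsyn", "children": [
        {"mutation": None, "children": []},
        {"mutation": None, "children": []}]},
    {"mutation": "syn", "children": [
        {"mutation": None, "children": []}]},
]}
print(selection_score(tree))   # log(2/1) > 0, hinting at positive selection
```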
An Improved Search Algorithm to Find G-Quadruplexes in Genome Sequences
10.1101/001990
Anna Varizhuk;Dmitry Ischenko;Igor Smirnov;Olga Tatarinova;Vyacheslav Severov;Roman Novikov;Vladimir Tsvetkov;Vladimir Naumov;Dmitry Kaluzhny;Galina Pozmogova;
A growing body of data suggests that the secondary structures adopted by G-rich polynucleotides may be more diverse than previously thought and that the definition of G-quadruplex-forming sequences should be broadened. We studied solution structures of a series of naturally occurring and model single-stranded DNA fragments defying the G3+N(L1)G3+N(L2)G3+N(L3)G3+ formula, which is used in most of the current GQ-search algorithms. The results confirm the GQ-forming potential of such sequences and suggest the existence of new types of GQs. We developed an improved (broadened) GQ-search algorithm (http://niifhm.ru/nauchnye-issledovanija/otdel-molekuljarnoj-biologii-i-genetiki/laboratorija-iskusstvennogo-antitelogeneza/497-2/) that accounts for the recently reported new types of GQs.
2014-01-23
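For context, a classical G-quadruplex search over a sequence can be written as a single regular expression matching four G-runs separated by short loops. The pattern below loosens the canonical rule slightly (G-runs of length >= 2, loops up to 12 nt) purely as an illustration; it does not reproduce the authors' broadened algorithm, and the example sequence is invented.

```python
import re

# Loosened G-quadruplex pattern: four G-runs of length >= 2 separated by loops of 1-12 nt.
GQ_PATTERN = re.compile(r"G{2,}(?:\w{1,12}?G{2,}){3,}", re.IGNORECASE)

def find_gq(sequence):
    """Return (start, matched substring) for each putative G-quadruplex-forming region."""
    return [(m.start(), m.group(0)) for m in GQ_PATTERN.finditer(sequence)]

print(find_gq("TTGGGATGGGTTAGGGCTAGGGAA"))
```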
Joint variant and de novo mutation identification on pedigrees from high-throughput sequencing data
10.1101/001958
John G Cleary;Ross Braithwaite;Kurt Gaastra;Brian S Hilbush;Stuart Inglis;Sean A Irvine;Alan Jackson;Richard Littin;Sahar Nohzadeh-Malakshah;Minita Shah;Mehul Rathod;David Ware;Len Trigg;Francisco M De La Vega;
The analysis of whole-genome or exome sequencing data from trios and pedigrees has been successfully applied to the identification of disease-causing mutations. However, most methods used to identify and genotype genetic variants from next-generation sequencing data ignore the relationships between samples, resulting in significant Mendelian errors, false positives and negatives. Here we present a Bayesian network framework that jointly analyses data from all members of a pedigree simultaneously using Mendelian segregation priors, yet provides the ability to detect de novo mutations in offspring, and is scalable to large pedigrees. We evaluated our method by simulations and analysis of WGS data from a 17-individual, 3-generation CEPH pedigree sequenced to 50X average depth. Compared to singleton calling, our family caller produced more high-quality variants and eliminated spurious calls as judged by common quality metrics such as Ti/Tv, Het/Hom ratios, and dbSNP/SNP array data concordance. We developed a ground truth dataset to further evaluate our calls by identifying recombination cross-overs in the pedigree and testing variants for consistency with the inferred phasing, and we show that our method significantly outperforms singleton and population variant calling in pedigrees. We identify all previously validated de novo mutations in NA12878, concurrent with a 7X precision improvement. Our results show that our method is scalable to large genomics and human disease studies and allows cost optimization by rational distribution of sequencing capacity.
2014-01-22
Joint variant and de novo mutation identification on pedigrees from high-throughput sequencing data
10.1101/001958
John G Cleary;Ross Braithwaite;Kurt Gaastra;Brian S Hilbush;Stuart Inglis;Sean A Irvine;Alan Jackson;Richard Littin;Sahar Nohzadeh-Malakshah;Minita Shah;Mehul Rathod;David Ware;Len Trigg;Francisco M De La Vega;
The analysis of whole-genome or exome sequencing data from trios and pedigrees has been successfully applied to the identification of disease-causing mutations. However, most methods used to identify and genotype genetic variants from next-generation sequencing data ignore the relationships between samples, resulting in significant Mendelian errors, false positives and negatives. Here we present a Bayesian network framework that jointly analyses data from all members of a pedigree simultaneously using Mendelian segregation priors, yet provides the ability to detect de novo mutations in offspring, and is scalable to large pedigrees. We evaluated our method by simulations and analysis of WGS data from a 17-individual, 3-generation CEPH pedigree sequenced to 50X average depth. Compared to singleton calling, our family caller produced more high-quality variants and eliminated spurious calls as judged by common quality metrics such as Ti/Tv, Het/Hom ratios, and dbSNP/SNP array data concordance. We developed a ground truth dataset to further evaluate our calls by identifying recombination cross-overs in the pedigree and testing variants for consistency with the inferred phasing, and we show that our method significantly outperforms singleton and population variant calling in pedigrees. We identify all previously validated de novo mutations in NA12878, concurrent with a 7X precision improvement. Our results show that our method is scalable to large genomics and human disease studies and allows cost optimization by rational distribution of sequencing capacity.
2014-01-24
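The pedigree-calling abstract above rests on Mendelian segregation priors over family genotypes. As a minimal illustrative sketch (not the authors' implementation), the Python below builds the prior over a child's biallelic genotype given the parents' genotypes, with a small allowance for de novo mutation; the genotype encoding and the mutation rate are assumptions chosen purely for illustration.

```python
# Minimal sketch (not the authors' implementation): Mendelian transmission
# prior for a biallelic site, with a small allowance for de novo mutation.
import itertools
import numpy as np

MU = 1e-8  # assumed per-allele de novo mutation probability (illustrative)

def allele_prior(parent_genotype):
    """Probability of transmitting a 0 or 1 allele from a parent genotype (0, 1, or 2 alt alleles)."""
    p_alt = parent_genotype / 2.0
    return np.array([1.0 - p_alt, p_alt])

def child_genotype_prior(gt_mother, gt_father, mu=MU):
    """Prior over child genotypes {0, 1, 2} given parental genotypes, allowing mutation."""
    prior = np.zeros(3)
    for a_m, a_f in itertools.product([0, 1], repeat=2):
        p = allele_prior(gt_mother)[a_m] * allele_prior(gt_father)[a_f]
        # each transmitted allele may flip with probability mu
        for m_m, m_f in itertools.product([0, 1], repeat=2):
            flip_p = (mu if m_m else 1 - mu) * (mu if m_f else 1 - mu)
            child_gt = (a_m ^ m_m) + (a_f ^ m_f)
            prior[child_gt] += p * flip_p
    return prior

# Example: both parents homozygous reference; nearly all mass on child genotype 0,
# with ~2*mu probability of a heterozygous de novo call.
print(child_genotype_prior(0, 0))
```

A full joint caller would combine such transmission priors with per-sample genotype likelihoods in a Bayesian network spanning the whole pedigree; the snippet shows only the segregation prior itself.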
An accelerated miRNA-based screen implicates Atf-3 in odorant receptor expression
10.1101/001982
Shreelatha Bhat;Minjung Shin;Suhyoung Bahk;Young-Joon Kim;Walton D. Jones;
Large scale genetic screening is tedious and time-consuming. To address this problem, we propose a novel two-tiered screening system comprising an initial "pooling" screen that identifies miRNAs whose tissue-specific over-expression causes a phenotype of interest followed by a more focused secondary screen that uses gene-specific RNAi. As miRNAs inhibit translation or direct the destruction of their target mRNAs, any phenotype observed with miRNA over-expression can be attributed to the loss-of-function of one or more target mRNAs. Since miRNA-target pairing is sequence-specific, a list of predicted targets for miRNAs identified in the initial screen serves as a list of candidates for the secondary RNAi-based screen. These predicted miRNA targets can be prioritized by expression pattern, and if multiple miRNAs produce the same phenotype, overlapping target predictions can be given higher priority in the follow-up screen. Since miRNAs are short, miRNA misexpression will likely uncover artifactual miRNA-target relationships. Thus, we are using miRNAs as a tool to accelerate genetic screening rather than focus on the biology of miRNAs themselves. This two-tiered system allows us to rapidly identify individual target genes involved in a phenomenon of interest, often in less than 200 crosses. Here we demonstrate the effectiveness of this method by identifying miRNAs that alter Drosophila odorant receptor expression. With subsequent miRNA target prediction and follow-up RNAi screening we identify and validate a novel role for the transcription factor Atf3 in the expression of the socially relevant receptor Or47b.
2014-01-22
Complex behavioral manipulation drives mismatch between host and parasite diversity
10.1101/001925
Fabricio Baccaro;João Araújo;Harry Evans;Jorge Souza;Bill Magnusson;David Hughes;
Parasites and hosts are intimately associated such that changes in the diversity of one partner are thought to lead to changes in the other. We investigated this linked diversity hypothesis in a specialized ant-Ophiocordyceps system in three forests across 750 km in Central Amazonia. All species belonging to the fungal genus Ophiocordyceps associated with ants have evolved some degree of behavioral control to increase their own transmission, but the leaf-biting behavior is the most complex form of host manipulation. Such a system requires control of the mandibular muscles and a distinct shift in behavior, from climbing vegetation to walking on leaves to rasping leaf veins in the seconds before death. The need to induce complex behavior may limit host availability and represent a constraint on parasite diversity. The consequence for community structure is that complex behavioral manipulation leads to a mismatch between ant hosts and the diversity of their fungal parasites.
2014-01-21
Particle size distribution and optimal capture of aqueous macrobial eDNA
10.1101/001941
Cameron R. Turner;Matthew A. Barnes;Charles C.Y. Xu;Stuart E. Jones;Christopher L. Jerde;David M. Lodge;
Detecting aquatic macroorganisms with environmental DNA (eDNA) is a new survey method with broad applicability. However, the origin, state, and fate of aqueous macrobial eDNA - which collectively determine how well eDNA can serve as a proxy for directly observing organisms and how eDNA should be captured, purified, and assayed - are poorly understood. The size of aquatic particles provides clues about their origin, state, and fate. We used sequential filtration size fractionation to measure, for the first time, the particle size distribution (PSD) of macrobial eDNA, specifically Common Carp (hereafter referred to as Carp) eDNA. We compared it to the PSDs of total eDNA (from all organisms) and suspended particle matter (SPM). We quantified Carp mitochondrial eDNA using a custom qPCR assay, total eDNA with fluorometry, and SPM with gravimetric analysis. In a lake and a pond, we found Carp eDNA in particles from >180 µm to <0.2 µm, but it was most abundant from 1-10 µm. Total eDNA was most abundant below 0.2 µm and SPM was most abundant above 100 µm. SPM was ≤0.1% total eDNA, and total eDNA was ≤0.0004% Carp eDNA. 0.2 µm filtration maximized Carp eDNA capture (85% ± 6%) while minimizing total (i.e., non-target) eDNA capture (48% ± 3%), but filter clogging limited this pore size to a volume <250 mL. To mitigate this limitation we estimated a continuous PSD model for Carp eDNA and derived an equation for calculating isoclines of pore size and water volume that yield equivalent amounts of Carp eDNA. Our results suggest that aqueous macrobial eDNA predominantly exists inside mitochondria or cells, and that settling plays an important role in its fate. For optimal eDNA capture, we recommend 0.2 µm filtration or a combination of larger pore size and water volume that exceeds the 0.2 µm isocline. In situ filtration of large volumes could maximize detection probability when surveying large habitats for rare organisms. Our method for eDNA particle size analysis enables future research to compare the PSDs of eDNA from other organisms and environments, and to easily apply them for ecological monitoring.
2014-01-21
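The eDNA abstract above derives isoclines of filter pore size and water volume that yield equivalent target-eDNA capture. The sketch below illustrates that idea under an assumed power-law particle size distribution; the exponent, size range, and reference values are hypothetical stand-ins, not the authors' fitted model.

```python
# Illustrative sketch only: the abstract derives isoclines of filter pore size and
# water volume yielding equal target-eDNA capture. Here we assume, as a stand-in,
# a power-law particle size distribution; the exponent and reference values are
# hypothetical, not the authors' fitted parameters.
import numpy as np

ALPHA = 1.5                 # assumed power-law exponent of the eDNA particle size distribution
X_MIN, X_MAX = 0.2, 180.0   # particle size range considered (micrometres)

def capture_fraction(pore_um, alpha=ALPHA, x_min=X_MIN, x_max=X_MAX):
    """Fraction of eDNA retained on a filter of given pore size, under the assumed
    power-law PSD p(x) ~ x**(-alpha) truncated to [x_min, x_max]."""
    def tail(a, b):
        return (b**(1 - alpha) - a**(1 - alpha)) / (1 - alpha)
    return tail(pore_um, x_max) / tail(x_min, x_max)

def isocline_volume(pore_um, ref_pore=0.2, ref_volume_ml=250.0):
    """Water volume at a given pore size that captures the same amount of eDNA as
    ref_volume_ml filtered at ref_pore (the isocline condition V * f(pore) = const)."""
    return ref_volume_ml * capture_fraction(ref_pore) / capture_fraction(pore_um)

for pore in [0.2, 1.0, 10.0]:
    print(f"{pore:>5} um filter -> ~{isocline_volume(pore):.0f} mL for equivalent capture")
```

Under these assumptions, a coarser filter simply has to process proportionally more water to sit on the same capture isocline.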
Single nucleotide polymorphisms shed light on correlations between environmental variables and adaptive genetic divergence among populations in Oncorhynchus keta
10.1101/001974
Xilin Deng;Philippe Henry;
Identifying the genetic and ecological basis of adaptation is of immense importance in evolutionary biology. In our study, we applied a panel of 58 biallelic single nucleotide polymorphisms (SNPs) for the economically and culturally important salmonid Oncorhynchus keta. Samples included 4164 individuals from 43 populations ranging from Coastal Western Alaska to southern British Columbia and northern Washington. Signatures of natural selection were detected at seven outlier loci using two independent approaches: one based on outlier detection and another based on environmental correlations. Two candidate SNP loci, Oke_RFC2-168 and Oke_MARCKS-362, showed evidence of divergent selection and significant environmental correlations, particularly with the number of frost-free days (NFFD). The associations found between environmental variables and outlier loci indicate that these variables could be major driving forces of allele frequency divergence at the candidate loci. NFFD, in particular, may play an important adaptive role in shaping genetic variation in O. keta. Correlations between divergent selection and local environmental variables will help shed light on processes of natural selection and molecular adaptation to local environmental conditions.
2014-01-22
Coalescence 2.0: a multiple branching of recent theoretical developments and their applications
10.1101/001933
Aurelien Tellier;Christophe Lemaire;
Population genetics theory has laid the foundations for genomics analyses, including the recent burst in genome scans for selection and statistical inference of past demographic events in many prokaryote, animal and plant species. Identifying SNPs under natural selection and underpinning species adaptation relies on disentangling the respective contributions of random processes (mutation, drift, migration) from that of selection on nucleotide variability. Most theory and statistical tests have been developed using Kingman's coalescent theory based on the Wright-Fisher population model. However, these theoretical models rely on biological and life-history assumptions which may be violated in many prokaryote, fungal, animal or plant species. Recent theoretical developments of the so-called multiple merger coalescent models are reviewed here (Λ-coalescent, beta-coalescent, Bolthausen-Sznitman, Ξ-coalescent). We describe how these new models take into account various pervasive ecological and biological characteristics, life-history traits or life cycles which were not accounted for in previous theories, such as 1) the skew in offspring production typical of marine species, 2) fast-adapting microparasites (viruses, bacteria and fungi) exhibiting large variation in population sizes during epidemics, 3) the peculiar life cycles of fungi and bacteria alternating sexual and asexual cycles, and 4) the high rates of extinction-recolonization in spatially structured populations. We finally discuss the relevance of multiple merger models for the detection of SNPs under selection in these species and for population genomics of very large sample sizes, and advocate re-examining the conclusions of previous population genetics studies.
2014-01-21
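The review above contrasts Kingman's binary-merger coalescent with multiple-merger (Λ- and Ξ-coalescent) alternatives. As a small, generic illustration of the baseline model only, the sketch below simulates Kingman coalescence waiting times and checks the textbook expectation for total tree height; it does not implement any multiple-merger process.

```python
# Small illustrative sketch: waiting times under the standard Kingman coalescent
# (binary mergers only), the baseline the review contrasts with multiple-merger
# (Lambda/Xi) coalescents. Times are in coalescent units (N_e generations).
import numpy as np

rng = np.random.default_rng(1)

def kingman_tree_times(n_samples):
    """Waiting times between successive (binary) coalescence events for a sample
    of n_samples lineages under Kingman's coalescent."""
    times = []
    for k in range(n_samples, 1, -1):
        rate = k * (k - 1) / 2.0          # each pair of lineages coalesces at rate 1
        times.append(rng.exponential(1.0 / rate))
    return np.array(times)

# Expected total tree height for n lineages is 2*(1 - 1/n); check by simulation.
n = 10
heights = np.array([kingman_tree_times(n).sum() for _ in range(20000)])
print(f"simulated mean height: {heights.mean():.3f}  (theory: {2 * (1 - 1/n):.3f})")
```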
Genome-wide patterns of copy number variation in the diversified chicken genomes using next-generation sequencing
10.1101/002006
Guoqiang Yi;Lujiang Qu;Jianfeng Liu;Yiyuan Yan;Guiyun Xu;Ning Yang;
Copy number variation (CNV) is important and widespread in the genome, and is a major cause of disease and phenotypic diversity. Herein, we perform genome-wide CNV analysis in 12 diversified chicken genomes based on whole genome sequencing. A total of 9,025 CNV regions (CNVRs) covering 100.1 Mb and representing 9.6% of the chicken genome are identified, ranging in size from 1.1 to 268.8 kb with an average of 11.1 kb. Sequencing-based predictions are confirmed at a high validation rate by two independent approaches, array comparative genomic hybridization (aCGH) and quantitative PCR (qPCR). The Pearson's correlation values between sequencing and aCGH results range from 0.395 to 0.740, and qPCR experiments reveal a positive validation rate of 91.71% and a false negative rate of 22.43%. In total, 2,188 predicted CNVRs (24.2%) span 2,182 RefSeq genes (36.8%) associated with specific biological functions. Besides two previously accepted copy number variable genes, EDN3 and PRLR, we also find some promising genes potentially underlying phenotypic variation. FZD6 and LIMS1, two genes related to disease susceptibility and resistance, are covered by CNVRs. Highly duplicated SOCS2 may lead to higher bone mineral density. Entire or partial duplication of some genes like POPDC3 and LBFABP may have great economic importance in poultry breeding. Our results based on extensive genetic diversity provide the first individualized chicken CNV map and genome-wide gene copy number estimates and warrant future CNV association studies for important traits of chickens.
2014-01-23
Illumina TruSeq synthetic long-reads empower de novo assembly and resolve complex, highly repetitive transposable elements
10.1101/001834
Rajiv C McCoy;Ryan W Taylor;Timothy A Blauwkamp;Joanna L Kelley;Michael Kertesz;Dmitry Pushkarev;Dmitri A Petrov;Anna-Sophie Fiston-Lavier;
High-throughput DNA sequencing technologies have revolutionized genomic analysis, including the de novo assembly of whole genomes. Nevertheless, assembly of complex genomes remains challenging, in part due to the presence of dispersed repeats which introduce ambiguity during genome reconstruction. Transposable elements (TEs) can be particularly problematic, especially for TE families exhibiting high sequence identity, high copy number, or present in complex genomic arrangements. While TEs strongly affect genome function and evolution, most current de novo assembly approaches cannot resolve long, identical, and abundant families of TEs. Here, we applied a novel Illumina technology called TruSeq synthetic long-reads, which are generated through highly parallel library preparation and local assembly of short read data and achieve lengths of 1.5-18.5 Kbp with an extremely low error rate (~0.03% per base). To test the utility of this technology, we sequenced and assembled the genome of the model organism Drosophila melanogaster (reference genome strain y;cn,bw,sp), achieving an N50 contig size of 69.7 Kbp and covering 96.9% of the euchromatic chromosome arms of the current reference genome. TruSeq synthetic long-read technology enables placement of individual TE copies in their proper genomic locations as well as accurate reconstruction of TE sequences. We entirely recovered and accurately placed 4,229 (77.8%) of the 5,434 annotated transposable elements with perfect identity to the current reference genome. As TEs are ubiquitous features of genomes of many species, TruSeq synthetic long-reads, and likely other methods that generate long reads, offer a powerful approach to improve de novo assemblies of whole genomes.
2014-01-21
Illumina TruSeq synthetic long-reads empower de novo assembly and resolve complex, highly repetitive transposable elements
10.1101/001834
Rajiv C McCoy;Ryan W Taylor;Timothy A Blauwkamp;Joanna L Kelley;Michael Kertesz;Dmitry Pushkarev;Dmitri A Petrov;Anna-Sophie Fiston-Lavier;
High-throughput DNA sequencing technologies have revolutionized genomic analysis, including the de novo assembly of whole genomes. Nevertheless, assembly of complex genomes remains challenging, in part due to the presence of dispersed repeats which introduce ambiguity during genome reconstruction. Transposable elements (TEs) can be particularly problematic, especially for TE families exhibiting high sequence identity, high copy number, or present in complex genomic arrangements. While TEs strongly affect genome function and evolution, most current de novo assembly approaches cannot resolve long, identical, and abundant families of TEs. Here, we applied a novel Illumina technology called TruSeq synthetic long-reads, which are generated through highly parallel library preparation and local assembly of short read data and achieve lengths of 1.5-18.5 Kbp with an extremely low error rate (~0.03% per base). To test the utility of this technology, we sequenced and assembled the genome of the model organism Drosophila melanogaster (reference genome strain y;cn,bw,sp), achieving an N50 contig size of 69.7 Kbp and covering 96.9% of the euchromatic chromosome arms of the current reference genome. TruSeq synthetic long-read technology enables placement of individual TE copies in their proper genomic locations as well as accurate reconstruction of TE sequences. We entirely recovered and accurately placed 4,229 (77.8%) of the 5,434 annotated transposable elements with perfect identity to the current reference genome. As TEs are ubiquitous features of genomes of many species, TruSeq synthetic long-reads, and likely other methods that generate long reads, offer a powerful approach to improve de novo assemblies of whole genomes.
2014-04-29
Illumina TruSeq synthetic long-reads empower de novo assembly and resolve complex, highly repetitive transposable elements
10.1101/001834
Rajiv C McCoy;Ryan W Taylor;Timothy A Blauwkamp;Joanna L Kelley;Michael Kertesz;Dmitry Pushkarev;Dmitri A Petrov;Anna-Sophie Fiston-Lavier;
High-throughput DNA sequencing technologies have revolutionized genomic analysis, including the de novo assembly of whole genomes. Nevertheless, assembly of complex genomes remains challenging, in part due to the presence of dispersed repeats which introduce ambiguity during genome reconstruction. Transposable elements (TEs) can be particularly problematic, especially for TE families exhibiting high sequence identity, high copy number, or present in complex genomic arrangements. While TEs strongly affect genome function and evolution, most current de novo assembly approaches cannot resolve long, identical, and abundant families of TEs. Here, we applied a novel Illumina technology called TruSeq synthetic long-reads, which are generated through highly parallel library preparation and local assembly of short read data and achieve lengths of 1.5-18.5 Kbp with an extremely low error rate (~0.03% per base). To test the utility of this technology, we sequenced and assembled the genome of the model organism Drosophila melanogaster (reference genome strain y;cn,bw,sp), achieving an N50 contig size of 69.7 Kbp and covering 96.9% of the euchromatic chromosome arms of the current reference genome. TruSeq synthetic long-read technology enables placement of individual TE copies in their proper genomic locations as well as accurate reconstruction of TE sequences. We entirely recovered and accurately placed 4,229 (77.8%) of the 5,434 annotated transposable elements with perfect identity to the current reference genome. As TEs are ubiquitous features of genomes of many species, TruSeq synthetic long-reads, and likely other methods that generate long reads, offer a powerful approach to improve de novo assemblies of whole genomes.
2014-04-30
Illumina TruSeq synthetic long-reads empower de novo assembly and resolve complex, highly repetitive transposable elements
10.1101/001834
Rajiv C McCoy;Ryan W Taylor;Timothy A Blauwkamp;Joanna L Kelley;Michael Kertesz;Dmitry Pushkarev;Dmitri A Petrov;Anna-Sophie Fiston-Lavier;
High-throughput DNA sequencing technologies have revolutionized genomic analysis, including the de novo assembly of whole genomes. Nevertheless, assembly of complex genomes remains challenging, in part due to the presence of dispersed repeats which introduce ambiguity during genome reconstruction. Transposable elements (TEs) can be particularly problematic, especially for TE families exhibiting high sequence identity, high copy number, or present in complex genomic arrangements. While TEs strongly affect genome function and evolution, most current de novo assembly approaches cannot resolve long, identical, and abundant families of TEs. Here, we applied a novel Illumina technology called TruSeq synthetic long-reads, which are generated through highly parallel library preparation and local assembly of short read data and achieve lengths of 1.5-18.5 Kbp with an extremely low error rate (~0.03% per base). To test the utility of this technology, we sequenced and assembled the genome of the model organism Drosophila melanogaster (reference genome strain y;cn,bw,sp), achieving an N50 contig size of 69.7 Kbp and covering 96.9% of the euchromatic chromosome arms of the current reference genome. TruSeq synthetic long-read technology enables placement of individual TE copies in their proper genomic locations as well as accurate reconstruction of TE sequences. We entirely recovered and accurately placed 4,229 (77.8%) of the 5,434 annotated transposable elements with perfect identity to the current reference genome. As TEs are ubiquitous features of genomes of many species, TruSeq synthetic long-reads, and likely other methods that generate long reads, offer a powerful approach to improve de novo assemblies of whole genomes.
2014-06-17
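The assembly abstracts above quote an N50 contig size; the standard N50 statistic is straightforward to compute, and the short sketch below does so on made-up contig lengths (not the D. melanogaster assembly itself).

```python
# Brief sketch of the standard N50 statistic quoted in the abstract above:
# the length L such that contigs of length >= L contain at least half of the
# total assembled bases. The contig lengths below are made-up examples.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length
    return 0

print(n50([100, 80, 60, 40, 20]))  # -> 80, since 100 + 80 = 180 >= 150 (half of 300)
```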
Modelling and analysis of bacterial tracks suggest an active reorientation mechanism in Rhodobacter sphaeroides
10.1101/001917
Gabriel Rosser;Ruth E. Baker;Judith P. Armitage;Alexander George Fletcher;
Most free-swimming bacteria move in approximately straight lines, interspersed with random reorientation phases. A key open question concerns the varying mechanisms by which reorientation occurs. We combine mathematical modelling with analysis of a large tracking dataset to study the poorly understood reorientation mechanism in the monoflagellate species Rhodobacter sphaeroides. The flagellum of this species rotates counterclockwise to propel the bacterium, periodically ceasing rotation to enable reorientation. When rotation restarts, the cell body usually points in a new direction. It has been assumed that the new direction is simply the result of Brownian rotation. We consider three variants of a self-propelled particle model of bacterial motility. The first considers rotational diffusion only, corresponding to a non-chemotactic mutant strain. A further two models also include stochastic reorientations, describing run-and-tumble motility. We derive expressions for key summary statistics and simulate each model using a stochastic computational algorithm. We also discuss the effect of cell geometry on rotational diffusion. Working with a previously published tracking dataset, we compare predictions of the models with data on individual stopping events in R. sphaeroides. This provides strong evidence that this species undergoes some form of active reorientation rather than simple reorientation by Brownian rotation.
2014-01-21
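The abstract above describes self-propelled particle models combining rotational diffusion with stochastic reorientation. The sketch below is a generic two-dimensional run-and-tumble simulation in that spirit; the speed, rotational diffusion coefficient, and reorientation rate are arbitrary illustrative values, not parameters fitted to R. sphaeroides tracks.

```python
# Illustrative 2D self-propelled particle sketch combining rotational diffusion with
# Poissonian reorientation events (a generic run-and-tumble model in the spirit of
# those described above; all parameter values are arbitrary).
import numpy as np

rng = np.random.default_rng(0)

def simulate_track(T=60.0, dt=0.01, speed=20.0, D_rot=0.1, tumble_rate=0.3):
    """Simulate a single cell track; returns (positions, headings)."""
    n = int(T / dt)
    pos = np.zeros((n, 2))
    theta = rng.uniform(0, 2 * np.pi)
    thetas = np.empty(n)
    thetas[0] = theta
    for i in range(1, n):
        # run phase: ballistic motion plus rotational (Brownian) diffusion of the heading
        theta += np.sqrt(2 * D_rot * dt) * rng.normal()
        # stochastic reorientation: with probability tumble_rate*dt, draw a new heading
        if rng.random() < tumble_rate * dt:
            theta = rng.uniform(0, 2 * np.pi)
        pos[i] = pos[i - 1] + speed * dt * np.array([np.cos(theta), np.sin(theta)])
        thetas[i] = theta
    return pos, thetas

pos, _ = simulate_track()
print("net displacement (um):", np.linalg.norm(pos[-1] - pos[0]))
```

Setting tumble_rate to zero recovers the rotational-diffusion-only variant corresponding to a non-chemotactic strain.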
A Powerful Approach for Identification of Differentially Transcribed mRNA Isoforms
10.1101/002097
Yuande Tan;
Next generation sequencing is being increasingly used for transcriptome-wide analysis of differential gene expression. The primary goal in profiling expression is to identify genes or RNA isoforms differentially expressed between specific conditions. Yet the count data produced by next generation sequencing are essentially different from the continuous-valued microarray data, so the statistical methods developed over the last decades are not directly applicable. For this reason, a variety of new statistical methods based on count data of transcript reads has been developed. However, because transcriptomic count data currently come from only a few replicate libraries, they suffer from high technical noise and small-sample bias, and the performance of these new methods is not yet desirable. We here developed a new statistical method specifically applicable to small-sample count data, called the mBeta t-test, for identifying differentially expressed genes or isoforms on the basis of the Beta t-test. The results obtained from simulated and real data showed that the mBeta t-test method significantly outperformed the existing statistical methods in all given scenarios. Findings of our method were validated by qRT-PCR experiments. The mBeta t-test method significantly reduced false discoveries among differentially expressed genes or isoforms, so that it had high work efficiency in all given scenarios. In addition, the mBeta t-test method showed high stability in the performance of statistical analysis and in the estimation of FDR. These results strongly suggest that our mBeta t-test method would offer a credible and reliable result of statistical analysis in practice.
2014-01-26
Bayesian Energy Landscape Tilting: Towards Concordant Models of Molecular Ensembles
10.1101/002048
Kyle Beauchamp;Vijay Pande;Rhiju Das;
Predicting biological structure has remained challenging for systems such as disordered proteins that take on myriad conformations. Hybrid simulation/experiment strategies have been undermined by difficulties in evaluating errors from computational model inaccuracies and data uncertainties. Building on recent proposals from maximum entropy theory and nonequilibrium thermodynamics, we address these issues through a Bayesian Energy Landscape Tilting (BELT) scheme for computing Bayesian "hyperensembles" over conformational ensembles. BELT uses Markov chain Monte Carlo to directly sample maximum-entropy conformational ensembles consistent with a set of input experimental observables. To test this framework, we apply BELT to model trialanine, starting from disagreeing simulations with the force fields ff96, ff99, ff99sbnmr-ildn, CHARMM27, and OPLS-AA. BELT incorporation of limited chemical shift and 3J measurements gives convergent values of the peptide's α, β, and PPII conformational populations in all cases. As a test of predictive power, all five BELT hyperensembles recover set-aside measurements not used in the fitting and report accurate errors, even when starting from highly inaccurate simulations. BELT's principled framework thus enables practical predictions for complex biomolecular systems from discordant simulations and sparse data.
2014-01-24
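BELT samples maximum-entropy ensembles consistent with experimental observables. The toy sketch below shows only the underlying exponential-tilting idea for a single observable on synthetic data: reweight conformations so the ensemble average matches a target value. It is not the Bayesian hyperensemble MCMC described in the abstract, and all data are made up.

```python
# Minimal sketch of the exponential-tilting idea behind maximum-entropy ensemble
# reweighting: given simulated per-conformation values f_i of an observable and an
# "experimental" target, reweight with w_i ~ exp(lambda * f_i) so the ensemble average
# matches the measurement. Single-observable, non-Bayesian toy on synthetic data.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(42)
f_sim = rng.normal(loc=4.0, scale=1.0, size=5000)   # simulated observable per conformation
f_exp = 4.5                                          # hypothetical experimental target value

def tilted_mean(lam):
    w = np.exp(lam * (f_sim - f_sim.max()))          # subtract max for numerical stability
    w /= w.sum()
    return np.sum(w * f_sim)

# Solve for the tilting parameter that reproduces the experimental average.
lam_star = brentq(lambda lam: tilted_mean(lam) - f_exp, -10.0, 10.0)
weights = np.exp(lam_star * (f_sim - f_sim.max()))
weights /= weights.sum()
print(f"lambda = {lam_star:.3f}, reweighted mean = {np.sum(weights * f_sim):.3f}")
```

The exponential form is the minimally perturbing (maximum-entropy) reweighting that satisfies the constraint; BELT additionally treats the constraint targets and weights probabilistically.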
Holsteins Favor Heifers, Not Bulls: Biased Milk Production Programmed during Pregnancy as a Function of Fetal Sex
10.1101/002063
Katie Hinde;Abigail J Carpenter;John C Clay;Barry J Bradford;
Mammalian females pay high energetic costs for reproduction, the greatest of which is imposed by lactation. The synthesis of milk requires, in part, the mobilization of bodily reserves to nourish developing young. Numerous hypotheses have been advanced to predict how mothers will differentially invest in sons and daughters; however, few studies have addressed sex-biased milk synthesis. Here we leverage the dairy cow model to investigate such phenomena. Using 2.39 million lactation records from 1.49 million dairy cows, we demonstrate that the sex of the fetus influences the capacity of the mammary gland to synthesize milk during lactation. Cows favor daughters, producing significantly more milk for daughters than for sons across lactation. Using a sub-sample of this dataset (N = 113,750 subjects) we further demonstrate that the effects of fetal sex interact dynamically across parities, whereby the sex of the fetus being gestated can enhance or diminish the production of milk during an established lactation. Moreover, the sex of the fetus gestated on the first parity has persistent consequences for milk synthesis on the subsequent parity. Specifically, gestation of a daughter on the first parity increases milk production by ~445 kg over the first two lactations. Our results identify a dramatic and sustained programming of mammary function by offspring in utero. Nutritional and endocrine conditions in utero are known to have pronounced and long-term effects on progeny, but the ways in which the progeny has sustained physiological effects on the dam have received little attention to date.
2014-01-24
A microRNA profile in Fmr1 knockout mice reveals microRNA expression alterations with possible roles in fragile X syndrome
10.1101/002071
Ting Liu;Rui-Ping Wan;Ling-Jia Tang;Shu-Jing Liu;Hai-Jun Li;Qi-Hua Zhao;Wei-Ping Liao;Xiao-Fang Sun;Yong-Hong Yi;Yue-Sheng Long;
Fragile X syndrome (FXS), a common form of inherited mental retardation, is caused by a loss of expression of the fragile X mental retardation protein (FMRP). FMRP is involved in brain functions by interacting with mRNAs and microRNAs (miRNAs) that selectively control gene expression at the translational level. However, little is known about the role of FMRP in regulating miRNA expression. Here, we found a development-dependent dynamic expression of Fmr1 mRNA (encoding FMRP) in mouse hippocampus with a small peak at postnatal day 7 (P7). MiRNA microarray analysis showed that the levels of 38 miRNAs were significantly increased by about 15- to 250-fold, while the levels of 26 miRNAs were significantly decreased by only about 2- to 4-fold, in the hippocampus of P7 Fmr1 KO mice. Q-PCR assays showed that 9 of the most increased miRNAs (>100-fold in microarrays) were increased about 40- to 70-fold and their pre-miRNAs were increased about 5- to 10-fold, but no significant difference in their pri-miRNA levels was observed, suggesting a role of FMRP in regulating miRNA processing from pri-miRNA to pre-miRNA. We further demonstrated that a set of protein-coding mRNAs potentially targeted by the 9 miRNAs was down-regulated in the hippocampus of Fmr1 KO mice. Finally, luciferase assays demonstrated that miR-34b, miR-340, and miR-148a could down-regulate the reporter gene expression by interacting with the Met 3' UTR. Taken together, these findings suggest that the miRNA expression alterations resulting from the absence of FMRP might contribute to the molecular pathology of FXS.
2014-01-26
VgeneRepertoire.org identifies and stores variable genes of immunoglobulins and T-cell receptors from the genomes of jawed vertebrates
10.1101/002139
David N Olivieri;Francisco Gambón-Deza;
The VgeneRepertoire.org platform (http://vgenerepertoire.org) is a new public database repository for variable (V) gene sequences that encode immunoglobulin and T-cell receptor molecules. It identifies the nucleotide and amino acid sequences of more than 20,000 genes, providing their exon location in the contig, scaffold, or chromosome region, as well as locus information for more than 100 jawed vertebrate taxa whose genomes have been sequenced. This web repository provides support to immunologists interested in these molecules and aids in comparative phylogenetic studies.
2014-01-27
The determinants of alpine butterfly richness and composition vary according to the ecological traits of species
10.1101/002147
Vincent Sonnay;Loïc Pellissier;Jean-Nicolas Pradervand;Luigi Maiorano;Anne Dubuis;Mary S. Wisz;Antoine Guisan;
Predicting spatial patterns of species diversity and composition using suitable environmental predictors is an essential element in conservation planning. Although species have distinct relationships to environmental conditions, some similarities may exist among species that share functional characteristics or traits. We investigated the relationship between species richness, composition and the abiotic and biotic environment in different groups of butterflies that share ecological characteristics. We inventoried butterfly species richness in 192 sites and classified all inventoried species into three trait categories: the caterpillar's diet breadth, the habitat requirements and the dispersal ability of the adults. We studied how the environment influences butterfly species richness and composition within each trait category. Across four modelling approaches, the relative influence of environmental variables on butterfly species richness differed for specialists and generalists. Climatic variables were the main determinants of butterfly species richness and composition for generalists, whereas habitat diversity and plant richness were also important for specialists. Prediction accuracy was lower for specialists than for generalists. Although climate variables represent the strongest drivers affecting butterfly species richness and composition for generalists, plant richness and habitat diversity are at least as important for specialist butterfly species. As specialist butterflies are among those species particularly threatened by global changes, devising accurate predictors to model specialist species richness is extremely important. However, our results indicate that this task will be challenging because more complex predictors are required.
2014-01-27
Genome-wide DNA methylome analysis reveals novel epigenetically dysregulated non-coding RNAs in human breast cancer
10.1101/002204
Yongsheng Li;Yunpeng Zhang;Shengli Li;Jianping Lu;Juan Chen;Zheng Zhao;Jing Bai;Juan Xu;Xia Li;
The development of human breast cancer is driven by changes in the genetic and epigenetic landscape of the cell. Despite growing appreciation of the importance of epigenetics in breast cancers, our knowledge of epigenetic alterations of non-coding RNAs (ncRNAs) in breast cancers remains limited. Here, we explored the epigenetic patterns of ncRNAs in breast cancers via a sequencing-based comparative methylome analysis, mainly focusing on the two most common ncRNA biotypes, long non-coding RNAs (lncRNAs) and miRNAs. Besides global hypomethylation and extensive CpG island (CGI) hypermethylation, we observed widely aberrant methylation in the promoters of ncRNAs, which was higher than that of protein-coding genes. Specifically, intergenic ncRNAs contributed a large fraction of the aberrantly methylated ncRNA promoters. Moreover, we summarized five patterns of aberrant ncRNA promoter methylation in the context of genomic CGIs, where aberrant methylation occurred not only on the CGIs but also on flanking regions and CGI-sparse promoters. By integrating transcriptional datasets, we found that the ncRNA promoter methylation events were associated with transcriptional changes. Furthermore, a panel of ncRNAs was identified as biomarkers able to discriminate between disease phenotypes (AUCs > 0.90). Finally, the potential functions of aberrantly methylated ncRNAs were predicted based on similar patterns, adjacency and/or target genes, highlighting that ncRNAs and coding genes coordinately mediate pathway dysregulation in the development and progression of breast cancers. This study presents the aberrant methylation patterns of ncRNAs, which will be a highly valuable resource for investigations aimed at understanding the epigenetic regulation of breast cancers. [Supplemental material is available online at www.genome.org.]
2014-01-28
Cytoplasmic nanojunctions between lysosomes and sarcoplasmic reticulum are required for specific calcium signaling
10.1101/002196
Nicola Fameli;Oluseye A. Ogunbayo;Cornelis van Breemen;A. Mark Evans;
Herein we demonstrate how nanojunctions between lysosomes and sarcoplasmic reticulum (L-SR junctions) serve to couple lysosomal activation to regenerative, ryanodine receptor-mediated cellular Ca2+ waves. In pulmonary artery smooth muscle cells (PASMCs) it has been proposed that nicotinic acid adenine dinucleotide phosphate (NAADP) triggers increases in cytoplasmic Ca2+ via L-SR junctions, in a manner that requires initial Ca2+ release from lysosomes and subsequent Ca2+-induced Ca2+ release (CICR) via ryanodine receptor (RyR) subtype 3 on the SR membrane proximal to lysosomes. L-SR junction membrane separation has been estimated to be <400 nm and thus beyond the resolution of light microscopy, which has restricted detailed investigations of the junctional coupling process. The present study utilizes standard and tomographic transmission electron microscopy to provide a thorough ultrastructural characterization of the L-SR junctions in PASMCs. We show that L-SR nanojunctions are prominent features within these cells and estimate that the junctional membrane separation and extension are about 15 nm and 300 nm, respectively. Furthermore, we develop a quantitative model of the L-SR junction using these measurements and prior kinetic and Ca2+ signal information as input data. Simulations of NAADP-dependent junctional Ca2+ transients demonstrate that the magnitude of these signals can breach the threshold for CICR via RyR3. By correlation analysis of live cell Ca2+ signals and simulated Ca2+ transients within L-SR junctions, we estimate that "trigger zones" comprising 60-100 junctions are required to confer a signal of similar magnitude. This is compatible with the 130 lysosomes/cell estimated from our ultrastructural observations. Most importantly, our model shows that increasing the L-SR junctional width above 50 nm lowers the magnitude of junctional [Ca2+] such that there is a failure to breach the threshold for CICR via RyR3. L-SR junctions are therefore a pre-requisite for efficient Ca2+ signal coupling and may contribute to cellular function in health and disease.
2014-01-28
Transcriptome pyrosequencing of abnormal phenotypes in Trypanosoma cruzi epimastigotes after ectopic expression of a small zinc finger protein
10.1101/002170
Gaston Westergaard;Marc Laverriere;Santiago Revale;Marina Reinert;Javier De Gaudenzi;Adriana Jager;Martin P Vazquez;
The TcZFPs are a family of small zinc finger proteins harboring WW domains or proline-rich motifs. In Trypanosoma brucei, ZFPs are involved in stage-specific differentiation. TcZFPs interact with each other using the WW domain (ZFP2 and ZFP3) and the proline-rich motif (ZFP1). The tcZFP1b member is exclusive to Trypanosoma cruzi and is only expressed in the trypomastigote stage. We used a tetracycline-inducible vector to ectopically express tcZFP1b in the epimastigote stage. Upon induction of tcZFP1b, the parasites stopped dividing completely after five days. Visual inspection showed abnormal, distorted-morphology (monster) cells with multiple flagella and increased DNA content. We were interested in investigating the global transcriptional changes that occurred during the generation of this abnormal phenotype. Thus, we performed RNA-seq transcriptome profiling with a 454 pyrosequencer to analyze the global changes after ectopic expression of tcZFP1b. The total mRNAs sequenced from induced and non-induced control epimastigotes showed, after filtering the data, a set of 70 genes with at least 3-fold upregulation, while 35 genes showed at least 3-fold downregulation. Interestingly, several trans-sialidase-like genes and pseudogenes were upregulated along with several genes in the categories of amino acid catabolism and carbohydrate metabolism. On the other hand, hypothetical proteins, fatty acid biosynthesis and mitochondrial functions dominated the group of downregulated genes. Our data showed that several mRNAs sharing related functions and pathways changed their levels in a concerted pattern resembling post-transcriptional regulons. We also found two different motifs in the 3'UTRs of the majority of mRNAs, one for upregulated and the other for downregulated genes.
2014-01-28
Significantly distinct branches of hierarchical trees: A framework for statistical analysis and applications to biological data
10.1101/002188
Guoli Sun;Alexander Krasnitz;
Background: One of the most common goals of hierarchical clustering is finding those branches of a tree that form quantifiably distinct data subtypes. Achieving this goal in a statistically meaningful way requires (a) a measure of distinctness of a branch and (b) a test to determine the significance of the observed measure, applicable to all branches and across multiple scales of dissimilarity. Results: We formulate a method termed Tree Branches Evaluated Statistically for Tightness (TBEST) for identifying significantly distinct tree branches in hierarchical clusters. For each branch of the tree a measure of distinctness, or tightness, is defined as a rational function of heights, both of the branch and of its parent. A statistical procedure is then developed to determine the significance of the observed values of tightness. We test TBEST as a tool for tree-based data partitioning by applying it to five benchmark datasets, one of them synthetic and the other four each from a different area of biology. For each dataset there is a well-defined partition of the data into classes. In all test cases TBEST performs on par with or better than the existing techniques. Conclusions: Based on our benchmark analysis, TBEST is a tool of choice for detection of significantly distinct branches in hierarchical trees grown from biological data. An R language implementation of the method is available from the Comprehensive R Archive Network: cran.r-project.org/web/packages/TBEST/index.html.
2014-01-29
Significantly distinct branches of hierarchical trees: A framework for statistical analysis and applications to biological data
10.1101/002188
Guoli Sun;Alexander Krasnitz;
Background: One of the most common goals of hierarchical clustering is finding those branches of a tree that form quantifiably distinct data subtypes. Achieving this goal in a statistically meaningful way requires (a) a measure of distinctness of a branch and (b) a test to determine the significance of the observed measure, applicable to all branches and across multiple scales of dissimilarity. Results: We formulate a method termed Tree Branches Evaluated Statistically for Tightness (TBEST) for identifying significantly distinct tree branches in hierarchical clusters. For each branch of the tree a measure of distinctness, or tightness, is defined as a rational function of heights, both of the branch and of its parent. A statistical procedure is then developed to determine the significance of the observed values of tightness. We test TBEST as a tool for tree-based data partitioning by applying it to five benchmark datasets, one of them synthetic and the other four each from a different area of biology. For each dataset there is a well-defined partition of the data into classes. In all test cases TBEST performs on par with or better than the existing techniques. Conclusions: Based on our benchmark analysis, TBEST is a tool of choice for detection of significantly distinct branches in hierarchical trees grown from biological data. An R language implementation of the method is available from the Comprehensive R Archive Network: cran.r-project.org/web/packages/TBEST/index.html.
2014-02-10
Significantly distinct branches of hierarchical trees: A framework for statistical analysis and applications to biological data
10.1101/002188
Guoli Sun;Alexander Krasnitz;
Background: One of the most common goals of hierarchical clustering is finding those branches of a tree that form quantifiably distinct data subtypes. Achieving this goal in a statistically meaningful way requires (a) a measure of distinctness of a branch and (b) a test to determine the significance of the observed measure, applicable to all branches and across multiple scales of dissimilarity. Results: We formulate a method termed Tree Branches Evaluated Statistically for Tightness (TBEST) for identifying significantly distinct tree branches in hierarchical clusters. For each branch of the tree a measure of distinctness, or tightness, is defined as a rational function of heights, both of the branch and of its parent. A statistical procedure is then developed to determine the significance of the observed values of tightness. We test TBEST as a tool for tree-based data partitioning by applying it to five benchmark datasets, one of them synthetic and the other four each from a different area of biology. For each dataset there is a well-defined partition of the data into classes. In all test cases TBEST performs on par with or better than the existing techniques. Conclusions: Based on our benchmark analysis, TBEST is a tool of choice for detection of significantly distinct branches in hierarchical trees grown from biological data. An R language implementation of the method is available from the Comprehensive R Archive Network: cran.r-project.org/web/packages/TBEST/index.html.
2014-02-12
Significantly distinct branches of hierarchical trees: A framework for statistical analysis and applications to biological data
10.1101/002188
Guoli Sun;Alexander Krasnitz;
Background: One of the most common goals of hierarchical clustering is finding those branches of a tree that form quantifiably distinct data subtypes. Achieving this goal in a statistically meaningful way requires (a) a measure of distinctness of a branch and (b) a test to determine the significance of the observed measure, applicable to all branches and across multiple scales of dissimilarity. Results: We formulate a method termed Tree Branches Evaluated Statistically for Tightness (TBEST) for identifying significantly distinct tree branches in hierarchical clusters. For each branch of the tree a measure of distinctness, or tightness, is defined as a rational function of heights, both of the branch and of its parent. A statistical procedure is then developed to determine the significance of the observed values of tightness. We test TBEST as a tool for tree-based data partitioning by applying it to five benchmark datasets, one of them synthetic and the other four each from a different area of biology. For each dataset there is a well-defined partition of the data into classes. In all test cases TBEST performs on par with or better than the existing techniques. Conclusions: Based on our benchmark analysis, TBEST is a tool of choice for detection of significantly distinct branches in hierarchical trees grown from biological data. An R language implementation of the method is available from the Comprehensive R Archive Network: cran.r-project.org/web/packages/TBEST/index.html.
2014-06-05
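TBEST scores each branch of a hierarchical tree by a tightness measure built from the branch's own merge height and its parent's. The sketch below computes one simple placeholder of that kind (1 - h_branch / h_parent) on a synthetic two-cluster dataset using SciPy; the exact TBEST definition and its significance test are not reproduced here.

```python
# Illustrative sketch of a branch "tightness" score on a hierarchical tree: for each
# internal node, compare its merge height with its parent's merge height. The rational
# function used here (1 - h_branch / h_parent) is a placeholder, not necessarily the
# TBEST definition, and no significance test is performed.
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters in 2D.
X = np.vstack([rng.normal(0, 0.3, size=(20, 2)), rng.normal(5, 0.3, size=(20, 2))])
Z = linkage(X, method="average")
n = X.shape[0]

# parent[j] = index of the linkage row that merges node j (leaves are 0..n-1,
# the internal node created by row i is n + i)
parent = {}
for row, (a, b, _height, _count) in enumerate(Z):
    parent[int(a)] = row
    parent[int(b)] = row

scores = []
for i in range(len(Z) - 1):            # skip the root, which has no parent
    node = n + i
    h_branch = Z[i, 2]
    h_parent = Z[parent[node], 2]
    scores.append((node, 1.0 - h_branch / h_parent))

# Branches whose merge height is far below their parent's stand out as candidate subtypes.
print("most 'tight' branch:", max(scores, key=lambda t: t[1]))
```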
The disruption of trace element homeostasis due to aneuploidy as a unifying theme in the etiology of cancer
10.1101/002105
Johannes Engelken;Matthias Altmeyer;Renty B Franklin;
ABSTRACT FOR SCIENTISTS: While decades of cancer research have firmly established multiple hallmarks of cancer [1,2], cancer's genomic landscape remains to be fully understood. In particular, the phenomenon of aneuploidy (gains and losses of large genomic regions, i.e. whole chromosomes or chromosome arms) and why most cancer cells are aneuploid remains enigmatic [3]. Another frequent observation in many different types of cancer is the deregulation of the homeostasis of the trace elements copper, zinc and iron. Concentrations of copper are markedly increased in cancer tissue and the blood plasma of cancer patients, while zinc levels are typically decreased [4-9]. Here we discuss the hypothesis that the disruption of trace element homeostasis and the phenomenon of aneuploidy might be linked. Our tentative analysis of genomic data from diverse tumor types, mainly from The Cancer Genome Atlas (TCGA) project, suggests that gains and losses of metal transporter genes occur frequently and correlate well with transporter gene expression levels. Thereby they may confer a cancer-driving selective growth advantage at early and possibly also later stages during cancer development. This idea is consistent with recent observations in yeast, which suggest that through chromosomal gains and losses cells can adapt quickly to new carbon sources [10], nutrient starvation [11] as well as to copper toxicity [12]. In human cancer development, candidate driving events may include, among others, the gains of zinc transporter genes SLC39A1 and SLC39A4 on chromosome arms 1q and 8q, respectively, and the losses of zinc transporter genes SLC30A5, SLC39A14 and SLC39A6 on 5q, 8p and 18q. The recurrent gain of 3q might be associated with the iron transporter gene TFRC and the loss of 13q with the copper transporter gene ATP7B. By altering cellular trace element homeostasis, such events might contribute to the initiation of malignant transformation. Intriguingly, attenuation or overexpression of several of these metal transporter genes has been shown to lead to malignant cellular behavior in vitro. Consistently, zinc has been shown to affect a number of the observed hallmark characteristics of cancer, including DNA repair, inflammation and apoptosis, e.g. through its effects on NF-kappa B signaling. We term this model the aneuploidy metal transporter cancer (AMTC) hypothesis and find it compatible with the cancer-promoting role of point mutations and focal copy number alterations in established tumor suppressor genes and oncogenes (e.g. MYC, MYCN, TP53, PIK3CA, BRCA1, ERBB2). We suggest a number of approaches for how this hypothesis could be tested experimentally and briefly touch on possible implications for cancer etiology, metastasis, drug resistance and therapy. ABSTRACT FOR KIDS: We humans are made up of many very small building blocks, which are called cells. These cells can be seen with a microscope and they know how to grow and what to do from the information on the DNA of their chromosomes. Sometimes, if this information is messed up, a cell can go crazy and start to grow without control, even in places of the body where it should not. This process is called cancer, a terrible disease that makes people very sick. Scientists do not understand exactly what causes cells to go crazy, so it would be good to find out. Many years ago, scientists observed that chromosomes in these cancer cells are missing or doubled but could not find an explanation for it. More recently, scientists have detected that metals that are precious to our bodies, which are not gold and silver, but zinc, iron and copper, are not found in the right amounts in these crazy cancer cells. There seems to be not enough zinc and iron but too much copper, and again, scientists do not really understand why. So there are many unanswered questions about these crazy cancer cells and in this article, we describe a pretty simple idea on how chromosome numbers and the metals might be connected: we think that the missing or doubled chromosomes produce less or more transporters of zinc, iron and copper. As a result, cancer cells end up with little zinc and too much copper, and these changes contribute to their out-of-control growth. If this idea were true, many people would be excited about it. But first this idea needs to be investigated more deeply in the laboratory, on the computer and in the hospitals. Therefore, we put it out on the internet so that other people can also think about and work on our idea. Now there are plenty of ways to do exciting experiments and with the results, we will hopefully understand much better why cancer cells go crazy and how doctors could improve their therapies to help patients in the future. ABSTRACT FOR ADULTS: One hundred years ago, it was suggested that cancer is a disease of the chromosomes, based on the observations that whole chromosomes or chromosome arms are missing or duplicated in the genomes of cells in a tumor. This phenomenon is called aneuploidy and is observed in most types of cancer, including breast, lung, prostate, brain and other cancers. However, it is not clear which genes could be responsible for this observation or if this phenomenon is only a side effect of cancer without importance, so it is important to find out. A second observation from basic research is that concentrations of several micronutrients, especially of the trace elements zinc, copper and iron, are changed in tumor cells. In this article, we speculate that aneuploidy is the reason for these changes and that together, these two phenomena are responsible for some of the famous hallmarks or characteristics that are known from cancer cells: fast growth, escape from destruction by the immune system and poor DNA repair. This idea is new and has not been tested yet. We name it the aneuploidy metal transporter cancer (AMTC) hypothesis. To test our idea we used a wealth of information that was shared by international projects such as the Human Genome Project or the Cancer Genome Atlas Project. Indeed, we find that many zinc, iron and copper transporter genes in the genome are affected by aneuploidy. While a healthy cell has two copies of each gene, some tumor cells have only one or three copies of these genes. Furthermore, the amounts of protein and the activities of these metal transporters seem to correlate with these gene copy numbers; at least we see that the intermediate molecules and protein precursors, called messenger RNA, correlate well. Hence, we found that the public data are compatible with our suggested link between metal transporters and cancer. Furthermore, we identified hundreds of studies on zinc biology, evolutionary biology, genome and cancer research that also seem compatible. For example, cancer risk increases in the elderly population as well as in obese people; it also increases after certain bacterial or viral infections and through alcohol consumption. Consistent with the AMTC hypothesis, and in particular the idea that external changes in zinc concentrations in an organ or tissue may kick off the earliest steps of tumor development, all of these risk factors have been correlated with changes in zinc or other trace elements. However, since additional experiments to test the AMTC hypothesis have not yet been performed, direct evidence for our hypothesis is still missing. We hope, however, that our idea will promote further research with the goal to better understand cancer as a first step towards its prevention and the development of improved anti-cancer therapies in the future.
2014-01-29
The disruption of trace element homeostasis due to aneuploidy as a unifying theme in the etiology of cancer
10.1101/002105
Johannes Engelken;Matthias Altmeyer;Renty B Franklin;
#### #### ABSTRACT FOR SCIENTISTS: While decades of cancer research have firmly established multiple hallmarks of cancer 1,2, cancers genomic landscape remains to be fully understood. Particularly, the phenomenon of aneuploidy gains and losses of large genomic regions, i.e. whole chromosomes or chromosome arms and why most cancer cells are aneuploid remains enigmatic 3. Another frequent observation in many different types of cancer is the deregulation of the homeostasis of the trace elements copper, zinc and iron. Concentrations of copper are markedly increased in cancer tissue and the blood plasma of cancer patients, while zinc levels are typically decreased 49. Here we discuss the hypothesis that the disruption of trace element homeostasis and the phenomenon of aneuploidy might be linked. Our tentative analysis of genomic data from diverse tumor types mainly from The Cancer Genome Atlas (TCGA) project suggests that gains and losses of metal transporter genes occur frequently and correlate well with transporter gene expression levels. Hereby they may confer a cancer-driving selective growth advantage at early and possibly also later stages during cancer development. This idea is consistent with recent observations in yeast, which suggest that through chromosomal gains and losses cells can adapt quickly to new carbon sources 10, nutrient starvation 11 as well as to copper toxicity 12. In human cancer development, candidate driving events may include, among others, the gains of zinc transporter genes SLC39A1 and SLC39A4 on chromosome arms 1q and 8q, respectively, and the losses of zinc transporter genes SLC30A5, SLC39A14 and SLC39A6 on 5q, 8p and 18q. The recurrent gain of 3q might be associated with the iron transporter gene TFRC and the loss of 13q with the copper transporter gene ATP7B. By altering cellular trace element homeostasis such events might contribute to the initiation of the malignant transformation. Intriguingly, attenuation or overexpression of several of these metal transporter genes has been shown to lead to malignant cellular behavior in vitro. Consistently, it has been shown that zinc affects a number of the observed hallmarks of cancer characteristics including DNA repair, inflammation and apoptosis, e.g. through its effects on NF-kappa B signaling. We term this model the aneuploidy metal transporter cancer (AMTC) hypothesis and find it compatible with the cancer-promoting role of point mutations and focal copy number alterations in established tumor suppressor genes and oncogenes (e.g. MYC, MYCN, TP53, PIK3CA, BRCA1, ERBB2). We suggest a number of approaches for how this hypothesis could be tested experimentally and briefly touch on possible implications for cancer etiology, metastasis, drug resistance and therapy. #### #### ABSTRACT FOR KIDS: We humans are made up of many very small building blocks, which are called cells. These cells can be seen with a microscope and they know how to grow and what to do from the information on the DNA of their chromosomes. Sometimes, if this information is messed up, a cell can go crazy and start to grow without control, even in places of the body where it should not. This process is called cancer, a terrible disease that makes people very sick. Scientists do not understand exactly what causes cells to go crazy, so it would be good to find out. Many years ago, scientists observed that chromosomes in these cancer cells are missing or doubled but could not find an explanation for it. 
More recently, scientists have detected that precious metals to our bodies, which are not gold and silver, but zinc, iron and copper, are not found in the right amounts in these crazy cancer cells. There seems to be not enough zinc and iron but too much copper, and again, scientists do not really understand why. So there are many unanswered questions about these crazy cancer cells and in this article, we describe a pretty simple idea on how chromosome numbers and the metals might be connected: we think that the missing or doubled chromosomes produce less or more transporters of zinc, iron and copper. As a result, cancer cells end up with little zinc and too much copper and these changes contribute to their out-of-control growth. If this idea were true, many people would be excited about it. But first this idea needs to be investigated more deeply in the laboratory, on the computer and in the hospitals. Therefore, we put it out on the internet so that other people can also think about and work on our idea. Now there are plenty of ways to do exciting experiments and with the results, we will hopefully understand much better why cancer cells go crazy and how doctors could improve their therapies to help patients in the future. #### #### ABSTRACT FOR ADULTS: One hundred years ago, it was suggested that cancer is a disease of the chromosomes, based on the observations that whole chromosomes or chromosome arms are missing or duplicated in the genomes of cells in a tumor. This phenomenon is called aneuploidy and is observed in most types of cancer, including breast, lung, prostate, brain and other cancers. However, it is not clear which genes could be responsible for this observation or if this phenomenon is only a side effect of cancer without importance, so it is important to find out. A second observation from basic research is that concentrations of several micronutrients, especially of the trace elements zinc, copper and iron are changed in tumor cells. In this article, we speculate that aneuploidy is the reason for these changes and that together, these two phenomena are responsible for some of the famous hallmarks or characteristics that are known from cancer cells: fast growth, escape from destruction by the immune system and poor DNA repair. This idea is new and has not been tested yet. We name it the aneuploidy metal transporter cancer (AMTC) hypothesis. To test our idea we used a wealth of information that was shared by international projects such as the Human Genome Project or the Cancer Genome Atlas Project. Indeed, we find that many zinc, iron and copper transporter genes in the genome are affected by aneuploidy. While a healthy cell has two copies of each gene, some tumor cells have only one or three copies of these genes. Furthermore, the amounts of protein and the activities of these metal transporters seem to correlate with these gene copy numbers, at least we see that the intermediate molecules and protein precursors called messenger RNA correlate well. Hence, we found that the public data is compatible with our suggested link between metal transporters and cancer. Furthermore, we identified hundreds of studies on zinc biology, evolutionary biology, genome and cancer research that also seem compatible. For example, cancer risk increases in the elderly population as well as in obese people, it also increases after certain bacterial or viral infections and through alcohol consumption. 
Consistent with the AMTC hypothesis, and in particular with the idea that external changes in zinc concentrations in an organ or tissue may kick off the earliest steps of tumor development, all of these risk factors have been correlated with changes in zinc or other trace elements. However, since additional experiments to test the AMTC hypothesis have not yet been performed, direct evidence for our hypothesis is still missing. We hope, however, that our idea will promote further research aimed at better understanding cancer, as a first step towards its prevention and the development of improved anti-cancer therapies in the future.
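The copy-number/expression correlation described above lends itself to a simple check. Below is a minimal, illustrative sketch (not the authors' analysis) of how one might test whether expression of a metal transporter gene tracks its copy number across tumors; all values are simulated stand-ins for TCGA-style copy-number and RNA-seq calls.

```python
# Toy sketch: does mRNA expression of a metal transporter gene track its copy
# number across tumors? Data are random stand-ins; a real analysis would use
# per-tumor copy-number segments and expression estimates.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_tumors = 200
copy_number = rng.choice([1, 2, 3], size=n_tumors, p=[0.25, 0.5, 0.25])  # loss / neutral / gain
expression = 2.0 * copy_number + rng.normal(0, 1.0, size=n_tumors)       # hypothetical log2 expression

rho, pval = spearmanr(copy_number, expression)
print(f"Spearman rho = {rho:.2f}, p = {pval:.1e}")
```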
2014-01-30
The disruption of trace element homeostasis due to aneuploidy as a unifying theme in the etiology of cancer
10.1101/002105
Johannes Engelken;Matthias Altmeyer;Renty B Franklin;
#### ABSTRACT FOR SCIENTISTS: While decades of cancer research have firmly established multiple hallmarks of cancer [1,2], cancer's genomic landscape remains to be fully understood. In particular, the phenomenon of aneuploidy (gains and losses of large genomic regions, i.e. whole chromosomes or chromosome arms) and the question of why most cancer cells are aneuploid remain enigmatic [3]. Another frequent observation in many different types of cancer is the deregulation of the homeostasis of the trace elements copper, zinc and iron. Concentrations of copper are markedly increased in cancer tissue and the blood plasma of cancer patients, while zinc levels are typically decreased [4-9]. Here we discuss the hypothesis that the disruption of trace element homeostasis and the phenomenon of aneuploidy might be linked. Our tentative analysis of genomic data from diverse tumor types, mainly from The Cancer Genome Atlas (TCGA) project, suggests that gains and losses of metal transporter genes occur frequently and correlate well with transporter gene expression levels. They may thereby confer a cancer-driving selective growth advantage at early, and possibly also later, stages of cancer development. This idea is consistent with recent observations in yeast, which suggest that through chromosomal gains and losses cells can adapt quickly to new carbon sources [10], nutrient starvation [11] and copper toxicity [12]. In human cancer development, candidate driving events may include, among others, the gains of zinc transporter genes SLC39A1 and SLC39A4 on chromosome arms 1q and 8q, respectively, and the losses of zinc transporter genes SLC30A5, SLC39A14 and SLC39A6 on 5q, 8p and 18q. The recurrent gain of 3q might be associated with the iron transporter gene TFRC and the loss of 13q with the copper transporter gene ATP7B. By altering cellular trace element homeostasis, such events might contribute to the initiation of malignant transformation. Intriguingly, attenuation or overexpression of several of these metal transporter genes has been shown to lead to malignant cellular behavior in vitro. Consistently, zinc has been shown to affect a number of the observed hallmark characteristics of cancer, including DNA repair, inflammation and apoptosis, e.g. through its effects on NF-kappa B signaling. We term this model the aneuploidy-metal transporter-cancer (AMTC) hypothesis and find it compatible with the cancer-promoting role of point mutations and focal copy number alterations in established tumor suppressor genes and oncogenes (e.g. MYC, MYCN, TP53, PIK3CA, BRCA1, ERBB2). We suggest a number of approaches by which this hypothesis could be tested experimentally and briefly touch on possible implications for cancer etiology, metastasis, drug resistance and therapy. #### ABSTRACT FOR KIDS: We humans are made up of many very small building blocks, which are called cells. These cells can be seen with a microscope, and they know how to grow and what to do from the information on the DNA of their chromosomes. Sometimes, if this information is messed up, a cell can go crazy and start to grow without control, even in places of the body where it should not. This process is called cancer, a terrible disease that makes people very sick. Scientists do not understand exactly what causes cells to go crazy, so it would be good to find out. Many years ago, scientists observed that chromosomes in these cancer cells are missing or doubled, but could not find an explanation for it.
More recently, scientists have found that metals that are precious to our bodies (not gold and silver, but zinc, iron and copper) are not present in the right amounts in these crazy cancer cells. There seems to be not enough zinc and iron but too much copper, and again, scientists do not really understand why. So there are many unanswered questions about these crazy cancer cells, and in this article we describe a pretty simple idea of how chromosome numbers and the metals might be connected: we think that the missing or doubled chromosomes produce less or more transporters of zinc, iron and copper. As a result, cancer cells end up with too little zinc and too much copper, and these changes contribute to their out-of-control growth. If this idea were true, many people would be excited about it. But first this idea needs to be investigated more deeply in the laboratory, on the computer and in the hospitals. Therefore, we put it out on the internet so that other people can also think about and work on our idea. Now there are plenty of ways to do exciting experiments, and with the results we will hopefully understand much better why cancer cells go crazy and how doctors could improve their therapies to help patients in the future. #### ABSTRACT FOR ADULTS: One hundred years ago, it was suggested that cancer is a disease of the chromosomes, based on the observation that whole chromosomes or chromosome arms are missing or duplicated in the genomes of cells in a tumor. This phenomenon is called aneuploidy and is observed in most types of cancer, including breast, lung, prostate, brain and other cancers. However, it is not clear which genes could be responsible for this observation, or whether this phenomenon is only an unimportant side effect of cancer, so it is important to find out. A second observation from basic research is that the concentrations of several micronutrients, especially of the trace elements zinc, copper and iron, are changed in tumor cells. In this article, we speculate that aneuploidy is the reason for these changes and that, together, these two phenomena are responsible for some of the famous hallmarks or characteristics that are known from cancer cells: fast growth, escape from destruction by the immune system and poor DNA repair. This idea is new and has not been tested yet. We name it the aneuploidy-metal transporter-cancer (AMTC) hypothesis. To test our idea we used a wealth of information that was shared by international projects such as the Human Genome Project and the Cancer Genome Atlas Project. Indeed, we find that many zinc, iron and copper transporter genes in the genome are affected by aneuploidy. While a healthy cell has two copies of each gene, some tumor cells have only one or three copies of these genes. Furthermore, the amounts of protein and the activities of these metal transporters seem to correlate with these gene copy numbers; at least we see that the intermediate molecules called messenger RNAs, the precursors of proteins, correlate well. Hence, we found that the public data are compatible with our suggested link between metal transporters and cancer. Furthermore, we identified hundreds of studies on zinc biology, evolutionary biology, genome and cancer research that also seem compatible. For example, cancer risk increases in the elderly as well as in obese people; it also increases after certain bacterial or viral infections and through alcohol consumption.
Consistent with the AMTC hypothesis, and in particular with the idea that external changes in zinc concentrations in an organ or tissue may kick off the earliest steps of tumor development, all of these risk factors have been correlated with changes in zinc or other trace elements. However, since additional experiments to test the AMTC hypothesis have not yet been performed, direct evidence for our hypothesis is still missing. We hope, however, that our idea will promote further research aimed at better understanding cancer, as a first step towards its prevention and the development of improved anti-cancer therapies in the future.
2014-03-13
The disruption of trace element homeostasis due to aneuploidy as a unifying theme in the etiology of cancer
10.1101/002105
Johannes Engelken;Matthias Altmeyer;Renty B Franklin;
#### ABSTRACT FOR SCIENTISTS: While decades of cancer research have firmly established multiple hallmarks of cancer [1,2], cancer's genomic landscape remains to be fully understood. In particular, the phenomenon of aneuploidy (gains and losses of large genomic regions, i.e. whole chromosomes or chromosome arms) and the question of why most cancer cells are aneuploid remain enigmatic [3]. Another frequent observation in many different types of cancer is the deregulation of the homeostasis of the trace elements copper, zinc and iron. Concentrations of copper are markedly increased in cancer tissue and the blood plasma of cancer patients, while zinc levels are typically decreased [4-9]. Here we discuss the hypothesis that the disruption of trace element homeostasis and the phenomenon of aneuploidy might be linked. Our tentative analysis of genomic data from diverse tumor types, mainly from The Cancer Genome Atlas (TCGA) project, suggests that gains and losses of metal transporter genes occur frequently and correlate well with transporter gene expression levels. They may thereby confer a cancer-driving selective growth advantage at early, and possibly also later, stages of cancer development. This idea is consistent with recent observations in yeast, which suggest that through chromosomal gains and losses cells can adapt quickly to new carbon sources [10], nutrient starvation [11] and copper toxicity [12]. In human cancer development, candidate driving events may include, among others, the gains of zinc transporter genes SLC39A1 and SLC39A4 on chromosome arms 1q and 8q, respectively, and the losses of zinc transporter genes SLC30A5, SLC39A14 and SLC39A6 on 5q, 8p and 18q. The recurrent gain of 3q might be associated with the iron transporter gene TFRC and the loss of 13q with the copper transporter gene ATP7B. By altering cellular trace element homeostasis, such events might contribute to the initiation of malignant transformation. Intriguingly, attenuation or overexpression of several of these metal transporter genes has been shown to lead to malignant cellular behavior in vitro. Consistently, zinc has been shown to affect a number of the observed hallmark characteristics of cancer, including DNA repair, inflammation and apoptosis, e.g. through its effects on NF-kappa B signaling. We term this model the aneuploidy-metal transporter-cancer (AMTC) hypothesis and find it compatible with the cancer-promoting role of point mutations and focal copy number alterations in established tumor suppressor genes and oncogenes (e.g. MYC, MYCN, TP53, PIK3CA, BRCA1, ERBB2). We suggest a number of approaches by which this hypothesis could be tested experimentally and briefly touch on possible implications for cancer etiology, metastasis, drug resistance and therapy. #### ABSTRACT FOR KIDS: We humans are made up of many very small building blocks, which are called cells. These cells can be seen with a microscope, and they know how to grow and what to do from the information on the DNA of their chromosomes. Sometimes, if this information is messed up, a cell can go crazy and start to grow without control, even in places of the body where it should not. This process is called cancer, a terrible disease that makes people very sick. Scientists do not understand exactly what causes cells to go crazy, so it would be good to find out. Many years ago, scientists observed that chromosomes in these cancer cells are missing or doubled, but could not find an explanation for it.
More recently, scientists have found that metals that are precious to our bodies (not gold and silver, but zinc, iron and copper) are not present in the right amounts in these crazy cancer cells. There seems to be not enough zinc and iron but too much copper, and again, scientists do not really understand why. So there are many unanswered questions about these crazy cancer cells, and in this article we describe a pretty simple idea of how chromosome numbers and the metals might be connected: we think that the missing or doubled chromosomes produce less or more transporters of zinc, iron and copper. As a result, cancer cells end up with too little zinc and too much copper, and these changes contribute to their out-of-control growth. If this idea were true, many people would be excited about it. But first this idea needs to be investigated more deeply in the laboratory, on the computer and in the hospitals. Therefore, we put it out on the internet so that other people can also think about and work on our idea. Now there are plenty of ways to do exciting experiments, and with the results we will hopefully understand much better why cancer cells go crazy and how doctors could improve their therapies to help patients in the future. #### ABSTRACT FOR ADULTS: One hundred years ago, it was suggested that cancer is a disease of the chromosomes, based on the observation that whole chromosomes or chromosome arms are missing or duplicated in the genomes of cells in a tumor. This phenomenon is called aneuploidy and is observed in most types of cancer, including breast, lung, prostate, brain and other cancers. However, it is not clear which genes could be responsible for this observation, or whether this phenomenon is only an unimportant side effect of cancer, so it is important to find out. A second observation from basic research is that the concentrations of several micronutrients, especially of the trace elements zinc, copper and iron, are changed in tumor cells. In this article, we speculate that aneuploidy is the reason for these changes and that, together, these two phenomena are responsible for some of the famous hallmarks or characteristics that are known from cancer cells: fast growth, escape from destruction by the immune system and poor DNA repair. This idea is new and has not been tested yet. We name it the aneuploidy-metal transporter-cancer (AMTC) hypothesis. To test our idea we used a wealth of information that was shared by international projects such as the Human Genome Project and the Cancer Genome Atlas Project. Indeed, we find that many zinc, iron and copper transporter genes in the genome are affected by aneuploidy. While a healthy cell has two copies of each gene, some tumor cells have only one or three copies of these genes. Furthermore, the amounts of protein and the activities of these metal transporters seem to correlate with these gene copy numbers; at least we see that the intermediate molecules called messenger RNAs, the precursors of proteins, correlate well. Hence, we found that the public data are compatible with our suggested link between metal transporters and cancer. Furthermore, we identified hundreds of studies on zinc biology, evolutionary biology, genome and cancer research that also seem compatible. For example, cancer risk increases in the elderly as well as in obese people; it also increases after certain bacterial or viral infections and through alcohol consumption.
Consistent with the AMTC hypothesis, and in particular with the idea that external changes in zinc concentrations in an organ or tissue may kick off the earliest steps of tumor development, all of these risk factors have been correlated with changes in zinc or other trace elements. However, since additional experiments to test the AMTC hypothesis have not yet been performed, direct evidence for our hypothesis is still missing. We hope, however, that our idea will promote further research aimed at better understanding cancer, as a first step towards its prevention and the development of improved anti-cancer therapies in the future.
2014-03-14
Fast Principal Component Analysis of Large-Scale Genome-Wide Data
10.1101/002238
Gad Abraham;Michael Inouye;
Principal component analysis (PCA) is routinely used to analyze genome-wide single-nucleotide polymorphism (SNP) data for detecting population structure and potential outliers. However, the size of SNP datasets has increased immensely in recent years, and PCA of large datasets has become a time-consuming task. We have developed flashpca, a highly efficient PCA implementation based on randomized algorithms, which delivers identical accuracy in extracting the top principal components compared with existing tools, in substantially less time. We demonstrate the utility of flashpca on both HapMap3 and a large Immunochip dataset. For the latter, flashpca performed PCA of 15,000 individuals up to 125 times faster than existing tools, with identical results, and PCA of 150,000 individuals using flashpca completed in 4 hours. The increasing size of SNP datasets will make tools such as flashpca essential, as traditional approaches will not adequately scale. This approach will also help to scale other applications that leverage PCA or eigen-decomposition to substantially larger datasets.
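For readers unfamiliar with the randomized approach, the following sketch shows the general idea on simulated genotypes using scikit-learn's randomized PCA solver; it is not flashpca itself, and the matrix sizes and coding scheme are arbitrary assumptions.

```python
# Illustrative sketch (not flashpca): randomized PCA of a standardized
# genotype matrix, the core idea behind fast PCA of SNP data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical genotype matrix: individuals x SNPs, coded 0/1/2.
G = rng.integers(0, 3, size=(500, 10_000)).astype(float)

# Standardize each SNP (mean 0, unit variance), as is typical before PCA.
G -= G.mean(axis=0)
G /= G.std(axis=0) + 1e-12

# The randomized solver approximates only the top components, avoiding a full SVD.
pca = PCA(n_components=10, svd_solver="randomized", random_state=0)
scores = pca.fit_transform(G)   # per-individual coordinates on the top PCs
print(scores.shape, pca.explained_variance_ratio_[:3])
```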
2014-01-30
Fast Principal Component Analysis of Large-Scale Genome-Wide Data
10.1101/002238
Gad Abraham;Michael Inouye;
Principal component analysis (PCA) is routinely used to analyze genome-wide single-nucleotide polymorphism (SNP) data for detecting population structure and potential outliers. However, the size of SNP datasets has increased immensely in recent years, and PCA of large datasets has become a time-consuming task. We have developed flashpca, a highly efficient PCA implementation based on randomized algorithms, which delivers identical accuracy in extracting the top principal components compared with existing tools, in substantially less time. We demonstrate the utility of flashpca on both HapMap3 and a large Immunochip dataset. For the latter, flashpca performed PCA of 15,000 individuals up to 125 times faster than existing tools, with identical results, and PCA of 150,000 individuals using flashpca completed in 4 hours. The increasing size of SNP datasets will make tools such as flashpca essential, as traditional approaches will not adequately scale. This approach will also help to scale other applications that leverage PCA or eigen-decomposition to substantially larger datasets.
2014-03-11
Impact of RNA degradation on measurements of gene expression
10.1101/002261
Irene Gallego Romero;Athma A. Pai;Jenny Tung;Yoav Gilad;
The use of low quality RNA samples in whole-genome gene expression profiling remains controversial. It is unclear if transcript degradation in low quality RNA samples occurs uniformly, in which case the effects of degradation can be normalized, or whether different transcripts are degraded at different rates, potentially biasing measurements of expression levels. This concern has rendered the use of low quality RNA samples in whole-genome expression profiling problematic. Yet, low quality samples are at times the sole means of addressing specific questions - e.g., samples collected in the course of fieldwork. We sought to quantify the impact of variation in RNA quality on estimates of gene expression levels based on RNA-seq data. To do so, we collected expression data from tissue samples that were allowed to decay for varying amounts of time prior to RNA extraction. The RNA samples we collected spanned the entire range of RNA Integrity Number (RIN) values (a quality metric commonly used to assess RNA quality). We observed widespread effects of RNA quality on measurements of gene expression levels, as well as a slight but significant loss of library complexity in more degraded samples. While standard normalizations failed to account for the effects of degradation, we found that a simple linear model that controls for the effects of RIN can correct for the majority of these effects. We conclude that in instances where RIN and the effect of interest are not associated, this approach can help recover biologically meaningful signals in data from degraded RNA samples.
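As a rough illustration of the correction strategy described above, the sketch below regresses each gene's expression on RIN and keeps the residuals; the data are simulated, and the authors' actual model may include further covariates.

```python
# Minimal sketch of controlling for RNA quality: regress each gene's (log)
# expression on RIN and keep the residuals. All quantities are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_genes = 20, 1000
rin = rng.uniform(2.0, 10.0, size=n_samples)              # RNA Integrity Numbers
degradation_slope = rng.normal(0.1, 0.05, size=n_genes)   # gene-specific RIN effect
expr = rng.normal(8.0, 1.0, size=(n_samples, n_genes)) + np.outer(rin, degradation_slope)

# Design matrix with an intercept and RIN; fit all genes at once by least squares.
X = np.column_stack([np.ones(n_samples), rin])
beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
expr_corrected = expr - X @ beta   # residuals: expression with the RIN effect removed
print(expr_corrected.shape)
```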
2014-01-30
Improving Protein Docking with Constraint Programming and Coevolution Data
10.1101/002329
Ludwig Krippahl;Fábio Madeira;
Background: Constraint programming (CP) is usually seen as a rigid approach, focusing on crisp, precise distinctions between what is allowed as a solution and what is not. At first sight, this makes it seem inadequate for bioinformatics applications that rely mostly on statistical parameters and optimization. The prediction of protein interactions, or protein docking, is one such application. This apparent problem with CP is particularly evident when constraints are provided by noisy data, as is the case when using the statistical analysis of Multiple Sequence Alignments (MSA) to extract coevolution information. The goal of this paper is to show that this first impression is misleading and that CP is a useful technique for improving protein docking even with data as vague and noisy as the coevolution indicators that can be inferred from MSA.
Results: Here we focus on the study of two protein complexes. In one case we used a simplified estimator of interaction propensity to infer a set of five candidate residues for the interface and used that set to constrain the docking models. Even with this simplified approach and considering only the interface of one of the partners, there is a visible focusing of the models around the correct configuration. Considering a set of 400 models with the best geometric contacts, this constraint increases the number of models close to the target (RMSD ≤ 5 Å) from 2 to 5 and decreases the RMSD of all retained models from 26 Å to 17.5 Å. For the other example we used a more standard estimate of coevolving residues, from the Co-Evolution Analysis using Protein Sequences (CAPS) software. Using a group of three residues identified from the sequence alignment as potentially co-evolving to constrain the search, the number of complexes similar to the target among the 50 highest-scoring docking models increased from 3 in the unconstrained docking to 30 in the constrained docking.
Conclusions: Although only a proof-of-concept application, our results show that, with suitably designed constraints, CP allows us to integrate coevolution data, which can be inferred from databases of protein sequences, even though the data are noisy and often "fuzzy", with no well-defined discontinuities. This also shows, more generally, that CP in bioinformatics need not be limited to the crisper cases of finite domains and explicit rules but can also be applied to a broader range of problems that depend on statistical measurements and continuous data.
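The following toy sketch illustrates the flavor of such residue-based constraints: docking poses are retained only if enough of the predicted interface residues are in contact with the partner. Distances, cutoffs and residue counts are invented for illustration and do not reproduce the constraint-programming machinery used in the paper.

```python
# Toy filter: keep docking poses in which at least MIN_CONTACTS of the
# predicted co-evolving residues of partner A lie close to partner B.
import numpy as np

rng = np.random.default_rng(2)
n_poses = 400
# Hypothetical distances (angstroms) from each of 5 candidate interface
# residues on partner A to the nearest atom of partner B, for every pose.
dists = rng.uniform(2.0, 40.0, size=(n_poses, 5))

CUTOFF = 8.0       # residues closer than this count as "in contact"
MIN_CONTACTS = 3   # constraint: at least 3 of the 5 residues must be in contact

in_contact = (dists < CUTOFF).sum(axis=1)
kept = np.where(in_contact >= MIN_CONTACTS)[0]
print(f"{len(kept)} of {n_poses} poses satisfy the contact constraint")
```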
2014-02-03
Modeling bi-modality improves characterization of cell cycle on gene expression in single cells
10.1101/002295
Lucas Dennis;Andrew McDavid;Patrick Danaher;Greg Finak;Michael Krouse;Alice Wang;Philippa Webster;Joseph Beechem;Raphael Gottardo;
Advances in high-throughput, single cell gene expression are allowing interrogation of cell heterogeneity. However, there is concern that the cell cycle phase of a cell might bias characterizations of gene expression at the single-cell level. We assess the effect of cell cycle phase on gene expression in single cells by measuring 333 genes in 930 cells across three phases and three cell lines. We determine each cell's phase non-invasively without chemical arrest and use it as a covariate in tests of differential expression. We observe bi-modal gene expression, a previously-described phenomenon, wherein the expression of otherwise abundant genes is either strongly positive or undetectable within individual cells. This bi-modality is likely both biologically and technically driven. Irrespective of its source, we show that it should be modeled to draw accurate inferences from single cell expression experiments. To this end, we propose a semi-continuous modeling framework based on the generalized linear model, and use it to characterize genes with consistent cell cycle effects across three cell lines. Our new computational framework improves the detection of previously characterized cell-cycle genes compared to approaches that do not account for the bi-modality of single-cell data. We use our semi-continuous modeling framework to estimate single cell gene co-expression networks. These networks suggest that in addition to having phase-dependent shifts in expression (when averaged over many cells), some, but not all, canonical cell cycle genes tend to be co-expressed in groups in single cells. We estimate the amount of single cell expression variability attributable to the cell cycle. We find that the cell cycle explains only 5%-17% of expression variability, suggesting that the cell cycle will not tend to be a large nuisance factor in analysis of the single cell transcriptome.
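A minimal two-part ("hurdle") sketch of the semi-continuous idea is shown below: one model for whether a gene is detected, and a second for its level when detected, with cell-cycle phase as the covariate. The data are simulated, and the paper's framework is a richer GLM-based version of this idea.

```python
# Two-part ("hurdle") sketch of semi-continuous single-cell expression:
# a discrete component (detected vs. not) plus a continuous component
# (level among detected cells), both depending on cell-cycle phase.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(3)
n_cells = 300
phase = rng.integers(0, 3, size=n_cells)                    # cell-cycle phase 0/1/2
detected = rng.random(n_cells) < (0.3 + 0.2 * phase / 2)    # detection depends on phase
level = np.where(detected, rng.normal(5 + 0.5 * phase, 1.0), 0.0)

X = phase.reshape(-1, 1).astype(float)

# Part 1: is the gene expressed at all?
det_model = LogisticRegression().fit(X, detected.astype(int))
# Part 2: expression level, fit only on cells where the gene is detected.
pos_model = LinearRegression().fit(X[detected], level[detected])

print(det_model.coef_, pos_model.coef_)
```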
2014-02-03
Modeling bi-modality improves characterization of cell cycle on gene expression in single cells
10.1101/002295
Lucas Dennis;Andrew McDavid;Patrick Danaher;Greg Finak;Michael Krouse;Alice Wang;Philippa Webster;Joseph Beechem;Raphael Gottardo;
Advances in high-throughput, single cell gene expression are allowing interrogation of cell heterogeneity. However, there is concern that the cell cycle phase of a cell might bias characterizations of gene expression at the single-cell level. We assess the effect of cell cycle phase on gene expression in single cells by measuring 333 genes in 930 cells across three phases and three cell lines. We determine each cell's phase non-invasively without chemical arrest and use it as a covariate in tests of differential expression. We observe bi-modal gene expression, a previously-described phenomenon, wherein the expression of otherwise abundant genes is either strongly positive or undetectable within individual cells. This bi-modality is likely both biologically and technically driven. Irrespective of its source, we show that it should be modeled to draw accurate inferences from single cell expression experiments. To this end, we propose a semi-continuous modeling framework based on the generalized linear model, and use it to characterize genes with consistent cell cycle effects across three cell lines. Our new computational framework improves the detection of previously characterized cell-cycle genes compared to approaches that do not account for the bi-modality of single-cell data. We use our semi-continuous modeling framework to estimate single cell gene co-expression networks. These networks suggest that in addition to having phase-dependent shifts in expression (when averaged over many cells), some, but not all, canonical cell cycle genes tend to be co-expressed in groups in single cells. We estimate the amount of single cell expression variability attributable to the cell cycle. We find that the cell cycle explains only 5%-17% of expression variability, suggesting that the cell cycle will not tend to be a large nuisance factor in analysis of the single cell transcriptome.
2014-07-10
FALDO: A semantic standard for describing the location of nucleotide and protein feature annotation.
10.1101/002121
Jerven Bolleman;Christopher J Mungall;Francesco Strozzi;Joachim Baran;Michel Dumontier;Raoul J P Bonnal;Robert Buels;Robert Hoehndorf;Takatomo Fujisawa;Toshiaki Katayama;Peter J A Cock;
Background: Nucleotide and protein sequence feature annotations are essential to understand biology at the genomic, transcriptomic, and proteomic level. However, for querying biological annotations with Semantic Web technologies, there was no standard that described this potentially complex location information as subject-predicate-object triples.
Description: We have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features, in coordinate systems of the aforementioned "omics" areas. Using the same data format to represent sequence positions, independent of file formats, allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations.
Conclusions: Our ontology allows users to uniformly describe, and potentially merge, sequence annotations from multiple sources. Data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.
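To make the triple-based representation concrete, here is a small rdflib sketch that encodes a feature location in the spirit of FALDO. The class and property IRIs used (faldo:Region, faldo:begin, faldo:position, and so on) should be checked against the published ontology; treat them, and the example namespace, as illustrative assumptions rather than authoritative usage.

```python
# Sketch: describing a feature location as RDF triples in the spirit of FALDO,
# using rdflib. IRIs below are assumptions for illustration only.
from rdflib import Graph, Namespace, Literal, RDF, XSD

FALDO = Namespace("http://biohackathon.org/resource/faldo#")
EX = Namespace("http://example.org/")   # hypothetical namespace for the feature

g = Graph()
g.bind("faldo", FALDO)

feature, region, begin, end = EX.gene1, EX.gene1_region, EX.gene1_begin, EX.gene1_end
g.add((feature, FALDO.location, region))
g.add((region, RDF.type, FALDO.Region))
g.add((region, FALDO.begin, begin))
g.add((region, FALDO.end, end))
for pos_node, coord in [(begin, 1000), (end, 2500)]:
    g.add((pos_node, RDF.type, FALDO.ExactPosition))
    g.add((pos_node, FALDO.position, Literal(coord, datatype=XSD.integer)))
    g.add((pos_node, FALDO.reference, EX.chr1))   # hypothetical reference sequence

print(g.serialize(format="turtle"))
```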
2014-01-31
FALDO: A semantic standard for describing the location of nucleotide and protein feature annotation.
10.1101/002121
Jerven Bolleman;Christopher J Mungall;Francesco Strozzi;Joachim Baran;Michel Dumontier;Raoul J P Bonnal;Robert Buels;Robert Hoehndorf;Takatomo Fujisawa;Toshiaki Katayama;Peter J A Cock;
Background: Nucleotide and protein sequence feature annotations are essential to understand biology at the genomic, transcriptomic, and proteomic level. However, for querying biological annotations with Semantic Web technologies, there was no standard that described this potentially complex location information as subject-predicate-object triples.
Description: We have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features, in coordinate systems of the aforementioned "omics" areas. Using the same data format to represent sequence positions, independent of file formats, allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations.
Conclusions: Our ontology allows users to uniformly describe, and potentially merge, sequence annotations from multiple sources. Data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.
2014-02-01
FALDO: A semantic standard for describing the location of nucleotide and protein feature annotation.
10.1101/002121
Jerven Bolleman;Christopher J Mungall;Francesco Strozzi;Joachim Baran;Michel Dumontier;Raoul J P Bonnal;Robert Buels;Robert Hoehndorf;Takatomo Fujisawa;Toshiaki Katayama;Peter J A Cock;
Background: Nucleotide and protein sequence feature annotations are essential to understand biology at the genomic, transcriptomic, and proteomic level. However, for querying biological annotations with Semantic Web technologies, there was no standard that described this potentially complex location information as subject-predicate-object triples.
Description: We have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features, in coordinate systems of the aforementioned "omics" areas. Using the same data format to represent sequence positions, independent of file formats, allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations.
Conclusions: Our ontology allows users to uniformly describe, and potentially merge, sequence annotations from multiple sources. Data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.
2014-02-02
Approximation to the distribution of fitness effects across functional categories in human segregating polymorphisms
10.1101/002345
Fernando Racimo;Joshua G Schraiber;
Quantifying the proportion of polymorphic mutations that are deleterious or neutral is of fundamental importance to our understanding of evolution, disease genetics and the maintenance of variation genome-wide. Here, we develop an approximation to the distribution of fitness effects (DFE) of segregating single-nucleotide mutations in humans. Unlike previous methods, we do not assume that synonymous mutations are neutral or not strongly selected, and we do not rely on fitting the DFE of all new nonsynonymous mutations to a single probability distribution, which is poorly motivated on a biological level. We rely on a previously developed method that utilizes a variety of published annotations (including conservation scores, protein deleteriousness estimates and regulatory data) to score all mutations in the human genome based on how likely they are to be affected by negative selection, controlling for mutation rate. We map this score to a scale of fitness coefficients via maximum likelihood using diffusion theory and a Poisson random field model on SNP data. Our method serves to approximate the deleterious DFE of mutations that are segregating, regardless of their genomic consequence. We can then compare the proportion of mutations that are negatively selected or neutral across various categories, including different types of regulatory sites. We observe that the distribution of intergenic polymorphisms is highly peaked at neutrality, while the distribution of nonsynonymous polymorphisms is bimodal, with a neutral peak and a second peak at s ≈ -10^-4. Other types of polymorphisms have shapes that fall roughly in between these two. We find that transcriptional start sites, strong CTCF-enriched elements and enhancers are the regulatory categories with the largest proportion of deleterious polymorphisms.
Author Summary: The relative frequencies of polymorphic mutations that are deleterious, nearly neutral and neutral are traditionally called the distribution of fitness effects (DFE). Obtaining an accurate approximation to this distribution in humans can help us understand the nature of disease and the mechanisms by which variation is maintained in the genome. Previous methods to approximate this distribution have relied on fitting the DFE of new mutations to a single probability distribution, like a normal or an exponential distribution. Generally, these methods also assume that a particular category of mutations, like synonymous changes, can be assumed to be neutral or nearly neutral. Here, we provide a novel method designed to reflect the strength of negative selection operating on any segregating site in the human genome. We use a maximum likelihood mapping approach to fit these scores to a scale of neutral and negative fitness coefficients. Finally, we compare the shape of the DFEs we obtain from this mapping for different types of functional categories. We observe that the distribution of polymorphisms has a strong peak at neutrality, as well as a second peak of deleterious effects when restricting to nonsynonymous polymorphisms.
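As background for the diffusion/Poisson random field machinery mentioned above, the sketch below evaluates a standard (Sawyer-Hartl-style) expected density of segregating sites as a function of frequency for a few scaled selection coefficients. The scaling convention for gamma (e.g. 2Ns vs 4Ns) and the overall normalization are glossed over, and this is a textbook illustration rather than the authors' fitting procedure.

```python
# Expected (unnormalized) density of segregating sites at frequency x for a
# scaled selection coefficient gamma, illustrating how deleterious selection
# skews the site frequency spectrum relative to the neutral 1/x shape.
import numpy as np

def sfs_density(x, gamma):
    """Unnormalized expected density of sites at frequency x (PRF-style form)."""
    if abs(gamma) < 1e-8:                       # neutral limit reduces to 1/x
        return 1.0 / x
    return (1.0 - np.exp(-gamma * (1.0 - x))) / (
        (1.0 - np.exp(-gamma)) * x * (1.0 - x)
    )

x = np.linspace(0.01, 0.99, 5)
for gamma in [0.0, -1.0, -10.0]:                # neutral, weakly and strongly deleterious
    print(gamma, np.round([sfs_density(xi, gamma) for xi in x], 3))
```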
2014-02-04
Approximation to the distribution of fitness effects across functional categories in human segregating polymorphisms
10.1101/002345
Fernando Racimo;Joshua G Schraiber;
Quantifying the proportion of polymorphic mutations that are deleterious or neutral is of fundamental importance to our understanding of evolution, disease genetics and the maintenance of variation genome-wide. Here, we develop an approximation to the distribution of fitness effects (DFE) of segregating single-nucleotide mutations in humans. Unlike previous methods, we do not assume that synonymous mutations are neutral or not strongly selected, and we do not rely on fitting the DFE of all new nonsynonymous mutations to a single probability distribution, which is poorly motivated on a biological level. We rely on a previously developed method that utilizes a variety of published annotations (including conservation scores, protein deleteriousness estimates and regulatory data) to score all mutations in the human genome based on how likely they are to be affected by negative selection, controlling for mutation rate. We map this score to a scale of fitness coefficients via maximum likelihood using diffusion theory and a Poisson random field model on SNP data. Our method serves to approximate the deleterious DFE of mutations that are segregating, regardless of their genomic consequence. We can then compare the proportion of mutations that are negatively selected or neutral across various categories, including different types of regulatory sites. We observe that the distribution of intergenic polymorphisms is highly peaked at neutrality, while the distribution of nonsynonymous polymorphisms is bimodal, with a neutral peak and a second peak at s ≈ -10^-4. Other types of polymorphisms have shapes that fall roughly in between these two. We find that transcriptional start sites, strong CTCF-enriched elements and enhancers are the regulatory categories with the largest proportion of deleterious polymorphisms.
Author Summary: The relative frequencies of polymorphic mutations that are deleterious, nearly neutral and neutral are traditionally called the distribution of fitness effects (DFE). Obtaining an accurate approximation to this distribution in humans can help us understand the nature of disease and the mechanisms by which variation is maintained in the genome. Previous methods to approximate this distribution have relied on fitting the DFE of new mutations to a single probability distribution, like a normal or an exponential distribution. Generally, these methods also assume that a particular category of mutations, like synonymous changes, can be assumed to be neutral or nearly neutral. Here, we provide a novel method designed to reflect the strength of negative selection operating on any segregating site in the human genome. We use a maximum likelihood mapping approach to fit these scores to a scale of neutral and negative fitness coefficients. Finally, we compare the shape of the DFEs we obtain from this mapping for different types of functional categories. We observe that the distribution of polymorphisms has a strong peak at neutrality, as well as a second peak of deleterious effects when restricting to nonsynonymous polymorphisms.
2014-06-19
Stress, heritability, tissue type and human methylome variation in mother-newborn dyads.
10.1101/002303
David A. Hughes;Nicole C. Rodney;Connie J. Mulligan;
DNA methylation variation has been implicated as a factor that influences inter-individual and inter-tissue phenotypic variation in numerous organisms and under various conditions. Here, using a unique collection of three tissues derived from 24 mother-newborn dyads from the war-torn Democratic Republic of Congo, we estimate how stress, heritability, tissue type and genomic/regulatory context influence genome-wide DNA methylation. We also evaluate whether stress-associated variation may mediate an observed phenotype, newborn birthweight. On average, a minimal influence of stress and heritability is observed, while, in contrast, extensive dependence on tissue type and genomic context is evident. However, a notable overlap between heritable and stress-associated variation is observed, and that variation is commonly correlated with birthweight variation. Finally, we observe that variation outside of promoter regions, particularly in enhancers, is far more dynamic across tissues and across conditions than variation in promoters, suggesting that variation outside of promoters may play a larger role in expression variation than variation found within promoter regions.
2014-01-31
Diverse and widespread contamination evident in the unmapped depths of high throughput sequencing data
10.1101/002279
Richard W Lusk;
Background: Trace quantities of contaminating DNA are widespread in the laboratory environment, but their presence has received little attention in the context of high throughput sequencing. This issue is highlighted by recent works that have rested controversial claims upon sequencing data that appear to support the presence of unexpected exogenous species.
Results: I used reads that preferentially aligned to alternate genomes to infer the distribution of potential contaminant species in a set of independent sequencing experiments. I confirmed that dilute samples are more exposed to contaminating DNA, and, focusing on four single-cell sequencing experiments, found that these contaminants appear to originate from a wide diversity of clades. Although negative control libraries prepared from blank samples recovered the highest-frequency contaminants, low-frequency contaminants, which appeared to make heterogeneous contributions to samples prepared in parallel within a single experiment, were not well controlled for. I used these results to show that, despite heavy replication and plausible controls, contamination can explain all of the observations used to support a recent claim that complete genes pass from food to human blood.
Conclusions: Contamination must be considered a potential source of signals of exogenous species in sequencing data, even if these signals are replicated in independent experiments, vary across conditions, or indicate a species which seems a priori unlikely to contaminate. Negative control libraries processed in parallel are essential to control for contaminant DNAs, but their limited ability to recover low-frequency contaminants must be recognized.
2014-01-30
Diverse and widespread contamination evident in the unmapped depths of high throughput sequencing data
10.1101/002279
Richard W Lusk;
Background: Trace quantities of contaminating DNA are widespread in the laboratory environment, but their presence has received little attention in the context of high throughput sequencing. This issue is highlighted by recent works that have rested controversial claims upon sequencing data that appear to support the presence of unexpected exogenous species.
Results: I used reads that preferentially aligned to alternate genomes to infer the distribution of potential contaminant species in a set of independent sequencing experiments. I confirmed that dilute samples are more exposed to contaminating DNA, and, focusing on four single-cell sequencing experiments, found that these contaminants appear to originate from a wide diversity of clades. Although negative control libraries prepared from blank samples recovered the highest-frequency contaminants, low-frequency contaminants, which appeared to make heterogeneous contributions to samples prepared in parallel within a single experiment, were not well controlled for. I used these results to show that, despite heavy replication and plausible controls, contamination can explain all of the observations used to support a recent claim that complete genes pass from food to human blood.
Conclusions: Contamination must be considered a potential source of signals of exogenous species in sequencing data, even if these signals are replicated in independent experiments, vary across conditions, or indicate a species which seems a priori unlikely to contaminate. Negative control libraries processed in parallel are essential to control for contaminant DNAs, but their limited ability to recover low-frequency contaminants must be recognized.
2014-02-06
The organization and dynamics of corticostriatal pathways link the medial orbitofrontal cortex to future decisions
10.1101/002311
Timothy Verstynen;
Accurately making a decision in the face of incongruent options increases the efficiency of making similar congruency decisions in the future. This adaptive process is modulated by reward, suggesting that ventral corticostriatal circuits may contribute to the process of conflict adaptation. To evaluate this possibility, a group of healthy adults (N = 30) were tested using functional MRI (fMRI) while they performed a color-word Stroop task. In a conflict-related region of the medial orbitofrontal cortex (mOFC), stronger BOLD responses predicted faster response times (RTs) on the next trial. More importantly, the degree of behavioral conflict adaptation on RTs was correlated with the magnitude of mOFC-RT associations on the previous trial, but only after accounting for network-level interactions with prefrontal and striatal regions. This suggests that conflict adaptation may rely on interactions between distributed corticostriatal circuits. The convergence of white matter projections fro ...
2014-02-03
Predicting growth conditions from internal metabolic fluxes in an in-silico model of E. coli
10.1101/002287
Viswanadham Sridhara;Austin G. Meyer;Piyush Rai;Jeffrey E. Barrick;Pradeep Ravikumar;Daniel Segrè;Claus O Wilke;
A widely studied problem in systems biology is to predict bacterial phenotype from growth conditions, using mechanistic models such as flux balance analysis (FBA). However, the inverse prediction of growth conditions from phenotype is rarely considered. Here we develop a computational framework to carry out this inverse prediction on a computational model of bacterial metabolism. We use FBA to calculate bacterial phenotypes from growth conditions in E. coli, and then we assess how accurately we can predict the original growth conditions from the phenotypes. Prediction is carried out via regularized multinomial regression. Our analysis provides several important physiological and statistical insights. First, we show that by analyzing metabolic end products we can consistently predict growth conditions. Second, prediction is reliable even in the presence of small amounts of impurities. Third, flux through a relatively small number of reactions per growth source (~10) is sufficient for accurate prediction. Fourth, combining the predictions from two separate models, one trained only on carbon sources and one only on nitrogen sources, performs better than models trained to perform joint prediction. Finally, that separate predictions perform better than a more sophisticated joint prediction scheme suggests that carbon and nitrogen utilization pathways, despite jointly affecting cellular growth, may be fairly decoupled in terms of their dependence on specific assortments of molecular precursors.
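A minimal sketch of the inverse-prediction step, regularized multinomial regression from flux profiles back to growth conditions, is given below; the "fluxes" are random stand-ins rather than FBA output, and the regularization settings are arbitrary assumptions.

```python
# Sketch of the inverse problem: predict the growth condition (class label)
# from a vector of metabolic fluxes using regularized multinomial regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_conditions, per_cond, n_fluxes = 5, 40, 200
labels = np.repeat(np.arange(n_conditions), per_cond)
# Each condition shifts a subset of fluxes (a crude stand-in for FBA output).
centers = rng.normal(0, 1, size=(n_conditions, n_fluxes))
X = centers[labels] + rng.normal(0, 0.5, size=(labels.size, n_fluxes))

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=2000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```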
2014-01-31
Predicting growth conditions from internal metabolic fluxes in an in-silico model of E. coli
10.1101/002287
Viswanadham Sridhara;Austin G. Meyer;Piyush Rai;Jeffrey E. Barrick;Pradeep Ravikumar;Daniel Segrè;Claus O Wilke;
A widely studied problem in systems biology is to predict bacterial phenotype from growth conditions, using mechanistic models such as flux balance analysis (FBA). However, the inverse prediction of growth conditions from phenotype is rarely considered. Here we develop a computational framework to carry out this inverse prediction on a computational model of bacterial metabolism. We use FBA to calculate bacterial phenotypes from growth conditions in E. coli, and then we assess how accurately we can predict the original growth conditions from the phenotypes. Prediction is carried out via regularized multinomial regression. Our analysis provides several important physiological and statistical insights. First, we show that by analyzing metabolic end products we can consistently predict growth conditions. Second, prediction is reliable even in the presence of small amounts of impurities. Third, flux through a relatively small number of reactions per growth source (~10) is sufficient for accurate prediction. Fourth, combining the predictions from two separate models, one trained only on carbon sources and one only on nitrogen sources, performs better than models trained to perform joint prediction. Finally, that separate predictions perform better than a more sophisticated joint prediction scheme suggests that carbon and nitrogen utilization pathways, despite jointly affecting cellular growth, may be fairly decoupled in terms of their dependence on specific assortments of molecular precursors.
2014-10-14
SNP-guided identification of monoallelic DNA-methylation events from enrichment-based sequencing data
10.1101/002352
Sandra Steyaert;Wim Van Criekinge;Ayla De Paepe;Simon Denil;Klaas Mensaert;Katrien Vandepitte;Wim Vanden Berghe;Geert Trooskens;Tim De Meyer;
Monoallelic gene expression is typically initiated early in the development of an organism. Dysregulation of monoallelic gene expression has already been linked to several non-Mendelian inherited genetic disorders. In humans, DNA-methylation is deemed to be an important regulator of monoallelic gene expression, but only a few examples are known. One important reason is that current, cost-affordable, truly genome-wide methods to assess DNA-methylation are based on sequencing after enrichment. Here, we present a new methodology that combines methylomic data from MethylCap-seq with associated SNP profiles to identify monoallelically methylated loci. Using the Hardy-Weinberg theorem for each SNP locus, it could be established whether the observed frequency of samples showing biallelic methylation was lower than randomly expected. Applied to 334 MethylCap-seq samples of very diverse origin, this resulted in the identification of 80 genomic regions characterized by monoallelic DNA-methylation. Of these 80 loci, 49 are located in genic regions, of which 25 have already been linked to imprinting. Further analysis revealed statistically significant enrichment of these loci in promoter regions, further establishing the relevance and usefulness of the method. Additional validation of the identified loci was performed using 14 whole-genome bisulfite sequencing data sets. Importantly, the developed approach can be easily applied to other enrichment-based sequencing technologies, such as the ChIP-seq-based identification of monoallelic histone modifications.
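The Hardy-Weinberg argument sketched above can be illustrated with a toy calculation: given the allele frequency at a SNP, the expected fraction of heterozygous (and hence potentially biallelically methylated) samples is 2p(1-p), and observing far fewer biallelic samples than that suggests monoallelic methylation. The numbers below are invented, and the published method works on read-level MethylCap-seq data rather than simple per-sample calls.

```python
# Toy Hardy-Weinberg check: is biallelic methylation seen less often than the
# expected heterozygous fraction 2*p*(1-p)? A one-sided binomial test gives a
# rough p-value for "fewer biallelic samples than randomly expected".
from scipy.stats import binomtest

p = 0.4                        # minor allele frequency at the SNP (assumed known)
expected_het = 2 * p * (1 - p)

n_samples = 334                # samples informative at this locus (hypothetical)
n_biallelic = 20               # samples whose methylated reads show both alleles

test = binomtest(n_biallelic, n_samples, expected_het, alternative="less")
print(f"expected heterozygous fraction {expected_het:.2f}, "
      f"observed biallelic fraction {n_biallelic / n_samples:.2f}, "
      f"one-sided p = {test.pvalue:.2e}")
```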
2014-02-04
A phase diagram for gene selection and disease classification
10.1101/002360
Hong-Dong Li;Qing-Song Xu;Yi-Zeng Liang;
Identifying a small subset of discriminative genes is important for predicting clinical outcomes and facilitating disease diagnosis. Based on the model population analysis framework, we present a method, called PHADIA, that outputs a phase diagram displaying the predictive ability of each variable, providing an intuitive way of selecting informative variables. Using two publicly available microarray datasets, we demonstrate that our method selects a few informative genes and achieves significantly better or comparable classification accuracy compared to results reported in the literature. The source code is freely available at: www.libpls.net.
2014-02-04
A phase diagram for gene selection and disease classification
10.1101/002360
Hong-Dong Li;Qing-Song Xu;Yi-Zeng Liang;
Identifying a small subset of discriminative genes is important for predicting clinical outcomes and facilitating disease diagnosis. Based on the model population analysis framework, we present a method, called PHADIA, that outputs a phase diagram displaying the predictive ability of each variable, providing an intuitive way of selecting informative variables. Using two publicly available microarray datasets, we demonstrate that our method selects a few informative genes and achieves significantly better or comparable classification accuracy compared to results reported in the literature. The source code is freely available at: www.libpls.net.
2014-02-05
Evolutionary dynamics of shared niche construction
10.1101/002378
Philip Gerlee;Alexander RA Anderson;
Many species engage in niche construction that ultimately leads to an increase in the carrying capacity of the population. We have investigated how the specificity of this behaviour affects evolutionary dynamics using a set of coupled logistic equations, where the carrying capacity of each genotype consists of two components: an intrinsic part and a contribution from all genotypes present in the population. The relative contribution of the two components is controlled by a specificity parameter γ, and we show that the ability of a mutant to invade a resident population depends strongly on this parameter. When the carrying capacity is intrinsic, selection is almost exclusively for mutants with higher carrying capacity, while a shared carrying capacity yields selection purely on growth rate. This result has important implications for our understanding of niche construction, in particular the evolutionary dynamics of tumor growth.
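One plausible reading of the coupled-logistic setup is sketched below, with each genotype's carrying capacity interpolating between its intrinsic value and a population-weighted shared value via γ; the exact functional form in the paper may differ, and all parameter values are arbitrary.

```python
# Rough numerical sketch of coupled logistic growth with partly shared,
# partly intrinsic carrying capacities: K_i = gamma*k_i + (1-gamma)*<k>_pop.
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([1.0, 1.2])        # growth rates of resident and mutant (assumed)
k = np.array([100.0, 80.0])     # intrinsic carrying-capacity contributions
gamma = 0.5                     # specificity: 1 = fully intrinsic, 0 = fully shared

def rhs(t, n):
    total = n.sum()
    shared = (n * k).sum() / max(total, 1e-9)   # construction shared by all genotypes
    K = gamma * k + (1.0 - gamma) * shared
    return r * n * (1.0 - total / K)

sol = solve_ivp(rhs, (0.0, 50.0), [99.0, 1.0], dense_output=True)
print("final abundances:", np.round(sol.y[:, -1], 2))
```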
2014-02-05
Genetic variants associated with motion sickness point to roles for inner ear development, neurological processes, and glucose homeostasis
10.1101/002386
Bethann S Hromatka;Joyce Y Tung;Amy K Kiefer;Chuong B Do;David A Hinds;Nicholas Eriksson;
Roughly one in three individuals is highly susceptible to motion sickness and yet the underlying causes of this condition are not well understood. Despite high heritability, no associated genetic factors have been discovered to date. Here, we conducted the first genome-wide association study on motion sickness in 80,494 individuals from the 23andMe database who were surveyed about car sickness. Thirty-five single-nucleotide polymorphisms (SNPs) were associated with motion sickness at a genome-wide-significant level (p< 5e-8). Many of these SNPs are near genes involved in balance, and eye, ear, and cranial development (e.g., PVRL3, TSHZ1, MUTED, HOXB3, HOXD3). Other SNPs may affect motion sickness through nearby genes with roles in the nervous system, glucose homeostasis, or hypoxia. We show that several of these SNPs display sex-specific effects, with as much as three times stronger effects in women. We searched for comorbid phenotypes with motion sickness, confirming associations with known comorbidities including migraines, postoperative nausea and vomiting (PONV), vertigo, and morning sickness, and observing new associations with altitude sickness and many gastrointestinal conditions. We also show that two of these related phenotypes (PONV and migraines) share underlying genetic factors with motion sickness. These results point to the importance of the nervous system in motion sickness and suggest a role for glucose levels in motion-induced nausea and vomiting, a finding that may provide insight into other nausea-related phenotypes such as PONV. They also highlight personal characteristics (e.g., being a poor sleeper) that correlate with motion sickness, findings that could help identify risk factors or treatments.
2014-02-04
Methods to study toxic transgenes in C. elegans: an analysis of protease-dead separase in the C. elegans embryo
10.1101/002444
Diana M Mitchell;Lindsey R Uehlein;Joshua Bembenek;
We investigated whether the protease activity of separase, which is required for chromosome segregation, is also required for its other roles during anaphase in C. elegans, given that non-proteolytic functions of separase have been identified in other organisms. We find that expression of protease-dead separase is dominant-negative in C. elegans embryos. The C. elegans embryo is an ideal system to study developmental processes in a genetically tractable system. However, a major limitation is the lack of an inducible gene expression system for the embryo. The most common method for embryonic expression involves generation of integrated transgenes under the control of the pie-1 promoter, using unc-119 as a selection marker. However, expression of dominant-negative proteins kills the strain, preventing analysis of mutants. We have developed two methods that allow for the propagation of lines carrying dominant-negative transgenes in order to study protease-dead separase in embryos. The first involves feeding gfp RNAi to eliminate transgene expression and allows propagation of transgenic lines indefinitely. Animals removed from gfp RNAi for several generations recover transgene expression and associated phenotypes. The second involves propagation of the transgene with the female-specific pie-1 promoter via the male germline and analysis of phenotypes in embryos from F1 heterozygous hermaphrodites that express the protein. Using these methods, we show that protease-dead separase causes chromosome nondisjunction and cytokinesis failures. These methods are immediately applicable for studies of dominant-negative transgenes and should open new lines of investigation in the C. elegans embryo.
2014-02-06
Biochemical ‘Cambrian’ explosion-implosions: the generation and pruning of genetic codes
10.1101/002436
Rodrick Wallace;
Tlusty's topological analysis of the genetic code suggests that ecosystem changes in available metabolic free energy that predated the aerobic transition enabled a punctuated sequence of increasingly complex genetic codes and protein translators. These coevolved via a 'Cambrian explosion' until, very early on, the ancestor of the present narrow spectrum of protein machineries became evolutionarily locked in at a modest level of fitness, reflecting a modest embedding metabolic free energy ecology. Similar biochemical 'Cambrian singularities' must have occurred at different scales and levels of organization on Earth, with competition- or chance-selected outcomes frozen at a far earlier period than the physical bauplan Cambrian explosion. Other examples might include explosive variations in mechanisms of photosynthesis and subsequent oxygen metabolisms. Intermediate between Cambrian bauplan and genetic code, variants of both remain today, even after evolutionary pruning, often protected in specialized ecological niches. This suggests that, under less energetic astrobiological ecologies, a spectrum of less complicated reproductive codes may also survive in specialized niches.
2014-02-06
Broadly tuned and respiration-independent inhibition in the olfactory bulb of awake mice
10.1101/002410
Brittany N Cazakoff;Billy Y B Lau;Kerensa L Crump;Heike Demmer;Stephen David Shea;
Olfactory representations are shaped by both brain state and respiration; however, the interaction and circuit substrates of these influences are poorly understood. Granule cells (GCs) in the main olfactory bulb (MOB) are presumed to sculpt activity that reaches the olfactory cortex via inhibition of mitral/tufted cells (MTs). GCs may potentially sparsen ensemble activity by facilitating lateral inhibition among MTs, and/or they may enforce temporally-precise activity locked to breathing. Yet, the selectivity and temporal structure of GC activity during wakefulness are unknown. We recorded GCs in the MOB of anesthetized and awake mice and reveal pronounced state-dependent features of odor coding and temporal patterning. Under anesthesia, GCs exhibit sparse activity and are strongly and synchronously coupled to the respiratory cycle. Upon waking, GCs desynchronize, broaden their odor responses, and typically fire without regard for the respiratory rhythm. Thus during wakefulness, GCs exhibit stronger odor responses with less temporal structure. Based on these observations, we propose that during wakefulness GCs likely predominantly shape MT odor responses through broadened lateral interactions rather than respiratory synchronization.
2014-02-06
Automated ensemble assembly and validation of microbial genomes
10.1101/002469
Sergey Koren;Todd J Treangen;Christopher M Hill;Mihai Pop;Adam M Phillippy;
Background: The continued democratization of DNA sequencing has sparked a new wave of development of genome assembly and assembly validation methods. As individual research labs, rather than centralized sequencing centers, begin to sequence the majority of new genomes, it is important to establish best practices for genome assembly. However, recent evaluations such as GAGE and the Assemblathon have concluded that there is no single best approach to genome assembly. Instead, it is preferable to generate multiple assemblies and validate them to determine which is most useful for the desired analysis; this is a labor-intensive process that is often impractical or infeasible. Results: To encourage best practices supported by the community, we present iMetAMOS, an automated ensemble assembly pipeline; iMetAMOS encapsulates the process of running, validating, and selecting a single assembly from multiple assemblies. iMetAMOS packages several leading open-source tools into a single binary that automates parameter selection and execution of multiple assemblers, scores the resulting assemblies based on multiple validation metrics, and annotates the assemblies for genes and contaminants. We demonstrate the utility of the ensemble process on 225 previously unassembled Mycobacterium tuberculosis genomes as well as a Rhodobacter sphaeroides benchmark dataset. On these real data, iMetAMOS reliably produces validated assemblies and identifies potential contamination without user intervention. In addition, intelligent parameter selection produces assemblies of R. sphaeroides that exceed the quality of those from the GAGE-B evaluation, affecting the relative ranking of some assemblers. Conclusions: Ensemble assembly with iMetAMOS provides users with multiple, validated assemblies for each genome. Although computationally limited to small or mid-sized genomes, this approach is the most effective and reproducible means for generating high-quality assemblies and enables users to select an assembly best tailored to their specific needs.
2014-02-07
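To illustrate the ensemble idea described above (generate several assemblies, validate them, select one), here is a minimal, self-contained Python sketch. It is not iMetAMOS code: the candidate FASTA file names are placeholders, and the naive contiguity-based score stands in for the much richer set of validation metrics a real pipeline would use.

def contig_lengths(fasta_path):
    # Parse contig lengths from a (multi-)FASTA file.
    lengths, current = [], 0
    with open(fasta_path) as fh:
        for line in fh:
            if line.startswith(">"):
                if current:
                    lengths.append(current)
                current = 0
            else:
                current += len(line.strip())
    if current:
        lengths.append(current)
    return lengths

def n50(lengths):
    # Length of the contig at which half the total assembly size is reached.
    total, running = sum(lengths), 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

def score(fasta_path):
    # Naive validation score: reward contiguity (N50), penalize fragmentation.
    lengths = contig_lengths(fasta_path)
    return n50(lengths) - 1000 * len(lengths)

# Hypothetical outputs from different assemblers; substitute real paths.
candidates = ["spades_contigs.fasta", "velvet_contigs.fasta", "abyss_contigs.fasta"]
best = max(candidates, key=score)
print("selected assembly:", best)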
Within the fortress: A specialized parasite of ants is not evicted
10.1101/002501
Emilia S. Gracia;Charissa de Bekker;Jim Russell;Kezia Manlove;Ephraim Hanks;David P. Hughes;
Every level of biological organization, from cells to societies, requires that component units come together to form parts of a bigger unit (1). Our knowledge of how behavior-manipulating parasites change social interactions among their social hosts is limited. Here we use an endoparasite to observe changes in social interactions between infected and healthy ants, using trophallaxis (liquid food exchange) and spatial data as proxies for food sharing and social segregation. We found no change in trophallaxis (p-value = 0.5156). Using K-function and nearest-neighbor analyses, we did detect a significant difference in spatial segregation on day 3 (at distances of less than 8 millimeters; p-value < 0.05). These results suggest that healthy individuals are unable to detect the parasite within the host.
2014-02-07
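To make the spatial analysis mentioned in the record above concrete, the following Python sketch compares the mean nearest-neighbour distance from infected to healthy ants against a label-permutation null. It is a generic stand-in rather than the authors' K-function code; the coordinates, group sizes, and one-sided test are hypothetical.

import numpy as np

def mean_nn_distance(from_pts, to_pts):
    # Mean Euclidean distance from each point in from_pts to its nearest point in to_pts.
    d = np.linalg.norm(from_pts[:, None, :] - to_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def permutation_test(points, is_infected, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    observed = mean_nn_distance(points[is_infected], points[~is_infected])
    labels = is_infected.copy()
    null = []
    for _ in range(n_perm):
        rng.shuffle(labels)  # reassign "infected" labels at random
        null.append(mean_nn_distance(points[labels], points[~labels]))
    # One-sided p-value: how often random labellings segregate at least as strongly.
    p = (1 + sum(n >= observed for n in null)) / (n_perm + 1)
    return observed, p

# Toy data: 30 ants on a 100 x 100 mm arena, 5 of them labelled "infected".
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(30, 2))
infected = np.zeros(30, dtype=bool)
infected[:5] = True
print(permutation_test(pts, infected))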
Within the fortress: A specialized parasite of ants is not evicted
10.1101/002501
Emilia S. Gracia;Charissa de Bekker;Jim Russell;Kezia Manlove;Ephraim Hanks;David P. Hughes;
Every level of biological organization, from cells to societies, requires that component units come together to form parts of a bigger unit (1). Our knowledge of how behavior-manipulating parasites change social interactions among their social hosts is limited. Here we use an endoparasite to observe changes in social interactions between infected and healthy ants, using trophallaxis (liquid food exchange) and spatial data as proxies for food sharing and social segregation. We found no change in trophallaxis (p-value = 0.5156). Using K-function and nearest-neighbor analyses, we did detect a significant difference in spatial segregation on day 3 (at distances of less than 8 millimeters; p-value < 0.05). These results suggest that healthy individuals are unable to detect the parasite within the host.
2014-02-12
Investigating speciation in the face of polyploidization: what can we learn from an approximate Bayesian computation approach?
10.1101/002527
Camille Roux;John Pannell;
Despite its importance in the diversification of many eukaryote clades, particularly plants, detailed genomic analysis of polyploid species is still in its infancy, with published analyses of only a handful of model species to date. Fundamental questions concerning the origin of polyploid lineages (e.g., auto- vs. allopolyploidy) and the extent to which polyploid genomes display disomic vs. polysomic vs. heterosomic inheritance are poorly resolved for most polyploids, not least because they have hitherto required detailed karyotypic analysis or the analysis of allele segregation at multiple loci in pedigrees or artificial crosses, which are often not practical for non-model species. However, the increasing availability of sequence data for non-model species now presents an opportunity to apply established approaches for the evolutionary analysis of genomic data to polyploid species complexes. Here, we ask whether approximate Bayesian computation (ABC), applied to sequence data produced by next-generation sequencing technologies from polyploid taxa, allows correct inference of the evolutionary and demographic history of polyploid lineages and their close relatives. We use simulations to investigate how the number of sampled individuals, the number of surveyed loci, and their length affect the accuracy and precision of evolutionary and demographic inferences by ABC, including the mode of polyploidisation, the mode of inheritance of polyploid taxa, the relative timing of genome duplication and speciation, and the effective population sizes of contributing lineages. We also apply the ABC framework we develop to sequence data from diploid and polyploid species of the plant genus Capsella, for which we infer an allopolyploid origin for tetraploid C. bursa-pastoris approximately 90,000 years ago. In general, our results indicate that ABC is a promising and powerful method for uncovering the origin and subsequent evolution of polyploid species.
2014-02-09
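The rejection-sampling logic behind the ABC framework described above can be illustrated with a short Python sketch. The toy model below (pairwise sequence differences drawn from a Poisson whose mean scales with divergence time), the uniform prior, and the tolerance are all hypothetical and much simpler than the coalescent simulations the study relies on.

import numpy as np

rng = np.random.default_rng(42)

def simulate_summary(divergence_time, n_loci=100, mutation_rate=1e-3):
    # Toy model: expected pairwise differences per locus grow linearly with time.
    diffs = rng.poisson(2 * mutation_rate * divergence_time, size=n_loci)
    return diffs.mean()

def abc_rejection(observed_summary, n_draws=50_000, tolerance=2.0):
    # Keep prior draws whose simulated summary lands close to the observed one.
    accepted = []
    for _ in range(n_draws):
        t = rng.uniform(1_000, 200_000)  # prior on divergence time (years)
        if abs(simulate_summary(t) - observed_summary) < tolerance:
            accepted.append(t)
    return np.array(accepted)

observed = simulate_summary(90_000)  # pretend this summary came from real data
posterior = abc_rejection(observed)
print(f"posterior mean divergence time ~ {posterior.mean():.0f} years "
      f"({len(posterior)} accepted draws)")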
The roles of standing genetic variation and evolutionary history in determining the evolvability of anti-predator strategies
10.1101/002493
Jordan Fish;Daniel R O'Donnell;Abhijna Parigi;Ian Dworkin;Aaron P Wagner;
Standing genetic variation and the historical environment in which that variation arises (evolutionary history) are both potentially significant determinants of a population's capacity for evolutionary response to a changing environment. We evaluated the relative importance of these two factors in influencing evolutionary trajectories in the face of sudden environmental change. We used the open-ended digital evolution software Avida to examine how historic exposure to predation pressures, different levels of genetic variation, and combinations of the two impact anti-predator strategies and competitive abilities evolved in the face of threats from new, invasive predator populations. We show that while standing genetic variation plays some role in determining evolutionary responses, evolutionary history has the greater influence on a population's capacity to evolve effective anti-predator traits. This adaptability likely reflects the relative ease of repurposing existing, relevant genes and traits, and the broader potential value of the generation and maintenance of adaptively flexible traits in evolving populations.
2014-02-07
Epistasis within the MHC contributes to the genetic architecture of celiac disease
10.1101/002485
Ben Goudey;Gad Abraham;Eder Kikianty;Qiao Wang;Dave Rawlinson;Fan Shi;Izhak Haviv;Linda Stern;Adam Kowalczyk;Michael Inouye;
Epistasis has long been thought to contribute to the genetic aetiology of complex diseases, yet few robust epistatic interactions in humans have been detected. We have conducted exhaustive genome-wide scans for pairwise epistasis in five independent celiac disease (CD) case-control studies, using a rapid model-free approach to examine over 500 billion SNP pairs in total. We found 20 significant epistatic signals within the HLA region that achieved stringent replication criteria across multiple studies and were independent of known CD risk HLA haplotypes. The strongest independent CD epistatic signal corresponded to genes in the HLA class III region, in particular PRRC2A and GPANK1/C6orf47, which are known to contain variants for non-Hodgkin's lymphoma and early menopause, co-morbidities of celiac disease. Replicable evidence for epistatic variants outside the MHC was not observed. Both within and between European populations, we observed striking consistency of epistatic models and epistatic model distribution. Within the UK population, models of CD based on both epistatic and additive single-SNP effects increased explained CD variance by approximately 1% over those based on single SNPs alone. Models of only epistatic pairs or of additive single SNPs showed similar levels of CD variance explained, indicating the existence of a substantial overlap of additive and epistatic components. Our findings have implications for the determination of genetic architecture and, by extension, the use of human genetics for validation of therapeutic targets.
2014-02-07
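As a concrete illustration of an exhaustive pairwise scan like the one summarized above, the Python sketch below tests every SNP pair's joint genotype distribution against case/control status with a chi-square test. This generic formulation is not the authors' rapid model-free statistic, and the toy genotype matrix and sample sizes are hypothetical; a real scan over hundreds of billions of pairs requires heavily optimized code.

from itertools import combinations
import numpy as np
from scipy.stats import chi2_contingency

def pairwise_scan(genotypes, is_case):
    # genotypes: (n_individuals, n_snps) matrix of 0/1/2 allele counts
    # is_case: boolean array of case/control status
    n_snps = genotypes.shape[1]
    results = []
    for i, j in combinations(range(n_snps), 2):
        joint = genotypes[:, i] * 3 + genotypes[:, j]  # 9 joint genotype classes
        table = np.zeros((2, 9))
        for k, case in enumerate(is_case):
            table[int(case), joint[k]] += 1
        table = table[:, table.sum(axis=0) > 0]  # drop empty joint classes
        chi2, p, dof, _ = chi2_contingency(table)
        results.append((i, j, p))
    return sorted(results, key=lambda r: r[2])

# Toy data: 20 SNPs (190 pairs) in 200 individuals with random status.
rng = np.random.default_rng(3)
G = rng.integers(0, 3, size=(200, 20))
status = rng.random(200) < 0.5
print(pairwise_scan(G, status)[:3])  # the three smallest pairwise p-values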
Epistasis within the MHC contributes to the genetic architecture of celiac disease
10.1101/002485
Ben Goudey;Gad Abraham;Eder Kikianty;Qiao Wang;Dave Rawlinson;Fan Shi;Izhak Haviv;Linda Stern;Adam Kowalczyk;Michael Inouye;
Epistasis has long been thought to contribute to the genetic aetiology of complex diseases, yet few robust epistatic interactions in humans have been detected. We have conducted exhaustive genome-wide scans for pairwise epistasis in five independent celiac disease (CD) case-control studies, using a rapid model-free approach to examine over 500 billion SNP pairs in total. We found 20 significant epistatic signals within the HLA region that achieved stringent replication criteria across multiple studies and were independent of known CD risk HLA haplotypes. The strongest independent CD epistatic signal corresponded to genes in the HLA class III region, in particular PRRC2A and GPANK1/C6orf47, which are known to contain variants for non-Hodgkin's lymphoma and early menopause, co-morbidities of celiac disease. Replicable evidence for epistatic variants outside the MHC was not observed. Both within and between European populations, we observed striking consistency of epistatic models and epistatic model distribution. Within the UK population, models of CD based on both epistatic and additive single-SNP effects increased explained CD variance by approximately 1% over those based on single SNPs alone. Models of only epistatic pairs or of additive single SNPs showed similar levels of CD variance explained, indicating the existence of a substantial overlap of additive and epistatic components. Our findings have implications for the determination of genetic architecture and, by extension, the use of human genetics for validation of therapeutic targets.
2014-02-20
Epistasis within the MHC contributes to the genetic architecture of celiac disease
10.1101/002485
Ben Goudey;Gad Abraham;Eder Kikianty;Qiao Wang;Dave Rawlinson;Fan Shi;Izhak Haviv;Linda Stern;Adam Kowalczyk;Michael Inouye;
Epistasis has long been thought to contribute to the genetic aetiology of complex diseases, yet few robust epistatic interactions in humans have been detected. We have conducted exhaustive genome-wide scans for pairwise epistasis in five independent celiac disease (CD) case-control studies, using a rapid model-free approach to examine over 500 billion SNP pairs in total. We found 20 significant epistatic signals within the HLA region that achieved stringent replication criteria across multiple studies and were independent of known CD risk HLA haplotypes. The strongest independent CD epistatic signal corresponded to genes in the HLA class III region, in particular PRRC2A and GPANK1/C6orf47, which are known to contain variants for non-Hodgkin's lymphoma and early menopause, co-morbidities of celiac disease. Replicable evidence for epistatic variants outside the MHC was not observed. Both within and between European populations, we observed striking consistency of epistatic models and epistatic model distribution. Within the UK population, models of CD based on both epistatic and additive single-SNP effects increased explained CD variance by approximately 1% over those based on single SNPs alone. Models of only epistatic pairs or of additive single SNPs showed similar levels of CD variance explained, indicating the existence of a substantial overlap of additive and epistatic components. Our findings have implications for the determination of genetic architecture and, by extension, the use of human genetics for validation of therapeutic targets.
2014-05-29
Epistasis within the MHC contributes to the genetic architecture of celiac disease
10.1101/002485
Ben Goudey;Gad Abraham;Eder Kikianty;Qiao Wang;Dave Rawlinson;Fan Shi;Izhak Haviv;Linda Stern;Adam Kowalczyk;Michael Inouye;
Epistasis has long been thought to contribute to the genetic aetiology of complex diseases, yet few robust epistatic interactions in humans have been detected. We have conducted exhaustive genome-wide scans for pairwise epistasis in five independent celiac disease (CD) case-control studies, using a rapid model-free approach to examine over 500 billion SNP pairs in total. We found 20 significant epistatic signals within the HLA region that achieved stringent replication criteria across multiple studies and were independent of known CD risk HLA haplotypes. The strongest independent CD epistatic signal corresponded to genes in the HLA class III region, in particular PRRC2A and GPANK1/C6orf47, which are known to contain variants for non-Hodgkin's lymphoma and early menopause, co-morbidities of celiac disease. Replicable evidence for epistatic variants outside the MHC was not observed. Both within and between European populations, we observed striking consistency of epistatic models and epistatic model distribution. Within the UK population, models of CD based on both epistatic and additive single-SNP effects increased explained CD variance by approximately 1% over those based on single SNPs alone. Models of only epistatic pairs or of additive single SNPs showed similar levels of CD variance explained, indicating the existence of a substantial overlap of additive and epistatic components. Our findings have implications for the determination of genetic architecture and, by extension, the use of human genetics for validation of therapeutic targets.
2015-08-11
Metabolic composition of anode community predicts electrical power in microbial fuel cells
10.1101/002337
Andre Gruning;Nelli J Beecroft;Claudio Avignone-Rossa;
Microbial Fuel Cells (MFCs) are a promising technology for organic waste treatment and sustainable bioelectricity production. Inoculated with natural communities, they present a complex microbial ecosystem with syntrophic interactions between microbes with different metabolic capabilities. From this point of view, they are similar to anaerobic digesters, but with methanogenesis replaced by anaerobic respiration using the anode as terminal electron acceptor. Bio-electrochemically, they are similar to classical fuel cells, except that the electrogenic redox reaction is part of the microbial metabolism rather than being mediated by an inorganic catalyst. In this paper, we analyse how electric power production in MFCs depends on the composition of the anodic biofilm, in terms of the metabolic capabilities of identified sets of species. MFCs were started with a natural inoculum and continuously fed with sucrose, a fermentable carbohydrate. The composition of the community, power output, and other environmental data were sampled over a period of a few weeks during the maturation of the anodic biofilm, and the community composition was determined down to the species level, including relevant metabolic capabilities. Our results support the hypothesis that an MFC with a natural inoculum and a fermentable feedstock is essentially a two-stage system, with fermentation followed by anode respiration. Our results also show that under identical starting and operating conditions, MFCs with comparable power output can develop different anodic communities, with no particular species dominant across all replicates. For good power production, it is only important that all cells contain a sufficient fraction of low-potential anaerobic respirators, that is, respirators that can use terminal electron acceptors with a low redox potential. We conclude with a number of hypotheses and recommendations for the operation of MFCs to ensure good electric yield.
2014-02-07