Exploring microbial dark matter to resolve the deep archaeal ancestry of eukaryotes

2015 · Vol 370 (1678) · pp. 20140328
Author(s): Jimmy H. Saw, Anja Spang, Katarzyna Zaremba-Niedzwiedzka, Lina Juzokaite, Jeremy A. Dodsworth, ...

The origin of eukaryotes represents an enigmatic puzzle, which is still lacking a number of essential pieces. Whereas it is currently accepted that the process of eukaryogenesis involved an interplay between a host cell and an alphaproteobacterial endosymbiont, we still lack detailed information regarding the identity and nature of these players. A number of studies have provided increasing support for the emergence of the eukaryotic host cell from within the archaeal domain of life, displaying a specific affiliation with the archaeal TACK superphylum. Recent studies have shown that genomic exploration of yet-uncultivated archaea, the so-called archaeal ‘dark matter’, can provide unprecedented insights into the process of eukaryogenesis. Here, we provide an overview of state-of-the-art cultivation-independent approaches, and demonstrate how these methods were used to obtain draft genome sequences of several novel members of the TACK superphylum, including Lokiarchaeum, two representatives of the Miscellaneous Crenarchaeotal Group (Bathyarchaeota), and a Korarchaeum-related lineage. The maturation of cultivation-independent genomics approaches, as well as future developments in next-generation sequencing technologies, will revolutionize our current view of microbial evolution and diversity, and provide profound new insights into the early evolution of life, including the enigmatic origin of the eukaryotic cell.

2015 · Vol 3 (6)
Author(s): Kgaugelo E. Lekota, Joseph Mafofo, Evelyn Madoroba, Jasper Rees, Henriette van Heerden, ...

Bacillus anthracis is a Gram-positive bacterium that causes anthrax, mainly in herbivores, through exotoxins and a capsule encoded on two plasmids, pXO1 and pXO2. This paper compares the whole-genome sequences of two B. anthracis strains from an endemic region and a sporadic outbreak in South Africa. Sequencing was done using next-generation sequencing technologies.


2019 · Vol 14 (2) · pp. 157-163
Author(s): Majid Hajibaba, Mohsen Sharifi, Saeid Gorgin

Background: One of the pivotal challenges in today's genomic research is the fast processing of voluminous data such as those generated by high-throughput next-generation sequencing technologies. On the other hand, BLAST (Basic Local Alignment Search Tool), a long-established and renowned tool in bioinformatics, has proven to be remarkably slow in this regard.
Objective: To improve the performance of BLAST in processing voluminous data, we have applied a novel memory-aware technique to BLAST for faster parallel processing.
Method: We have used a master-worker model alongside a memory-aware technique in which the master partitions the whole data into equal chunks, one chunk for each worker; each worker then further splits and formats its allocated chunk according to the size of its memory. Each worker searches every split one by one through a list of queries.
Results: We chose a list of queries with different lengths to run intensive searches against the large UniProtKB/TrEMBL database. Our experiments show a 20 percent improvement in performance when workers used our proposed memory-aware technique compared to when they were not memory-aware. Experiments show an even higher performance improvement, approximately 50 percent, when we applied our memory-aware technique to mpiBLAST.
Conclusion: We have shown that memory-awareness in formatting a bulky database, when running BLAST, can improve performance significantly while preventing unexpected crashes in low-memory environments. Even though distributed computing mitigates search time by partitioning and distributing database portions, our memory-aware technique also alleviates the negative effects of page faults on performance.
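The two-level partitioning scheme described above can be sketched in a few lines of Python. All function names and the byte-based memory budget are illustrative assumptions, not the authors' implementation (which formats each split into BLAST's own database format):

```python
def master_partition(records, n_workers):
    """Master: split the whole database into equal chunks, one per worker."""
    chunk_size = (len(records) + n_workers - 1) // n_workers
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

def worker_splits(chunk, mem_budget_bytes):
    """Worker: further split its chunk so each split fits in local memory,
    avoiding page faults while the split is being searched."""
    splits, current, current_bytes = [], [], 0
    for rec in chunk:
        size = len(rec)
        if current and current_bytes + size > mem_budget_bytes:
            splits.append(current)
            current, current_bytes = [], 0
        current.append(rec)
        current_bytes += size
    if current:
        splits.append(current)
    return splits

def worker_search(chunk, queries, mem_budget_bytes, search_fn):
    """Search every memory-sized split one by one through the query list."""
    hits = []
    for split in worker_splits(chunk, mem_budget_bytes):
        for q in queries:
            hits.extend(search_fn(q, split))
    return hits
```

The key design point is that each worker only ever holds one memory-sized split resident at a time, so the search loop never exceeds its budget regardless of total database size.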


Pathogens · 2021 · Vol 10 (2) · pp. 144
Author(s): William Little, Caroline Black, Allie Clinton Smith

With the development of next-generation sequencing technologies in recent years, it has been demonstrated that many human infectious processes, including chronic wounds, cystic fibrosis, and otitis media, are associated with a polymicrobial burden. Research has also demonstrated that polymicrobial infections tend to be associated with treatment failure and worse patient prognoses. Despite the importance of the polymicrobial nature of many infection states, the current clinical standard for determining antimicrobial susceptibility in the clinical laboratory is performed exclusively on unimicrobial suspensions. There is a growing body of research demonstrating that microorganisms in a polymicrobial environment can synergize their activities, with a variety of outcomes including changes to their antimicrobial susceptibility through both resistance and tolerance mechanisms. This review highlights the current body of work describing polymicrobial synergism, both inter- and intra-kingdom, impacting antimicrobial susceptibility. Given the importance of polymicrobial synergism in the clinical environment, a new system for determining antimicrobial susceptibility from polymicrobial infections may significantly impact patient treatment and outcomes.


2017 · Vol 01 (02) · pp. 108-120
Author(s): Nick Lane

All complex life on Earth is composed of ‘eukaryotic’ cells. Eukaryotes arose just once in 4 billion years, via an endosymbiosis — bacteria entered a simple host cell, evolving into mitochondria, the ‘powerhouses’ of complex cells. Mitochondria lost most of their genes, retaining only those needed for respiration, giving eukaryotes ‘multi-bacterial’ power without the costs of maintaining thousands of complete bacterial genomes. These energy savings supported a substantial expansion in nuclear genome size, and far more protein synthesis from each gene.


2020 · Vol 36 (12) · pp. 3669-3679
Author(s): Can Firtina, Jeremie S Kim, Mohammed Alser, Damla Senol Cali, A Ercument Cicek, ...

Motivation: Third-generation sequencing technologies can sequence long reads that contain as many as 2 million base pairs. These long reads are used to construct an assembly (i.e. the subject's genome), which is further used in downstream genome analysis. Unfortunately, third-generation sequencing technologies have high sequencing error rates, and a large proportion of base pairs in these long reads is incorrectly identified. These errors propagate to the assembly and affect the accuracy of genome analysis. Assembly polishing algorithms minimize such error propagation by polishing or fixing errors in the assembly using information from alignments between reads and the assembly (i.e. read-to-assembly alignment information). However, current assembly polishing algorithms can only polish an assembly using reads from either a certain sequencing technology or a small assembly. Such technology dependency and assembly-size dependency require researchers to (i) run multiple polishing algorithms and (ii) use small chunks of a large genome in order to use all available read sets and polish large genomes, respectively.
Results: We introduce Apollo, a universal assembly polishing algorithm that scales well to polish an assembly of any size (i.e. both large and small genomes) using reads from all sequencing technologies (i.e. second- and third-generation). Our goal is to provide a single algorithm that uses read sets from all available sequencing technologies to improve the accuracy of assembly polishing and that can polish large genomes. Apollo (i) models an assembly as a profile hidden Markov model (pHMM), (ii) uses read-to-assembly alignments to train the pHMM with the Forward–Backward algorithm and (iii) decodes the trained model with the Viterbi algorithm to produce a polished assembly. Our experiments with real read sets demonstrate that Apollo is the only algorithm that (i) uses reads from any sequencing technology within a single run and (ii) scales well to polish large assemblies without splitting the assembly into multiple parts.
Availability and implementation: Source code is available at https://github.com/CMU-SAFARI/Apollo.
Supplementary information: Supplementary data are available at Bioinformatics online.
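A drastically simplified sketch of the pHMM polishing idea is given below, under the assumption of match states only with deterministic match-to-match transitions, so that training reduces to per-position counting and Viterbi decoding to a per-position argmax. Apollo's real model also has insertion and deletion states and uses full Forward–Backward training; all names here are illustrative:

```python
from collections import Counter

BASES = "ACGT"

def train_emissions(assembly, aligned_columns, pseudocount=0.1):
    """aligned_columns[i] lists the read bases aligned to assembly position i.
    Emission probabilities are re-estimated from these counts, with the draft
    base given a small prior weight."""
    emissions = []
    for i, ref_base in enumerate(assembly):
        counts = Counter({b: pseudocount for b in BASES})
        counts[ref_base] += 1  # prior weight on the draft assembly base
        counts.update(aligned_columns[i])
        total = sum(counts.values())
        emissions.append({b: counts[b] / total for b in BASES})
    return emissions

def viterbi_polish(emissions):
    """With deterministic match-to-match transitions, the Viterbi path is
    simply the most probable emission at each position."""
    return "".join(max(e, key=e.get) for e in emissions)
```

If most reads disagree with the draft base at some position, the re-estimated emission distribution shifts and the decoded (polished) sequence corrects that position.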


2014 · Vol 563 · pp. 379-383
Author(s): Yue Yang, Xin Jun Du, Ping Li, Bin Liang, Shuo Wang

Increasing attention has been paid to filamentous fungal evolution, metabolic pathways and gene function analysis via genome sequencing. However, the published methods for the extraction of fungal genomic DNA are usually costly or inefficient. In the present study, we compared five different DNA extraction protocols: a CTAB protocol with some modifications, a benzyl chloride protocol with some modifications, a snailase protocol, an SDS protocol and extraction with the E.Z.N.A. Fungal DNA Maxi Kit (Omega Bio-Tek, USA). The CTAB method, which we modified in several steps, is not only economical and convenient but can also be used reliably to obtain large amounts of highly pure genomic DNA from Monascus purpureus for next-generation sequencing (Illumina and 454).


2008 · Vol 18 (10) · pp. 1638-1642
Author(s): D. R. Smith, A. R. Quinlan, H. E. Peckham, K. Makowsky, W. Tao, ...

2011 · Vol 16 (11-12) · pp. 512-519
Author(s): Peter M. Woollard, Nalini A.L. Mehta, Jessica J. Vamathevan, Stephanie Van Horn, Bhushan K. Bonde, ...

Author(s): Giulio Caravagna

Cancers progress through the accumulation of somatic mutations which accrue during tumour evolution, allowing some cells to proliferate in an uncontrolled fashion. This growth process is intimately related to latent evolutionary forces moulding the genetic and epigenetic composition of tumour subpopulations. Understanding cancer therefore requires an understanding of these selective pressures. The adoption of widespread next-generation sequencing technologies opens up the possibility of measuring molecular profiles of cancers at multiple resolutions, across one or multiple patients. In this review we discuss how cancer genome sequencing data from a single tumour can be used to understand these evolutionary forces, surveying mathematical models and inferential methods adopted in the field of cancer evolution.
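As a toy illustration of the kind of generative model such inference builds on (not a method from the review itself), the sketch below simulates neutral exponential growth in which every mutation is a passenger. Under neutrality, the cumulative number of subclonal mutations present at frequency above f scales roughly as 1/f, which is one of the testable signatures these methods exploit. All names and parameters are illustrative assumptions:

```python
import random
from collections import Counter
from itertools import count

def grow_tumour(n_generations=8, max_muts=2, seed=1):
    """Neutral model: every cell divides each generation, and each daughter
    gains a few globally unique passenger mutations; no mutation confers a
    fitness advantage."""
    rng = random.Random(seed)
    new_mut = count()  # globally unique mutation identifiers
    cells = [frozenset()]
    for _ in range(n_generations):
        daughters = []
        for muts in cells:
            for _ in range(2):
                gained = frozenset(next(new_mut)
                                   for _ in range(rng.randint(0, max_muts)))
                daughters.append(muts | gained)
        cells = daughters
    return cells

def mutation_frequencies(cells):
    """Fraction of cells carrying each mutation (the cancer cell fraction)."""
    n = len(cells)
    tally = Counter(m for c in cells for m in c)
    return [cnt / n for cnt in tally.values()]
```

Plotting the cumulative count of mutations with frequency at least f against 1/f gives an approximately straight line under this model; systematic departures from that line are what subclonal-selection inference methods look for in real sequencing data.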

