ACDtool: a web-server extending the original Audic-Claverie statistical test to the comparison of large data sets of counts

2018 ◽  
Author(s):  
Jean-Michel Claverie ◽  
TA Thi Ngan

Abstract
Motivation: More than 20 years ago, our laboratory published an original statistical test (referred to as the Audic-Claverie (AC) test in the literature) to identify differentially expressed genes from the pairwise comparison of counts of cognate RNA-seq reads (then called "expressed sequence tags") determined in different conditions. Despite its age and the publication of more sophisticated software packages, this original article continued to gather more than 200 citations per year, indicating the persistent usefulness of the simple AC test for the community. This prompted us to propose a fully revamped version of the AC test with a user interface adapted to the diverse and much larger datasets produced by contemporary omics techniques.
Results: We implemented ACDtool as an interactive, freely accessible web service proposing three types of analyses: 1) the pairwise comparison of individual counts, 2) pairwise comparisons of arbitrarily large lists of counts, and 3) the all-at-once pairwise comparison of multiple datasets. Statistical computations are implemented using standard R functions and mathematically reformulated so as to accommodate all practical ranges of count values. ACDtool can thus analyze datasets from transcriptomics, proteomics, metagenomics, barcoding, ChIP-seq, population genetics, etc., using the same mathematical approach. ACDtool is particularly well suited for comparisons of large datasets without replicates.
Availability: ACDtool is at www.igs.cnrs-mrs.fr/acdtool/
Contact: [email protected]
Supplementary information: none.
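For readers who want to experiment with the underlying statistic, the following minimal Python sketch evaluates the original 1997 Audic-Claverie probability for a single pair of counts, computed in log-space so large counts remain numerically stable. It follows the published formula rather than ACDtool's own R reformulation, and the function and variable names are illustrative.

```python
from math import lgamma, log, exp

def ac_probability(x, y, n1, n2):
    """Audic-Claverie (1997) probability of observing y counts in a library of
    size n2 given x counts in a library of size n1:
        p(y|x) = (n2/n1)^y * (x+y)! / (x! * y! * (1 + n2/n1)^(x+y+1)),
    evaluated in log-space. Illustrative sketch, not the ACDtool implementation."""
    r = n2 / n1
    log_p = (y * log(r)
             + lgamma(x + y + 1) - lgamma(x + 1) - lgamma(y + 1)
             - (x + y + 1) * log(1.0 + r))
    return exp(log_p)

# Example: 30 vs. 75 reads for the same gene in two libraries of equal size.
print(ac_probability(30, 75, 1_000_000, 1_000_000))
```

In practice a p-value is obtained by summing such probabilities over the tail of counts at least as extreme as the observed value.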

Author(s):  
Gábor Szárnyas ◽  
János Maginecz ◽  
Dániel Varró

The last decade brought considerable improvements in distributed storage and query technologies, known as NoSQL systems. These systems provide quick evaluation of simple retrieval operations and are able to answer certain complex queries in a scalable way, albeit not instantly. Providing scalability and quick response times at the same time for querying large data sets remains a challenging task. Evaluating complex graph queries is particularly difficult, as it requires many join, antijoin, and filtering operations. This paper presents optimization techniques used in relational database systems and applies them to graph queries. We evaluate various query plans on multiple datasets and discuss the effect of different optimization techniques.
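As a rough illustration of the kind of relational optimization the abstract refers to, the sketch below evaluates a two-hop graph pattern as joins over an edge list and shows the effect of pushing a filter below the join; the data, names, and query are invented for the example and do not come from the paper.

```python
# Two-hop pattern a -> b -> c, keeping only paths that end in a "flagged" vertex.
edges = [(1, 2), (2, 3), (2, 4), (3, 4), (4, 1)]
flagged = {4}

# Naive plan: join the edge list with itself first, filter last.
naive = [(a, b, c)
         for (a, b) in edges
         for (b2, c) in edges if b == b2
         if c in flagged]

# Optimized plan: push the filter below the join, then hash-join on the shared vertex.
flagged_edges = [(b, c) for (b, c) in edges if c in flagged]
by_source = {}
for (b, c) in flagged_edges:
    by_source.setdefault(b, []).append(c)
optimized = [(a, b, c)
             for (a, b) in edges
             for c in by_source.get(b, [])]

assert sorted(naive) == sorted(optimized)   # same result, fewer joined tuples
print(optimized)
```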


2016 ◽  
Vol 33 (4) ◽  
pp. 211-220 ◽  
Author(s):  
Temilade Adefioye Aina ◽  
Louise Cooke ◽  
Derek Stephens

Competitive intelligence (CI) is important for businesses to remain competitive. Software technologies have been developed to make the CI process simpler. To carry out CI effectively, these software technologies need to perform all the stages of the CI Cycle, conform to the British Standard software quality characteristics, extract information from large data sets through certain Additional Features, and be cost effective. Three evaluation frameworks were developed based on the CI Cycle, the British Standard, and the Additional Features. This methodology and the marketing literature of the software were used to evaluate four CI software packages. Information on cost and the availability of a free trial version was also taken from the marketing literature. The evaluation showed that each package supported at least one CI Cycle stage and at least one British Standard characteristic, but none of them fully provided any of the Additional Features. One offered a free trial version, while two provided information about the cost of their commercial version. It is recommended that, before choosing CI software, CI practitioners determine and prioritize their intelligence needs and then test which CI software can meet them. CI software vendors also need to provide more information in their marketing literature on cost, the availability of a free trial version, and features pertaining to the CI Cycle, the British Standard, and the Additional Features. The British Standard and Additional Features have not been used in previous CI software evaluation studies.


2017 ◽  
Author(s):  
Oana M. Enache ◽  
David L. Lahr ◽  
Ted E. Natoli ◽  
Lev Litichevskiy ◽  
David Wadden ◽  
...  

Abstract
Motivation: Computational analysis of datasets generated by treating cells with pharmacological and genetic perturbagens has proven useful for the discovery of functional relationships. Facilitated by technological improvements, perturbational datasets have grown in recent years to include millions of experiments. While initial studies, such as our work on the Connectivity Map, used gene expression readouts, recent studies from the NIH LINCS consortium have expanded to a more diverse set of molecular readouts, including proteomic and cell morphological signatures. Sharing these diverse data creates many opportunities for research and discovery, but the unprecedented size of the data generated and the complex metadata associated with experiments have also created fundamental technical challenges regarding data storage and cross-assay integration.
Results: We present the GCTx file format and a suite of open-source packages for the efficient storage, serialization, and analysis of dense two-dimensional matrices. The utility of this format is not just theoretical; we have extensively used the format in the Connectivity Map to assemble and share massive data sets comprising 1.7 million experiments. We anticipate that the generalizability of the GCTx format, paired with the code libraries that we provide, will stimulate wider adoption and lower barriers for integrated cross-assay analysis and algorithm development.
Availability: Software packages (available in Matlab, Python, and R) are freely available at https://github.com/cmap
Supplementary information: Supplementary information is available at clue.io/
Contact: [email protected]
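As a rough illustration of how such a matrix might be accessed, the sketch below uses the cmapPy Python package (one of the libraries distributed under github.com/cmap) to read a GCTx file into a pandas DataFrame. The file name is a placeholder, and the exact call should be checked against the cmapPy documentation; this is a hedged sketch rather than documented usage from the paper.

```python
# Minimal sketch: reading a GCTx matrix with cmapPy (pip install cmapPy).
# The file name is a placeholder; verify the parse() signature against the
# cmapPy docs, as this usage is assumed rather than taken from the paper.
from cmapPy.pandasGEXpress.parse import parse

gctoo = parse("example_signatures.gctx")   # returns a GCToo object
expr = gctoo.data_df                       # dense matrix as a pandas DataFrame
print(expr.shape)                          # rows: genes/features, columns: experiments
print(expr.iloc[:5, :3])                   # peek at a small corner of the matrix
```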


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
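The two processing routes described above (subtracting the offset pair versus re-aligning and adding it) can be sketched in a few lines of NumPy. The array names, the 20-channel shift corresponding to the 1 eV offset, and the synthetic spectra are illustrative assumptions, not the software of [1].

```python
import numpy as np

CHANNELS_PER_EV = 20            # 20 channels/eV, so a 1 eV offset is 20 channels
OFFSET = 1 * CHANNELS_PER_EV

# Synthetic stand-ins for the two 1024-channel spectra recorded at one pixel.
rng = np.random.default_rng(0)
spectrum_a = rng.poisson(100, 1024).astype(float)
spectrum_b = np.roll(spectrum_a, OFFSET) + rng.normal(0, 5, 1024)

# Route 1: subtract the offset pair to form an artifact-corrected difference spectrum.
difference = spectrum_a - spectrum_b

# Route 2: numerically remove the energy offset, then add to form a normal spectrum.
# (np.roll wraps around; real processing would discard the few edge channels.)
aligned_b = np.roll(spectrum_b, -OFFSET)
normal = spectrum_a + aligned_b

print(difference[:5], normal[:5])
```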


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.
Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
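A hedged sketch of the multivariate workflow the passage describes (standardize the per-particle elemental table, reduce it with PCA, then cluster) is shown below; the element columns, synthetic counts, and choice of k-means are illustrative stand-ins for whichever methods a real particle study would justify.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in for an EDS table: one row per particle, one column per element.
rng = np.random.default_rng(1)
elements = ["Na", "Mg", "Al", "Si", "S", "Cl", "K", "Ca", "Fe"]
counts = rng.poisson(lam=rng.uniform(0.5, 50, size=len(elements)),
                     size=(5000, len(elements)))

# Standardize so high-abundance elements do not dominate the components.
scaled = StandardScaler().fit_transform(counts.astype(float))

# Principal components analysis followed by clustering in the reduced space.
scores = PCA(n_components=3).fit_transform(scaled)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))    # particles assigned to each cluster
```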


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk — Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the processing of large arrays of information in distributed systems. Singular value decomposition (SVD) is used to reduce the amount of data processed by eliminating redundancy. The dependence of computational efficiency on the system configuration was obtained for distributed systems using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information because of their limited computing performance, so distributed systems combined with singular value decomposition are proposed to reduce the amount of information processed. For systems using the MPI protocol and the MapReduce model, the study obtained the dependence of computation time on the number of processes, which confirms the expediency of distributed computing when processing large data sets. Distributed systems using the MapReduce model were found to work considerably more efficiently than MPI, especially with large amounts of data, while MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
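To make the data-reduction step concrete, here is a minimal NumPy sketch of the kind of truncated singular value decomposition the article relies on; the matrix, its dimensions, and the chosen rank are illustrative, and the distributed MPI/MapReduce execution that the article benchmarks is not shown.

```python
import numpy as np

# Synthetic data matrix: 10,000 observations x 200 features (true rank 50).
rng = np.random.default_rng(42)
data = rng.standard_normal((10_000, 50)) @ rng.standard_normal((50, 200))

# Truncated SVD: keep the top-k singular triplets to drop redundant dimensions.
k = 20
u, s, vt = np.linalg.svd(data, full_matrices=False)
reduced = u[:, :k] * s[:k]          # compact representation exchanged between nodes
approx = reduced @ vt[:k, :]        # reconstruction from the reduced factors

rel_error = np.linalg.norm(data - approx) / np.linalg.norm(data)
print(f"kept {k} of {len(s)} components, relative error {rel_error:.3f}")
```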


2018 ◽  
Vol 2018 (6) ◽  
pp. 38-39
Author(s):  
Austa Parker ◽  
Yan Qu ◽  
David Hokanson ◽  
Jeff Soller ◽  
Eric Dickenson ◽  
...  

Computers ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 47
Author(s):  
Fariha Iffath ◽  
A. S. M. Kayes ◽  
Md. Tahsin Rahman ◽  
Jannatul Ferdows ◽  
Mohammad Shamsul Arefin ◽  
...  

A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants. The contestants are required to write computer programs capable of solving these problems. An online judge system is used to automate the judging of the programs submitted by the users. Online judges are systems designed for the reliable evaluation of the source code submitted by users. Traditional online judging platforms are not ideally suited for programming labs, as they do not support partial scoring or efficient detection of plagiarized code. Considering this, in this paper we present an online judging framework capable of automatically scoring submissions by efficiently detecting plagiarized content and the level of accuracy of the code. Our system detects plagiarism by computing fingerprints of programs and comparing the fingerprints rather than the whole files. We use winnowing to select fingerprints from among the k-gram hash values of a source file, generated with the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system on large data sets and compared its run time with MOSS, a widely used plagiarism detection tool.
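As a hedged illustration of the fingerprinting scheme the abstract describes, the sketch below hashes the k-grams of a normalized source string with a rolling Rabin–Karp hash and then winnows them by keeping the minimum hash in each sliding window; the parameters k and w, the tie-breaking, and the crude normalization are illustrative choices, not the values used by the system.

```python
def rabin_karp_hashes(text: str, k: int, base: int = 257, mod: int = (1 << 31) - 1):
    """Rolling Rabin-Karp hashes of all k-grams of `text`."""
    if len(text) < k:
        return []
    high = pow(base, k - 1, mod)
    h = 0
    for ch in text[:k]:
        h = (h * base + ord(ch)) % mod
    hashes = [h]
    for i in range(k, len(text)):
        h = ((h - ord(text[i - k]) * high) * base + ord(text[i])) % mod
        hashes.append(h)
    return hashes

def winnow(hashes, w: int):
    """Keep the minimum hash of every window of w consecutive k-gram hashes
    (a simplified version of the winnowing scheme of Schleimer et al.)."""
    fingerprints = set()
    for start in range(len(hashes) - w + 1):
        window = hashes[start:start + w]
        pos = start + min(range(w), key=lambda j: window[j])
        fingerprints.add((pos, hashes[pos]))
    return fingerprints

# Toy comparison of two submissions after stripping whitespace (crude normalization).
a = "for i in range(n): total += i".replace(" ", "")
b = "for j in range(n): total += j".replace(" ", "")
fa = {h for _, h in winnow(rabin_karp_hashes(a, k=5), w=4)}
fb = {h for _, h in winnow(rabin_karp_hashes(b, k=5), w=4)}
print("similarity:", len(fa & fb) / max(1, len(fa | fb)))
```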


2021 ◽  
Author(s):  
Věra Kůrková ◽  
Marcello Sanguineti