Software solutions for reproducible RNA-seq workflows

2017 ◽  
Author(s):  
Trevor Meiss ◽  
Ling-Hong Hung ◽  
Yuguang Xiong ◽  
Eric Sobie ◽  
Ka Yee Yeung

Abstract: Computational workflows typically consist of many tools that are usually distributed as compiled binaries or source code. Each of these tools typically depends on other installed software, and performance can vary with versions, updates, and operating systems. We show here that the analysis of mRNA-seq data can depend on the computing environment, and we demonstrate that software containers represent practical solutions that ensure the reproducibility of RNA-seq data analyses.
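The containerization approach described above can be sketched in a few lines: a pinned image tag fixes the tool versions and OS libraries, so the same command reproduces the same environment on any host. The helper below is an illustrative sketch, not the authors' pipeline; the image name and tool invocation are hypothetical.

```python
import subprocess

def container_cmd(image, command, workdir="/data"):
    """Build a 'docker run' invocation that pins the analysis environment:
    the image tag fixes tool versions and OS libraries across machines."""
    return ["docker", "run", "--rm",
            "-v", f"{workdir}:{workdir}",   # mount the data directory
            image] + command

def run_step(image, command):
    """Execute one pipeline step inside the pinned container."""
    return subprocess.run(container_cmd(image, command),
                          capture_output=True, text=True)

# e.g. run_step("example/rnaseq:1.0", ["salmon", "--version"])
```

Because the tag names a fixed image, rerunning the same step later on a different operating system should reproduce the same tool versions.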

2020 ◽  
Author(s):  
Cut Nabilah Damni

Abstract: Computer software is a collection of instructions (programs or procedures) that enables work to be carried out automatically by processing the instructions (data) it is given (Yahfizham, 2019: 19). Most computer software is created by programmers using a programming language. A programmer writes commands in a programming language much as people use everyday language in conversation; these commands are called source code. Another computer program, called a compiler, is applied to the source code and translates those commands into a language the computer understands; the result is called an executable program (EXE). Fundamentally, a computer always has software consisting of an operating system, application systems and programming languages.
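The source-code → compiler → executable chain described above can be illustrated with Python's built-in compiler (Python compiles to bytecode rather than to a native .EXE, but the stages are analogous):

```python
# Source code: human-readable commands written by a programmer.
source = "result = 6 * 7"

# 'Compiler' step: translate the source into a form the machine can run.
code_object = compile(source, "<demo>", "exec")

# 'Execution' step: run the compiled program.
namespace = {}
exec(code_object, namespace)
print(namespace["result"])  # -> 42
```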


2021 ◽  
Vol 10 (1) ◽  
pp. 20
Author(s):  
Walter Tiberti ◽  
Dajana Cassioli ◽  
Antinisca Di Marco ◽  
Luigi Pomante ◽  
Marco Santic

Advances in technology call for a parallel evolution in software. New techniques are needed to support this dynamism and to track and guide its evolution process. This applies especially in the field of embedded systems, and certainly in Wireless Sensor Networks (WSNs), where hardware platforms and software environments change very quickly. Commonly, operating systems play a key role in the development process of any application. The most widely used operating system in WSNs is TinyOS, currently at version 2.1.2. The evolution from TinyOS 1.x to TinyOS 2.x made applications developed on TinyOS 1.x obsolete. In other words, these applications are not compatible out-of-the-box with TinyOS 2.x and require a porting effort. In this paper, we discuss the porting of embedded-system (i.e., Wireless Sensor Network) applications in response to operating systems’ evolution. In particular, using a model-based approach, we report the porting of Agilla, a Mobile-Agent Middleware (MAMW) for WSNs, to TinyOS 2.x, which we refer to as Agilla 2. We also provide a comparative analysis of the characteristics of Agilla 2 versus Agilla. The proposed Agilla 2 is compatible with TinyOS 2.x, retains full capabilities and provides new features, as shown by the maintainability and performance measurements presented in this paper. An additional valuable result is the architectural modeling of Agilla and Agilla 2, missing before, which extends the middleware’s documentation and improves its maintainability.


2016 ◽  
Vol 13 (1) ◽  
pp. 204-211
Author(s):  
Baghdad Science Journal

The internet is a basic source of information for many specialities and uses. Such information includes sensitive data whose retrieval has been one of the basic functions of the internet. To protect this information from falling into the hands of an intruder, a VPN can be established. Through a VPN, data privacy and security can be provided. Two main VPN technologies are discussed: IPSec and OpenVPN. The complexity of IPSec makes OpenVPN the better choice, thanks to the latter’s portability and flexibility across many operating systems. In a LAN, a VPN can be implemented through OpenVPN to establish a double privacy layer (privacy inside privacy). A specific subnet is used in this paper. The key and certificate are generated by the server, and authentication and key exchange are based on the standard SSL/TLS protocol. Various operating systems, both open-source and Windows, are used, each with a different hardware specification. Tools such as tcpdump and jperf are used to verify and measure connectivity and performance. OpenVPN in the LAN depends on the type of operating system, and offers portability and straightforward implementation. The bandwidth captured in this experiment is influenced by the operating system rather than by the memory and the capacity of the hard disk. The relationship and interoperability between each peer and the server are discussed. At the same time, privacy for LAN users can be introduced with a minimal specification.
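Before measuring throughput with a tool like jperf, one typically verifies that peers are reachable across the tunnel. The snippet below is a minimal sketch of such a connectivity check, not the paper's procedure; the tunnel address and port shown are hypothetical.

```python
import socket

def tunnel_reachable(peer_ip, port, timeout=2.0):
    """Return True if a TCP connection to the peer succeeds, e.g. across
    an OpenVPN tunnel subnet (hypothetical address such as 10.8.0.2)."""
    try:
        with socket.create_connection((peer_ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. tunnel_reachable("10.8.0.2", 22)
```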


2017 ◽  
Vol 20 (3) ◽  
pp. 918-930 ◽  
Author(s):  
Michele Bortolomeazzi ◽  
Enrico Gaffo ◽  
Stefania Bortoluzzi

2013 ◽  
Vol 3 (1) ◽  
pp. 44-57 ◽  
Author(s):  
Veena Goswami ◽  
Choudhury Nishkanta Sahoo

Cloud computing has emerged as a new paradigm for accessing distributed computing resources such as infrastructure, hardware platforms, and software applications on demand over the internet as services. This paper presents an optimal resource management framework for a multi-cloud computing environment. The authors model the behavior and performance of applications to integrate different service providers for end-to-end requirements. Each service model caters to a specific type of requirement, and there are already a number of players offering their own customized products/services. Intercloud Federation and Service Delegation models are part of the multi-cloud environment, where the broader target is an effectively infinite pool of resources. The authors propose an analytical queueing network model to improve the efficiency of the system. Numerical results indicate that the proposed provisioning technique detects changes in arrival patterns and resource demands that occur over time and allocates multiple virtualized IT resources accordingly to achieve application Quality of Service targets.
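As a concrete illustration of the kind of analytical queueing model the paper builds on (a single M/M/c station with the Erlang-C formula, not the authors' exact network model), a provisioner can compute the probability that a request must wait and pick the smallest number of virtual servers that meets a QoS target:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving request must wait in an M/M/c queue."""
    a = arrival_rate / service_rate                 # offered load in Erlangs
    if a >= servers:
        return 1.0                                  # unstable: every request waits
    top = (a ** servers / factorial(servers)) * servers / (servers - a)
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    return top / bottom

def servers_needed(arrival_rate, service_rate, max_wait_prob):
    """Smallest number of virtual servers keeping P(wait) under the target."""
    c = 1
    while erlang_c(arrival_rate, service_rate, c) > max_wait_prob:
        c += 1
    return c
```

For example, with arrivals at 0.5 requests/s and a service rate of 1 request/s per server, a single server already gives P(wait) = 0.5; tightening the target below that forces the provisioner to allocate a second server.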


2019 ◽  
Vol 35 (19) ◽  
pp. 3839-3841 ◽  
Author(s):  
Artem Babaian ◽  
I Richard Thompson ◽  
Jake Lever ◽  
Liane Gagnier ◽  
Mohammad M Karimi ◽  
...  

Abstract
Summary: Transposable elements (TEs) influence the evolution of novel transcriptional networks, yet the specific and meaningful interpretation of how TE-derived transcriptional initiation contributes to the transcriptome has been marred by computational and methodological deficiencies. We developed LIONS for the analysis of RNA-seq data to specifically detect and quantify TE-initiated transcripts.
Availability and implementation: Source code, container, test data and instruction manual are freely available at www.github.com/ababaian/LIONS.
Supplementary information: Supplementary data are available at Bioinformatics online.


2019 ◽  
Vol 214 ◽  
pp. 06034 ◽  
Author(s):  
Tibor Šimko ◽  
Lukas Heinrich ◽  
Harri Hirvonsalo ◽  
Dinos Kousidis ◽  
Diego Rodríguez

The revalidation, reinterpretation and reuse of research data analyses requires access to the original computing environment, the experimental datasets, the analysis software, and the computational workflow steps which were used by researchers to produce the original scientific results in the first place. REANA (Reusable Analyses) is a nascent platform enabling researchers to structure their research data analyses with future reuse in mind. The analysis is described by means of a YAML file that captures sufficient information about the analysis assets, parameters and processes. The REANA platform consists of a set of micro-services for launching and monitoring container-based computational workflow jobs on the cloud. The REANA user interface and command-line client enable researchers to easily rerun analysis workflows with new input parameters. The REANA platform aims to support several container technologies (Docker), workflow engines (CWL, Yadage), shared storage systems (Ceph, EOS) and compute cloud infrastructures (Kubernetes/OpenStack, HTCondor) used by the community. REANA was developed with the particle physics use case in mind and profits from synergies with general reusable research data analysis patterns in other scientific disciplines, such as bioinformatics and the life sciences.
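The YAML description mentioned above might look like the following minimal sketch of a serial workflow; the file names, parameter and container image are placeholders, not taken from any specific experiment.

```yaml
inputs:
  files:
    - code/analysis.py        # analysis asset shipped with the workflow
  parameters:
    events: 1000              # example tunable input parameter
workflow:
  type: serial                # one container-based step after another
  specification:
    steps:
      - environment: 'python:3.8'   # container image pinning the environment
        commands:
          - python code/analysis.py --events ${events}
outputs:
  files:
    - results/plot.png        # artifact to preserve for reuse
```

Capturing inputs, workflow steps and outputs in one file is what lets the platform rerun the analysis later with, say, a different `events` value.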


2018 ◽  
Vol 210 ◽  
pp. 04018
Author(s):  
Jarosław Koszela ◽  
Maciej Szymczyk

Today’s hardware has the computing power to conduct virtual simulation. However, even the most powerful machine may not be sufficient when using models characterized by high precision and resolution. Switching to constructive simulation causes a loss of detail in the simulation. Nonetheless, it is possible to use distributed virtual simulation in a cloud-computing environment. The aim of this paper is to propose a model that enables the scaling of virtual simulation. The aspects on which the ability to distribute calculations depends are presented. The commercial SpatialOS solution is presented and performance tests were carried out. The use of distributed virtual simulation allows more extensive and detailed simulation models to be used with thin clients. In addition, the presented model of the simulation cloud can be the basis of a “Simulation-as-a-Service” cloud-computing product.


2010 ◽  
Vol 2 (2) ◽  
pp. 24-35
Author(s):  
Marek Górski ◽  
Marzena Marcinek

In this paper, the authors present the results of research on the use of software tools for data collection and analysis in strategic and day-to-day library management. Special attention is paid to StatuS and to Performance Analysis of Polish Research Libraries (AFBN), the tools most frequently used by Polish academic librarians. StatuS is used by the academic libraries of the Krakow Library Group and several other libraries in Poland. Performance Analysis of Polish Research Libraries (AFBN) is a national project whose main objective is to create standards for Polish libraries based on a set of performance indicators. AFBN consists of an e-survey, a database and special software for the collection and analysis of data. The surveys are submitted by academic and public research libraries once a year. The research on the application of selected software tools for data collection and analysis in library management in Polish academic libraries was conducted in February and March 2009. The results reflect managers’ attitudes toward the usability of such tools in supporting various aspects of managerial processes.

