Automation of Data Consumption by Pluggable Module Software

2021 ◽  
Vol 23 (06) ◽  
pp. 1672-1681
Author(s):  
Vinay Balamurali ◽  
Prof. Venkatesh S ◽  

Servers must monitor the health of the various I/O cards connected to them and alert the appropriate personnel to service these cards. The Data Collection Unit (DCU) is responsible for detecting the I/O cards, reporting their inventory, and monitoring their health. Currently, the keys required to detect these I/O cards are manually coded into the source code, a highly laborious and time-consuming task. To eliminate this manual work, a Software Pluggable Module was devised to read the I/O card-related information from the I/O component list. The software design uses Data Science and object-oriented programming (OOP) concepts to automate certain tasks on server systems. The proposed methodology is implemented on a Linux system. The design is modular in nature and extensible to accommodate future requirements. Such an automation framework can be used to track information maintained in Excel spreadsheets and access it through an Application Programming Interface (API).
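The abstract describes replacing hard-coded card keys with a module that loads them from a component list and exposes them via an API. A minimal sketch of that idea, assuming a CSV stand-in for the spreadsheet and entirely hypothetical class and field names:

```python
import csv
import io

# Hypothetical sketch of the pluggable-module idea: instead of hard-coding
# card keys, the module loads them from a component list (a CSV here,
# standing in for the Excel spreadsheet) and serves them through a small API.
class IOCardRegistry:
    """Loads I/O card detection keys from a component list."""

    def __init__(self):
        self._cards = {}

    def load(self, csv_file):
        # Each row of the component list: card key, card name, slot.
        for row in csv.DictReader(csv_file):
            self._cards[row["key"]] = {"name": row["name"], "slot": row["slot"]}

    def detect(self, key):
        """API call: return card info for a detection key, or None if unknown."""
        return self._cards.get(key)

    def inventory(self):
        """API call: list all known cards for health/inventory reporting."""
        return list(self._cards.values())

component_list = io.StringIO("key,name,slot\n0x1A,NIC-10G,3\n0x2B,RAID-CTL,5\n")
registry = IOCardRegistry()
registry.load(component_list)
print(registry.detect("0x1A"))   # {'name': 'NIC-10G', 'slot': '3'}
print(len(registry.inventory()))  # 2
```

New card types then require only a new spreadsheet row, not a source-code change, which is the manual step the paper aims to eliminate.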

2020 ◽  
Vol 12 (10) ◽  
pp. 4200 ◽  
Author(s):  
Thanh-Long Giang ◽  
Dinh-Tri Vo ◽  
Quan-Hoang Vuong

Using data from the WHO’s Situation Report on the COVID-19 pandemic from 21 January 2020 to 30 March 2020, along with other health, demographic, and macroeconomic indicators from the WHO’s Application Programming Interface and the World Bank’s Development Indicators, this paper explores the death rates of infected persons and their possible associated factors. Through panel analysis, we found consistent results that healthcare system conditions, particularly the number of hospital beds and medical staff, have played extremely important roles in reducing death rates of COVID-19 infected persons. In addition, both the mortality rates due to different non-communicable diseases (NCDs) and the rate of people aged 65 and over were significantly related to the death rates. We also found that controlling international and domestic air travel, along with increasingly popular anti-COVID-19 actions (i.e., quarantine and social distancing), would help reduce the death rates in all countries. We conducted tests for robustness and found that the Driscoll and Kraay (1998) method was the most suitable estimator with a finite sample, which helped confirm the robustness of our estimations. Based on the findings, we suggest that the preparedness of healthcare systems for aged populations needs more attention from the public and politicians, regardless of income level, when facing COVID-19-like pandemics.


Analysis of structured, consistent data has seen remarkable success in past decades, whereas the analysis of unstructured data in multimedia formats remains a challenging task. YouTube is one of the most popular and widely used social media platforms. It reveals community feedback through comments on published videos, numbers of likes and dislikes, and the number of subscribers for a particular channel. The main objective of this work is to demonstrate, using Apache Hadoop framework concepts, how data generated from YouTube can be mined and utilized to make targeted, real-time, and informed decisions. This YouTube data is publicly available, and the dataset is described below under the heading Data Set Description. The dataset is fetched from Google using the YouTube API (Application Programming Interface) and stored in the Hadoop Distributed File System (HDFS). Using MapReduce, we analyze the dataset to identify the video categories in which the most videos are uploaded.
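The core of the MapReduce job the paper describes is a per-category count. A minimal in-process sketch of that map/reduce logic, in plain Python rather than on a Hadoop cluster, with illustrative record fields:

```python
from itertools import groupby
from operator import itemgetter

# Illustrative stand-in for the MapReduce job: count uploads per video
# category. On a real cluster the map and reduce functions would run under
# Hadoop (e.g., via Hadoop Streaming); here they run in a single process.
def map_phase(records):
    # Mapper: emit (category, 1) for every video record.
    for record in records:
        yield (record["category"], 1)

def reduce_phase(pairs):
    # Shuffle/sort then reduce: group by category and sum the counts.
    for category, group in groupby(sorted(pairs, key=itemgetter(0)),
                                   key=itemgetter(0)):
        yield (category, sum(count for _, count in group))

videos = [
    {"title": "a", "category": "Music"},
    {"title": "b", "category": "Gaming"},
    {"title": "c", "category": "Music"},
]
counts = dict(reduce_phase(map_phase(videos)))
print(counts)  # {'Gaming': 1, 'Music': 2}
```

The `sorted`/`groupby` step plays the role of Hadoop's shuffle phase, which delivers all values for one key to the same reducer.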


2019 ◽  
Vol 8 (3) ◽  
pp. 6996-7001

Data mining is a method of analyzing and exploring large blocks of data to glean meaningful trends and patterns. Today, most people rely on allopathic treatments and medicines. Data mining techniques can be applied to medical databases, which offer a vast scope of opportunity for textual as well as visual data; medical services hold myriad obscure data that need to be scrutinized, and data mining is the key to gaining useful knowledge from them. This paper provides an application programming interface that recommends drugs to users suffering from a particular disease, which the framework also diagnoses by analyzing the user's symptoms by means of machine learning algorithms. Mining procedures are used to determine the most probable disease associated with the reported symptoms: the patient simply enters their complaints, and the application interface outputs the disease the user may be affected by. The framework should prove valuable in critical situations where the patient cannot reach a hospital or no professional is available in the area. Predictive analysis is performed on the diagnosed disease to recommend drugs to the user, taking into account various features in the database. The experimental results can also be used in further research work and in healthcare tools.
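The abstract does not specify the model, so a simple symptom-set similarity matcher can stand in for the machine-learning step to show the symptoms-in, drug-out flow; all disease and drug data below is invented for the example:

```python
# Illustrative sketch only: the paper's actual model is not specified, so a
# Jaccard-similarity matcher over symptom sets stands in for the ML step.
# The disease profiles and drug table are invented example data.
DISEASE_PROFILES = {
    "influenza": {"fever", "cough", "fatigue", "body ache"},
    "migraine": {"headache", "nausea", "light sensitivity"},
}
DRUGS = {"influenza": "oseltamivir", "migraine": "sumatriptan"}

def diagnose(symptoms):
    """Return the disease whose symptom profile best matches the input."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(DISEASE_PROFILES,
               key=lambda d: jaccard(set(symptoms), DISEASE_PROFILES[d]))

def recommend(symptoms):
    """Diagnose from symptoms, then look up a drug for that disease."""
    disease = diagnose(symptoms)
    return disease, DRUGS[disease]

print(recommend(["fever", "cough", "fatigue"]))  # ('influenza', 'oseltamivir')
```

A real system would replace the similarity score with a trained classifier and the drug table with the paper's predictive analysis over database features.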


2018 ◽  
Author(s):  
Soohyun Lee ◽  
Jeremy Johnson ◽  
Carl Vitzthum ◽  
Koray Kırlı ◽  
Burak H. Alver ◽  
...  

Abstract Summary We introduce Tibanna, an open-source software tool for automated execution of bioinformatics pipelines on Amazon Web Services (AWS). Tibanna accepts reproducible and portable pipeline standards including Common Workflow Language (CWL), Workflow Description Language (WDL) and Docker. It adopts a strategy of isolation and optimization of individual executions, combined with a serverless scheduling approach. Pipelines are executed and monitored using local commands or the Python Application Programming Interface (API) and cloud configuration is automatically handled. Tibanna is well suited for projects with a range of computational requirements, including those with large and widely fluctuating loads. Notably, it has been used to process terabytes of data for the 4D Nucleome (4DN) Network. Availability Source code is available on GitHub at https://github.com/4dn-dcic/tibanna.


2021 ◽  
Author(s):  
Florian Malard ◽  
Laura Danner ◽  
Emilie Rouzies ◽  
Jesse G Meyer ◽  
Ewen Lescop ◽  
...  

Abstract Summary Artificial Neural Networks (ANNs) have achieved unequaled performance on numerous problems in many areas of science, business, public policy, and more. While experts are familiar with performance-oriented software and the underlying theory, ANNs are difficult for non-experts to comprehend because they require programming skills, a background in mathematics, and knowledge of terminology and concepts. In this work, we release EpyNN, an educational Python resource meant for a public willing to understand key concepts and the practical implementation of scalable ANN architectures from concise, homogeneous, and idiomatic source code. EpyNN contains an educational Application Programming Interface (API), educational workflows from data preparation to ANN training, and a documentation website setting code, mathematics, graphical representations, and text side by side to facilitate learning and provide teaching material. Overall, EpyNN provides the basics for Python-fluent individuals who wish to learn, teach, or develop from scratch. Availability EpyNN documentation is available at https://epynn.net and the repository can be retrieved from https://github.com/synthaze/epynn. Contact Stéphanie Olivier-Van-Stichelen, [email protected]. Supplementary Information Supplementary files and listings.
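The concepts EpyNN sets side by side with its code (forward pass, backpropagation, gradient descent) can be illustrated in a few lines of NumPy. This is a generic sketch, not EpyNN's API: a single-hidden-layer network trained on XOR.

```python
import numpy as np

# Generic illustration of the ANN basics the abstract refers to; this is
# NOT EpyNN's API, just a minimal one-hidden-layer network trained on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

for _ in range(2000):
    h, out = forward(X)
    # Backpropagate the mean-squared-error loss through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
print(final_loss < initial_loss)  # True
```

Each line maps to one of the mathematical objects an educational resource like EpyNN documents: the weight matrices, the activation, the loss, and its gradients.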


Author(s):  
D. Oxoli ◽  
H.-K. Kang ◽  
M. A. Brovelli

<p><strong>Abstract.</strong> The open and direct collaboration in the creation, improvement, and documentation of source code and software applications &ndash; enabled by the web &ndash; is recognized as a peculiarity of Free and Open Source Software for Geospatial (FOSS4G) projects and, at the same time, one of their main strengths. With this in mind, it is interesting to perform an extensive monitoring of both the evolution and the geographical arrangement of the developers&rsquo; communities in order to investigate their actual extension, evolution, and degree of activity. In this work, a semi-automatic procedure to perform this analysis is described. The procedure is mainly based on the GitHub Search Application Programming Interface, used through custom JavaScript modules to perform a census of the users registered with a collaborator role on the repositories of the most popular FOSS4G projects hosted on the GitHub platform. The collected data are processed and analysed using Python and QGIS. The results &ndash; presented through tables, charts, and thematic maps &ndash; describe both the size and the geographical heterogeneity of the contributing community of each individual project, and identify the most active countries &ndash; in terms of the number of contributors &ndash; in the development of the most popular FOSS4G projects. The limits of the analysis, including technical constraints and considerations on the significance of the developers' census, are finally highlighted and discussed.</p>
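The paper's census runs through the GitHub Search API in JavaScript; a hedged Python sketch of the same two steps, using the standard REST contributors endpoint (which does exist, though it returns no location; a follow-up per-user lookup would supply that field), with the aggregation demonstrated offline on invented records:

```python
import json
import urllib.request
from collections import Counter

# Sketch of the census idea. fetch_contributors uses GitHub's REST
# "list repository contributors" endpoint and needs network access; the
# aggregation step below runs offline on invented user records, since the
# "location" field would come from a separate per-user lookup.
def fetch_contributors(owner, repo):
    """Fetch the contributor list for a repository (requires network access)."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contributors"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def count_by_location(users):
    """Aggregate user records into per-location contributor counts."""
    return Counter(u.get("location") or "unknown" for u in users)

# Offline demonstration of the aggregation with invented records:
sample = [{"login": "a", "location": "Italy"},
          {"login": "b", "location": "Korea"},
          {"login": "c", "location": "Italy"},
          {"login": "d"}]
print(count_by_location(sample))
# Counter({'Italy': 2, 'Korea': 1, 'unknown': 1})
```

The per-location counts are exactly what the paper then maps with QGIS to show the geographical heterogeneity of each project's community.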


2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Lovisa Sundin ◽  
Nourhan Sakr ◽  
Juho Leinonen ◽  
Quintin Cutts

<p style='text-indent:20px;'>With the rising demand for data science skills, the ability to wrangle data programmatically becomes crucial, and API (application programming interface) lookup a key barrier. In this paper, we discuss the centrality of API lookup to data wrangling and how an ontology-structured command menu could facilitate it. We design thumbnail graphics as visual alternatives to textual explanations of data wrangling operations and use a survey to validate their quality. We furthermore predict that thumbnail graphics make the menu more navigable, improving lookup efficiency and performance. Our predictions are tested using Slice N Dice, an online data wrangling tutorial platform that collects learner activity and includes both non-programmatic and programmatic data wrangling exercises. Participants from a multi-institutional sample (<i>n</i> = 200) were randomly assigned the tutorial either with or without thumbnail graphics. Our results show that thumbnail graphics reduce the need for clarifications, thereby assisting API lookup for novices learning data wrangling. We also present some negative results regarding performance gain and discuss why the differences are subtle and how they could be improved. Finally, we complement our statistical results with a qualitative study in which participants gave positive feedback on the design and helpfulness of the thumbnail graphics.</p>


2019 ◽  
Vol 35 (21) ◽  
pp. 4424-4426 ◽  
Author(s):  
Soohyun Lee ◽  
Jeremy Johnson ◽  
Carl Vitzthum ◽  
Koray Kırlı ◽  
Burak H Alver ◽  
...  

Abstract Summary We introduce Tibanna, an open-source software tool for automated execution of bioinformatics pipelines on Amazon Web Services (AWS). Tibanna accepts reproducible and portable pipeline standards including Common Workflow Language (CWL), Workflow Description Language (WDL) and Docker. It adopts a strategy of isolation and optimization of individual executions, combined with a serverless scheduling approach. Pipelines are executed and monitored using local commands or the Python Application Programming Interface (API) and cloud configuration is automatically handled. Tibanna is well suited for projects with a range of computational requirements, including those with large and widely fluctuating loads. Notably, it has been used to process terabytes of data for the 4D Nucleome (4DN) Network. Availability and implementation Source code is available on GitHub at https://github.com/4dn-dcic/tibanna. Supplementary information Supplementary data are available at Bioinformatics online.
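The abstract above notes that pipelines are launched with local commands or the Python API against a job description, with cloud configuration handled automatically. A sketch of such a job description assembled as a Python dict; the exact keys Tibanna expects are defined in its own documentation, and the names below are assumptions for the sake of illustration:

```python
import json

# Illustrative only: a job description for a cloud pipeline run. The key
# names below are assumptions sketching the CWL/AWS concepts the abstract
# names; consult the Tibanna documentation for the authoritative schema.
job = {
    "args": {
        "cwl_main_filename": "pipeline.cwl",  # CWL workflow to execute
        "input_files": {"fastq": "s3://my-bucket/sample.fastq.gz"},
        "output_S3_bucket": "my-output-bucket",
    },
    "config": {
        "instance_type": "t3.large",  # EC2 instance for this execution
        "ebs_size": 50,               # GB of attached storage
    },
}
print(json.dumps(job, indent=2))
```

Per-execution fields like `instance_type` and `ebs_size` reflect the isolation-and-optimization strategy the abstract describes: each run gets its own right-sized instance rather than sharing a fixed cluster.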


2021 ◽  
Vol 11 (1) ◽  
pp. 135-145
Author(s):  
Matúš Sulír ◽  
Jaroslav Porubän

Abstract After a voice control system transforms audio input into a natural language sentence, its main purpose is to map this sentence to a specific action in the API (application programming interface) that should be performed. This mapping is usually specified after the API is already designed. In this paper, we show how an API can be designed with voice control in mind, which makes this mapping natural. The classes, methods, and parameters in the source code are named and typed according to the terms expected in the natural language commands. When this is insufficient, annotations (attribute-oriented programming) are used to define synonyms, string-to-object maps, or other properties. We also describe the mapping process and present a preliminary implementation called VCMapper. In its evaluation on a third-party dataset, it was successfully used to map all the sentences, while a large portion of the mapping was performed using only naming and typing conventions.
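The mapping idea can be sketched in Python: method names follow the terms expected in spoken commands, and a decorator plays the role of the annotations the paper uses for synonyms. This is an illustration of the approach, not VCMapper's actual interface, and all names below are invented:

```python
# Sketch of the naming-convention mapping from the abstract. VCMapper uses
# annotations (attribute-oriented programming); a decorator is the closest
# Python analogue. All class, method, and phrase names are illustrative.
SYNONYMS = {}

def synonyms(*phrases):
    """Register alternative spoken phrases for a method (annotation analogue)."""
    def wrap(fn):
        for phrase in phrases:
            SYNONYMS[phrase] = fn.__name__
        return fn
    return wrap

class Lamp:
    @synonyms("switch on", "turn on")
    def enable(self):
        return "lamp enabled"

def map_command(obj, sentence):
    """Map a natural-language sentence to an API method and invoke it."""
    # First try registered synonyms, then fall back to the naming
    # convention: the method name itself appears in the sentence.
    for phrase, method in SYNONYMS.items():
        if phrase in sentence:
            return getattr(obj, method)()
    for name in dir(obj):
        if not name.startswith("_") and name in sentence:
            return getattr(obj, name)()
    return None

print(map_command(Lamp(), "please turn on the lamp"))  # lamp enabled
```

When naming conventions alone suffice ("enable the lamp"), no annotation is needed; the synonym table covers only the phrases the convention cannot, mirroring the paper's division of labor.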

