Software and Hardware Requirements and Trade-Offs in Operating Systems for Wearables: A Tool to Improve Devices’ Performance

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1904 ◽  
Author(s):  
Vicente J. P. Amorim ◽  
Mateus C. Silva ◽  
Ricardo A. R. Oliveira

Wearable device requirements currently range from soft to hard real-time constraints. Hardware improvements are frequently used to speed up the overall performance of a solution. However, changing some or all of the hardware may increase device complexity, raising costs and delaying the development of products or research prototypes. This paper focuses on software improvements, presenting a tool designed to create different versions of operating systems (OSs) that fit the specifications of wearable-device projects. The authors have developed a software tool that allows the end user to craft a new OS in just a few steps. To validate the generated OS, an original wearable prototype for mining environments is outlined. The resulting data allow the actual impact an OS has on different variables of a solution to be measured. Finally, the analysis also allows the performance impact associated with each hardware part to be evaluated. The results suggest the viability of using the proposed solution when searching for performance improvements in wearables.
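
As a rough illustration of the step-wise OS composition such a tool could perform, the sketch below selects components against a hardware budget; the component names, footprints, and WearableSpec fields are all hypothetical, not taken from the paper:

```python
from dataclasses import dataclass

# Hypothetical hardware specification for a wearable project.
@dataclass
class WearableSpec:
    ram_kb: int
    has_display: bool
    needs_hard_realtime: bool

# Candidate OS components with illustrative memory footprints in kB.
COMPONENTS = {
    "preemptive_scheduler": 48,   # needed for hard real-time deadlines
    "cooperative_scheduler": 16,  # lighter, soft real-time only
    "display_driver": 64,
    "ble_stack": 96,
}

def craft_os(spec: WearableSpec) -> list[str]:
    """Select OS components that fit the project's constraints."""
    selected = ["ble_stack"]
    selected.append("preemptive_scheduler" if spec.needs_hard_realtime
                    else "cooperative_scheduler")
    if spec.has_display:
        selected.append("display_driver")
    footprint = sum(COMPONENTS[c] for c in selected)
    if footprint > spec.ram_kb:
        raise ValueError(f"needs {footprint} kB, only {spec.ram_kb} kB available")
    return selected

print(craft_os(WearableSpec(ram_kb=256, has_display=False, needs_hard_realtime=True)))
```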

Author(s):  
Anandakumar Haldorai ◽  
Shrinand Anandakumar

The segmentation step of therapy planning includes a detailed examination of medical images. In diagnosis, clinical research, and patient management, medical images are mainly acquired through radiographic methods, so image-processing software for medical imaging is crucial. Using a bioMIP technique, it is possible to improve and speed up the analysis of a medical image. This article presents a biomedical imaging software tool that aims to provide a similar level of programmability while investigating pipelined processor solutions. These tools simulate entire systems made up of many of the recommended processing segments, within setups categorized by the schematic framework. In this paper, 15 biomedical imaging technologies are evaluated on a number of different levels. The comparison's primary goal is to collect and analyze data in order to suggest which medical imaging program users of various operating systems should use when analyzing various kinds of images. The article includes a results table, which is then reviewed.
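
To make the pipelined-processing idea concrete, here is a minimal sketch in which each processing segment is a function and the pipeline chains them in order; the stages (denoise, threshold) are illustrative stand-ins, not the tool's actual segments:

```python
import numpy as np

def denoise(img: np.ndarray) -> np.ndarray:
    # 3x3 mean filter via shifted sums (simple, dependency-free smoothing).
    padded = np.pad(img, 1, mode="edge")
    acc = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3))
    return acc / 9.0

def threshold(img: np.ndarray) -> np.ndarray:
    # Crude segmentation: binarize at the mean intensity.
    return (img > img.mean()).astype(np.uint8)

def run_pipeline(img, stages):
    for stage in stages:          # each processing segment runs in order
        img = stage(img)
    return img

image = np.random.rand(64, 64)
mask = run_pipeline(image, [denoise, threshold])
print(mask.sum(), "foreground pixels")
```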


Author(s):  
Muhammet Serhat Okyay ◽  
Aysegul Alaybeyoglu ◽  
Aytug Onan

In this study, the necessary data are read from the databases (such as MSSQL, MySQL, PostgreSQL, etc.) of pre-accounting software packages such as ETA and NETSIS, which are used by local and global companies; some of the data are written back to the databases, and the data are made available to end users on mobile phones or tablets running the Android operating system, working over the internet and cloud technology. In addition, new data sets collected on a cloud system from the accounting data while the company operates are processed by genetic algorithms, a family of artificial-intelligence algorithms. The app developed here then reports to users on the performance of the company and its branch offices, and makes suggestions on matters such as financial budget estimation, stock-investment budget estimation, and how to shape investments. The app runs on a system comprising a local server, a cloud server, and Android mobile devices, and includes a user interface with artificial intelligence working in the background. The main aims of this study are to supply companies with the most up-to-date software and hardware technologies, to increase ease of access, and to decide on or recommend solutions with human-like characteristics, such as making the right analyses and decisions quickly for more realistic solutions.
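
The abstract does not detail the genetic algorithm, but a minimal sketch of how one might evolve weights for a budget-estimation model over historical accounting data looks like this (the figures and the two-weight model are invented for illustration):

```python
import random

# Toy monthly figures: (revenue, expenses) -> observed budget need.
HISTORY = [((120.0, 80.0), 52.0), ((150.0, 95.0), 66.0), ((90.0, 70.0), 35.0)]

def fitness(weights):
    # Lower squared error between predicted and observed budget = fitter.
    w_rev, w_exp = weights
    err = sum((w_rev * rev + w_exp * exp - target) ** 2
              for (rev, exp), target in HISTORY)
    return -err

def evolve(pop_size=30, generations=200):
    pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2 + random.gauss(0, 0.05),  # crossover + mutation
                     (a[1] + b[1]) / 2 + random.gauss(0, 0.05))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print("estimated budget weights:", evolve())
```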


2021 ◽  
Vol 17 (4) ◽  
pp. 1-38
Author(s):  
Takayuki Fukatani ◽  
Hieu Hanh Le ◽  
Haruo Yokota

With the recent performance improvements in commodity hardware, low-cost commodity-server-based storage has become a practical alternative to dedicated storage appliances. Because of the high failure rate of commodity servers, data redundancy across multiple servers is required in a server-based storage system. However, the extra storage capacity for this redundancy significantly increases the system cost. Although erasure coding (EC) is a promising method for reducing the amount of redundant data, it requires distributing and encoding data among servers, and there remains a need to reduce the performance impact of these processes, which involve considerable network traffic and processing overhead. The impact becomes especially significant for random-access-intensive applications. In this article, we propose a new lightweight redundancy control for server-based storage. Our proposed method uses a new local-filesystem-based approach that avoids distributing data by adding redundancy to locally stored user data. Our method switches the redundancy method of user data between replication and EC according to the workload, improving capacity efficiency while achieving higher performance. Our experiments show up to 230% better online-transaction-processing performance for our method compared with CephFS, a widely used alternative system. We also confirmed that our proposed method prevents unexpected performance degradation while achieving better capacity efficiency.
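
A minimal sketch of the workload-based switching idea described above; the write-rate threshold, the 3-replica and 4+2 EC parameters, and the decision rule are illustrative assumptions, not the paper's actual policy:

```python
# Replication: 3x capacity overhead but cheap small writes.
REPLICATION_FACTOR = 3
# Erasure coding 4+2: 1.5x overhead but encoding cost on every write.
EC_DATA, EC_PARITY = 4, 2

def choose_redundancy(random_write_ops_per_sec: float,
                      hot_threshold: float = 500.0) -> str:
    """Hot, random-write-heavy data stays replicated; cold data moves to EC."""
    if random_write_ops_per_sec > hot_threshold:
        return "replication"    # avoid per-write encode + distribution cost
    return "erasure_coding"     # reclaim capacity for rarely updated data

def capacity_overhead(method: str) -> float:
    if method == "replication":
        return float(REPLICATION_FACTOR)
    return (EC_DATA + EC_PARITY) / EC_DATA

for iops in (1200.0, 50.0):
    m = choose_redundancy(iops)
    print(f"{iops:>7} write IOPS -> {m} ({capacity_overhead(m):.1f}x capacity)")
```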


2018 ◽  
Vol 9 (2) ◽  
pp. 257-274
Author(s):  
Ririen Kusumawati

Computer technology has advanced at an incredible pace, with computer software and hardware competing to meet customers' needs. This research intends to spread knowledge of information technology, specifically of artificial intelligence. The concept of artificial intelligence is to adopt and imitate human form, character, and habits and to implement them on the computer. Using a natural approach, the research aims to investigate whether artificial intelligence (AI) will produce a duplication of God's creation. Another important motivation of other AI research is to create a computer that is smart and able to understand the working of the human brain. Hence, AI has been made more practical by faster CPUs, cheaper mass memory, and sophisticated software tools. The concept of integrating AI science, or collaborative art among sub-fields of technology, will stimulate and lead to further AI research, and it will be an interesting topic for AI researchers developing AI technology in the future.


Author(s):  
Pascal Prado ◽  
Yulia Panchenko ◽  
Jean-Yves Trépanier ◽  
Christophe Tribes

The Preliminary Multidisciplinary Design Optimization (PMDO) project addresses the development and implementation of the Multidisciplinary Design Optimization (MDO) methodology in the concept/preliminary stages of the gas turbine design process. These initial phases encompass a wide range of coupled engineering disciplines. The PMDO System is a software tool intended to integrate existing design and analysis tools, decompose coupled multidisciplinary problems, and thereby allow optimizers to speed up the preliminary engine design process. The current paper is a brief presentation of the specifications for the PMDO System as well as a description of the prototype being developed and evaluated. The currently assumed flexible architecture is based on three software components that can be installed on different computers: a Java/XML MultiServer, a Java graphical user interface, and a commercial optimization software package.
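
The central technical task here, resolving couplings between disciplines before an optimizer can act, can be illustrated with a toy two-discipline fixed-point iteration (a generic MDO sketch with made-up models, not PMDO System code):

```python
def aero(t_inlet: float) -> float:
    # Toy aerodynamic analysis: mass flow depends on inlet temperature.
    return 100.0 / (1.0 + 0.001 * t_inlet)

def thermo(mass_flow: float) -> float:
    # Toy thermal analysis: inlet temperature depends on mass flow.
    return 800.0 + 2.0 * mass_flow

def converge_coupling(tol: float = 1e-6, max_iter: int = 100) -> tuple[float, float]:
    """Gauss-Seidel iteration until the coupled variables agree."""
    t = 900.0                      # initial guess for inlet temperature
    for _ in range(max_iter):
        m = aero(t)                # discipline 1 consumes discipline 2's output
        t_new = thermo(m)          # and vice versa
        if abs(t_new - t) < tol:
            return m, t_new        # consistent state: safe to hand to an optimizer
        t = t_new
    raise RuntimeError("coupling did not converge")

print(converge_coupling())
```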


Author(s):  
Merrill Warkentin ◽  
Kimberly Davis ◽  
Ernst Bekkering

The objective of information system security management is information assurance: to maintain the confidentiality (privacy), integrity, and availability of information resources for authorized organizational end users. User authentication is a foundational procedure in the overall pursuit of these objectives, and password procedures have historically been the primary method of user authentication. There is an inverse relationship between the level of security provided by a password procedure and its ease of recall for users. The longer the password and the more variability in its characters, the higher the level of security it provides (because such passwords are more difficult to violate or “crack”). However, such passwords tend to be more difficult for end users to remember, particularly when the password does not spell a recognizable word (or includes non-alphanumeric characters such as punctuation marks or other symbols). Conversely, when end users select their own, more easily remembered passwords, the passwords may also be easier to crack. This study presents a new approach to entering passwords that combines a high level of security with easy recall for the end user. The Check-Off Password System (COPS) is more secure than self-selected passwords as well as high-protection assigned-password procedures. The present study investigates trade-offs between using COPS and three traditional password procedures, and provides a preliminary assessment of the efficacy of COPS. The study offers evidence that COPS is a valid alternative to current user-authentication systems: end users perceive all password procedures tested to have equal usefulness, the perceived ease of use of COPS passwords equals that of an established high-security password, and the new interface does not negatively affect user performance compared with that high-security password. Further research will be conducted to investigate long-term benefits.
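
The security-versus-recall trade-off can be made concrete with a back-of-the-envelope entropy calculation (a generic illustration of password strength, not the COPS mechanism itself):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for a uniformly random password."""
    return length * math.log2(alphabet_size)

# Lowercase-only 8-char password vs. mixed 12-char with symbols.
print(f"8 chars, a-z            : {entropy_bits(26, 8):5.1f} bits")
print(f"12 chars, a-zA-Z0-9+sym : {entropy_bits(94, 12):5.1f} bits")
# Roughly 37.6 vs. 78.7 bits: each extra bit doubles the attacker's
# search space, which is why long, varied passwords resist cracking
# but are harder for end users to remember.
```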


Author(s):  
Peter Šimurka ◽  
Ján Procháska

The continually increasing requirements on today's full-scope Level 1 and Level 2 PSA as a whole, multiplied by the importance of specific data for all modes of operation of a nuclear power plant, highlight the role of the input data used in the PSA quantification process. This also emphasizes the capability to process all of the information necessary to analyze all plant modes in an appropriate way. Even though the aspects above are relevant for all parts of today's PSAs, their importance is critical for internal hazards, including specific fire analysis, because internal fire analysis is one of the most challenging PSA tasks: it requires interdisciplinary work, including the processing and integration of an extensive amount of data in such a way that the fire-analysis results are fully consistent with internal PSA events and can be directly incorporated into the PSA project. Applying a tailored information system is one way to speed up the analysis process; it enhances the manageability and maintainability of particular PSA projects and provides an effective reporting means to document the work process, as well as traceable, human-readable documentation for customers. Such an information system also allows rapid changes in the processing of input data and reduces the risk of human error. The use of information systems for modifying input data for a Living PSA is invaluable: transparent, highly automated processing of the input data allows the analyst to obtain more accurate results and better insight when evaluating aspects of a particular fire and its consequences. This paper provides a brief overview of the VUJE approach and experience in this area. It introduces the general purpose of a database developed for PSA needs, containing data for the relevant PSA structures, systems, and components, as well as information relevant to flood and fire analyses. The paper explains how this basic data source is enhanced by adding several relatively independent tiers that employ all of the common data for fire-PSA purposes. It also briefly introduces the capability of such a system to generate integrated documentation covering all stages of the fire analyses, including all screening stages, as well as future plans to enhance this part of the work so as to build an automatic interface between the PSA model and the fire database, enabling automatic updating of PSA model parameters and the expansion of fires into combinations of initiating events (for example, a fire combined with a seismic event).
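
As a rough illustration of the kind of screening step such a fire-PSA database can automate, the sketch below flags fire compartments that contain safety-related equipment; the schema, identifiers, and screening rule are hypothetical, not VUJE's:

```python
import sqlite3

# Hypothetical component-by-compartment table for fire screening.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE component (id TEXT, compartment TEXT, safety_related INTEGER);
INSERT INTO component VALUES
  ('PUMP-1A',  'FC-101', 1),
  ('VALVE-2B', 'FC-101', 0),
  ('CABLE-3C', 'FC-205', 1);
""")

# Screening rule: compartments with no safety-related equipment are
# screened out; the rest proceed to detailed fire analysis.
rows = db.execute("""
    SELECT compartment, COUNT(*) FROM component
    WHERE safety_related = 1
    GROUP BY compartment
""").fetchall()
for compartment, n in rows:
    print(f"{compartment}: {n} safety-related component(s) -> detailed analysis")
```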


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Jacob R Heldenbrand ◽  
Saurabh Baheti ◽  
Matthew A Bockol ◽  
Travis M Drucker ◽  
Steven N Hart ◽  
...  

Abstract Background Use of the Genome Analysis Toolkit (GATK) continues to be the standard practice in genomic variant calling in both research and the clinic. Recently the toolkit has been evolving rapidly. Significant computational performance improvements were introduced in GATK3.8 through collaboration with Intel in 2017, and the first release of GATK4 in early 2018 revealed rewrites in the code base as the stepping stone toward a Spark implementation. As the software continues to be a moving target for optimal deployment in highly productive environments, we present a detailed analysis of these improvements to help the community stay abreast of changes in performance. Results We re-evaluated multiple options, such as threading, parallel garbage collection, I/O options, and data-level parallelization. Additionally, we considered the trade-offs of using GATK3.8 versus GATK4. We found optimized parameter values that reduce the time of executing the best-practices variant-calling procedure by 29.3% for GATK3.8 and 16.9% for GATK4. Further speedups can be accomplished by splitting data for parallel analysis, resulting in a run time of only a few hours on a whole human genome sequenced to a depth of 20X, for both versions of GATK. Nonetheless, GATK4 is already much more cost-effective than GATK3.8: thanks to significant rewrites of the algorithms, the same analysis can be run largely in a single-threaded fashion, allowing users to process multiple samples on the same CPU. Conclusions In time-sensitive situations, when a patient has a critical or rapidly developing condition, it is useful to minimize the time to process a single sample. In such cases we recommend using GATK3.8, splitting the sample into chunks and computing across multiple nodes. The resultant walltime will be nnn.4 hours at a cost of $41.60 on 4 c5.18xlarge instances of Amazon Cloud. For cost-effectiveness of routine analyses or for large population studies, it is useful to maximize the number of samples processed per unit time. Thus we recommend GATK4, running multiple samples on one node. The total walltime will be ∼34.1 hours for 40 samples, with 1.18 samples processed per hour at a cost of $2.60 per sample on a c5.18xlarge instance of Amazon Cloud.
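
The "splitting data for parallel analysis" speedup follows a scatter-gather pattern; the sketch below shows the idea with a placeholder worker rather than actual GATK commands (the interval size and worker count are arbitrary):

```python
from multiprocessing import Pool

CHR1_LENGTH = 248_956_422  # GRCh38 chromosome 1, for illustration

def make_intervals(length: int, chunk: int) -> list[tuple[int, int]]:
    # Divide the genome region into fixed-size chunks.
    return [(s, min(s + chunk, length)) for s in range(0, length, chunk)]

def call_variants(interval: tuple[int, int]) -> str:
    start, end = interval
    # Placeholder for per-interval variant calling (one worker or node each).
    return f"chr1:{start}-{end} done"

if __name__ == "__main__":
    intervals = make_intervals(CHR1_LENGTH, chunk=50_000_000)
    with Pool(processes=4) as pool:          # scatter across workers
        results = pool.map(call_variants, intervals)
    print("\n".join(results))                # gather per-chunk outputs
```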


Author(s):  
Eduardo C. Inacio ◽  
Mario A. R. Dantas ◽  
Francieli Z. Boito ◽  
Philippe O. A. Navaux ◽  
Douglas D. J. de Macedo
