HETEROGENEOUS COMPUTING TO ACCELERATE THE SEARCH OF SUPER K-MERS BASED ON MINIMIZERS

2020 ◽  
pp. 525-532
Author(s):  
Nelson Enrique Vera-Parra ◽  
Danilo Alfonso López-Sarmiento ◽  
Cristian Alejandro Rojas-Quintero

K-mer processing techniques that partition the data set on disk using minimizer-type seeds have led to a significant reduction in memory requirements; however, they add processes (the search for and distribution of super k-mers) that can be intensive given the large volume of data. This paper presents a massively parallel processing model that enables the efficient use of heterogeneous computing to accelerate the search for super k-mers based on seeds (minimizers or signatures). The model includes three main contributions: a new data structure, called CISK, that represents super k-mers and their minimizers in an indexed and compact way, and two massive parallelization patterns, one for obtaining the canonical m-mers of a set of reads and another for searching for super k-mers based on minimizers. The model was implemented as two OpenCL kernels. Evaluation of the kernels shows favorable results in execution time and memory requirements for building heterogeneous solutions with simultaneous execution (workload distribution), which perform co-processing using current super k-mer search methods on the CPU and the methods presented here on the GPU. The model implementation code is available in the repository: https://github.com/BioinfUD/K-mersCL.

2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through a variety of image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image data set and adapt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.


1985 ◽  
Vol 7 (4) ◽  
pp. 371-378 ◽  
Author(s):  
W. Jack Rejeski

Subjective estimates of physical work intensity are considered of major importance to those concerned with the prescription of exercise. This article reviews the major theoretical models that might guide research on the antecedents of ratings of perceived exertion (RPE). It is argued that an active rather than passive view of perception is warranted in future research, and a parallel-processing model is emphasized as providing the structure needed for such a reconceptualization. Existing exercise research is reviewed in support of this approach, and several suggestions are offered regarding needed empirical study.


Author(s):  
Navin Pai ◽  
Mark Henderson

Abstract Solid modeling is a very useful industrial tool in the design and manufacture of parts and assemblies. As a tool in the industrial workplace, it has to respond quickly to changes in design; to do this, the intersection algorithms between solids have to be sped up. Optimizations such as vector and parallel processing, traditionally supported by supercomputers, have the potential to solve this problem. A solid modeler was developed based on the boundary representation approach using a half-edge data structure. The parts of the solid modeler code that could be vectorized were identified, and a method that allows loops involving linked lists to vectorize was tested. It was also shown that this solid modeler has inherent parallelism that can be exploited. Results are presented for vectorization and parallelization, and the practical limits of both are highlighted. Improvements to the geometric intersection algorithms are suggested to take advantage of vector and parallel processing, and results on the speedups these algorithms make possible are presented.
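The linked-list vectorization idea can be illustrated with a small sketch. This is my own minimal Python/NumPy analogue, not the authors' code: pointer chasing through the half-edge list defeats vectorization, so the loop is walked once to gather vertices into a contiguous array, after which per-vertex geometry (here a hypothetical point-plane classification of the kind used in intersection tests) runs as vectorized array arithmetic:

```python
import numpy as np

class HalfEdge:
    # minimal half-edge node: a vertex position and a link to the next edge
    def __init__(self, point, nxt=None):
        self.point = point
        self.next = nxt

def gather(edge):
    # walk the linked list once, copying points into a contiguous array;
    # this traversal cannot vectorize, but everything downstream of it can
    pts = []
    e = edge
    while e is not None:
        pts.append(e.point)
        e = e.next
    return np.asarray(pts, dtype=float)

def signed_distances(edge, plane_n, plane_d):
    # classify every vertex on the loop against a plane in one vectorized
    # dot product, instead of a scalar distance computation per node
    pts = gather(edge)
    return pts @ np.asarray(plane_n, dtype=float) - plane_d
```

The design choice mirrors the abstract's method: pay the gather cost once per loop, then let all per-vertex arithmetic run over dense arrays.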


2021 ◽  
Author(s):  
Monique B. Sager ◽  
Aditya M. Kashyap ◽  
Mila Tamminga ◽  
Sadhana Ravoori ◽  
Christopher Callison-Burch ◽  
...  

BACKGROUND Reddit, the fifth most popular website in the United States, boasts a large and engaged user base on its dermatology forums, where users crowdsource free medical opinions. Unfortunately, much of the advice provided is unvalidated and could lead to inappropriate care. Initial testing has shown that artificially intelligent bots can detect misinformation on Reddit forums and may be able to produce responses to posts containing misinformation. OBJECTIVE To analyze the ability of bots to find and respond to health misinformation on Reddit's dermatology forums in a controlled test environment. METHODS Using natural language processing techniques, we trained bots to target misinformation using relevant keywords and to post prefabricated responses. We compared performance across different model architectures on a held-out test set. RESULTS Our models yielded test accuracies ranging from 95% to 100%, with a fine-tuned BERT model achieving the highest test accuracy. The bots were then able to post corrective prefabricated responses to misinformation. CONCLUSIONS Using a limited data set, bots had a near-perfect ability to detect these examples of health misinformation within Reddit dermatology forums. Given that these bots can then post prefabricated responses, this technique may allow for the interception of misinformation. Providing correct information, even instantly, does not, however, mean users will be receptive or find such interventions persuasive. Further work should investigate this strategy's effectiveness to inform future deployment of bots as a technique for combating health misinformation. CLINICALTRIAL N/A
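The keyword-targeting step paired with a prefabricated response can be sketched as a minimal baseline. This is my own illustration: the patterns, the response text, and `flag_post` are all hypothetical, and it is far simpler than the fine-tuned BERT classifier the study reports:

```python
import re

# hypothetical misinformation patterns; the study's actual keyword lists
# and trained models are not reproduced here
MISINFO_PATTERNS = [
    r"\btoothpaste\b.*\bpimples?\b",
    r"\bessential oils?\b.*\bcures?\b",
    r"\bbleach\b.*\beczema\b",
]

CANNED_RESPONSE = ("This claim is not supported by dermatological evidence; "
                   "please consult a board-certified dermatologist.")

def flag_post(text):
    # return a corrective response if any pattern matches, otherwise None
    lowered = text.lower()
    for pat in MISINFO_PATTERNS:
        if re.search(pat, lowered):
            return CANNED_RESPONSE
    return None
```

A deployed bot would replace the pattern match with the trained classifier's prediction, but the flag-then-respond control flow is the same.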


Diabetes ◽  
2021 ◽  
Vol 70 (Supplement 1) ◽  
pp. 548-P
Author(s):  
DANIEL J. RUBIN ◽  
DEBORAH A. SWAVELY ◽  
JESSE BRAJUHA ◽  
PATRICK J. KELLY ◽  
SHANEISHA ALLEN ◽  
...  

2012 ◽  
Author(s):  
A. Robert Weiß ◽  
Uwe Adomeit ◽  
Philippe Chevalier ◽  
Stéphane Landeau ◽  
Piet Bijl ◽  
...  

Compiler ◽  
2012 ◽  
Vol 1 (2) ◽  
Author(s):  
Devia Tito Setyaningsih ◽  
Hero Wintolo ◽  
Dwi Nugraheny

One of the Internet services used by the people of Indonesia is the blog, accessible via http://blogspot.com or http://wordpress.com, among others. Anyone can use this medium to publish information in the form of text, images, sound, or video without having to master web programming languages. One obstacle for a blog owner is increasing the number of visitors to the blog; to attract many visitors, the owner needs to understand SEO (Search Engine Optimization). To help blog users increase their visitor numbers and ranking, a system was invented that utilizes parallel processing techniques and introduces a new method, named TSC (Together in a Single Connection), to increase the number of visitors, the number of pages viewed, and the blog's ranking. This parallel-processing-based system can improve the effectiveness of increasing traffic to a blog, and with it the blog owner is expected to find it easier to attract visits.
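The abstract does not detail the TSC method itself, but the underlying parallel-processing idea, dispatching many page requests concurrently rather than serially, can be sketched generically. This is my own illustration; `visit_all` and its parameters are assumptions, not the authors' implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def visit_all(urls, fetch, workers=8):
    # issue page requests in parallel instead of one by one; `fetch` is any
    # callable that retrieves a URL (e.g. a urllib.request.urlopen wrapper),
    # and results come back in the same order as the input list
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))
```

Because the requests are I/O-bound, a thread pool is enough to overlap them; no multiprocessing is needed.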

