Computing Gamma Calculus on Computer Cluster

2010 ◽  
Vol 1 (4) ◽  
pp. 42-52 ◽  
Author(s):  
Hong Lin ◽  
Jeremy Kemp ◽  
Padraic Gilbert

Gamma Calculus is an inherently parallel, high-level programming model that allows simple programming molecules to interact, creating a complex system with a minimum of coding. Programs modeled in Gamma calculus were written on top of IBM’s TSpaces middleware, which is Java-based and uses a “Tuple Space” based model for communication, similar to that in Gamma. A parser was written in C++ to translate the Gamma syntax. This was implemented on UHD’s grid cluster (grid.uhd.edu), and in an effort to increase performance and scalability, existing Gamma programs are being ported to Nvidia’s CUDA architecture. General-purpose GPU computing is well suited to running Gamma programs, as GPUs excel at applying the same operation to a large data set, potentially offering a large speedup.
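Gamma's standard introductory example illustrates the chemical-reaction model: to find the maximum of a multiset, the single reaction x, y → max(x, y) is applied to arbitrarily chosen pairs until only one molecule remains. A minimal Python sketch of that rewriting semantics (an illustration of the model only, not the C++ parser or TSpaces implementation described above):

```python
import random

def gamma_max(multiset):
    """Gamma-style multiset rewriting: the reaction x, y -> max(x, y)
    repeatedly replaces any two molecules with the larger one until
    no pair can react, leaving the maximum as the sole molecule."""
    pool = list(multiset)
    while len(pool) > 1:
        # Pick two molecules nondeterministically, as Gamma semantics allow.
        i, j = random.sample(range(len(pool)), 2)
        x, y = pool[i], pool[j]
        # React: remove both, put the survivor back into the solution.
        pool = [m for k, m in enumerate(pool) if k not in (i, j)]
        pool.append(max(x, y))
    return pool[0]

print(gamma_max([7, 3, 9, 1, 4]))  # -> 9
```

The reaction order is nondeterministic, but the result is not: whichever pairs react, the maximum always survives — which is exactly the property that makes such programs inherently parallel.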


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 193 ◽  
Author(s):  
Sebastian Raschka ◽  
Joshua Patterson ◽  
Corey Nolet

Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that drive it, from processing the massive piles of data generated each day to learning from them and taking useful action. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the most preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
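The "clean high-level APIs" point can be made concrete with a minimal estimator exposing the fit/predict interface popularized by scikit-learn; the class below is a hypothetical sketch (plain closed-form least squares for a single feature), not any library's actual implementation:

```python
class SimpleLinearRegression:
    """Toy estimator with the fit/predict convention popularized by
    scikit-learn; the numerics are closed-form simple least squares."""

    def fit(self, x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        # slope = cov(x, y) / var(x); intercept from the means.
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        self.slope = sxy / sxx
        self.intercept = my - self.slope * mx
        return self

    def predict(self, x):
        return [self.intercept + self.slope * xi for xi in x]

model = SimpleLinearRegression().fit([1, 2, 3, 4], [2, 4, 6, 8])
print(model.predict([5]))  # -> [10.0]
```

The design choice the survey highlights is visible even here: the user-facing interface stays small and uniform while the numerical work (in real libraries, compiled low-level kernels) hides behind it.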


2017 ◽  
Vol 73 (6) ◽  
pp. 478-487 ◽  
Author(s):  
Daniel Castaño-Díez

Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance.
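At its core, the subtomogram alignment that Dynamo accelerates is a search for the transformation maximizing the cross-correlation between a particle and a reference. A toy one-dimensional Python sketch of that translational search — a deliberate simplification for illustration, not Dynamo's actual (MATLAB/C++, three-dimensional, rotational-plus-translational) algorithm:

```python
def best_shift(reference, particle, max_shift):
    """Find the circular shift of `particle` that maximizes its
    cross-correlation with `reference` -- a 1-D toy version of the
    translational search used in subtomogram alignment."""
    n = len(reference)
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        # Correlation score of the particle shifted by s.
        score = sum(reference[i] * particle[(i + s) % n] for i in range(n))
        if score > best_score:
            best, best_score = s, score
    return best

ref = [0, 0, 1, 5, 1, 0, 0, 0]
par = [0, 0, 0, 0, 1, 5, 1, 0]  # same peak, shifted right by 2
print(best_shift(ref, par, 3))  # -> 2
```

In three dimensions the same exhaustive scoring over shifts (and rotations) becomes numerically intensive, which is why packages like Dynamo push it onto compiled code and GPUs.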


PeerJ ◽  
2015 ◽  
Vol 3 ◽  
pp. e1393 ◽  
Author(s):  
Camille Desjonquères ◽  
Fanny Rybak ◽  
Marion Depraetere ◽  
Amandine Gasc ◽  
Isabelle Le Viol ◽  
...  

The past decade has produced an increased ecological interest in sonic environments, or soundscapes. However, despite this rise in interest and technological improvements that allow for long-term acoustic surveys in various environments, some habitats’ soundscapes remain to be explored. Ponds, and more generally freshwater habitats, are one of these acoustically unexplored environments. Here we undertook the first long-term acoustic monitoring of three temperate ponds in France. By aural and visual inspection of a selection of recordings, we identified 48 different sound types, and according to the rarefaction curves we calculated, more sound types are likely present in one of the three ponds. The richness of sound types varied significantly across ponds. Surprisingly, there was no pond-to-pond daily consistency of sound type richness variation; each pond had its own daily patterns of activity. We also explored the possibility of using six acoustic diversity indices to conduct rapid biodiversity assessments in temperate ponds. We found that all indices were sensitive to the background noise as estimated through correlations with the signal-to-noise ratio (SNR). However, we determined that the AR index could be a good candidate to measure acoustic diversities using partial correlations with the SNR as a control variable. Yet, research is still required to automatically compute the SNR in order to apply this index to a large data set of recordings. The results showed that these three temperate ponds host a high level of acoustic diversity in which the soundscapes were variable not only between but also within the ponds. The sources producing this diversity of sounds and the drivers of difference in daily song type richness variation both require further investigation. Such research would yield insights into the biodiversity and ecology of temperate ponds.
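The partial-correlation control for SNR used above follows the standard first-order formula r_xy·z = (r_xy − r_xz r_yz) / √((1 − r_xz²)(1 − r_yz²)). A self-contained Python sketch, with hypothetical values standing in for an acoustic index, species richness, and SNR:

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

def partial_corr(x, y, z):
    """Correlation of x and y controlling for z (first-order formula)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Hypothetical values: an acoustic index, species richness, and SNR.
index = [1, 2, 3, 4]
richness = [2, 1, 4, 3]
snr = [1, 1, 2, 2]
print(round(partial_corr(index, richness, snr), 3))  # -> -1.0
```

Removing the variance shared with the control variable is exactly what lets an index be judged on acoustic diversity rather than on background-noise level.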


Author(s):  
Hannah S. Walsh ◽  
Andy Dong ◽  
Irem Y. Tumer ◽  
Guillaume Brat

Abstract When designing engineered systems, the potential for unintended consequences of design policies exists despite best intentions. The effects of risk factors for unintended consequences are often known only in hindsight. However, since historical knowledge is generally associated with a single event, it is difficult to uncover general trends in the formation and types of unintended consequences. In this research, archetypes of unintended consequences are learned from historical data, contributing to their understanding by applying machine learning over a large data set of lessons learned from adverse events at NASA. Sixty-six archetypes are identified by grouping lessons that share similar sets of risk factors, such as complexity and human-machine interaction. To validate the learned archetypes, system dynamics representations of the archetypes are compared to known high-level archetypes of unintended consequences. The main contribution of the paper is a set of archetypes that apply to many engineered systems, together with a pattern of leading indicators that opens a new path to managing unintended consequences and mitigating the magnitude of potentially adverse outcomes.
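The idea of grouping lessons by shared risk factors can be sketched as a toy greedy clustering over risk-factor sets using Jaccard similarity. This is an illustrative stand-in, not the paper's actual learning method; the similarity threshold and the factor names other than "complexity" and "human-machine interaction" are hypothetical:

```python
def jaccard(a, b):
    """Similarity between two sets of risk factors."""
    return len(a & b) / len(a | b)

def cluster_lessons(lessons, threshold=0.5):
    """Greedily group lessons-learned records that share enough
    risk factors -- a toy stand-in for learning archetypes."""
    clusters = []
    for factors in lessons:
        for cluster in clusters:
            # Join the first cluster whose seed lesson is similar enough.
            if jaccard(factors, cluster[0]) >= threshold:
                cluster.append(factors)
                break
        else:
            clusters.append([factors])
    return clusters

lessons = [
    {"complexity", "human-machine interaction"},
    {"complexity", "human-machine interaction", "schedule pressure"},
    {"supply chain", "single vendor"},  # hypothetical factors
]
print(len(cluster_lessons(lessons)))  # -> 2
```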


Author(s):  
Nikitas Papangelopoulos ◽  
Dimitrios Vlachakis ◽  
Arianna Filntisi ◽  
Paraskevas Fakourelis ◽  
Louis Papageorgiou ◽  
...  

The exponential growth of available biological data in recent years, coupled with their increasing complexity, has made their analysis a computationally challenging process. Traditional central processing units (CPUs) are reaching their limit in processing power and are not designed primarily for multithreaded applications. Graphics processing units (GPUs), on the other hand, are affordable, scalable computing powerhouses that, thanks to the ever-increasing demand for higher-quality graphics, have yet to reach their limit. Typically, high-end CPUs have 8-16 cores, whereas GPUs can have more than 2,500 cores. GPUs are also, by design, highly parallel, multicore and multithreaded, capable of handling thousands of threads performing the same calculation on different subsets of a large data set. This ability is what makes them perfectly suited to biological analysis tasks. Lately this potential has been realized by many bioinformatics researchers, and a huge variety of tools and algorithms have been ported to GPUs, or designed from the ground up to maximize the usage of available cores. Here, we present a comprehensive review of available bioinformatics tools, ranging from sequence and image analysis to protein structure prediction and systems biology, that use NVIDIA's Compute Unified Device Architecture (CUDA), a platform for general-purpose computing on graphics processing units (GPGPU).
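The "same calculation on different subsets" pattern that suits GPUs can be sketched, at small CPU scale, as one kernel mapped over chunks of data. Python threads give no real parallel speedup for CPU-bound work (the GIL), so this illustrates only the programming pattern, not GPU performance:

```python
from concurrent.futures import ThreadPoolExecutor

def normalize_chunk(chunk):
    """The 'kernel': the same operation applied to one subset of the data."""
    peak = max(chunk)
    return [x / peak for x in chunk]

data = list(range(1, 17))
chunks = [data[i:i + 4] for i in range(0, len(data), 4)]

# Map the identical kernel over every chunk, mirroring (in miniature)
# how a GPU launches one thread block per subset of the data.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(normalize_chunk, chunks))

print(results[0])  # -> [0.25, 0.5, 0.75, 1.0]
```

On a GPU the same structure scales to thousands of threads because every subset runs the identical instructions, which is precisely the workload shape the review's tools exploit.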


2020 ◽  
pp. 0887302X2093119 ◽  
Author(s):  
Rachel Rose Getman ◽  
Denise Nicole Green ◽  
Kavita Bala ◽  
Utkarsh Mall ◽  
Nehal Rawat ◽  
...  

With the proliferation of digital photographs and the increasing digitization of historical imagery, fashion studies scholars must consider new methods for interpreting large data sets. Computational methods to analyze visual forms of big data have been underway in the field of computer science through computer vision, where computers are trained to “read” images through a process called machine learning. In this study, fashion historians and computer scientists collaborated to explore the practical potential of this emergent method by examining a trend related to one particular fashion item—the baseball cap—across two big data sets—the Vogue Runway database (2000–2018) and the Matzen et al. Streetstyle-27K data set (2013–2016). We illustrate one implementation of high-level concept recognition to map a fashion trend. Tracking trend frequency helps visualize larger patterns and cultural shifts while creating sociohistorical records of aesthetics, which benefits fashion scholars and industry alike.
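Once a recognizer has labeled each image, tracking trend frequency reduces to counting detections per period. A minimal sketch with hypothetical per-image labels (the data and function are illustrative, not the study's pipeline):

```python
from collections import Counter

def trend_frequency(detections):
    """Per-year share of images in which a given item was detected --
    the kind of frequency curve used to visualize a fashion trend."""
    hits = Counter(year for year, found in detections if found)
    totals = Counter(year for year, _ in detections)
    return {year: hits[year] / totals[year] for year in sorted(totals)}

# Hypothetical (year, baseball-cap-detected) labels for four images.
detections = [(2016, True), (2016, False), (2017, True), (2017, True)]
print(trend_frequency(detections))  # -> {2016: 0.5, 2017: 1.0}
```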

