Transitioning Spiking Neural Network Simulators to Heterogeneous Hardware

2021 ◽  
Vol 31 (2) ◽  
pp. 1-26
Author(s):  
Quang Anh Pham Nguyen ◽  
Philipp Andelfinger ◽  
Wen Jun Tan ◽  
Wentong Cai ◽  
Alois Knoll

Spiking neural networks (SNN) are among the most computationally intensive types of simulation models, with node counts on the order of up to 10^11. Currently, there is intensive research into hardware platforms suitable to support large-scale SNN simulations, whereas several of the most widely used simulators still rely purely on execution on CPUs. Enabling these established simulators to execute on heterogeneous hardware allows new studies to exploit the many-core hardware prevalent in modern supercomputing environments, while still being able to reproduce and compare with results from a vast body of existing literature. In this article, we propose a transition approach for CPU-based SNN simulators to enable execution on heterogeneous hardware (e.g., CPUs, GPUs, and FPGAs) with only limited modifications to an existing simulator code base and without changes to model code. Our approach relies on manual porting of a small number of core simulator functionalities as found in common SNN simulators, whereas the unmodified model code is analyzed and transformed automatically. We apply our approach to the well-known simulator NEST and make a version executable on heterogeneous hardware available to the community. Our measurements show that at full utilization, a single GPU achieves the performance of about 9 CPU cores. A CPU-GPU co-execution with load balancing is also demonstrated, which outperforms both CPU-only and GPU-only execution. Finally, an analytical performance model is proposed to heuristically determine the optimal parameters for executing the heterogeneous NEST.
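The abstract does not spell out its analytical performance model, but the load-balancing idea it describes can be sketched with a simple rate-balancing heuristic. The function name and the per-device rates below are our own illustrative assumptions, not quantities from the paper; only the "one GPU ≈ 9 CPU cores" ratio comes from the abstract.

```python
# Hypothetical sketch of a CPU-GPU load-balancing heuristic: choose the
# fraction of neurons placed on the GPU so that both devices finish a
# simulation step at the same time. Rates are in "neurons per second"
# and are illustrative assumptions.

def optimal_gpu_fraction(cpu_rate_per_core: float, n_cores: int, gpu_rate: float) -> float:
    """Fraction f of neurons for the GPU, solving
    (1 - f) / (n_cores * cpu_rate_per_core) = f / gpu_rate."""
    cpu_rate = n_cores * cpu_rate_per_core
    return gpu_rate / (cpu_rate + gpu_rate)

# With a GPU worth ~9 CPU cores (the paper's measurement) and 16 cores:
f = optimal_gpu_fraction(cpu_rate_per_core=1.0, n_cores=16, gpu_rate=9.0)
print(f"GPU share: {f:.2%}")  # 9 / (16 + 9) = 36.00%
```

With 9 cores the split is exactly 50/50, matching the intuition that the GPU then pulls the same weight as the whole CPU.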

Author(s):  
Timothy Dykes ◽  
Claudio Gheller ◽  
Marzia Rivi ◽  
Mel Krokos

With the increasing size and complexity of data produced by large-scale numerical simulations, it is of primary importance for scientists to be able to exploit all available hardware in heterogeneous high-performance computing environments for increased throughput and efficiency. We focus on the porting and optimization of Splotch, a scalable visualization algorithm, to utilize the Xeon Phi, Intel's coprocessor based upon the Many Integrated Core architecture. We discuss steps taken to offload data to the coprocessor and algorithmic modifications to aid faster processing on the many-core architecture and make use of the uniquely wide vector capabilities of the device, with accompanying performance results using multiple Xeon Phi coprocessors. Finally, we compare performance against results achieved with the Graphics Processing Unit (GPU) based implementation of Splotch.


2020 ◽  
Vol 102 ◽  
pp. 514-523
Author(s):  
Jie Tang ◽  
Shaoshan Liu ◽  
Jie Cao ◽  
Dawei Sun ◽  
Bolin Ding ◽  
...  

1984 ◽  
Vol 16 (1-2) ◽  
pp. 281-295 ◽  
Author(s):  
Donald C Gordon

Large-scale tidal power development in the Bay of Fundy has been given serious consideration for over 60 years. There has been a long history of productive interaction between environmental scientists and engineers during the many feasibility studies undertaken. Up until recently, tidal power proposals were dropped on economic grounds. However, large-scale development in the upper reaches of the Bay of Fundy now appears to be economically viable and a pre-commitment design program is highly likely in the near future. A large number of basic scientific research studies have been and are being conducted by government and university scientists. Likely environmental impacts have been examined by scientists and engineers together in a preliminary fashion on several occasions. A full environmental assessment will be conducted before a final decision is made and the results will definitely influence the outcome.


Impact ◽  
2019 ◽  
Vol 2019 (10) ◽  
pp. 44-46
Author(s):  
Masato Edahiro ◽  
Masaki Gondo

The pace of technology's advancement is ever-increasing, and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules such as artificial intelligence (AI) and powertrain control modules that facilitate large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip, increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on the eMBP, a model-based parallelizer (MBP) that offers a mapping system as an efficient way of automatically generating parallel code for multi- and many-core systems. This ensures that once the hardware description is written, eMBP can bridge the gap between software and hardware: an efficient ecosystem is achieved for hardware vendors, and the need for different software vendors to adapt code to their particular platforms is eliminated.


2021 ◽  
Vol 13 (3) ◽  
pp. 355
Author(s):  
Weixian Tan ◽  
Borong Sun ◽  
Chenyu Xiao ◽  
Pingping Huang ◽  
Wei Xu ◽  
...  

Classification based on polarimetric synthetic aperture radar (PolSAR) images is an emerging technology, and recent years have seen the introduction of various classification methods proven effective at identifying typical features of many terrain types. Among the many study regions, the Hunshandake Sandy Land in Inner Mongolia, China stands out for its vast area of sandy land, variety of ground objects, and intricate structure, with more irregular characteristics than conventional land cover. Accounting for the particular surface features of the Hunshandake Sandy Land, an unsupervised classification method based on new decomposition and large-scale spectral clustering with superpixels (ND-LSC) is proposed in this study. Firstly, the polarization scattering parameters are extracted through a new decomposition, rather than other decomposition approaches, which yields more accurate feature vector estimates. Secondly, large-scale spectral clustering is applied to cope with the vast area and complex terrain. More specifically, this involves a first sub-step of superpixel generation via the Adaptive Simple Linear Iterative Clustering (ASLIC) algorithm, with the feature vectors combined with spatial coordinate information as input, followed by a sub-step of representative point selection and bipartite graph formation, after which the spectral clustering algorithm completes the classification task. Finally, testing and analysis are conducted on the RADARSAT-2 fully PolSAR dataset acquired over the Hunshandake Sandy Land in 2016. Both qualitative and quantitative experiments against several classification methods show that the proposed method significantly improves classification performance.
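The large-scale spectral clustering step described above (representative points, bipartite graph, spectral embedding) can be illustrated with a generic sketch. This is not the authors' ND-LSC code: the function, its parameters, and the Gaussian affinity are our assumptions; the sketch only shows the bipartite-graph idea of clustering many samples against a small set of representatives.

```python
# Generic sketch of bipartite-graph spectral clustering: samples are linked
# only to a small set of representative points, so the spectral embedding
# comes from an n x m matrix (m << n) instead of an n x n affinity matrix.
import numpy as np

def bipartite_spectral_clustering(X, n_reps, n_clusters, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    reps = X[rng.choice(len(X), size=n_reps, replace=False)]   # representative points
    d2 = ((X[:, None, :] - reps[None, :, :]) ** 2).sum(-1)
    B = np.exp(-d2 / (2.0 * sigma ** 2))                       # bipartite affinity
    # Degree-normalize and take leading left singular vectors: the spectral
    # embedding of the samples.
    Bn = B / np.sqrt(B.sum(1, keepdims=True)) / np.sqrt(B.sum(0, keepdims=True))
    U, _, _ = np.linalg.svd(Bn, full_matrices=False)
    emb = U[:, :n_clusters]
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)          # row-normalize
    # Lloyd's k-means with farthest-first initialization keeps this self-contained.
    centers = [emb[0]]
    for _ in range(n_clusters - 1):
        dists = np.min([((emb - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(emb[dists.argmax()])
    centers = np.array(centers)
    for _ in range(20):
        labels = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([emb[labels == k].mean(0) if (labels == k).any()
                            else centers[k] for k in range(n_clusters)])
    return labels

# Toy demo: two well-separated groups of 2-D points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(10, 0.3, (50, 2))])
labels = bipartite_spectral_clustering(X, n_reps=20, n_clusters=2)
```

The SVD of the n×m normalized matrix replaces the eigendecomposition of a full n×n graph Laplacian, which is what makes the approach tractable for massive PolSAR scenes.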


2003 ◽  
Vol 79 (1) ◽  
pp. 132-146 ◽  
Author(s):  
Dennis Yemshanov ◽  
Ajith H Perera

We reviewed the published knowledge on forest succession in the North American boreal biome for its applicability in modelling forest cover change over large extents. At broader scales, forest succession can be viewed as forest cover change over time. Quantitative case studies of forest succession in peer-reviewed literature are reliable sources of information about changes in forest canopy composition. We reviewed the following aspects of forest succession in the literature: disturbances; pathways of post-disturbance forest cover change; timing of successional steps; probabilities of post-disturbance forest cover change; and effects of geographic location and ecological site conditions on forest cover change. The results from studies in the literature, which were mostly based on sample plot observations, appeared to be sufficient to describe boreal forest cover change as a generalized discrete-state transition process, with the discrete states denoted by tree species dominance. In this paper, we outline an approach for incorporating published knowledge on forest succession into stochastic simulation models of boreal forest cover change in a standardized manner. We found that the lack of detail in the literature on long-term forest succession, particularly on the influence of pre-disturbance forest cover composition, may be a limiting factor in parameterizing simulation models. We suggest that simulation models based on published information can provide a good foundation as null models, which can be further calibrated as detailed quantitative information on forest cover change becomes available. Key words: probabilistic model, transition matrix, boreal biome, landscape ecology
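The discrete-state transition process described above amounts to a Markov-chain simulation over dominance states. The sketch below illustrates the structure only; the state names and transition probabilities are invented placeholders, not values from the review.

```python
# Minimal sketch of a discrete-state forest cover transition model: states
# are defined by dominant species, and one row of P gives the transition
# probabilities from that state over one successional step. All numbers
# are illustrative, not parameterized from the literature.
import numpy as np

states = ["aspen", "mixedwood", "spruce"]
P = np.array([
    [0.5, 0.4, 0.1],   # aspen -> aspen / mixedwood / spruce
    [0.0, 0.6, 0.4],   # mixedwood tends toward spruce dominance
    [0.1, 0.0, 0.9],   # spruce mostly persists; rare disturbance resets to aspen
])

def simulate(start, n_steps, rng):
    """Sample one succession pathway of n_steps transitions."""
    s = states.index(start)
    path = [start]
    for _ in range(n_steps):
        s = rng.choice(len(states), p=P[s])
        path.append(states[s])
    return path

path = simulate("aspen", 5, np.random.default_rng(42))
print(path)
```

Calibrating such a model is exactly the step the authors flag as data-limited: each row of P must be estimated from published plot observations, and long-term rows are the sparsest.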


Morphology ◽  
2021 ◽  
Author(s):  
Rossella Varvara ◽  
Gabriella Lapesa ◽  
Sebastian Padó

Abstract We present the results of a large-scale corpus-based comparison of two German event nominalization patterns: deverbal nouns in -ung (e.g., die Evaluierung, ‘the evaluation’) and nominal infinitives (e.g., das Evaluieren, ‘the evaluating’). Among the many available event nominalization patterns for German, we selected these two because they are both highly productive and challenging from the semantic point of view. Both patterns are known to keep a tight relation with the event denoted by the base verb, but with different nuances. Our study targets a better understanding of the differences in their semantic import. The key notion of our comparison is that of semantic transparency, and we propose a usage-based characterization of the relationship between derived nominals and their bases. Using methods from distributional semantics, we bring to bear two concrete measures of transparency which highlight different nuances: the first, cosine, detects nominalizations that are semantically similar to their bases; the second, distributional inclusion, detects nominalizations that are used in a subset of the contexts of the base verb. We find that only the inclusion measure helps in characterizing the difference between the two types of nominalizations, in relation to the traditionally considered variable of relative frequency (Hay, 2001). Finally, the distributional analysis allows us to frame our comparison in the broader coordinates of the inflection vs. derivation cline.
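The two transparency measures can be sketched on toy count vectors. The vectors, the five-dimensional context vocabulary, and the simple weighted-inclusion formula below are our own illustrative assumptions; the paper's actual distributional space and inclusion measure may differ.

```python
# Toy sketch of the two transparency measures: cosine similarity between a
# base verb and its nominalization, and a simple distributional inclusion
# score (share of the derived noun's context mass falling in contexts also
# attested for the base verb). Counts are invented for illustration.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def distributional_inclusion(base, derived):
    shared = derived[base > 0].sum()
    return float(shared / derived.sum())

# Context counts over a toy 5-context vocabulary.
evaluieren  = np.array([4, 3, 0, 2, 1])   # base verb
evaluierung = np.array([2, 1, 0, 0, 0])   # nominal, contexts a subset of the verb's
print(cosine(evaluieren, evaluierung))
print(distributional_inclusion(evaluieren, evaluierung))  # 1.0: full inclusion
```

The example shows why the two measures diverge: the nominal here is fully included in the verb's contexts (inclusion = 1.0) even though its cosine to the verb is well below 1, because it uses only a narrow slice of those contexts.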


2019 ◽  
Vol 35 (14) ◽  
pp. i417-i426 ◽  
Author(s):  
Erin K Molloy ◽  
Tandy Warnow

Abstract Motivation At RECOMB-CG 2018, we presented NJMerge and showed that it could be used within a divide-and-conquer framework to scale computationally intensive methods for species tree estimation to larger datasets. However, NJMerge has two significant limitations: it can fail to return a tree and, when used within the proposed divide-and-conquer framework, has O(n^5) running time for datasets with n species. Results Here we present a new method called ‘TreeMerge’ that improves on NJMerge in two ways: it is guaranteed to return a tree and it has dramatically faster running time within the same divide-and-conquer framework—only O(n^2) time. We use a simulation study to evaluate TreeMerge in the context of multi-locus species tree estimation with two leading methods, ASTRAL-III and RAxML. We find that the divide-and-conquer framework using TreeMerge has a minor impact on species tree accuracy, dramatically reduces running time, and enables both ASTRAL-III and RAxML to complete on datasets (that they would otherwise fail on), when given 64 GB of memory and 48 h maximum running time. Thus, TreeMerge is a step toward a larger vision of enabling researchers with limited computational resources to perform large-scale species tree estimation, which we call Phylogenomics for All. Availability and implementation TreeMerge is publicly available on Github (http://github.com/ekmolloy/treemerge). Supplementary information Supplementary data are available at Bioinformatics online.


Author(s):  
Brian Bush ◽  
Laura Vimmerstedt ◽  
Jeff Gonder

Connected and automated vehicle (CAV) technologies could transform the transportation system over the coming decades, but face vehicle and systems engineering challenges, as well as technological, economic, demographic, and regulatory issues. The authors have developed a system dynamics model for generating, analyzing, and screening self-consistent CAV adoption scenarios. Results can support selection of scenarios for subsequent computationally intensive study using higher-resolution models. The potential for and barriers to large-scale adoption of CAVs have been analyzed using preliminary quantitative data and qualitative understandings of system relationships among stakeholders across the breadth of these issues. Although they are based on preliminary data, the results map possibilities for achieving different levels of CAV adoption and system-wide fuel use and demonstrate the interplay of behavioral parameters such as how consumers value their time versus financial parameters such as operating cost. By identifying the range of possibilities, estimating the associated energy and transportation service outcomes, and facilitating screening of scenarios for more detailed analysis, this work could inform transportation planners, researchers, and regulators.


1977 ◽  
Vol 3 (1/2) ◽  
pp. 126
Author(s):  
W. Brian Arthur ◽  
Geoffrey McNicoll
