PETRA

2021 ◽  
Vol 18 (2) ◽  
pp. 1-26
Author(s):  
Ramin Izadpanah ◽  
Christina Peterson ◽  
Yan Solihin ◽  
Damian Dechev

Emerging byte-addressable Non-Volatile Memories (NVMs) enable persistent memory where process state can be recovered after crashes. To enable applications to rely on persistent data, durable data structures with failure-atomic operations have been proposed. However, they lack the ability to allow users to execute a sequence of operations as transactions. Meanwhile, persistent transactional memory (PTM) has been proposed by adding durability to Software Transactional Memory (STM). However, PTM suffers from high performance overheads and low scalability due to false aborts, logging, and ordering constraints on persistence. In this article, we propose PETRA, a new approach for constructing persistent transactional linked data structures. PETRA natively supports transactions, but unlike PTM, relies on the high-level information from the data structure semantics. This gives PETRA unique advantages in the form of high performance and high scalability. Our experimental results using various benchmarks demonstrate the scalability of PETRA in all workloads and transaction sizes. PETRA outperforms the state-of-the-art PTMs by an order of magnitude in transactions of size greater than one, and demonstrates superior performance in transactions of size one.
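
The following Python sketch is a loose illustration of the idea of failure atomicity at the level of data-structure operations (rather than word-level STM logging), not PETRA's interface: a transaction buffers high-level operations on a set-like linked structure and persists a redo log before applying them, so a crash cannot expose a partially applied transaction. All class and method names are hypothetical.

```python
# Hedged illustration (not PETRA's actual API): a failure-atomic transaction over a
# linked-list set, using a redo log that is made durable before the updates are
# applied, so recovery can always replay a committed transaction to completion.
import json, os, tempfile

class ListSet:
    def __init__(self):
        self.items = []                      # stand-in for a persistent linked-list set

    def apply(self, op, key):
        if op == "insert" and key not in self.items:
            self.items.append(key)
        elif op == "remove" and key in self.items:
            self.items.remove(key)

class Transaction:
    """Buffers semantic operations and commits them atomically via a redo log."""
    def __init__(self, store, log_path):
        self.store, self.log_path, self.ops = store, log_path, []

    def insert(self, key): self.ops.append(("insert", key))
    def remove(self, key): self.ops.append(("remove", key))

    def commit(self):
        # 1. persist the redo log: this is the durability point of the transaction
        with open(self.log_path, "w") as f:
            json.dump(self.ops, f)
            f.flush(); os.fsync(f.fileno())
        # 2. apply the high-level operations; after a crash, recovery replays the log
        for op, key in self.ops:
            self.store.apply(op, key)
        os.remove(self.log_path)             # log no longer needed once fully applied

s = ListSet()
t = Transaction(s, os.path.join(tempfile.gettempdir(), "petra_sketch.log"))
t.insert(1); t.insert(2); t.remove(1)
t.commit()
print(s.items)                               # [2]
```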

Author(s):  
Kristin Krahl ◽  
Mark W. Scerbo

The present study examined team performance on an adaptive pursuit tracking task with human-human and human-computer teams. The participants were randomly assigned to one of three team conditions in which their partner was either a computer novice, a computer expert, or a human. Participants began the experiment with control over either the horizontal or vertical axis, but had the option of taking control of their teammate's axis if they achieved superior performance on the previous trial. A control condition was also run in which a single participant controlled both axes. Performance was assessed by RMSE scores over 100 trials. The results showed that performance along the horizontal axis improved over the session regardless of the experimental condition, but the degree of improvement was dependent upon group assignment. Individuals working alone or paired with an expert computer maintained a high level of performance throughout the experiment. Those paired with a computer novice or another human performed poorly initially, but eventually reached the level of those in the other conditions. The results showed that team training can be as effective as individual training, but that the quality of training is moderated by the skill level of one's teammate. Moreover, these findings suggest that task partitioning of high-performance skills between a human and a computer is not only possible but may be considered a viable option in the design of adaptive systems.
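
For reference on the metric used above, a minimal sketch of per-trial root-mean-square error (RMSE) between cursor and target along one axis is shown below; the sample values are made up for illustration.

```python
# Minimal sketch of the RMSE tracking-error metric for a single trial (toy values).
import math

def rmse(cursor, target):
    return math.sqrt(sum((c - t) ** 2 for c, t in zip(cursor, target)) / len(cursor))

print(rmse([0.0, 1.2, 2.1], [0.0, 1.0, 2.0]))  # error along one axis for one trial
```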


2018 ◽  
Author(s):  
Gabriella Shull ◽  
Christiane Haffner ◽  
Wieland B. Huttner ◽  
Elena Taverna ◽  
Suhasa B Kodandaramaiah

Microinjection into single cells in intact tissue allows the delivery of membrane-impermeant molecules such as nucleic acids and proteins, and is a powerful technique to study and manipulate the behavior of these cells and, if applicable, their progeny. However, a high level of skill is required to perform such microinjection, and it is a low-throughput and low-yield process. The automation of microinjection into cells in intact tissue would empower an increasing number of researchers to perform these challenging experiments and could potentially open up new avenues of experimentation. We have developed the ‘Autoinjector’, a robot that utilizes images acquired from a microscope to guide a microinjection needle into tissue and deliver femtoliter volumes of liquid into single cells. The robotic operation enables microinjection of hundreds of cells within a single organotypic slice, resulting in an overall yield that is an order of magnitude greater than manual microinjection. We validated the performance of the Autoinjector by microinjecting both apical progenitors (APs) and newborn neurons in the embryonic mouse telencephalon, APs in the embryonic mouse hindbrain, and neurons in fetal human brain tissue. We demonstrate the capability of the Autoinjector to deliver exogenous mRNA into APs. Further, we used the Autoinjector to systematically study gap-junctional communication between neural progenitors in the embryonic mouse telencephalon and found that apical contact is a characteristic feature of the cells that are part of a gap junction-coupled cell cluster. The throughput and versatility of the Autoinjector will not only render microinjection a broadly accessible high-performance cell manipulation technique but will also provide a powerful new platform for bioengineering and biotechnology for performing single-cell analyses in intact tissue.
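
A rough Python sketch of the kind of image-guided targeting loop described above (detect cells, map pixel coordinates to stage coordinates, inject each target); the function names, calibration constants and placeholder detections are assumptions for illustration, not the Autoinjector's actual software interface.

```python
# Hypothetical image-guided injection loop: detect candidate cells in a microscope
# image, convert pixel coordinates to stage coordinates, and visit each target.

def detect_cells(image):
    """Return (row, col) centroids of candidate cells; a real system would run
    image segmentation here. Values below are placeholders."""
    return [(120, 340), (410, 515)]

def pixels_to_stage(rc, origin=(0.0, 0.0), um_per_px=0.32):
    # assumed linear pixel-to-micrometer calibration
    r, c = rc
    return origin[0] + c * um_per_px, origin[1] + r * um_per_px

def inject_targets(image, inject_fn, volume_fl=50):
    for rc in detect_cells(image):
        x_um, y_um = pixels_to_stage(rc)
        inject_fn(x_um, y_um, volume_fl)     # move the needle and deliver the payload

inject_targets(image=None,
               inject_fn=lambda x, y, v: print(f"inject {v} fL at ({x:.1f}, {y:.1f}) um"))
```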


Author(s):  
E. Salami ◽  
J. A. Soler ◽  
R. Cuadrado ◽  
C. Barrado ◽  
E. Pastor

Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on board UAS as a method to ease operations by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, weight and power constraints. Several technologies are appearing with promising results for high-performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on board includes benchmarking for hardware selection, the software architecture, and a communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or similar target-detection applications. Results are reported for payload image-processing algorithms that determine in real time which data snapshot to gather and transfer to the ground, according to the needs of the mission, the processing time, and the power consumed.
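
As a hedged illustration of the "downlink information products instead of raw data" idea, the Python sketch below processes image tiles in parallel across the available cores and transmits only a compact detection summary; the detection threshold and the toy frame are invented, and this does not model the TILE-Gx36 implementation.

```python
# Illustrative sketch: run a stand-in detection kernel over image tiles in parallel
# and downlink only a small product (counts), not the raw frame.
from concurrent.futures import ProcessPoolExecutor

def detect_hotspots(tile):
    """Stand-in for a fire/target-detection kernel on one image tile."""
    return [px for px in tile if px > 200]            # pixels above an assumed threshold

def process_frame(tiles):
    with ProcessPoolExecutor() as pool:               # one worker per available core
        detections = list(pool.map(detect_hotspots, tiles))
    return {"tiles_with_hits": sum(1 for d in detections if d)}

if __name__ == "__main__":
    frame = [[10, 20, 250], [30, 40, 50], [220, 230, 15]]   # toy 3-tile "frame"
    print(process_frame(frame))                              # {'tiles_with_hits': 2}
```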


2021 ◽  
Vol 55 (3) ◽  
pp. 92-96
Author(s):  
Shashi Gowda ◽  
Yingbo Ma ◽  
Alessandro Cheli ◽  
Maja Gwóźdź ◽  
Viral B. Shah ◽  
...  

As mathematical computing becomes more democratized in high-level languages, high-performance symbolic-numeric systems are necessary for domain scientists and engineers to get the best performance out of their machine without deep knowledge of code optimization. Naturally, users need different term types either to have different algebraic properties for them, or to use efficient data structures. To this end, we developed Symbolics.jl, an extendable symbolic system which uses dynamic multiple dispatch to change behavior depending on the domain needs. In this work we detail an underlying abstract term interface which allows for speed without sacrificing generality. We show that by formalizing a generic API on actions independent of implementation, we can retroactively add optimized data structures to our system without changing the pre-existing term rewriters. We showcase how this can be used to optimize term construction and give a 113x acceleration on general symbolic transformations. Further, we show that such a generic API allows for complementary term-rewriting implementations. Exploiting this feature, we demonstrate the ability to swap between classical term-rewriting simplifiers and e-graph-based term-rewriting simplifiers. We illustrate how this symbolic system improves numerical computing tasks by showcasing an e-graph ruleset which minimizes the number of CPU cycles during expression evaluation, and demonstrate how it simplifies a real-world reaction-network simulation to halve the runtime. Additionally, we show a reaction-diffusion partial differential equation solver which can be automatically converted into symbolic expressions via multiple dispatch tracing, which is subsequently accelerated and parallelized to give a 157x simulation speedup. Together, this presents Symbolics.jl as a next-generation symbolic-numeric computing environment geared towards modeling and simulation.
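
A conceptual sketch (in Python rather than Julia) of the abstract term interface idea: a rewriter written only against istree/operation/arguments keeps working when an optimized term representation is added later. The names loosely mirror the paper's description and are not the Symbolics.jl API.

```python
# Two term representations behind one generic interface; the "rewriter" below
# (count_ops) never looks at the concrete types, so new representations can be
# added retroactively without changing it.

class Term:                                   # generic n-ary term
    def __init__(self, op, args): self.op, self.args = op, args

class AddDict:                                # "optimized" sum stored as {term: coeff}
    def __init__(self, coeffs): self.coeffs = coeffs

def istree(x):      return isinstance(x, (Term, AddDict))
def operation(x):   return "+" if isinstance(x, AddDict) else x.op
def arguments(x):
    if isinstance(x, AddDict):
        return [c if t == 1 else Term("*", [c, t]) for t, c in x.coeffs.items()]
    return x.args

def count_ops(x):                             # written only against the generic API
    return 1 + sum(count_ops(a) for a in arguments(x) if istree(a)) if istree(x) else 0

print(count_ops(Term("+", [Term("*", [2, "x"]), "y"])))   # 2
print(count_ops(AddDict({"x": 2, 1: 3})))                 # 2, same count via the fast form
```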


2021 ◽  
Vol 11 (22) ◽  
pp. 10803
Author(s):  
Jiagang Song ◽  
Yunwu Lin ◽  
Jiayu Song ◽  
Weiren Yu ◽  
Leyuan Zhang

Mass multimedia data with geographical information (geo-multimedia) are collected and stored on the Internet due to the wide application of location-based services (LBS). How to find the high-level semantic relationships between geo-multimedia data and construct an efficient index is crucial for large-scale geo-multimedia retrieval. To address this challenge, this paper proposes a deep cross-modal hashing framework for geo-multimedia retrieval, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR), which utilizes a deep neural network and an enhanced triplet constraint to capture high-level semantics. Besides, a novel hybrid index, called TH-Quadtree, is developed by combining cross-modal binary hash codes and a quadtree to support high-performance search. Extensive experiments are conducted on three commonly used benchmarks, and the results show the superior performance of the proposed method.
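
As a hedged sketch of the triplet constraint, the snippet below computes a margin-based triplet loss over relaxed (real-valued) hash codes for an image anchor, a matching text and a non-matching text; the code length, margin and random codes are illustrative, not TDCMR's actual settings or network.

```python
# Toy triplet loss over relaxed binary codes: pull the matching image/text pair
# closer than the non-matching pair by at least a margin.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
img_code  = np.tanh(rng.normal(size=32))      # relaxed code from an image branch
txt_match = np.tanh(rng.normal(size=32))      # code of the matching text
txt_other = np.tanh(rng.normal(size=32))      # code of an unrelated text
print(triplet_loss(img_code, txt_match, txt_other))
```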


2021 ◽  
Vol 5 (ICFP) ◽  
pp. 1-29
Author(s):  
Nicolas Krauter ◽  
Patrick Raaf ◽  
Peter Braam ◽  
Reza Salkhordeh ◽  
Sebastian Erdweg ◽  
...  

Emerging persistent memory in commodity hardware allows byte-granular accesses to persistent state at memory speeds. However, to prevent inconsistent state in persistent memory due to unexpected system failures, different write semantics are required compared to volatile memory. Transaction-based library solutions for persistent memory facilitate the atomic modification of persistent data in languages where memory is explicitly managed by the programmer, such as C/C++. For languages that provide extended capabilities like automatic memory management, a more native integration into the language is needed to maintain the high level of memory abstraction. This paper shows how persistent software transactional memory (PSTM) can be tightly integrated into the runtime system of Haskell to atomically manage values of persistent transactional data types. PSTM has a clear interface and semantics extending those of software transactional memory (STM). Its integration with the language's memory management retains features like garbage collection and allocation strategies, and is fully compatible with Haskell's lazy execution model. Our PSTM implementation demonstrates competitive performance with low-level libraries and trivial portability of existing STM libraries to PSTM. The implementation allows further interesting use cases, such as persistent memoization and persistent Haskell expressions.
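
As a conceptual sketch only (in Python rather than Haskell, and not PSTM's implementation), the snippet below approximates the commit semantics described above: an update to a persistent transactional variable becomes visible and durable atomically, here via a write-to-temporary-file-then-rename commit.

```python
# Toy model of an atomically updated, durable variable; a crash leaves either the
# old or the new value, never a torn one.
import json, os, tempfile

class PVar:
    """A persistent transactional variable backed by a file (illustrative only)."""
    def __init__(self, path, initial):
        self.path = path
        if not os.path.exists(path):
            self.commit(initial)

    def read(self):
        with open(self.path) as f:
            return json.load(f)

    def commit(self, value):
        # write to a temp file, flush to stable storage, then atomically publish
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path))
        with os.fdopen(fd, "w") as f:
            json.dump(value, f)
            f.flush(); os.fsync(f.fileno())
        os.replace(tmp, self.path)

counter = PVar(os.path.join(tempfile.gettempdir(), "pstm_counter.json"), 0)
counter.commit(counter.read() + 1)   # one "transaction": read, modify, durable commit
print(counter.read())
```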


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Belhassen Akrout ◽  
Sana Fakhfakh

Current research in biometrics aims to develop high-performance tools that make it possible to better extract the traits specific to each individual and to grasp their discriminating characteristics. This research is based on high-level analyses of images captured from the candidate to be identified, for a better understanding and interpretation of these signals. Several biometric identification systems exist; recognition systems based on the iris have many advantages and are among the most reliable. In this paper, we propose a new approach to biometric iris authentication. A new scheme is introduced that calculates a three-dimensional head pose in order to capture a good iris image from a video sequence, since image quality affects the identification results. From this image, we locate the iris and analyse its texture through an intelligent use of Meyer wavelets. Our approach was evaluated and validated on two databases, CASIA Iris Distance and MiraclHB. The comparative study showed its effectiveness compared to approaches in the literature.
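
A hedged sketch of the texture-analysis step using the discrete Meyer wavelet ('dmey' in PyWavelets): decompose a placeholder iris region and summarize sub-band magnitudes as a feature vector. The random input and the feature design are illustrative, not the paper's exact descriptor.

```python
# Decompose a stand-in (unwrapped) iris region with the discrete Meyer wavelet and
# collect a simple per-sub-band feature vector.
import numpy as np
import pywt

iris_region = np.random.default_rng(1).random((256, 512))     # placeholder iris strip

coeffs = pywt.wavedec2(iris_region, wavelet="dmey", level=2)   # 'dmey' = discrete Meyer
features = [float(np.mean(np.abs(c)))                          # mean magnitude per detail band
            for band in coeffs[1:] for c in band]
features.insert(0, float(np.mean(np.abs(coeffs[0]))))          # approximation band
print(len(features), features[:3])
```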


Author(s):  
Ann Wrightson

Parallel processing and memory bottlenecks dominate current platform architecture conversations. After many years on the sidelines, parallel architectures are rapidly becoming mainstream, with more parallelism the obvious way to gain yet more performance. Feeding data to and from all these parallel cycles is also becoming more challenging. What does this have to do with XML? Surely all this is under the hood, something for compiler designers, software architects and other non-content people to worry about? The answer is that these issues can't be totally hidden under the hood. Balisageurs as content-folks and interoperability-folks need to pay attention now to the high level information design heuristics that will prevent our data structures being the ones that happen to run like treacle (or molasses) on the coming generations of faster, larger and neater systems.


2021 ◽  
Vol 17 (1) ◽  
pp. 1-27
Author(s):  
Aishwarya Ganesan ◽  
Ramnatthan Alagappan ◽  
Andrea C. Arpaci-Dusseau ◽  
Remzi H. Arpaci-Dusseau

We introduce consistency-aware durability, or Cad, a new approach to durability in distributed storage that enables strong consistency while delivering high performance. We demonstrate the efficacy of this approach by designing cross-client monotonic reads, a novel and strong consistency property that provides monotonic reads across failures and sessions in leader-based systems; such a property can be particularly beneficial in geo-distributed and edge-computing scenarios. We build Orca, a modified version of ZooKeeper that implements Cad and cross-client monotonic reads. We experimentally show that Orca provides strong consistency while closely matching the performance of weakly consistent ZooKeeper. Compared to strongly consistent ZooKeeper, Orca provides significantly higher throughput (1.8–3.3×) and notably reduces latency, sometimes by an order of magnitude in geo-distributed settings. We also implement Cad in Redis and show that the performance benefits are similar to those of Cad's implementation in ZooKeeper.
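
As a toy illustration (not Orca's or ZooKeeper's actual protocol), the sketch below captures the idea of shifting the durability point: writes are acknowledged without waiting for full durability, and durability is enforced before a value is served to a reader, which is what keeps reads monotonic across failures. The class, log and durability stand-in are assumptions.

```python
# Toy leader that enforces durability at read time rather than at write time.
class Leader:
    def __init__(self):
        self.log, self.durable_index = [], -1

    def write(self, key, value):
        self.log.append((key, value))            # acknowledged before becoming durable
        return len(self.log) - 1

    def _make_durable(self, index):
        if index > self.durable_index:            # stand-in for replicating/flushing the log
            self.durable_index = index

    def read(self, key):
        latest = max((i for i, (k, _) in enumerate(self.log) if k == key), default=-1)
        self._make_durable(latest)                # durability enforced before the read returns
        return self.log[latest][1] if latest >= 0 else None

leader = Leader()
leader.write("x", 1); leader.write("x", 2)
print(leader.read("x"), leader.durable_index)     # 2 1 -- value is durable before it is exposed
```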


2020 ◽  
Author(s):  
James McDonagh ◽  
William Swope ◽  
Richard L. Anderson ◽  
Michael Johnston ◽  
David J. Bray

Digitization offers significant opportunities for the formulated product industry to transform the way it works and develop new methods of business. R&D is one area of operation where taking advantage of these technologies is challenging due to its high level of domain specialisation and creativity, but the benefits could be significant. Recent developments in base-level technologies such as artificial intelligence (AI)/machine learning (ML), robotics and high performance computing (HPC), to name a few, present disruptive and transformative technologies which could offer new insights, discovery methods and enhanced chemical control when combined in a digital ecosystem of connectivity, distributive services and decentralisation. At the fundamental level, research in these technologies has shown that new physical and chemical insights can be gained, which in turn can augment experimental R&D approaches through physics-based chemical simulation, data-driven models and hybrid approaches. In all of these cases, high-quality data is required to build and validate models, in addition to the skills and expertise to exploit such methods. In this article we give an overview of some of the digital technology demonstrators we have developed for formulated product R&D. We discuss the challenges in building and deploying these demonstrators.

