Efficient Acceleration of Stencil Applications through In-Memory Computing

Micromachines ◽  
2020 ◽  
Vol 11 (6) ◽  
pp. 622
Author(s):  
Hasan Erdem Yantır ◽  
Ahmed M. Eltawil ◽  
Khaled N. Salama

Traditional computer architectures suffer severely from the bottleneck between processing elements and memory, which is the biggest barrier to their scalability. Meanwhile, the amount of data that applications need to process is increasing rapidly, especially in the era of big data and artificial intelligence. This forces new constraints on computer architecture design towards more data-centric principles. Therefore, new paradigms such as in-memory and near-memory processors have begun to emerge to counteract the memory bottleneck by bringing memory closer to computation or integrating the two. Associative processors are a promising candidate for in-memory computation, combining processor and memory in the same location to alleviate the memory bottleneck. One class of applications that requires iterative processing of huge amounts of data is stencil codes. Given this characteristic, associative processors can provide a significant advantage for stencil codes. For demonstration, two in-memory associative processor architectures for 2D stencil codes are proposed, implemented in both emerging memristor and traditional SRAM technologies. The proposed architecture achieves promising efficiency for a variety of stencil applications and thus proves its applicability to scientific stencil computing.
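Stencil codes update every grid point from its neighbours' values on each iteration, which is why they stress the memory system so heavily. As a minimal, illustrative sketch of the computation pattern itself (not of the paper's associative-processor architecture), a 5-point 2D Jacobi stencil in plain Python looks like:

```python
def jacobi_step(grid):
    """One iteration of a 5-point 2D Jacobi stencil.

    Each interior cell is replaced by the average of its four
    neighbours; boundary cells are held fixed.
    """
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]  # copy so all reads see the old grid
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    return new

# Toy 4x4 grid: hot left edge (1.0), cold elsewhere (0.0).
grid = [[1.0 if j == 0 else 0.0 for j in range(4)] for _ in range(4)]
grid = jacobi_step(grid)
```

Each sweep reads four neighbours per cell, so an N×N grid streams roughly 4N² values between memory and the processor per iteration; this repeated traffic is exactly what in-memory approaches aim to eliminate.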

Author(s):  
Khushi Gupta ◽  
Tushar Sharma

In the modern world, most microprocessors are based on either the ARM or the x86 architecture, the two most common processor architectures. ARM originally stood for 'Acorn RISC Machines' but over the years came to mean 'Advanced RISC Machines'. It started as just an experiment but showed promising results, and it is now omnipresent in our modern devices. Unlike x86, which is designed for high performance, ARM focuses on low power consumption with considerable performance. Because of advancements in ARM technology, ARM processors are becoming more powerful than their x86 counterparts. In this analysis we briefly compare the two architectures and conclude which will dominate the microprocessor industry. The processor that performs better in the different tests will be more suitable for the reader's application. The industry's shift towards ARM processors can change how we write software, which in turn will affect the whole software development environment.


Author(s):  
Akhilesh Bajaj

Recently, there has been considerable interest in evaluating newer computer architectures such as the Web services architecture and the network computer architecture. In this work we investigate the decision models of expert and novice IS managers when evaluating computing architectures for use in an organization. This task is important because several consumer choice models in the literature indicate that the evaluation of alternative products is a critical phase that consumers undergo prior to forming an attitude toward the product. Previous work on evaluating the performance of experts vs. novices has focused either on the process differences between them or on the performance outcome differences, with work in MIS focusing primarily on process differences. In this work, we utilize a methodology that examines both aspects, by constructing individual decision models for each expert and novice in the study. There is a growing consensus in the management literature that while experts may follow different processes, very often their performance does not differ significantly from that of novices in the business domain.


Author(s):  
Brian Rosmaita

Von Neumann was one of the great mathematical minds of the twentieth century. His work has affected philosophy on several fronts, including logic and the philosophy of science. He also had great influence upon developments in the philosophy of mind: the computer model of mind employed during the middle-to-late twentieth century was explicitly based upon the von Neumann computer architecture. Although late twentieth-century philosophy of mind has largely rejected the von Neumann machine as a model of brain activity, his pioneering work in cellular automata has provided a basis for subsequent development in ‘distributed’ or ‘connectionist’ computer architectures.


2018 ◽  
Vol 176 ◽  
pp. 01043 ◽  
Author(s):  
Jin Wei

With the development of science and technology, artificial intelligence technology has received more and more attention. Against the background of the rapid development of big data and cloud computing, the artificial intelligence industry has broken out. Research on artificial intelligence is extensive, and the artificial intelligence industry is huge. As far as China's artificial intelligence industry is concerned, even though it started relatively late, its industry scale, industrial layout, and technology research are all continuously improving. Especially after the deepening layout of the science, technology, and manufacturing industries, the scale of the artificial intelligence industry has developed further, and more artificial intelligence products will appear. From the perspective of the concept, development history, and recent progress of artificial intelligence, this paper combines China's artificial intelligence market and the development of artificial intelligence companies to analyze the current major application areas, and then further explores the future development trends of artificial intelligence.


1984 ◽  
Vol 1 (1) ◽  
pp. 26-38 ◽  
Author(s):  
Robert Kowalski

The Japanese Fifth Generation Computer Systems (FGCS) project has chosen logic programming for its core programming language. It has recognized the major contribution that logic programming has to make not only in artificial intelligence but in database systems and software specification as well. It has recognized and intends to exploit the greater potential that logic programming has to offer for taking advantage of the parallelism possible with innovative multiprocessor computer architectures.


1992 ◽  
Vol 01 (01) ◽  
pp. 57-83
Author(s):  
JOSE G. DELGADO-FRIAS ◽  
STAMATIS VASSILIADIS ◽  
JAMSHID GOSHTASBI

Semantic networks as a means of knowledge representation and manipulation are used in many artificial intelligence applications. A number of computer architectures that have been reported for semantic network processing are presented in this paper. A novel set of evaluation criteria for such semantic network architectures has been developed, considering both semantic network processing and architectural issues. A study of how the reported architectures meet the requirements of each criterion is presented. This set of evaluation criteria is useful for future designs of machines for semantic networks because of its comprehensive range of issues concerning semantic networks and architectures.


2021 ◽  
Vol 18 (5) ◽  
pp. 6430-6433
Author(s):  
Ivan Izonin ◽  
◽  
Nataliya Shakhovska

<abstract> <p>The current state of Medicine is changing dramatically. Previously, data on a patient's health were collected only during a visit to the clinic. These were small chunks of information obtained from observations or experimental studies by clinicians, recorded on paper or in small electronic files. Advances in computing power, hardware, and software tools, and the consequent emergence of miniature smart devices for various purposes (flexible electronic devices, medical tattoos, stick-on sensors, biochips, etc.), make it possible to monitor various vital signs of patients in real time and collect such data comprehensively. There is steady growth of such technologies in various fields of medicine for disease prevention, diagnosis, and therapy. Because of this, clinicians have begun to face problems similar to those of data scientists. They need to perform many different tasks based on huge amounts of data, in some cases with incompleteness and uncertainty, and in most others with complex, non-obvious connections that differ for each individual patient (observation), as well as a lack of time to solve them effectively. These factors significantly decrease the quality of decision making, which usually affects the effectiveness of diagnosis or therapy. That is why a new concept in Medicine, widely known as Data-Driven Medicine, is arising. This approach, based on IoT and Artificial Intelligence, provides possibilities for efficiently processing huge amounts of data of various types, stimulates new discoveries, and provides the necessary integration and management of such information to enable precision medical care. Such an approach could create a new wave in health care. It will provide effective management of comprehensive information about the patient's condition, will increase the speed of the clinician's expertise, and will maintain high-accuracy analysis based on digital tools and machine learning. The combined use of different digital devices and artificial intelligence tools will provide an opportunity to understand disease deeply, boost the accuracy and speed of its detection at early stages, and improve modes of diagnosis. Such invaluable information stimulates new ways to choose patient-oriented preventions and interventions for each individual case.</p> </abstract>


2020 ◽  
Vol 9 (1) ◽  
pp. 27-44
Author(s):  
Vinayak Majhi ◽  
Angana Saikia ◽  
Amitava Datta ◽  
Aseem Sinha ◽  
Sudip Paul

In the last few years, deep learning (DL) has gained great attention in modern technology. Using deep learning methods, we can analyse different types of data across domains with accuracy approaching that of humans. DL is an emerging technology still under development, and it can be regarded as the successor of the machine learning (ML) technique. In the present era, ML is used wherever we need to analyse statistical data. If DL is the future technology that is going to cover every sector of modern industry, one question remains: why are we lagging? The simple answer, in terms of analysing any algorithm, is complexity, in both time and space. DL needs a large artificial neural network (ANN) with hundreds of hidden layers trained on a huge amount of data. Performing these tasks requires high-performance computing devices, which are very expensive nowadays. With the growth of the semiconductor industry, we can expect the future of DL to arrive alongside developing artificial intelligence (AI). For example, in 2009, Google Brain, Google's deep learning artificial intelligence team, adopted Nvidia GPUs, which increased the learning speed of DL systems by 100 times. As of 2017, the number of intermediate connections in such networks has increased from a few thousand to a few million units, and these networks can perform tasks such as object recognition, pattern recognition, speech recognition, and image restoration. DL has great scope in bioengineering, since each living organism contains a huge amount of data; it can be used for disease diagnosis, rehabilitation, and treatment. It can also use data to find different features and help us make several possible decisions in real time. In this review, we found that DL can be very helpful for diagnosing neurological disorders from their symptoms, because DL can be used to identify characteristic patterns for each disorder.
The benefit is learning how DL can help identify different neuronal disorders based on different neuropsychiatric symptoms.
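As a hedged illustration of the building block the review refers to (not of any specific system it discusses), a single dense layer with a ReLU activation, the unit that deep networks stack into many hidden layers, can be sketched in plain Python; the toy weights below are hypothetical:

```python
def relu(xs):
    """Elementwise rectified linear activation: max(0, x)."""
    return [max(0.0, x) for x in xs]

def dense(xs, weights, biases):
    """One fully connected layer: y_j = sum_i x_i * W[i][j] + b[j]."""
    return [sum(x * w[j] for x, w in zip(xs, weights)) + biases[j]
            for j in range(len(biases))]

# Toy 2-input, 2-unit hidden layer with hand-picked weights.
x = [1.0, -1.0]
W = [[0.5, -0.5],   # weights from input 0
     [0.5,  0.5]]   # weights from input 1
b = [0.0, 0.0]
h = relu(dense(x, W, b))
```

A deep network chains many such layers, and training adjusts every weight and bias from data; scaling this to millions of connections is the source of the time and space complexity the review highlights.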


Author(s):  
Jo Dale Carothers

The importance of the study of design theory and methodologies has become apparent in all disciplines of engineering. In particular, if the computer industry is to advance at the rate of improvements in semiconductor technology, then better methods must be developed for designing and verifying computer architectures. The field of computer-aided design (CAD) has provided tools and analysis techniques for use in this area. However, the vast majority of CAD systems require a formal, complete specification of the computer’s design as input. Creating such a complex specification is itself a very difficult, time-consuming task. In reality this is truly the “design” phase and is usually assumed already complete by CAD systems. The design, including performance/cost trade-offs and verification, of architecture specifications is the subject of this research.

