High-speed electron beam data verification system using high-performance neural network accelerator board

1996
Author(s):
Toshiyuki Tamura
Dominique Bouchon
Pierre Fournier
Koichi Moriizumi
Ken-ichi Tanaka
...


Sensors
2021
Vol 21 (4)
pp. 1365
Author(s):
Tao Zheng
Zhizhao Duan
Jin Wang
Guodong Lu
Shengjie Li
...

Semantic segmentation of room maps is an essential problem in mobile robots’ execution of tasks. In this work, a new approach is proposed to obtain the semantic labels of 2D lidar room maps by combining distance-transform watershed-based pre-segmentation with a purpose-designed neural network that classifies sampled lidar information. To label room maps with high efficiency, high precision and high speed, we designed a low-power, high-performance method that can be deployed on low-computing-power Raspberry Pi devices. In the training stage, a lidar is simulated to collect the lidar detection line maps of each point in the manually labelled map, and these line maps and the corresponding labels are used to train the designed neural network. In the testing stage, the new map is first pre-segmented into simple cells with the distance-transform watershed method, and the lidar detection line maps are then classified with the trained neural network. Optimized areas of sparse sampling points are proposed, using the distance-transform result generated during pre-segmentation, to prevent sampling points selected in boundary regions from influencing the semantic labeling results. A prototype mobile robot was developed to verify the proposed method; its feasibility, validity, robustness and high efficiency were confirmed by a series of tests. The proposed method achieved high recall and precision: the mean recall is 0.965 and the mean precision is 0.943.
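The pre-segmentation stage described in the abstract (distance transform of the free space, then watershed flooding) can be sketched with standard tools. The following is a minimal illustration using SciPy and scikit-image on a binary occupancy grid; it is not the authors' implementation, and the `min_distance` seed-spacing value is an assumption.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def presegment(free_space):
    """Pre-segment a binary room map (True = free space) into simple cells
    using distance-transform watershed, as in the paper's first stage."""
    # Euclidean distance from every free cell to the nearest obstacle.
    dist = ndimage.distance_transform_edt(free_space)
    # Local maxima of the distance map seed one marker per cell/room.
    peaks = peak_local_max(dist, min_distance=5,
                           labels=free_space.astype(int))
    markers = np.zeros(free_space.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the negated distance map so basins grow outward from each seed,
    # restricted to free space; basin boundaries fall at narrow passages.
    labels = watershed(-dist, markers, mask=free_space)
    return labels, dist
```

On a map of two rooms joined by a narrow doorway, the two room interiors end up in different basins, which is what lets the later neural-network stage classify each cell separately.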


Author(s):  
N. Yoshimura ◽  
K. Shirota ◽  
T. Etoh

One of the most important requirements for a high-performance EM, especially an analytical EM using a fine beam probe, is to prevent specimen contamination by providing a clean high vacuum in the vicinity of the specimen. However, in almost all commercial EMs, the pressure in the vicinity of the specimen under observation is usually more than ten times higher than the pressure measured at the pumping line. The EM column inevitably requires the use of greased Viton O-rings for fine movement, and specimens and films need to be exchanged frequently; several attachments may also be exchanged. For these reasons, a high-speed pumping system, as well as a clean vacuum system, is now required. A newly developed electron microscope, the JEM-100CX, features a clean high vacuum in the vicinity of the specimen, realized by the use of a CASCADE-type diffusion pump system that substantially improves on its predecessor employed on the JEM-100C.


Author(s):  
J. E. Johnson

In the early years of biological electron microscopy, scientists had their hands full attempting to describe the cellular microcosm that was suddenly before them on the fluorescent screen. Mitochondria, Golgi, endoplasmic reticulum, and other myriad organelles were being examined, micrographed, and documented in the literature. A major problem of that early period was the development of methods to cut sections thin enough to study under the electron beam. A microtome designed in 1943 moved the specimen toward a rotary “Cyclone” knife revolving at 12,500 RPM, or 1000 times as fast as an ordinary microtome. It was claimed that no embedding medium was necessary or that soft embedding media could be used. Collecting the sections thus cut sounded a little precarious: “The 0.1 micron sections cut with the high speed knife fly out at a tangent and are dispersed in the air. They may be collected... on... screens held near the knife”.


Author(s):  
Marc H. Peeters ◽  
Max T. Otten

Over the past decades, the combination of energy-dispersive analysis of X-rays and scanning electron microscopy has proved to be a powerful tool for fast and reliable elemental characterization of a large variety of specimens. The technique has evolved rapidly from a purely qualitative characterization method to a reliable quantitative way of analysis. In the last 5 years, an increasing need for automation has been observed, whereby energy-dispersive analysers control the beam and stage movement of the scanning electron microscope in order to collect digital X-ray images and perform unattended point analysis over multiple locations. The Philips High-speed Analysis of X-rays system (PHAX-Scan) makes use of the high-performance dual-processor structure of the EDAX PV9900 analyser and the databus structure of the Philips series 500 scanning electron microscope to provide a highly automated, user-friendly and extremely fast microanalysis system. The software that runs on the hardware described above was specifically designed to provide the ultimate attainable speed on the system.


Author(s):  
M. T. Postek ◽  
A. E. Vladar

One of the major advancements applied to scanning electron microscopy (SEM) during the past 10 years has been the development and application of digital imaging technology. Advancements in technology, notably the availability of less expensive, high-density memory chips and the development of high-speed analog-to-digital converters, mass storage and high-performance central processing units, have fostered this revolution. Today, most modern SEM instruments have digital electronics as a standard feature. These instruments generally offer 8-bit (256 gray level) imaging with at least 512 × 512 pixel density operating at TV rate. In addition, current slow-scan commercial frame-grabber cards, directly applicable to the SEM, can have upwards of 12-14 bit depth, permitting image acquisition at 4096 × 4096 lateral resolution or greater. Digital technology has been applied to two major categories of SEM systems. In the analog SEM system, the scan generator is normally operated in an analog manner and the image is displayed in an analog or "slow scan" mode.
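As a numerical illustration of what "8 bit or 256 gray levels" at 512 × 512 pixels means, here is a minimal sketch of ideal frame digitization; the `digitize_frame` function and the ramp test pattern are illustrative only, not any instrument's actual signal chain.

```python
import numpy as np

def digitize_frame(analog, bits=8):
    """Quantize a normalized analog video signal (values in [0, 1])
    into 2**bits discrete gray levels, as an n-bit frame store would."""
    levels = 2 ** bits                    # 256 gray levels for 8 bits
    clipped = np.clip(analog, 0.0, 1.0)   # model amplifier saturation
    return np.round(clipped * (levels - 1)).astype(np.uint16)

# A simulated 512 x 512 analog frame with a horizontal intensity ramp.
frame = np.tile(np.linspace(0.0, 1.0, 512), (512, 1))
digital = digitize_frame(frame, bits=8)
```

Raising `bits` to 12-14 (as for the slow-scan frame grabbers mentioned above) multiplies the number of distinguishable gray levels to 4096-16384, at the cost of proportionally more storage per pixel.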


2019 ◽  
Vol 12 (3) ◽  
pp. 248-261
Author(s):  
Baomin Wang ◽  
Xiao Chang

Background: The angular contact ball bearing is an important component of many high-speed rotating mechanical systems. Oil-air lubrication makes it possible for angular contact ball bearings to operate at high speed, so the lubrication state of the bearing directly affects the performance of the mechanical system. However, as bearing rotation speed increases, temperature rise remains the dominant factor limiting the performance and service life of angular contact ball bearings. It is therefore necessary to predict the temperature rise of oil-air lubricated angular contact ball bearings. Objective: The purpose of this study is to provide an overview of bearing temperature calculation from many studies and patents, and to propose a new prediction method for the temperature rise of angular contact ball bearings. Methods: Based on an artificial neural network and a genetic algorithm, a new prediction methodology for bearing temperature rise was proposed, which capitalizes on the notion that the temperature rise of an oil-air lubricated angular contact ball bearing arises from coupled influence factors. The influence factors of temperature rise in high-speed angular contact ball bearings were analyzed through grey relational analysis, and the key influence factors were determined. Combined with a Genetic Algorithm (GA), an Artificial Neural Network (ANN) model based on these key influence factors was built, and two groups of experimental data were used to train and validate the model. Results: Compared with the plain ANN model, the ANN-GA model has shorter training time, higher accuracy and better stability; its output shows good agreement with the experimental data, and above 92% of the bearing temperature rise under varying conditions can be predicted using the ANN-GA model.
Conclusion: A new method was proposed to predict the temperature rise of oil-air lubricated angular contact ball bearings based on the artificial neural network and genetic algorithm. The results show that the prediction model has good accuracy, stability and robustness.
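The abstract names grey relational analysis as the tool for selecting key influence factors but gives no formulas. The sketch below uses the standard Deng grey relational coefficient with distinguishing coefficient ρ = 0.5; both that formulation and the example factor sequences are assumptions, not taken from the paper.

```python
import numpy as np

def _scale(x):
    """Normalize a sequence to [0, 1] so magnitudes are comparable."""
    return (x - x.min()) / (x.max() - x.min())

def grey_relational_grades(reference, factors, rho=0.5):
    """Rank candidate influence factors by their grey relational grade
    against a reference sequence (e.g. measured temperature rise).
    reference: (n,) array; factors: iterable of (n,) candidate sequences."""
    x0 = _scale(np.asarray(reference, dtype=float))
    xi = np.vstack([_scale(np.asarray(f, dtype=float)) for f in factors])
    delta = np.abs(xi - x0)                  # absolute difference sequences
    dmin, dmax = delta.min(), delta.max()    # two-level global extrema
    # Deng's grey relational coefficient, one per point per factor.
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)                # grade = mean coefficient
```

A factor whose normalized shape tracks the temperature-rise curve closely gets a grade near 1 and would be kept as a "key" input to the ANN; weakly related factors score lower and can be dropped.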


Author(s):  
Sai Venkatramana Prasada G.S ◽  
G. Seshikala ◽  
S. Niranjana

Background: This paper presents a comparative study of the power dissipation, delay and power-delay product (PDP) of different full adder and multiplier designs. Methods: The full adder is a fundamental building block of processors, DSP architectures and VLSI systems. Here, ten different full adder structures were analyzed for best performance using a Mentor Graphics tool with 180 nm technology. Results: From the analysis, the highest-performance full adder was extracted for further higher-level designs. The 8T full adder exhibits high speed, low power, low delay and a low power-delay product, and hence it was chosen to construct four different multiplier designs: the Array multiplier, Baugh-Wooley multiplier, Braun multiplier and Wallace tree multiplier. These multiplier structures were designed using the 8T full adder and simulated using the Mentor Graphics tool at a constant W/L aspect ratio. Conclusion: From the analysis, it is concluded that the Wallace tree multiplier is the fastest multiplier but dissipates comparatively high power, while the Baugh-Wooley multiplier dissipates less power but exhibits more delay and a low PDP.
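Of the four multiplier structures compared, the array multiplier is the simplest to model behaviorally. The sketch below builds one from one-bit full-adder calls (a gate-level stand-in for the transistor-level 8T cell, which software cannot capture) and checks it against ordinary integer multiplication; the function names and the 4-bit width are illustrative choices.

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum and carry-out from three input bits."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(a, b, width):
    """Ripple-carry row: chain full adders LSB to MSB across `width` bits."""
    carry, out = 0, 0
    for k in range(width):
        s, carry = full_adder((a >> k) & 1, (b >> k) & 1, carry)
        out |= s << k
    return out

def array_multiply(x, y, n=4):
    """n x n unsigned array multiplier: AND-gate partial products,
    each row shifted into place and accumulated by a ripple-carry row."""
    pp = [[((x >> i) & 1) & ((y >> j) & 1) for i in range(n)]
          for j in range(n)]
    acc = 0
    for j, row in enumerate(pp):
        word = sum(bit << (i + j) for i, bit in enumerate(row))
        acc = ripple_add(acc, word, 2 * n)
    return acc
```

The regular row-by-row carry propagation is what makes the array structure simple but comparatively slow; the Wallace tree reduces the same partial products with fewer adder stages in the critical path, which matches the speed ranking reported in the conclusion.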

