Protecting Image Processing Pipelines against Configuration Memory Errors in SRAM-Based FPGAs

Electronics, 2018, Vol. 7 (11), pp. 322
Author(s): Luis Aranda, Pedro Reviriego, Juan Maestro

Image processing systems are widely used in space applications, where different radiation-induced malfunctions may occur depending on the device implementing the algorithm. SRAM-based FPGAs are commonly used to speed up the image processing algorithm, but they leave the system vulnerable to configuration memory errors caused by single-event upsets (SEUs). In such systems, the captured image is streamed pixel by pixel from the camera to the FPGA. Certain local operations, such as median or rank filters, need to process the image in neighborhoods rather than pixel by pixel, so pixel-caching structures such as line-buffer-based pipelines are used to accelerate the filtering process. However, an SRAM-based FPGA implementation of these pipelines may malfunction due to the aforementioned configuration memory errors, so an error mitigation technique is required. In this paper, a novel method to protect line-buffer-based pipelines against SRAM-based FPGA configuration memory errors is presented. Experimental results show that, using our protection technique, considerable savings in FPGA resources can be achieved while maintaining the SEU protection coverage provided by classic pipeline protection schemes.
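
The paper's protection scheme targets hardware pipelines, but the pixel-caching structure itself is easy to model in software. Below is a minimal Python sketch of a line-buffer-based 3x3 median filter over a row-major pixel stream; the buffer and window names are illustrative and not taken from the paper.

```python
from collections import deque
from statistics import median

def median3x3_stream(pixels, width):
    """Software model of a line-buffer-based 3x3 median filter.

    Pixels arrive one at a time in row-major order, as from a camera.
    Two line buffers cache the previous two rows so that each 3x3
    neighborhood is assembled on the fly, mirroring the pixel-caching
    pipeline an FPGA would use. Border handling is skipped for brevity.
    """
    line1 = deque([0] * width, maxlen=width)  # row y-1
    line2 = deque([0] * width, maxlen=width)  # row y-2
    win = [[0] * 3 for _ in range(3)]         # 3x3 window registers
    out = []
    for i, p in enumerate(pixels):
        # Shift the window one column left, then load the new column
        # from the two line buffers and the incoming pixel.
        for r in range(3):
            win[r][0], win[r][1] = win[r][1], win[r][2]
        win[0][2], win[1][2], win[2][2] = line2[0], line1[0], p
        line2.append(line1[0])                # oldest line1 pixel moves up a row
        line1.append(p)                       # incoming pixel enters line1
        x, y = i % width, i // width
        if x >= 2 and y >= 2:                 # window fully inside the image
            out.append(median(v for row in win for v in row))
    return out
```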

2021
Author(s): Sheldon Mark Foulds

Over the last few years, evolution in electronics technology has led to the shrinkage of electronic circuits. While this has produced more powerful computing systems, it has also caused a dramatic increase in the occurrence of soft errors and a steady climb in failure-in-time (FIT) rates. The problem is most prevalent in FPGA-based systems, which are highly susceptible to radiation-induced errors. Depending on the severity of the problem, a number of methods exist to counter these effects, including triple modular redundancy (TMR), error control coding (ECC), and scrubbing systems. This project presents a simulation of an FPGA-based system that employs one of the popular error control coding techniques, the Hamming code. The resulting analysis shows that the Hamming code is able to mitigate the effects of single-event upsets (SEUs) but suffers from a number of limitations.
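
For illustration, here is a minimal Hamming(7,4) encoder and single-error corrector in Python. The report's actual simulation setup and code-word width are not given in this abstract, so the layout below is simply the textbook construction.

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming code word.
    Layout: positions 1..7 with parity bits at the power-of-two slots."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; a non-zero syndrome is the 1-based
    position of a single flipped bit, which is corrected in place."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                       # single-bit error detected
        c[syndrome - 1] ^= 1           # flip it back
    return c

# A single SEU is corrected; a double upset is where the limitations show.
word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                           # inject one upset
assert hamming74_correct(word) == hamming74_encode([1, 0, 1, 1])
```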


2010
Author(s): Graeme Smecher, François Aubin, Oleg Djazovski, Matt Dobbs, Gordon Faulkner, ...

Template matching forms the basis of many image processing algorithms and, by extension, many computer vision algorithms. Several template matching algorithms exist, such as sum of absolute differences (SAD), normalized SAD (NSAD), correlation methods (CORR), normalized correlation (NCORR), sum of squared differences (SSD), and normalized SSD (NSSD). In general, an image requires substantial memory for storage and considerable time to process, and the methods above involve heavy computation. In any processing task, efficiency depends on many factors, especially the accuracy of the results and the speed of processing, so an approach that reduces execution time is always welcome. Accordingly, a novel partial NCC (PNCC) template matching technique is proposed in this paper. A block-window approach is used to reduce the number of operations and thus speed up the processing. A comparative study between the existing NCC algorithm and the proposed PNCC algorithm is carried out, and experimental results show that PNCC reduces execution time by approximately 8 to 47 times, depending on the template and main images used. The accuracy of the results obtained is 100%, and the proposed algorithm works for various types of images. The experiment is repeated for various template and main-image sizes. Further speed improvements could be achieved by implementing the proposed algorithm on parallel processors, making it relevant to real-time image processing.
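
As a point of reference, the sketch below implements plain exhaustive NCC matching in Python; the paper's PNCC block-window optimization is only described in the abstract, not reproduced, so a comment marks where it would cut work. The function names and the use of NumPy are assumptions.

```python
import numpy as np

def ncc(window, template):
    """Normalized cross-correlation of two equal-sized arrays."""
    a = window - window.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def ncc_match(image, template):
    """Exhaustive NCC template matching: score every window position and
    return the best one. Block-window schemes such as the paper's PNCC
    reduce the per-position cost by evaluating the window in small blocks
    and skipping positions that can no longer beat the current best."""
    th, tw = template.shape
    best_score, best_pos = -1.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```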


MRS Bulletin, 2003, Vol. 28 (2), pp. 117-120
Author(s): Robert Baumann

Abstract: The once-ephemeral soft error phenomenon has recently caused considerable concern for manufacturers of advanced silicon technology. Soft errors, if unchecked, now have the potential for inducing a higher failure rate than all of the other reliability-failure mechanisms combined. This article briefly reviews the three dominant radiation mechanisms responsible for soft errors in terrestrial applications and how soft errors are generated by the collection of radiation-induced charge. Scaling trends in the soft error sensitivity of various memory and logic components are presented, along with a consideration of which applications are most likely to require intervention. Some of the mitigation strategies that can be employed to reduce the soft error rate in these devices are also discussed.


2014, Vol. 23 (06), pp. 1450081
Author(s): Reza Omidi Gosheblagh, Karim Mohammadi

Modern SRAM-based field-programmable gate array (FPGA) devices offer high capability for implementing satellites and space systems. Unfortunately, these devices are extremely sensitive to various unwanted effects induced by space radiation, especially single-event upsets (SEUs), which appear as soft errors in configuration memory. To face this challenge, a variety of soft error mitigation techniques have been adopted in the literature. In this paper, we describe an area-efficient multiplier architecture for SRAM-based FPGAs that provides self-checking capability against SEU faults. The proposed design approach, which is based on parity prediction, is able to detect SEU faults concurrently. The implementation results reveal that the average area and delay overheads are 25% and 34%, respectively, compared with the unprotected version, whereas the conventional duplication-with-comparison (DWC) architecture imposes 117% and 22% overheads. Moreover, single- and multiple-upset fault injection experiments show that the proposed architecture provides average failure coverage of 83% for SEU faults and 79% for MEU faults, compared with 85% and 84%, respectively, for the duplicated structure.
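
The abstract does not reproduce the parity-prediction circuit, but the concurrent-detection idea can be conveyed in a small Python sketch: an independent predictor derives the expected parity of the product, and a checker compares it against the parity of the (possibly upset) datapath output. All names here are illustrative.

```python
def parity(x):
    """XOR of the bits of a non-negative integer."""
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

def self_checking_multiply(a, b, fault_mask=0):
    """Concurrent error detection by parity prediction (illustrative).

    The datapath computes a*b, with fault_mask standing in for an SEU
    that flips product bits. A separate predictor derives the expected
    product parity (in hardware this is a reduced circuit, not a full
    second multiplier as duplication would require), and a checker
    flags any mismatch. An even number of flipped bits escapes a single
    parity check, which is one reason parity-based schemes report less
    than 100% failure coverage.
    """
    product = (a * b) ^ fault_mask          # possibly faulty datapath
    predicted = parity(a * b)               # independent parity predictor
    error = parity(product) ^ predicted     # checker: 1 => fault detected
    return product, bool(error)

# A single flipped product bit is detected concurrently.
_, detected = self_checking_multiply(13, 7, fault_mask=1 << 3)
assert detected
```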


2018
Author(s): Keith A. Serrels, Kris Dickson, Dan Bodoh, Kent Erington, Anusha Weerakoon, ...

Abstract: We present the first experimental demonstration of stuck-at scan chain fault isolation through the exploitation of single event upsets (SEUs) in a Laser-Induced Fault Analysis (LIFA) system. By observing a pass/fail flag, we can spatially map all flops after a defect in a failing scan chain through induced SEU sites produced by a fiber-amplified 25 ps 1064 nm diode laser. In addition, a custom fault isolation methodology is presented in which the result highlights only the first working flop immediately after the defect mechanism causing the stuck-at chain failure. This work demonstrates a novel method for rapid scan chain fault isolation that significantly improves localization efficacy over conventional best-known methods (BKM) based on frequency mapping. Moreover, experimental results are presented to demonstrate that LIFA can be extended to interrogate the data state of flip-flops in a scan chain. Results are also presented to establish that LIFA can be configured as a hardware-based diagnostics platform.
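
A toy software model can convey the mapping idea (this is not the authors' LIFA tooling, and the model is an assumption): a stuck-at defect corrupts every bit that must shift across it, so an induced upset is visible at scan-out only for flops located after the defect.

```python
def scan_out(chain, defect_at):
    """Toy model of unloading a scan chain with a stuck-at-0 defect
    between flop defect_at-1 and flop defect_at: bits that must shift
    across the defect on their way to scan-out are forced to 0."""
    return [bit if i >= defect_at else 0 for i, bit in enumerate(chain)]

def map_flops(n, defect_at, loaded):
    """Emulate the pass/fail mapping: upset one flop at a time and see
    whether the unloaded pattern changes. Only flops located after the
    defect are visible; the first visible flop bounds the failure site."""
    visible = []
    for i in range(n):
        chain = list(loaded)
        chain[i] ^= 1                         # laser-induced SEU at flop i
        if scan_out(chain, defect_at) != scan_out(loaded, defect_at):
            visible.append(i)                 # the fail flag toggled
    return visible

# With a defect between flops 4 and 5, exactly flops 5..9 map as visible.
assert map_flops(10, 5, [0] * 10) == list(range(5, 10))
```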


2021
Author(s): Tahir Jaffer

A new local image processing algorithm, the Tahir algorithm, is presented as an adaptation of the standard low-pass filter. It is designed for images whose pixel intensities are concentrated at the lower end of the intensity spectrum. Window memoization is a specialization of memoization, a technique that reduces computational redundancy by storing results in memory and skipping redundant calculations. An adaptation of window memoization is developed based on improved symbol generation and a new eviction policy. In the implementation, the mean lower-bound speed-up achieved ranged from 0.32 (a slowdown of approximately 3x) to 3.70, with a peak of 4.86; the lower-bound speed-up accounts for the time to create and delete the cache. Window memoization was applied to the convolution technique, the Trajkovic corner detection algorithm, and the Tahir algorithm, and it is evaluated by both the speed-up achieved and the error introduced into the output image.
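
A minimal sketch of the core reuse mechanism follows, assuming a plain dict cache; the thesis's improved symbol generation and eviction policy are more elaborate than the `tobytes` key and unbounded cache used here.

```python
import numpy as np

def memoized_filter(image, kernel_fn, ksize=3):
    """Window memoization: key each k x k neighborhood and reuse the
    cached result whenever an identical window recurs, skipping the
    redundant computation. A real implementation bounds the cache and
    evicts entries; a plain dict is used here for clarity."""
    h, w = image.shape
    half = ksize // 2
    cache, hits = {}, 0
    out = np.zeros_like(image)
    for y in range(half, h - half):
        for x in range(half, w - half):
            win = image[y - half:y + half + 1, x - half:x + half + 1]
            key = win.tobytes()            # the window's "symbol"
            if key in cache:
                hits += 1                  # redundant computation skipped
            else:
                cache[key] = kernel_fn(win)
            out[y, x] = cache[key]
    return out, hits

# Images with intensities clustered at the low end repeat windows often,
# so the hit rate (and hence the speed-up) is high.
img = np.zeros((64, 64), dtype=np.uint8)
blurred, hits = memoized_filter(img, lambda w: int(w.mean()))
```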

