A Highly Efficient Heterogeneous Processor for SAR Imaging

Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3409 ◽  
Author(s):  
Shiyu Wang ◽  
Shengbing Zhang ◽  
Xiaoping Huang ◽  
Jianfeng An ◽  
Libo Chang

The expansion and improvement of synthetic aperture radar (SAR) technology have greatly enhanced its practicality. SAR imaging requires real-time processing of large input images under limited power consumption. Designing a dedicated heterogeneous array processor is an effective way to meet the power consumption constraints and real-time processing requirements of an application system. In this paper, taking the chirp scaling algorithm (CSA), a commonly used SAR imaging algorithm, as an example, the characteristics of each calculation stage in the SAR imaging process are analyzed and the data flow model of SAR imaging is extracted. A heterogeneous array architecture for SAR imaging that efficiently supports fast Fourier transform/inverse fast Fourier transform (FFT/IFFT) and phase compensation operations is proposed. First, a heterogeneous array consisting of fixed-point PE units for FFT/IFFT and floating-point FPE units for phase compensation is proposed, increasing energy efficiency by 50% compared with an architecture using only floating-point units. Second, data cross-placement and simultaneous access strategies are proposed to support intra-block parallel processing of SAR block imaging, achieving a throughput of up to 115.2 GOPS. Third, a resource management strategy for the heterogeneous computing array is designed that supports pipelined processing of FFT/IFFT and phase compensation operations, improving PE utilization by a factor of 1.82 and energy efficiency by a factor of 1.5. Experimental results for an implementation in 65-nm technology show that the processor achieves an energy efficiency of up to 254 GOPS/W. The imaging fidelity and accuracy of the proposed processor were verified by evaluating the image quality of an actual scene.
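The stage sequence the abstract describes, alternating FFT/IFFT passes (the fixed-point PE array) with element-wise phase compensation multiplies (the floating-point FPE array), can be sketched in NumPy. This is a minimal illustration of the CSA data flow only: the phase arrays `phi1`–`phi3` are hypothetical placeholders, since in a real CSA they are derived from the radar geometry and chirp parameters not given here.

```python
import numpy as np

def csa_image(raw, phi1, phi2, phi3):
    """Chirp scaling algorithm data flow: FFT/IFFT stages alternate with
    element-wise phase compensation multiplies. Phase arrays are
    placeholders standing in for the true CSA phase functions."""
    s = np.fft.fft(raw, axis=0)    # azimuth FFT -> range-Doppler domain
    s = s * phi1                   # chirp scaling phase (differential RCMC)
    s = np.fft.fft(s, axis=1)      # range FFT -> 2-D frequency domain
    s = s * phi2                   # range compression, SRC, bulk RCMC
    s = np.fft.ifft(s, axis=1)     # range IFFT -> back to range-Doppler
    s = s * phi3                   # azimuth compression, residual phase
    return np.fft.ifft(s, axis=0)  # azimuth IFFT -> focused image
```

Each multiply is purely element-wise, which is why the paper can route the transforms and the compensations to separate, differently typed PE arrays and pipeline them.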

Author(s):  
Shiyu Wang ◽  
Shengbing Zhang ◽  
Xiaoping Huang ◽  
Hao Lyu

Spaceborne SAR (synthetic aperture radar) imaging requires real-time processing of enormous amounts of input data with limited power consumption. Designing advanced heterogeneous array processors is an effective way to meet the power constraints and real-time processing requirements of application systems. For an efficient SAR imaging processor, the on-chip data organization structure and access strategy are of critical importance. Taking the chirp scaling algorithm, a typical SAR imaging algorithm, as the target, this paper analyzes the characteristics of each calculation stage in the SAR imaging process, extracts the data flow model of SAR imaging, and proposes a storage strategy of cross-region cross-placement with synchronized data sorting to keep the FFT/IFFT calculation pipeline operating in parallel. The memory wall problem is alleviated by an on-chip multi-level data buffer structure, which ensures a sufficient data supply for the imaging calculation pipeline. Based on this memory organization and access strategy, the SAR imaging pipeline, which efficiently supports FFT/IFFT and phase compensation operations, is optimized. A processor based on this storage strategy achieves a throughput of up to 115.2 GOPS and an energy efficiency of up to 254 GOPS/W when implemented in 65 nm technology. Compared with conventional CPU+GPU acceleration solutions, the performance-to-power ratio is improved by a factor of 63.4. The proposed architecture not only improves real-time performance but also reduces the design complexity of the SAR imaging system, offering excellent tailorability and scalability and satisfying the practical needs of different SAR imaging platforms.
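The abstract does not spell out the cross-placement function, but a classic skewed-addressing scheme consistent with the described strategy, letting both row-wise (range) and column-wise (azimuth) FFT passes fetch from all memory banks without conflicts, can be sketched as:

```python
def bank_of(row, col, num_banks):
    """Skewed (cross) placement: element (row, col) lives in bank
    (row + col) mod num_banks, so num_banks consecutive elements of any
    row AND of any column land in num_banks distinct banks."""
    return (row + col) % num_banks

def conflict_free(accesses, num_banks):
    """True if the given (row, col) accesses all hit distinct banks,
    i.e. they can be served in a single parallel cycle."""
    banks = [bank_of(r, c, num_banks) for r, c in accesses]
    return len(set(banks)) == len(banks)
```

With a plain row-major placement (`bank = col % num_banks`), a column access would hit the same bank `num_banks` times in a row; the skew removes that serialization for both FFT dimensions.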


2021 ◽  
Vol 251 ◽  
pp. 04009
Author(s):  
Roel Aaij ◽  
Daniel Hugo Cámpora Pérez ◽  
Tommaso Colombo ◽  
Conor Fitzpatrick ◽  
Vladimir Vava Gligorov ◽  
...  

The upgraded LHCb detector, due to start data taking in 2022, will have to process an average data rate of 4 TB/s in real time. Because LHCb’s physics objectives require that the full detector information for every LHC bunch crossing be read out and made available for real-time processing, this bandwidth challenge is equivalent to that of the ATLAS and CMS HL-LHC software read-out, but must be delivered five years earlier. Over the past six years, the LHCb collaboration has undertaken a bottom-up rewrite of its software infrastructure, pattern recognition, and selection algorithms to make them better able to exploit modern highly parallel computing architectures efficiently. We review the impact of this reoptimization on the energy efficiency of the real-time processing software and hardware which will be used for the upgrade of the LHCb detector. We also review the impact of the decision to adopt a hybrid computing architecture consisting of GPUs and CPUs for the real-time part of LHCb’s future data processing. We discuss the implications of these results for how LHCb’s real-time power requirements may evolve in the future, particularly in the context of a planned second upgrade of the detector.


Author(s):  
Melvyn Wright

The digital revolution is transforming astronomy from a data-starved to a data-submerged science. Instruments such as the Atacama Large Millimeter Array (ALMA), the Large Synoptic Survey Telescope (LSST), and the Square Kilometre Array (SKA) will measure their accumulated data in petabytes. The capacity to produce enormous volumes of data must be matched with the computing power to process that data and produce meaningful results. In addition to handling huge data rates, we need adaptive calibration and beamforming to handle atmospheric fluctuations and radio frequency interference, and to provide a user environment which makes the full power of large telescope arrays accessible to both expert and non-expert users. Delayed calibration and analysis limit the science which can be done. To make the best use of both telescope and human resources we must reduce the burden of data reduction. We propose to build a heterogeneous computing platform for real-time processing of radio telescope array data. Our instrumentation comprises a flexible correlator, beamformer, and imager based on state-of-the-art digital signal processing closely coupled with a computing cluster. This instrumentation will be highly accessible to scientists, engineers, and students for research and development of real-time processing algorithms, and will tap into the pool of talented and innovative students and visiting scientists from engineering, computing, and astronomy backgrounds. The instrument can be deployed on several telescopes to gather feedback from dealing with real sky data on working telescopes. Adaptive real-time imaging will transform radio astronomy by providing real-time feedback to observers. Calibration of the data is performed in close to real time using a model of the sky brightness distribution. The derived calibration parameters are fed back into the imagers and beamformers. The imaged regions are used to update and improve the a priori model, which becomes the final calibrated image by the time the observations are complete.
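The feedback loop described above, solving per-antenna gains against a sky model and reapplying them, is the core of self-calibration. A generic alternating-least-squares gain solve (StEFCal-style) gives the flavor; this is a sketch under the assumption of a simple gain model `v_obs[i, j] ≈ g[i]·conj(g[j])·v_model[i, j]`, not the proposed instrument's actual pipeline, and the damping factor and iteration count are illustrative.

```python
import numpy as np

def selfcal_gains(v_obs, v_model, n_iter=300):
    """Solve for per-antenna complex gains g so that
    v_obs[i, j] ~= g[i] * conj(g[j]) * v_model[i, j],
    via damped alternating least squares."""
    n_ant = v_obs.shape[0]
    g = np.ones(n_ant, dtype=complex)
    for _ in range(n_iter):
        # z[i, j] = conj(g[j]) * v_model[i, j]: model visibility per unit g[i]
        z = g.conj()[None, :] * v_model
        num = (v_obs * z.conj()).sum(axis=1)   # least-squares numerator
        den = (np.abs(z) ** 2).sum(axis=1)     # least-squares denominator
        g = 0.5 * (g + num / den)              # damping stabilizes the iteration
    return g
```

In the real-time scheme sketched in the abstract, such a solve would run continuously: gains go back to the beamformers and imagers, and the freshly imaged sky refines `v_model` for the next pass.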


Author(s):  
Daiki Matsumoto ◽  
Ryuji Hirayama ◽  
Naoto Hoshikawa ◽  
Hirotaka Nakayama ◽  
Tomoyoshi Shimobaba ◽  
...  

Author(s):  
David J. Lobina

The study of cognitive phenomena is best approached in an orderly manner. It must begin with an analysis of the function in intension at the heart of any cognitive domain (its knowledge base), then proceed to the manner in which such knowledge is put into use in real-time processing, concluding with a domain’s neural underpinnings, its development in ontogeny, etc. Such an approach to the study of cognition involves the adoption of different levels of explanation/description, as prescribed by David Marr and many others, each level requiring its own methodology and supplying its own data to be accounted for. The study of recursion in cognition is badly in need of a systematic and well-ordered approach, and this chapter lays out the blueprint to be followed in the book by focusing on a strict separation between how this notion applies in linguistic knowledge and how it manifests itself in language processing.


2020 ◽  
pp. 1-25
Author(s):  
Theres Grüter ◽  
Hannah Rohde

This study examines the use of discourse-level information to create expectations about reference in real-time processing, testing whether patterns previously observed among native speakers of English generalize to nonnative speakers. Findings from a visual-world eye-tracking experiment show that native (L1; N = 53) but not nonnative (L2; N = 52) listeners’ proactive coreference expectations are modulated by grammatical aspect in transfer-of-possession events. Results from an offline judgment task show these L2 participants did not differ from L1 speakers in their interpretation of aspect marking on transfer-of-possession predicates in English, indicating it is not lack of linguistic knowledge but utilization of this knowledge in real-time processing that distinguishes the groups. English proficiency, although varying substantially within the L2 group, did not modulate L2 listeners’ use of grammatical aspect for reference processing. These findings contribute to the broader endeavor of delineating the role of prediction in human language processing in general, and in the processing of discourse-level information among L2 users in particular.


2021 ◽  
pp. 100489
Author(s):  
Paul La Plante ◽  
P.K.G. Williams ◽  
M. Kolopanis ◽  
J.S. Dillon ◽  
A.P. Beardsley ◽  
...  
