An Efficient Residual-Based Method for Railway Image Dehazing

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6204
Author(s):  
Qinghong Liu ◽  
Yong Qin ◽  
Zhengyu Xie ◽  
Zhiwei Cao ◽  
Limin Jia

Trains operate in semi-open environments, and the surrounding environment plays an important role in the safety of train operation. Weather is one of the factors affecting the railway environment: under haze conditions, railway monitoring and staff vision can be blurred, threatening railway safety. This paper tackles image dehazing for railways. Its contributions are as follows: (1) it proposes RID-Net (Railway Image Dehazing Network), an end-to-end residual-block-based haze removal method consisting of two subnetworks, a fine-grained and a coarse-grained network, which directly generates the clean image from the input hazy image; (2) a combined loss function (per-pixel loss plus perceptual loss) is proposed to capture both low-level and high-level features and thus generate high-quality restored images; (3) the method is evaluated using full-reference criteria (PSNR and SSIM), object detection, running time, and visual inspection. Experimental results on a synthesized railway dataset, a benchmark indoor dataset, and a real-world dataset demonstrate that our method outperforms state-of-the-art methods.
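The combined objective in contribution (2) can be sketched as a weighted sum of a per-pixel term and a feature-space term. The snippet below is a minimal NumPy sketch, not the paper's implementation: a toy 2x2 average pool stands in for a pretrained feature network (e.g. VGG activations), and the weight `lam` is illustrative.

```python
import numpy as np

def per_pixel_loss(pred, target):
    """Mean squared error computed directly on pixel values (low-level)."""
    return float(np.mean((pred - target) ** 2))

def perceptual_loss(pred, target, feature_fn):
    """MSE between feature maps; feature_fn stands in for a pretrained
    network's activations in the full method (high-level)."""
    return float(np.mean((feature_fn(pred) - feature_fn(target)) ** 2))

def combined_loss(pred, target, feature_fn, lam=0.1):
    """Weighted sum of the two terms; lam is illustrative, not from the paper."""
    return per_pixel_loss(pred, target) + lam * perceptual_loss(pred, target, feature_fn)

def avg_pool2(img):
    """Toy stand-in feature extractor: 2x2 average pooling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

rng = np.random.default_rng(0)
clean = rng.random((8, 8))
restored = clean + 0.05 * rng.standard_normal((8, 8))  # imperfect restoration
loss = combined_loss(restored, clean, avg_pool2)
```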

Author(s):  
Weichun Liu ◽  
Xiaoan Tang ◽  
Chenglin Zhao

Recently, deep trackers based on the Siamese network have been enjoying increasing popularity in the tracking community. Generally, these trackers learn a high-level semantic embedding space for feature representation but lose low-level fine-grained details. Meanwhile, the learned high-level semantic features are not updated during online tracking, which results in tracking drift in the presence of target appearance variation and similar distractors. In this paper, we present a novel end-to-end trainable Convolutional Neural Network (CNN) based on the Siamese network for distractor-aware tracking. It enhances target appearance representation in both the offline training stage and the online tracking stage. In the offline training stage, the network learns both low-level fine-grained details and high-level coarse-grained semantics simultaneously in a multi-task learning framework. The low-level features, with better resolution, are complementary to the semantic features and able to distinguish the foreground target from background distractors. In the online stage, the learned low-level features are fed into a correlation filter layer and updated in an interpolated manner to encode target appearance variation adaptively. The learned high-level features are fed into a cross-correlation layer without online update. The proposed tracker therefore benefits from both the adaptability of the fine-grained correlation filter and the generalization capability of the semantic embedding. Extensive experiments are conducted on the public OTB100 and UAV123 benchmark datasets. Our tracker achieves state-of-the-art performance while running at a real-time frame rate.
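The interpolated template update described for the online stage amounts to an exponential moving average of the correlation-filter features. A minimal sketch; the learning rate and the toy features below are assumptions, not values from the paper (the semantic branch would be left fixed):

```python
import numpy as np

def update_template(template, new_feat, lr=0.01):
    """Interpolated update of the correlation-filter branch: blend the
    running template with the newest frame's features so the tracker
    adapts to appearance change."""
    return (1.0 - lr) * template + lr * new_feat

# With repeated updates the template converges toward the new appearance.
template = np.zeros(4)
target_appearance = np.ones(4)
for _ in range(500):
    template = update_template(template, target_appearance, lr=0.05)
```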


2007 ◽  
Vol 12 (4) ◽  
pp. 409-418
Author(s):  
Igoris Belovas ◽  
Vadimas Starikovičius

Stable distributions have a wide range of applications: probability theory, physics, electronics, economics, and sociology. They play a particularly important role in financial mathematics, since classical models of financial markets, which are based on the normality hypothesis, often prove inadequate. However, the practical implementation of stable models is a nontrivial task, because the probability density functions of α‐stable distributions have no analytical representations (with a few exceptions). In this work we exploit parallel computing technologies to accelerate the numerical solution of stable modelling problems. Specifically, we solve the stable-law parameter estimation problem by the maximum likelihood method. When a large number of long financial series must be processed, only parallel technologies allow us to obtain results in adequate time. We have distinguished and defined several hierarchical levels of parallelism. We show that coarse‐grained Multi‐Sets parallelization is very efficient on computer clusters. Fine‐grained Maximum Likelihood level parallelization is very efficient on shared-memory machines with Symmetric multiprocessing and Hyper‐threading technologies. A hybrid application utilizing both levels has shown superior performance compared to the single-level (MS) parallel application on a cluster of Pentium 4 HT nodes.
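The two parallel levels can be sketched as follows. This is a minimal illustration, not the authors' implementation: a Gaussian likelihood and a grid search stand in for the numerically evaluated α-stable density, and Python threads stand in for the cluster/SMP parallelism.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def neg_log_likelihood(mu, series):
    """Gaussian stand-in for the alpha-stable density, which has no
    closed form and must itself be evaluated numerically."""
    return float(np.sum(0.5 * (series - mu) ** 2))

def fit_series(series):
    """Fine-grained (Maximum Likelihood) level: many independent
    likelihood evaluations, here over a simple parameter grid."""
    grid = np.linspace(series.min(), series.max(), 201)
    costs = [neg_log_likelihood(m, series) for m in grid]
    return grid[int(np.argmin(costs))]

def fit_all(series_list, workers=4):
    """Coarse-grained (Multi-Sets) level: independent series fitted
    in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(fit_series, series_list))

rng = np.random.default_rng(1)
true_means = (0.0, 2.0, -1.0)
series_list = [mu + 0.1 * rng.standard_normal(500) for mu in true_means]
estimates = fit_all(series_list)
```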


1994 ◽  
Vol 6 (3) ◽  
pp. 365-374 ◽  
Author(s):  
Philip T. Leat ◽  
Jane H. Scarrow

From at least the Early Jurassic to the Miocene, eastward subduction of oceanic crust took place beneath the Antarctic Peninsula. Magmatism associated with the subduction generated a N-S linear belt of volcanic rocks known as the Antarctic Peninsula Volcanic Group (APVG), which erosion has now exposed at about the plutonic/volcanic interface. Large central volcanoes of the APVG are described here for the first time. The structures are situated in north-west Palmer Land within the main Mesozoic magmatic arc. One centre, Zonda Towers, is recognized by the presence of a 160 m thick silicic ignimbrite containing accidental lava blocks up to 25 m in diameter. This megabreccia is interpreted as a caldera-fill deposit formed by landsliding of steep caldera walls during ignimbrite eruption and deposition. A larger centre, Mount Edgell-Wright Spires, is dominated by coarse-grained debris-flow deposits and silicic ignimbrites which, with minor lavas and fine-grained tuffs, form a volcanic succession some 1.5 km thick. Basic, intermediate, and silicic sills c. 50 m thick intrude the succession. A central gabbro-granite intrusion is interpreted as a high-level magma chamber of the Mount Edgell volcano.


2012 ◽  
Vol 2012 ◽  
pp. 1-15 ◽  
Author(s):  
Ilia Lebedev ◽  
Christopher Fletcher ◽  
Shaoyi Cheng ◽  
James Martin ◽  
Austin Doupnik ◽  
...  

We present a highly productive approach to hardware design based on a many-core microarchitectural template used to implement compute-bound applications expressed in a high-level data-parallel language such as OpenCL. The template is customized on a per-application basis via a range of high-level parameters such as the interconnect topology or processing element architecture. The key benefits of this approach are that it (i) allows programmers to express parallelism through an API defined in a high-level programming language, (ii) supports coarse-grained multithreading and fine-grained threading while permitting bit-level resource control, and (iii) reduces the effort required to repurpose the system for different algorithms or different applications. We compare template-driven design to both full-custom and programmable approaches by studying implementations of a compute-bound data-parallel Bayesian graph inference algorithm across several candidate platforms. Specifically, we examine a range of template-based implementations on both FPGA and ASIC platforms and compare each against full custom designs. Throughout this study, we use a general-purpose graphics processing unit (GPGPU) implementation as a performance and area baseline. We show that our approach, similar in productivity to programmable approaches such as GPGPU applications, yields implementations with performance approaching that of full-custom designs on both FPGA and ASIC platforms.
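The per-application customization can be pictured as a small set of template parameters. The field names below are assumptions for this sketch, not the template's actual API:

```python
from dataclasses import dataclass

@dataclass
class TemplateParams:
    """Illustrative per-application knobs for such a many-core template."""
    num_pes: int = 16            # processing elements instantiated
    interconnect: str = "ring"   # e.g. "ring", "mesh", "crossbar"
    threads_per_pe: int = 4      # coarse-grained multithreading depth
    datapath_bits: int = 32      # bit-level resource control

# Two applications customize the same template differently.
inference_cfg = TemplateParams(num_pes=64, interconnect="mesh", datapath_bits=16)
default_cfg = TemplateParams()
```

The point of the approach is that changing such high-level parameters, rather than rewriting RTL, is what repurposes the design for a new algorithm.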


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 447
Author(s):  
Aditi Gupta ◽  
Rinkaj Goyal

Software clones are code fragments with similar or nearly similar functionality or structure. These clones are introduced into a project either accidentally or deliberately during the software development or maintenance process. The presence of clones poses a significant threat to the maintenance of software systems and is at the top of the list of code smell types. Clones can be simple (fine-grained) or high-level (coarse-grained), depending on the chosen granularity of code for clone detection. Simple clones are generally viewed at the line/statement level, whereas high-level clones have the granularity of a block, method, class, or file. High-level clones are said to be composed of multiple simple clones. This study aims to detect high-level conceptual code clones (with the granularity of Java methods) in Java-based projects, and the approach is extendable to projects developed in other languages as well. Conceptual code clones are those implementing a similar higher-level abstraction, such as an Abstract Data Type (ADT) list. Based on the assumption that "similar documentation implies similar methods", the proposed mechanism uses the documentation associated with methods to identify method-level concept clones. As complete documentation does not contribute to a method's semantics, we extracted only the description part of each method's documentation, which led to two benefits: increased efficiency and a reduced text corpus size. Further, we used Latent Semantic Indexing (LSI) with different combinations of weight and similarity measures to identify similar descriptions in the text corpus. To show the efficacy of the proposed approach, we validated it using three Java open-source systems of sufficient size. The findings suggest that the proposed mechanism can detect methods implementing similar high-level concepts with improved recall values.
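The LSI step can be sketched in a few lines: build a term-document matrix from the method descriptions, truncate its SVD, and compare documents by cosine similarity in the latent space. This is a minimal NumPy illustration with plain term-frequency weighting; the paper compares several weight and similarity measures.

```python
import numpy as np

def lsi_similarity(docs, k=2):
    """Rank-k LSI: term-frequency matrix -> truncated SVD -> pairwise
    cosine similarity between documents in the latent space."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    tf = np.array([[d.lower().split().count(w) for w in vocab] for d in docs], float)
    u, s, vt = np.linalg.svd(tf, full_matrices=False)
    z = u[:, :k] * s[:k]                      # documents in latent space
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    z = z / np.where(norms == 0, 1, norms)    # row-normalize for cosine
    return z @ z.T

# Hypothetical method descriptions: the first two implement the same
# list-insertion concept, the third does not.
docs = [
    "insert an element into the list",
    "add an item to the list",
    "parse the configuration file",
]
sim = lsi_similarity(docs)
```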


2020 ◽  
Vol 34 (05) ◽  
pp. 8952-8959
Author(s):  
Yawei Sun ◽  
Lingling Zhang ◽  
Gong Cheng ◽  
Yuzhong Qu

Semantic parsing transforms a natural language question into a formal query over a knowledge base. Many existing methods rely on syntactic parsing, such as dependency parsing. However, the accuracy of producing such expressive formalisms is unsatisfactory on long, complex questions. In this paper, we propose a novel skeleton grammar to represent the high-level structure of a complex question. This dedicated coarse-grained formalism, with a BERT-based parsing algorithm, helps to improve the accuracy of the downstream fine-grained semantic parsing. Besides, to align the structure of a question with the structure of a knowledge base, our multi-strategy method combines sentence-level and word-level semantics. Our approach shows promising performance on several datasets.
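As a rough intuition only (this is not the paper's grammar or parser), a skeleton can be thought of as the main clause with attached clauses peeled off, leaving them to a later fine-grained pass:

```python
import re

def skeletonize(question):
    """Toy stand-in for a skeleton parse: split off relative clauses so
    the coarse main clause is exposed first."""
    parts = re.split(r"\b(that|which|whose|where)\b", question)
    trunk = parts[0].strip()
    attachments = ["".join(parts[i:i + 2]).strip() for i in range(1, len(parts), 2)]
    return trunk, attachments

trunk, clauses = skeletonize("Who directed the movie that won the award in 2000?")
```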


2020 ◽  
Vol 36 (16) ◽  
pp. 4458-4465 ◽  
Author(s):  
Ruichu Cai ◽  
Xuexin Chen ◽  
Yuan Fang ◽  
Min Wu ◽  
Yuexing Hao

Abstract Motivation Synthetic lethality (SL) is a promising form of gene interaction for cancer therapy, as it is able to identify specific genes to target in cancer cells without disrupting normal cells. As high-throughput wet-lab settings are often costly and face various challenges, computational approaches have become a practical complement. In particular, predicting SLs can be formulated as a link prediction task on a graph of interacting genes. Although matrix factorization techniques have been widely adopted in link prediction, they focus on mapping genes to latent representations in isolation, without aggregating information from neighboring genes. Graph convolutional networks (GCN) can capture such neighborhood dependency in a graph. However, it is still challenging to apply GCN for SL prediction, as SL interactions are extremely sparse, which makes overfitting more likely. Results In this article, we propose a novel dual-dropout GCN (DDGCN) for learning more robust gene representations for SL prediction. We employ both coarse-grained node dropout and fine-grained edge dropout to address the issue that standard dropout in vanilla GCN is often inadequate in reducing overfitting on sparse graphs. In particular, coarse-grained node dropout can efficiently and systematically enforce dropout at the node (gene) level, while fine-grained edge dropout can further fine-tune the dropout at the interaction (edge) level. We further present a theoretical framework to justify our model architecture. Finally, we conduct extensive experiments on human SL datasets, and the results demonstrate the superior performance of our model in comparison with state-of-the-art methods. Availability and implementation DDGCN is implemented in Python 3.7, open-source and freely available at https://github.com/CXX1113/Dual-DropoutGCN. Supplementary information Supplementary data are available at Bioinformatics online.
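The two dropout granularities can be sketched directly on an adjacency matrix. This is a minimal NumPy illustration of the idea, not the DDGCN implementation:

```python
import numpy as np

def node_dropout(adj, p, rng):
    """Coarse-grained dropout: remove whole genes (nodes) with
    probability p by zeroing their rows and columns."""
    keep = rng.random(adj.shape[0]) >= p
    return adj * np.outer(keep, keep)

def edge_dropout(adj, p, rng):
    """Fine-grained dropout: remove individual interactions (edges)
    with probability p, keeping the mask symmetric (undirected graph)."""
    upper = np.triu(rng.random(adj.shape) >= p, k=1)
    return adj * (upper | upper.T)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])
sparse = edge_dropout(adj, 0.5, rng)  # a random symmetric subgraph
```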


Author(s):  
Wang Zheng-fang ◽  
Z.F. Wang

This study focuses on evaluating the chloride SCC (stress corrosion cracking) resistance of a duplex stainless steel, 00Cr18Ni5Mo3Si2 (18-5Mo), and its welded coarse-grained zone (CGZ). 18-5Mo is a dual-phase (A+F) stainless steel with a yield strength of 512 N/mm². The secondary phase (A phase) accounts for 30-35% of the total, with fine-grained and homogeneously distributed A and F phases (Fig. 1). After the material is subjected to a specific welding thermal cycle, i.e. Tmax = 1350 °C and t8/5 = 20 s, the microstructure may change from a fine-grained to a coarse-grained morphology, and from a homogeneous distribution of the A phase to a concentration of the A phase (Fig. 2). Meanwhile, the proportion of the A phase is reduced from 35% to 5-10%. For this reason the region is known as the welded coarse-grained zone (CGZ). Owing to the difference in microstructure between the base metal and the welded CGZ, their chloride SCC resistance also differs. Test procedure: constant-load tensile tests (CLTT) were performed to record the Escc-t curve, by which corrosion crack growth can be described; tf, the time to fracture, was also recorded and is taken as an electrochemical and mechanical measure for evaluating SCC resistance. Test environment: a boiling 42% MgCl2 solution at 143 °C was used. In addition, microanalysis was conducted with light microscopy (LM), SEM, TEM, and Auger electron spectroscopy (AES) to reveal the correlation between the CLTT results and the microanalysis data.


2020 ◽  
Vol 2020 (1) ◽  
pp. 74-77
Author(s):  
Simone Bianco ◽  
Luigi Celona ◽  
Flavio Piccoli

In this work we propose a method for single image dehazing that exploits a physical model to recover the haze-free image by estimating the atmospheric scattering parameters. Cycle consistency is used to further improve the reconstruction quality of local structures and objects in the scene as well. Experimental results on four real and synthetic hazy image datasets show the effectiveness of the proposed method in terms of two commonly used full-reference image quality metrics.
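The physical model referred to is the standard atmospheric scattering model I = J·t + A·(1 - t); given estimates of the transmission t and atmospheric light A, the haze-free image J is recovered by inverting it. A minimal sketch of that inversion (the clamping threshold is a common heuristic, not a value from the paper):

```python
import numpy as np

def dehaze(hazy, transmission, atmosphere, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to recover scene radiance J,
    clamping t from below to avoid amplifying noise where haze is dense."""
    t = np.clip(transmission, t_min, 1.0)
    return (hazy - atmosphere) / t + atmosphere

# Round trip on synthetic data: haze a clean image, then recover it.
rng = np.random.default_rng(0)
clean = rng.random((4, 4))
t = np.full((4, 4), 0.6)
A = 0.9
hazy = clean * t + A * (1 - t)
recovered = dehaze(hazy, t, A)
```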


2021 ◽  
Author(s):  
Sai Phani Kumar Malladi ◽  
Jayanta Mukhopadhyay ◽  
Chaker Larabi ◽  
Santanu Chaudhury
