The Nonlocal p-Laplacian Evolution for Image Interpolation

2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
Yi Zhan

This paper presents an image interpolation model with nonlocal p-Laplacian regularization. The nonlocal p-Laplacian regularization overcomes the drawback of the partial differential equation (PDE) proposed by Belahmidi and Guichard (2004), namely that image density diffuses in the directions pointed by the local gradient. Under the control of the proposed model, the grey values of the image diffuse along the image feature direction rather than the gradient direction; that is, there is minimal smoothing in the directions across image features and maximal smoothing in the directions along them. The total regularizer combines the advantages of nonlocal p-Laplacian regularization and total variation (TV) regularization (preserving discontinuities and 1D image structures). The derived model efficiently reconstructs the real image, leading to a natural interpolation with reduced blurring and staircase artifacts. We present experimental results that demonstrate the potential and efficacy of the method.
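As an illustrative sketch only (the paper's model is nonlocal and coupled with a TV term, which is not reproduced here), one explicit step of local p-Laplacian diffusion du/dt = div(|∇u|^(p−2)∇u) might be discretised as follows; the grid size, time step, and value of p are arbitrary choices:

```python
import numpy as np

def p_laplacian_step(u, p=1.5, dt=0.1, eps=1e-8):
    """One explicit step of local p-Laplacian diffusion du/dt = div(|grad u|^(p-2) grad u).

    Illustrative local discretisation only, not the paper's nonlocal model.
    """
    # forward differences with a replicated last row/column
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux**2 + uy**2 + eps)   # regularised gradient magnitude
    w = mag**(p - 2.0)                   # diffusivity |grad u|^(p-2)
    fx, fy = w * ux, w * uy              # flux components
    # backward differences of the flux give the divergence
    # (np.roll wraps at the border, which is acceptable for a sketch)
    div = (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))
    return u + dt * div

u0 = np.random.default_rng(0).random((16, 16))
u1 = p_laplacian_step(u0)
```

For p = 2 the weight w is constant and the scheme reduces to ordinary linear diffusion; for p < 2 it smooths less where the gradient is large, which is the edge-preserving behaviour the abstract describes.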

2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Yi Zhan ◽  
Sheng Jie Li ◽  
Meng Li

This paper presents an image interpolation model with local and nonlocal regularization. A nonlocal bounded variation (BV) regularizer is formulated with an exponential function of the image gradient, so that it acts like the Perona-Malik equation. Our nonlocal BV regularizer thus possesses the properties of both the anisotropic diffusion equation and a nonlocal functional. The local total variation (TV) regularizer dissipates image energy along the direction orthogonal to the gradient to avoid blurring image edges. The derived model efficiently reconstructs the real image, leading to a natural interpolation that reduces blurring and staircase artifacts. We present experimental results that demonstrate the potential and efficacy of the method.
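The Perona-Malik behaviour invoked above can be sketched with the classical local scheme; the exponential edge-stopping function and contrast parameter k below are the standard textbook choices, not the authors' nonlocal BV functional:

```python
import numpy as np

def perona_malik_weight(grad_mag, k=0.1):
    # exponential edge-stopping function g(|grad u|) = exp(-(|grad u|/k)^2):
    # close to 1 in flat regions (strong smoothing), close to 0 at edges
    return np.exp(-(grad_mag / k) ** 2)

def pm_step(u, k=0.1, dt=0.2):
    """One explicit Perona-Malik step: du/dt = div(g(|grad u|) grad u)."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    g = perona_malik_weight(np.sqrt(ux**2 + uy**2), k)
    fx, fy = g * ux, g * uy
    div = (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))
    return u + dt * div
```

A constant image has zero gradient everywhere, so `pm_step` leaves it unchanged, while large gradients drive g toward zero and suppress diffusion across edges.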


Author(s):  
TERRY CAELLI ◽  
ANDREW MCCABE ◽  
GARRY BRISCOE

This paper is concerned with an application of Hidden Markov Models (HMMs) to the generation of shape boundaries from image features. In the proposed model, shape classes are defined by sequences of "shape states", each of which has a probability distribution over expected image feature types (feature "symbols"). The tracking procedure generalizes the well-known Viterbi method by replacing its search with a type of "beam search", allowing the procedure, at any time, to consider less likely features (symbols) while still searching for an instantiable optimal state sequence. We have evaluated the model's performance on a variety of image and shape types and have also developed a new performance measure defined by an expected Hamming distance between predicted and observed symbol sequences. The results point to the use of this type of model for the delineation of shape boundaries when accurate boundary annotations are necessary, as occurs, for example, in cartography.
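A beam-pruned Viterbi search of the kind described can be sketched as follows; the pruning rule (keep the `beam` most likely states per step) and the toy two-state HMM are illustrative assumptions, not the authors' exact tracker:

```python
import numpy as np

def beam_viterbi(log_A, log_B, obs, log_pi, beam=3):
    """Viterbi decoding that keeps only the `beam` most likely states per step.

    log_A[i, j]: log P(state j | state i); log_B[j, o]: log P(symbol o | state j).
    """
    n_states = log_A.shape[0]
    delta = log_pi + log_B[:, obs[0]]
    back = []
    for o in obs[1:]:
        keep = np.argsort(delta)[-beam:]          # prune to the beam's best states
        scores = np.full((n_states, n_states), -np.inf)
        scores[keep, :] = delta[keep, None] + log_A[keep, :]
        back.append(scores.argmax(axis=0))        # best predecessor per state
        delta = scores.max(axis=0) + log_B[:, o]
    path = [int(delta.argmax())]
    for prev in reversed(back):                   # backtrack the best sequence
        path.append(int(prev[path[-1]]))
    return path[::-1]

# toy 2-state example: state i strongly prefers emitting symbol i
log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_B = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_pi = np.log(np.array([0.5, 0.5]))
path = beam_viterbi(log_A, log_B, [0, 0, 0], log_pi, beam=2)
```

With `beam` equal to the number of states this reduces to exact Viterbi decoding; smaller beams trade optimality for speed, which is what permits the procedure to entertain less likely symbols along the way.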


2020 ◽  
Vol 34 (07) ◽  
pp. 12813-12820 ◽  
Author(s):  
Kaihua Zhang ◽  
Jin Chen ◽  
Bo Liu ◽  
Qingshan Liu

Object co-segmentation segments the objects shared across multiple relevant images, and it has numerous applications in computer vision. This paper presents a spatial and semantic modulated deep network framework for object co-segmentation. A backbone network is adopted to extract multi-resolution image features. With the multi-resolution features of the relevant images as input, we design a spatial modulator to learn a mask for each image. The spatial modulator captures the correlations of image feature descriptors via unsupervised learning. The learned mask can roughly localize the shared foreground object while suppressing the background. We model the semantic modulator as a supervised image classification task and propose a hierarchical second-order pooling module to transform the image features for classification use. The outputs of the two modulators manipulate the multi-resolution features by a shift-and-scale operation so that the features focus on segmenting co-object regions. The proposed model is trained end-to-end without any intricate post-processing. Extensive experiments on four image co-segmentation benchmark datasets demonstrate the superior accuracy of the proposed method compared to state-of-the-art methods. The code is available at http://kaihuazhang.net/.
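The shift-and-scale manipulation can be sketched FiLM-style as below; the tensor shapes and the separation into a spatial mask plus channel-wise scale/shift parameters are assumptions for illustration, not the paper's trained network:

```python
import numpy as np

def modulate(features, mask, scale, shift):
    """Shift-and-scale modulation of a feature map.

    features: (C, H, W) backbone features; mask: (H, W) map standing in for
    the spatial modulator's output; scale, shift: (C,) channel parameters
    standing in for the semantic modulator's output. A sketch only.
    """
    gated = features * mask[None, :, :]                    # suppress background locations
    return gated * scale[:, None, None] + shift[:, None, None]

f = np.ones((3, 4, 4))
out = modulate(f, np.ones((4, 4)), np.full(3, 2.0), np.full(3, 1.0))
```

The point of the design is that the modulators never replace the features; they only reweight and bias them, so gradient flow through the backbone is preserved during end-to-end training.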


Author(s):  
W. Krakow ◽  
D. A. Smith

The successful determination of the atomic structure of [110] tilt boundaries in Au stems from the investigation of microscope performance at intermediate accelerating voltages (200 and 400 kV), as well as from a detailed understanding of how grain boundary image features depend on the variation of dynamical diffraction processes with specimen and beam orientations. This success is also facilitated by improving image quality through digital image processing techniques to the point where a structure image is obtained and each atom position is represented by a resolved image feature. Figure 1 shows an example of a low angle (∼10°) Σ = 129/[110] tilt boundary in a ∼250Å Au film, taken under tilted-beam bright-field imaging conditions, to illustrate the steps necessary to obtain the atomic structure configuration from the image. The original image of Fig. 1a shows the regular arrangement of strain-field images associated with the cores of ½ [10] primary dislocations, which are separated by ∼15Å.


2016 ◽  
Vol 20 (2) ◽  
pp. 191-201 ◽  
Author(s):  
Wei Lu ◽  
Yan Cui ◽  
Jun Teng

To decrease the instrumentation cost of sensor-based strain and displacement monitoring, and to address the sensor-installation challenges of structural health monitoring, it is necessary to develop a machine vision-based monitoring method. For this method, the most important step is the accurate extraction of image features. In this article, an edge detection operator based on multi-scale structuring elements and a compound mathematical morphological operator is proposed to improve image feature extraction. The proposed method not only achieves an improved filtering effect and anti-noise ability but also detects edges more accurately. Furthermore, the required image features (the vertices of a square calibration board and the centroid of a circular target) can be accurately extracted from the detected edge information. For validation, monitoring tests for structural local mean strain and in-plane displacement were designed accordingly. Through analysis of the error between the measured and calculated values of the structural strain and displacement, the feasibility and effectiveness of the proposed edge detection operator are verified.
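One simple reading of a multi-scale morphological edge detector is to average the morphological gradient (dilation minus erosion) over several structuring-element sizes. The flat square elements and the chosen scales below are assumptions; the paper's compound operator is not specified here:

```python
import numpy as np

def erode(img, k):
    """Grayscale erosion with a flat (2k+1)x(2k+1) square structuring element."""
    h, w = img.shape
    p = np.pad(img, k, mode='edge')
    out = np.full((h, w), np.inf)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def dilate(img, k):
    """Grayscale dilation with the same structuring element."""
    h, w = img.shape
    p = np.pad(img, k, mode='edge')
    out = np.full((h, w), -np.inf)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.maximum(out, p[dy:dy + h, dx:dx + w])
    return out

def multiscale_morph_edge(img, scales=(1, 2)):
    """Average the morphological gradients over several element sizes."""
    return np.mean([dilate(img, k) - erode(img, k) for k in scales], axis=0)

# vertical step edge: response should be confined to the transition
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = multiscale_morph_edge(img)
```

Averaging over scales is what gives the noise robustness mentioned above: single-pixel noise responds only at the smallest scale, while a true edge responds at every scale.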


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. What is more, exploiting detailed image features in a deep learning framework usually requires a large number of parameters, which inevitably burdens the network with high computational cost. We propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining to solve these problems. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, the image features are reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. Then, a feature distillation normalization block is placed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information from the feature maps. In addition, an information fusion strategy between distillation modules and feature channels is implemented via an attention mechanism. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with a smaller number of parameters and outperforms existing methods in model complexity.
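Channel screening of the kind the abstract describes is commonly implemented as squeeze-and-excitation-style gating; the sketch below uses that standard pattern with hypothetical weight matrices `w1` and `w2`, and is not the paper's distillation block:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel gating (a common way to
    'screen valuable channel information').

    features: (C, H, W); w1: (C//r, C) and w2: (C, C//r) are hypothetical
    learned weights of a small bottleneck MLP with reduction ratio r.
    """
    squeeze = features.mean(axis=(1, 2))               # global average pool -> (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))   # per-channel gate in (0, 1)
    return features * gate[:, None, None]              # reweight each channel

rng = np.random.default_rng(1)
f = rng.standard_normal((8, 5, 5))
w1, w2 = rng.standard_normal((2, 8)), rng.standard_normal((8, 2))
out = channel_attention(f, w1, w2)
```

Because the gate lies strictly between 0 and 1, the module can only attenuate channels, never amplify them, which keeps the operation cheap and stable.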


Author(s):  
Huimin Lu ◽  
Rui Yang ◽  
Zhenrong Deng ◽  
Yonglin Zhang ◽  
Guangwei Gao ◽  
...  

Chinese image description generation tasks usually face challenges such as single-feature extraction, a lack of global information, and a lack of detailed description of the image content. To address these limitations, we propose a fuzzy attention-based DenseNet-BiLSTM Chinese image captioning method in this article. In the proposed method, we first improve the densely connected network to extract features of the image at different scales and to enhance the model's ability to capture weak features. At the same time, a bidirectional LSTM is used as the decoder to enhance the use of context information. The introduction of an improved fuzzy attention mechanism effectively improves the correspondence between image features and contextual information. We conduct experiments on the AI Challenger dataset to evaluate the performance of the model. The results show that, compared with other models, our proposed model achieves higher scores on objective quantitative evaluation indicators, including BLEU, METEOR, ROUGE-L, and CIDEr. The generated description sentences accurately express the image content.


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 291 ◽  
Author(s):  
Hamdi Sahloul ◽  
Shouhei Shirafuji ◽  
Jun Ota

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, the current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair in order to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint invariant representation, enabling the detection and description of more viewpoint invariant features. Our embedding can be utilized with different combinations of descriptor/detector pairs, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, for both standalone and embedded approaches. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels, depending on the scene geometry. Objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average for all evaluated datasets was 45.4°. Similarly, out of a total of 140 combinations involving 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of 60° viewpoint difference in just two combinations, as compared with 19 different local image features succeeding in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, even that of low-cost commodity depth sensors, and well beyond.


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 828
Author(s):  
Wai Lun Lo ◽  
Henry Shu Hung Chung ◽  
Hong Fu

Estimation of meteorological visibility from image characteristics is a challenging problem in the research on meteorological parameter estimation. Meteorological visibility indicates atmospheric transparency, an indicator that is important for transport safety. This paper summarizes the outcomes of an experimental evaluation of a Particle Swarm Optimization (PSO) based transfer learning method for meteorological visibility estimation. It proposes a modified transfer learning approach for visibility estimation that uses PSO feature selection. Image data are collected at a fixed location with a fixed viewing angle. The database images go through a pre-processing step of grey-averaging so as to provide information on static landmark objects for the automatic extraction of effective regions from the images. Effective regions are then extracted from the image database, and image features are extracted with a neural network. A subset of the image features is selected with the PSO method to obtain an image feature vector for each effective sub-region. The image feature vectors are then used to estimate the visibilities of the images with multiple Support Vector Regression (SVR) models. Experimental results show that the proposed method achieves an accuracy of more than 90% for visibility estimation and is effective and robust.
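PSO-based feature selection can be sketched as a binary swarm over feature-inclusion masks. The fitness function below (mean absolute feature-target correlation minus a size penalty) is a cheap stand-in for the SVR cross-validation score the paper would actually use, and all hyperparameters are illustrative:

```python
import numpy as np

def pso_feature_select(X, y, n_particles=10, iters=20, seed=0):
    """Binary PSO over feature-inclusion masks (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.random((n_particles, n_feat))   # inclusion probabilities in [0, 1]
    vel = np.zeros_like(pos)

    def fitness(mask):
        if not mask.any():
            return -np.inf
        # mean absolute feature-target correlation, penalised by subset size
        corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.where(mask)[0]]
        return float(np.mean(corrs)) - 0.01 * mask.sum()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p > 0.5) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # standard velocity update: inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        fit = np.array([fitness(p > 0.5) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest > 0.5

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 6))
y = X[:, 0].copy()                    # feature 0 is perfectly informative
selected = pso_feature_select(X, y)
```

In the paper's setting the candidate features would be the neural-network activations per effective sub-region, and each evaluated mask would be scored by the downstream SVR model rather than by correlation.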


2021 ◽  
Author(s):  
Chris Onof ◽  
Yuting Chen ◽  
Li-Pen Wang ◽  
Amy Jones ◽  
Susana Ochoa Rodriguez

<p>In this work, a two-stage (rainfall nowcasting + flood prediction) analogue model for real-time urban flood forecasting is presented. The proposed approach accounts for the complexities of urban rainfall nowcasting while avoiding the expensive computational requirements of real-time urban flood forecasting.</p><p>The model has two consecutive stages:</p><ul><li><strong>(1) Rainfall nowcasting: </strong>0-6h lead time ensemble rainfall nowcasting is achieved by means of an analogue method, based on the assumption that similar climate conditions will produce similar patterns of temporal evolution of the rainfall. The framework uses the NORA analogue-based forecasting tool (Panziera et al., 2011), consisting of two layers. In the <strong>first layer, </strong>the 120 historical atmospheric (forcing) conditions most similar to the current atmospheric conditions are extracted, with the historical database consisting of ERA5 reanalysis data from the ECMWF and the current conditions derived from the US Global Forecasting System (GFS). In the <strong>second layer</strong>, the twelve historical radar images most similar to the current one are extracted from amongst the historical radar images linked to the aforementioned 120 forcing analogues. Lastly, for each of the twelve analogues, the rainfall fields (at a resolution of 1 km/5 min) observed after the present time are taken as one ensemble member. Note that principal component analysis (PCA) and uncorrelated multilinear PCA methods were tested for image feature extraction prior to applying the nearest neighbour technique for analogue selection.</li> <li><strong>(2) Flood prediction: </strong>we predict flood extent using the high-resolution rainfall forecast from Stage 1, along with a database of pre-run flood maps at 1x1 km<sup>2</sup> resolution from 157 catalogued historical flood events. 
A deterministic flood prediction is obtained by using the averaged response from the twelve flood maps associated with the twelve ensemble rainfall nowcasts, where for each gridded area the median value is adopted (assuming flood maps are equiprobable). A probabilistic flood prediction is obtained by generating a quantile-based flood map. Note that the flood maps were generated through rolling ball-based mapping of the flood volumes predicted at each node of the InfoWorks ICM sewer model of the pilot area.</li> </ul><p>The Minworth catchment in the UK (~400 km<sup>2</sup>) was used to demonstrate the proposed model. Cross-assessment was undertaken for each of the 157 flooding events by leaving one event out of training in each iteration and using it for evaluation. With a focus on the spatial replication of flood/non-flood patterns, the predicted flood maps were converted to binary (flood/non-flood) maps. Quantitative assessment was undertaken by means of a contingency table. An average accuracy rate (i.e. proportion of correct predictions out of all test events) of 71.4% was achieved, with individual accuracy rates ranging from 57.1% to 78.6%. Further testing is needed to confirm initial findings, and flood mapping refinement will be pursued.</p><p>The proposed model is fast, easy and relatively inexpensive to operate, making it suitable for direct use by local authorities who often lack the expertise and/or capabilities for flood modelling and forecasting.</p><p><strong>References: </strong>Panziera et al. 2011. NORA–Nowcasting of Orographic Rainfall by means of Analogues. Quarterly Journal of the Royal Meteorological Society. 137, 2106-2123.</p>
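The second-layer analogue search (PCA feature extraction followed by nearest-neighbour selection of twelve radar images) can be sketched as follows; the image size, component count, and Euclidean distance metric are illustrative assumptions:

```python
import numpy as np

def pca_basis(X, n_components):
    """Principal axes of row-stacked flattened images X (n_samples, n_pixels)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return X.mean(axis=0), Vt[:n_components]

def nearest_analogues(history, current, k=12, n_components=8):
    """Indices of the k historical radar images closest to the current one
    in PCA feature space (a sketch of the second-layer analogue search)."""
    mean, basis = pca_basis(history, n_components)
    feats = (history - mean) @ basis.T      # project history into feature space
    q = (current - mean) @ basis.T          # project the current image likewise
    d = np.linalg.norm(feats - q, axis=1)   # Euclidean distance to each analogue
    return np.argsort(d)[:k]

rng = np.random.default_rng(2)
history = rng.standard_normal((20, 25))     # 20 flattened 5x5 "radar images"
idx = nearest_analogues(history, history[3], k=3)
```

In the operational setting each selected index would point to a historical event, and the rainfall fields observed after that event's present time would form one ensemble member.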

