Off-Grid DOA Estimation Based on Circularly Fully Convolutional Networks (CFCN) Using Space-Frequency Pseudo-Spectrum

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2767
Author(s):  
Wenqiong Zhang ◽  
Yiwei Huang ◽  
Jianfei Tong ◽  
Ming Bao ◽  
Xiaodong Li

Low-frequency multi-source direction-of-arrival (DOA) estimation has long been challenging for micro-aperture arrays, and deep learning (DL)-based models have been introduced to this problem. Existing DL-based methods generally formulate DOA estimation as a multi-label classification problem; consequently, their accuracy is limited by the number of grid points and their performance depends heavily on the training data set. In this paper, we propose an off-grid DL-based DOA estimation framework. The backbone is a circularly fully convolutional network (CFCN), trained on a data set labeled with space-frequency pseudo-spectra, which provides on-grid DOA proposals. A regressor then estimates the precise DOAs from the corresponding proposals and features. In this framework, spatial phase features are extracted by circular convolution, and improving the spatial resolution is converted into increasing the dimensionality of the features by rotating the convolutional networks. This model ensures that DOA estimates at different sub-bands have the same interpretation ability and effectively reduces the number of network parameters. Simulation and semi-anechoic chamber experiments show that CFCN-based DOA estimation is superior to existing methods in terms of generalization ability, resolution, and accuracy.
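The circular convolution used to extract spatial phase features from a ring-shaped array can be illustrated with a minimal one-dimensional sketch. This is not the paper's CFCN (which operates on space-frequency pseudo-spectra); it only shows the wrap-around padding that makes the filter respect the array's circular geometry, and the function name is illustrative:

```python
import numpy as np

def circular_conv1d(x, kernel):
    """Circular sliding dot product (cross-correlation with wrap-around
    padding): the input is treated as a ring, so the kernel sees the ends
    of the array as neighbors instead of zero padding."""
    n, k = len(x), len(kernel)
    pad = k // 2
    # wrap-pad so features from a circular array stay periodic at the seam
    xp = np.concatenate([x[-pad:], x, x[:pad]])
    return np.array([np.dot(xp[i:i + k], kernel) for i in range(n)])
```

With an identity kernel the input is returned unchanged, and a shifted kernel rotates the ring, which is the property a circular array layout needs.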

2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Siyu Ji ◽  
Chenglin Wen

A neural network is a data-driven algorithm: establishing the network model requires a large amount of training data, so a significant amount of time is spent training the model parameters. However, the system's modes update from time to time, and predicting with the original model parameters causes the model output to deviate greatly from the true values. Traditional methods such as gradient descent and least squares are centralized, making it difficult to adaptively update model parameters as the system changes. First, in order to update the network parameters adaptively, this paper introduces an evaluation function and gives a new method for evaluating the function's parameters. The new method updates some parameters of the model in real time, without changing the others, to maintain the model's accuracy. Then, based on the evaluation function, the Mean Impact Value (MIV) algorithm is used to calculate feature weights, and the weighted data are fed into the established fault diagnosis model for fault diagnosis. Finally, the validity of the algorithm is verified by simulation on the standard UCI Combined Cycle Power Plant (UCI-CCPP) data set.
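The MIV weighting step can be sketched as follows: each feature is perturbed up and down by 10% and the mean change in the trained model's output is taken as that feature's impact value. This is a generic MIV sketch, not the paper's implementation; the function name and the perturbation fraction are the commonly used convention:

```python
import numpy as np

def mean_impact_value(predict, X, delta=0.1):
    """Mean Impact Value: perturb each feature by +/- delta (10% by
    convention) and average the resulting change in model output.
    `predict` is any trained model's prediction function."""
    miv = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        up, down = X.copy(), X.copy()
        up[:, j] *= 1 + delta     # increase feature j by delta
        down[:, j] *= 1 - delta   # decrease feature j by delta
        miv[j] = np.mean(predict(up) - predict(down))
    return miv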


2018 ◽  
Vol 7 (04) ◽  
pp. 871-888 ◽  
Author(s):  
Sophie J. Lee ◽  
Howard Liu ◽  
Michael D. Ward

Improving geolocation accuracy in text data has long been a goal of automated text processing. We depart from the conventional method and introduce a two-stage supervised machine-learning algorithm that classifies each location mention as either correct or incorrect. We extract contextual information from texts, i.e., N-gram patterns for location words, mention frequency, and the context of sentences containing location words. We then estimate model parameters using a training data set and use this model to predict whether a location word in the test data set accurately represents the location of an event. We demonstrate these steps by constructing customized geolocation event data at the subnational level using news articles collected from around the world. The results show that the proposed algorithm outperforms existing geocoders, even in a case added post hoc to test the generality of the developed algorithm.
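The contextual features fed to the classifier can be sketched for a single location mention. The feature names, window size, and example tokens below are illustrative, not the authors' exact feature set:

```python
from collections import Counter

def location_features(tokens, loc_index, n=2):
    """Toy context features for one location mention: the n-gram windows
    immediately before and after the mention, plus how often the mention
    appears in the document (a proxy for mention frequency)."""
    left = tuple(tokens[max(0, loc_index - n):loc_index])
    right = tuple(tokens[loc_index + 1:loc_index + 1 + n])
    freq = Counter(tokens)[tokens[loc_index]]
    return {"left_ngram": left, "right_ngram": right, "mention_freq": freq}
```

A feature dictionary like this would then be vectorized and passed to any supervised classifier for the correct/incorrect decision in the second stage.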


Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 141
Author(s):  
Jianguang Li ◽  
Wen Li ◽  
Cong Jin ◽  
Lijuan Yang ◽  
Hui He

The segmentation of buildings in remote-sensing (RS) images plays an important role in monitoring landscape changes. Quantification of these changes can be used to balance economic and environmental benefits and, most importantly, to support sustainable urban development. Deep learning has been upgrading the techniques for RS image analysis, but it requires large-scale data sets for hyper-parameter optimization. To address this issue, the concept of “one view per city” is proposed: one RS image is used for parameter tuning, and the trained model then handles the remaining images of the same city. The concept stems from the observation that buildings of the same city in single-source RS images exhibit similar intensity distributions. To verify its feasibility, a proof-of-concept study is conducted in which five fully convolutional networks are evaluated on five cities in the Inria Aerial Image Labeling database. Experimental results suggest that the concept can be exploited to decrease the number of images needed for model training, achieving competitive building-segmentation performance with reduced time consumption. With model optimization and a universal image representation, there is considerable potential to improve segmentation performance, enhance generalization capacity, and extend the application of the concept in RS image analysis.


2018 ◽  
Vol 119 (6) ◽  
pp. 2265-2275 ◽  
Author(s):  
Seong-Cheol Park ◽  
Chun Kee Chung

The objective of this study was to introduce a new machine learning approach, guided by the outcome of resective epilepsy surgery (defined as the presence/absence of seizures), to improve data mining for interictal pathological activities in neocortical epilepsy. Electrocorticographies for 39 patients with medically intractable neocortical epilepsy were analyzed. We separately analyzed 38 frequencies from 0.9 to 800 Hz, including both high-frequency and low-frequency activities, to select bands related to seizure outcome. An automatic detector using amplitude-duration-number thresholds was used. Interictal electrocorticography data sets of 8 min for each patient were selected. In a first training data set of 20 patients, the automatic detector was optimized with a genetic algorithm to best differentiate the seizure-free group from the not-seizure-free group, based on the ranks of the resection percentages of detected activities. The optimization was validated on a different data set of 19 patients. There were 16 (41%) seizure-free patients. The mean follow-up duration was 21 ± 11 mo (range, 13–44 mo). After validation, the frequencies significantly related to seizure outcome were 5.8, 8.4–25, 30, 36, 52, and 75 Hz among low-frequency activities and 108 and 800 Hz among high-frequency activities. Resection of 5.8, 8.4–25, 108, and 800 Hz activities consistently improved seizure outcome. The effects of resecting 17–36, 52, and 75 Hz activities on seizure outcome varied with the thresholds. We developed and validated an automated detector for monitoring interictal pathological and inhibitory/physiological activities in neocortical epilepsy using a data-driven approach through outcome-guided machine learning. NEW & NOTEWORTHY Outcome-guided machine learning based on seizure outcome was used to improve detection of interictal electrocorticographic low- and high-frequency activities. This method separated seizure-outcome groups better than others reported in the literature. The automatic detector can be trained without human intervention or prior information, relying only on objective seizure outcome data rather than an expert's manual annotations. Using this method, we could identify and characterize pathological and inhibitory activities.
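The amplitude-duration-number detector can be sketched as follows. This is a simplified illustration of the three-threshold idea only; the actual thresholds in the study were tuned per frequency band by the genetic algorithm, and the function name and parameters here are illustrative:

```python
import numpy as np

def detect_events(signal, fs, amp_thr, min_dur, min_num):
    """Amplitude-duration-number detector (simplified): flag runs of
    samples whose absolute amplitude exceeds amp_thr for at least min_dur
    seconds; report events only if at least min_num such runs occur."""
    above = np.abs(signal) > amp_thr
    events, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i                      # run begins
        elif not a and start is not None:
            if (i - start) / fs >= min_dur:
                events.append((start, i))  # run long enough: keep it
            start = None
    if start is not None and (len(above) - start) / fs >= min_dur:
        events.append((start, len(above)))
    return events if len(events) >= min_num else []
```

The genetic algorithm's role in the paper is then to search over (amp_thr, min_dur, min_num) per band so that resection percentages of detected events best separate the outcome groups.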


2019 ◽  
Vol 15 (1) ◽  
pp. 155014771882052 ◽  
Author(s):  
Bowen Qin ◽  
Fuyuan Xiao

Owing to its efficiency in handling uncertain information, Dempster–Shafer evidence theory has become the most important tool in many information fusion systems. However, how to determine the basic probability assignment, the first step in evidence theory, remains an open issue. In this article, a new method integrating interval number theory and the k-means++ clustering method is proposed to determine the basic probability assignment. First, k-means++ clustering is used to calculate the lower and upper bounds of interval numbers from the training data. Then, differentiation degrees, based on the distance and similarity of interval numbers between the test sample and the constructed models, are defined to generate basic probability assignments. Finally, Dempster's combination rule is used to combine the multiple basic probability assignments into a final one. Experiments on the Iris data set, which is widely used in classification problems, illustrate that the proposed method is effective for determining basic probability assignments and for classification, reaching a classification accuracy of 96.7%.
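The pipeline can be sketched for one feature. Several simplifications are labeled in the comments: the interval bounds use plain min/max rather than k-means++, the similarity 1/(1+d) stands in for the paper's differentiation degree, and Dempster's rule is restricted to singleton hypotheses:

```python
import numpy as np

def build_intervals(train):
    """Per-class interval model [min, max] of the training feature values
    (the paper derives these bounds via k-means++; min/max is a stand-in)."""
    return {c: (float(np.min(v)), float(np.max(v))) for c, v in train.items()}

def interval_distance(x, iv):
    # distance from a point to an interval: zero inside, gap to nearest bound outside
    a, b = iv
    return 0.0 if a <= x <= b else min(abs(x - a), abs(x - b))

def bpa(x, intervals):
    """Basic probability assignment from the similarity 1/(1+d), normalized."""
    sim = {c: 1.0 / (1.0 + interval_distance(x, iv)) for c, iv in intervals.items()}
    s = sum(sim.values())
    return {c: v / s for c, v in sim.items()}

def dempster(m1, m2):
    """Dempster's rule restricted to singleton hypotheses:
    multiply matching masses and renormalize (conflict mass discarded)."""
    prod = {c: m1[c] * m2[c] for c in m1}
    k = sum(prod.values())
    return {c: v / k for c, v in prod.items()}
```

Combining the per-feature BPAs with `dempster` sharpens the mass toward the class whose intervals the sample falls inside, which is what drives the final classification.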


2020 ◽  
Vol 12 (7) ◽  
pp. 1099 ◽  
Author(s):  
Ahram Song ◽  
Yongil Kim

Change detection (CD) networks based on supervised learning have been used in diverse CD tasks. However, such supervised CD networks require large amounts of data and use information only from the current images. In addition, it is time-consuming to manually acquire ground truth data for newly obtained images. Here, we propose a novel CD method for the case in which an area lacks training data but lies near another area with available ground truth data. The proposed method automatically generates training data and fine-tunes the CD network. To detect changes in target images without ground truth data, difference images were generated using a spectral similarity measure, and training data were selected via fuzzy c-means clustering. Recurrent fully convolutional networks with multiscale three-dimensional filters were used to extract objects of various sizes from unmanned aerial vehicle (UAV) images. The CD network was pre-trained on labeled source-domain data and then fine-tuned on the target images using the generated training data. Two further CD networks were trained with a combined weighted loss function. The training data in the target domain were iteratively updated using the prediction map of the CD network. Experiments on two hyperspectral UAV data sets confirmed that the proposed method can transfer change rules and improve CD results based on training data extracted in an unsupervised way.
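The abstract does not name its spectral similarity measure, so as an illustration here is the spectral angle, a widely used choice for comparing hyperspectral pixel pairs when building difference images:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle between two pixel spectra, in radians: the angle
    between the spectra viewed as vectors. A larger angle suggests change.
    (One common spectral similarity measure; an assumption here, since the
    paper does not specify which measure it uses.)"""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Applying such a measure pixel-wise between the two acquisition dates yields the difference image from which fuzzy c-means then selects confident changed/unchanged training samples.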


Author(s):  
WENTAO MAO ◽  
JIUCHENG XU ◽  
SHENGJIE ZHAO ◽  
MEI TIAN

Recently, extreme learning machines (ELMs) have become a promising tool for a wide range of regression and classification applications. However, when modeling multiple related tasks with only limited, low-dimensional training data per task, ELMs generally struggle to achieve impressive performance because they get little help from the informative domain knowledge shared across tasks. To solve this problem, this paper extends ELM to the scenario of multi-task learning (MTL). First, based on the assumption that the model parameters of related tasks are close to each other, a new regularization-based MTL algorithm for ELM is proposed that learns related tasks jointly via simple matrix inversion. To improve the learning performance, the algorithm is further formulated as a mixed integer program in order to identify the grouping structure in which some parameters are closer than others, and an alternating minimization method is presented to solve this optimization. Experiments on a toy problem as well as a real-life data set demonstrate the effectiveness of the proposed MTL algorithm compared with the classical ELM and the standard MTL algorithm.
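The closed-form "simple matrix inversion" step that the multi-task extension builds on is the standard regularized ELM solution: random hidden weights, then ridge regression for the output weights. The sketch below is single-task only (the paper's cross-task coupling regularizer is omitted), and all names are illustrative:

```python
import numpy as np

def elm_train(X, T, n_hidden, lam=1e-2, seed=0):
    """Regularized ELM: a random, untrained hidden layer followed by a
    ridge-regression solve for the output weights beta."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden activations
    # closed-form ridge solution: (H'H + lam*I) beta = H'T
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

In the multi-task setting, one such solve is done jointly across tasks with an extra penalty pulling each task's beta toward the others, which is what makes the shared domain knowledge usable.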


2018 ◽  
Vol 8 (12) ◽  
pp. 2670 ◽  
Author(s):  
Hao Guo ◽  
Guo Wei ◽  
Jubai An

Oil slicks damp the Bragg scattering from the ocean surface, producing dark spots on synthetic aperture radar (SAR) images; this is the basic principle underlying SAR oil slick detection. Dark spot detection is the first step in oil spill detection and affects its overall accuracy. However, some natural phenomena (such as waves, ocean currents, and low-wind belts), as well as human factors, may change the backscatter intensity of the sea surface, resulting in uneven intensity, high noise, and blurred boundaries of oil slicks or lookalikes. In this paper, SegNet is used as a semantic segmentation model to detect dark spots in oil spill areas. The proposed method is applied to a data set of 4200 samples derived from five original SAR images of an oil spill. The effectiveness of the method is demonstrated through comparison with fully convolutional networks (FCN), an early semantic segmentation model, and some other segmentation methods. We observe that the proposed method not only accurately identifies the dark spots in SAR images, but also shows higher robustness under high-noise and fuzzy-boundary conditions.


2019 ◽  
Vol 12 (9) ◽  
pp. 4713-4724
Author(s):  
Chaojun Shi ◽  
Yatong Zhou ◽  
Bo Qiu ◽  
Jingfei He ◽  
Mu Ding ◽  
...  

Abstract. Cloud segmentation plays a very important role in astronomical observatory site selection. At present, few researchers have segmented clouds in nocturnal all-sky imager (ASI) images. This paper proposes a new automatic cloud segmentation algorithm, the enhancement fully convolutional network (EFCN), which exploits the advantages of deep-learning fully convolutional networks (FCNs) to segment cloud pixels from diurnal and nocturnal ASI images. First, all the ASI images in the data set from the Key Laboratory of Optical Astronomy at the National Astronomical Observatories of the Chinese Academy of Sciences (CAS) are converted from the red–green–blue (RGB) color space to the hue-saturation-intensity (HSI) color space. Second, the I channel of the HSI color space is enhanced by histogram equalization. Third, all the ASI images are converted back from the HSI color space to the RGB color space. Then, after 100,000 training iterations on the ASI images in the training set, the optimal parameters of the EFCN-8s model are obtained. Finally, we use the trained EFCN-8s to segment the cloud pixels of the ASI images in the test set. In our experiments the proposed EFCN-8s was compared with four other algorithms (Otsu, FCN-8s, EFCN-32s, and EFCN-16s) using four evaluation metrics. The experiments show that the EFCN-8s is much more accurate in cloud segmentation of diurnal and nocturnal ASI images than the other four algorithms.


2005 ◽  
Vol 7 (4) ◽  
pp. 291-296 ◽  
Author(s):  
P. Hettiarachchi ◽  
M. J. Hall ◽  
A. W. Minns

The last decade has seen increasing interest in the application of Artificial Neural Networks (ANNs) to modelling the relationship between rainfall and streamflow. Since multi-layer, feed-forward ANNs are universal approximators, they are able to capture the essence of most input–output relationships, provided that an underlying deterministic relationship exists. Unfortunately, owing to the standardisation of inputs and outputs required to run ANNs, a problem arises in extrapolation: if the training data set does not contain the maximum possible output value, an unmodified network will be unable to synthesise this peak value. The occurrence of high-magnitude, low-frequency events within short periods of record is largely fortuitous. The confidence in a neural network model can therefore be greatly enhanced if some methodology can be found for incorporating domain knowledge about such events into the calibration and verification procedure, in addition to the available measured data sets. One possible form of additional domain knowledge is the Estimated Maximum Flood (EMF), a notional event with a small but non-negligible probability of exceedance. This study investigates the suitability of including an EMF estimate in the training set of a rainfall–runoff ANN in order to improve the extrapolation characteristics of the network. A study has been carried out in which EMFs were included, along with recorded flood events, in the training of ANN models for six catchments in the south west of England. The results demonstrate that, with prior transformation of the runoff data to logarithms of flows, the inclusion of domain knowledge in the form of such extreme synthetic events improves the generalisation capabilities of the ANN model and does not disrupt the training process. Where guidelines are available for EMF estimation, the application of this approach is recommended as an alternative means of overcoming the inherent extrapolation problems of multi-layer, feed-forward ANNs.
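The core idea, appending the EMF to the training targets before log transformation and standardisation so that the synthetic extreme event anchors the top of the network's output range, can be sketched briefly (the function name and min-max scaling to [0, 1] are illustrative choices, not the paper's exact scheme):

```python
import numpy as np

def scale_targets(flows, emf):
    """Append the Estimated Maximum Flood to the recorded flows,
    log-transform, and min-max scale to [0, 1]: the EMF maps to 1.0, so
    the standardised range already covers the largest conceivable event."""
    logs = np.log(np.append(flows, emf))
    lo, hi = logs.min(), logs.max()
    return (logs - lo) / (hi - lo)
```

Because the ANN's squashing output activation cannot exceed its standardised range, reserving the top of that range for the EMF is what restores the ability to extrapolate toward unobserved peak flows.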

