A Study of Vertical Structures and Microphysical Characteristics of Different Convective Cloud–Precipitation Types Using Ka-Band Millimeter Wave Radar Measurements

2019 ◽  
Vol 11 (15) ◽  
pp. 1810 ◽  
Author(s):  
Zheng ◽  
Zhang ◽  
Liu ◽  
Liu ◽  
Che

Millimeter wave cloud radar (MMCR) is one of the primary instruments employed to observe cloud–precipitation. With appropriate data processing, measurements of the Doppler spectra, spectral moments, and retrievals can be used to study the physical processes of cloud–precipitation. This study analyzed the vertical structures and microphysical characteristics of different kinds of convective cloud–precipitation in South China during the pre-flood season using a vertically pointing Ka-band MMCR. Four kinds of convection, namely multi-cell, isolated-cell, convective–stratiform mixed, and warm-cell convection, are discussed herein. The results show that the multi-cell and convective–stratiform mixed convections had similar vertical structures and experienced nearly the same microphysical processes in terms of particle phase change, particle size distribution, hydrometeor growth, and breakup. A forward pattern was proposed to characterize the vertical structure and provide radar spectra models reflecting the different microphysical and dynamic features and variations in different parts of the cloud body. Vertical air motion played a key role in the microphysical processes of the isolated- and warm-cell convections and strongly affected the ground rainfall properties. Stronger, thicker, and slanted updrafts caused heavier showers with stronger rain rates and groups of larger raindrops. The microphysical parameters for the warm-cell cloud–precipitation were retrieved from the radar data and further compared with ground-measured results from a disdrometer. The comparisons indicated that the radar retrievals were basically reliable; however, radar signal weakening introduced biases to some extent, especially for the particle number concentration. Note that the differences in sensitivity and detectable height of the two instruments also contributed to the deviations in the comparison.
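The spectral moments the abstract refers to (echo power/reflectivity, mean Doppler velocity, and spectrum width) are the standard power-weighted moments of the Doppler spectrum. A minimal sketch; the toy Gaussian spectrum and velocity-bin spacing below are illustrative assumptions, not radar data:

```python
import math

# Standard Doppler spectral moments: the zeroth moment (total power,
# proportional to reflectivity), the first moment (power-weighted mean
# Doppler velocity), and the second moment (spectrum width, the
# power-weighted standard deviation about the mean velocity).

def spectral_moments(velocities, powers):
    m0 = sum(powers)
    m1 = sum(v * p for v, p in zip(velocities, powers)) / m0
    m2 = math.sqrt(sum((v - m1) ** 2 * p
                       for v, p in zip(velocities, powers)) / m0)
    return m0, m1, m2

# Toy spectrum: 64 velocity bins, Gaussian peak at -2 m/s (downward
# motion), 1 m/s spectral width.
vels = [-8.0 + 0.25 * i for i in range(64)]
pows = [math.exp(-((v + 2.0) ** 2) / 2.0) for v in vels]
m0, m1, m2 = spectral_moments(vels, pows)  # m1 ~ -2 m/s, m2 ~ 1 m/s
```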

2021 ◽  
Vol 13 (6) ◽  
pp. 1064
Author(s):  
Zhangjing Wang ◽  
Xianhan Miao ◽  
Zhen Huang ◽  
Haoran Luo

The development of autonomous vehicles and unmanned aerial vehicles has made improving the environmental perception of automation equipment a current research focus. An unmanned platform detects its surroundings and then makes decisions based on environmental information. The major challenge of environmental perception is to detect and classify objects precisely; thus, fusing heterogeneous data is necessary to exploit their complementary advantages. In this paper, a robust object detection and classification algorithm based on millimeter-wave (MMW) radar and camera fusion is proposed. The corresponding regions of interest (ROIs) are accurately calculated from the approximate position of the target detected by radar and cameras. A joint classification network is used to extract micro-Doppler features from the time-frequency spectrum and texture features from images in the ROIs. A fusion dataset between radar and camera is established using a fusion data acquisition platform and includes intersections, highways, roads, and school playgrounds during the day and at night. The traditional radar signal algorithm, the Faster R-CNN model, and our proposed fusion network model, called RCF-Faster R-CNN, are evaluated on this dataset. The experimental results indicate that the mAP (mean Average Precision) of our network is up to 89.42% higher than that of the traditional radar signal algorithm and up to 32.76% higher than that of Faster R-CNN, especially in low-light environments and under strong electromagnetic clutter.
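The ROI calculation from a radar detection can be sketched with a pinhole camera model. Everything here (aligned radar–camera axes, focal length, principal point, nominal object size) is an illustrative assumption, not the paper's calibration:

```python
import math

# Hypothetical sketch: map a radar detection (range r in meters, azimuth
# az in radians) to an image-plane ROI, assuming a pinhole camera with
# focal length f (pixels) and principal point (cx, cy), and a rig whose
# radar and camera axes are aligned. Real systems need an extrinsic
# calibration between the two sensors.

def radar_to_roi(r, az, obj_w=1.8, obj_h=1.5, f=800.0, cx=640.0, cy=360.0):
    x = r * math.sin(az)                    # lateral offset (m)
    z = r * math.cos(az)                    # forward distance (m)
    u = cx + f * x / z                      # horizontal pixel coordinate
    w = f * obj_w / z                       # projected width (pixels)
    h = f * obj_h / z                       # projected height (pixels)
    return (u - w / 2, cy - h / 2, u + w / 2, cy + h / 2)

# A car-sized target at 20 m range, about 5.7 degrees off boresight.
roi = radar_to_roi(20.0, 0.1)
```

The ROI then seeds the image-side feature extraction, so radar errors only coarsen the crop rather than break the classifier.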


2015 ◽  
Vol 8 (9) ◽  
pp. 3685-3699 ◽  
Author(s):  
A. Chandra ◽  
C. Zhang ◽  
P. Kollias ◽  
S. Matrosov ◽  
W. Szyrmer

Abstract. The use of millimeter wavelength radars for probing precipitation has recently gained interest. However, estimation of precipitation variables is not straightforward due to strong signal attenuation, radar receiver saturation, antenna wet radome effects, and natural microphysical variability. Here, an automated algorithm is developed for routinely retrieving rain rates from the profiling Ka-band (35 GHz) ARM (Atmospheric Radiation Measurement) zenith radars (KAZR). A simple, one-dimensional, steady-state microphysical model is used to estimate the impacts of microphysical processes and attenuation on the profiles of radar observables at 35 GHz and thus provide criteria for identifying situations when attenuation or microphysical processes dominate KAZR observations. KAZR observations are also screened for signal saturation and wet radome effects. The algorithm is implemented in two steps: high rain rates are retrieved from the amount of attenuation in rain layers, while low rain rates are retrieved from the reflectivity–rain rate (Ze–R) relation. Observations collected by the KAZR, rain gauge, disdrometer, and scanning precipitation radars during the DYNAMO/AMIE field campaign on Gan Island in the tropical Indian Ocean are used to validate the proposed approach. The differences in rain accumulation between the proposed algorithm and these reference instruments are quantified. The results indicate that the proposed algorithm has potential for deriving continuous rain rate statistics in the tropics.
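The low-rain-rate branch of such retrievals inverts a reflectivity–rain rate power law, Ze = a·R^b. A minimal sketch using illustrative Marshall–Palmer-style coefficients (a = 200, b = 1.6); the algorithm's actual Ka-band coefficients differ:

```python
# Invert the Ze-R power law Ze = a * R**b for rain rate R. The
# coefficients a = 200, b = 1.6 are classic Marshall-Palmer-style
# values used here purely for illustration.

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    ze = 10.0 ** (dbz / 10.0)      # dBZ -> linear Ze (mm^6 m^-3)
    return (ze / a) ** (1.0 / b)   # invert Ze = a * R**b -> R (mm/h)

r = rain_rate_from_dbz(30.0)       # ~2.73 mm/h for 30 dBZ
```

At Ka band this relation only holds once attenuation has been accounted for, which is why the algorithm switches to an attenuation-based retrieval at high rain rates.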


1998 ◽  
Vol 25 (10) ◽  
pp. 1645-1648 ◽  
Author(s):  
Gerald G. Mace ◽  
Christian Jakob ◽  
Kenneth P. Moran

Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 259
Author(s):  
Kang Zhang ◽  
Shengchang Lan ◽  
Guiyuan Zhang

The purpose of this paper was to investigate the effect of training a state-of-the-art convolutional neural network (CNN) for millimeter-wave radar-based hand gesture recognition (MR-HGR). To address the small-training-dataset problem in MR-HGR, this paper first proposed transferring knowledge from CNN models in computer vision to MR-HGR by fine-tuning the models with radar data samples. Meanwhile, to handle the different data modality in MR-HGR, a parameterized representation, the temporal space-velocity (TSV) spectrogram, was proposed as an integrated data modality of the time-evolving hand gesture features in the radar echo signals. TSV spectrograms representing six common gestures in human–computer interaction (HCI), collected from nine volunteers, were used as the data samples in the experiment. The evaluated models included ResNet with 50, 101, and 152 layers, DenseNet with 121, 161, and 169 layers, as well as the lightweight MobileNet V2 and ShuffleNet V2, all widely used in recent publications. In the experiment, not only self-testing (ST) but also the more persuasive cross-testing (CT) was performed to evaluate whether the fine-tuned models generalize to radar data samples. The CT results show that the best fine-tuned models reach an average accuracy above 93%, with a comparable ST average accuracy of almost 100%. Moreover, to alleviate the problem caused by individual gesture habits, an auxiliary test was performed by augmenting the training set with four shots of the most heavily misclassified gestures. This enriching test resembles the scenario in which a tablet adapts to a new user. The results for two different volunteers in the enriching test show that the average accuracy on the enriched gesture improved from 55.59% and 65.58% to 90.66% and 95.95%, respectively. Compared with baseline work in MR-HGR, the investigation in this paper can be beneficial in promoting MR-HGR in future industrial applications and consumer electronics design.
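The fine-tuning idea, freezing a pretrained backbone and training only a new classification head on scarce samples, can be sketched in miniature. The random-projection "backbone" and synthetic six-class data below are purely illustrative stand-ins for the paper's ResNet/DenseNet models and TSV spectrograms:

```python
import numpy as np

# Miniature sketch of fine-tuning: the feature extractor is frozen
# (here, a fixed random projection standing in for pretrained CNN
# layers) and only a new softmax classification head is trained on a
# small synthetic dataset with 6 classes (one per gesture).

rng = np.random.default_rng(0)
n, d_in, d_feat, n_cls = 120, 64, 16, 6       # 6 gesture classes

W_backbone = rng.normal(size=(d_in, d_feat))  # frozen, "pretrained"
X = rng.normal(size=(n, d_in))                # small training set
F = np.tanh(X @ W_backbone)                   # frozen features

W_true = rng.normal(size=(d_feat, n_cls))     # ground-truth label rule
y = (F @ W_true).argmax(axis=1)
onehot = np.eye(n_cls)[y]

W_head = np.zeros((d_feat, n_cls))            # only the head is trained
for _ in range(500):
    logits = F @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)         # softmax probabilities
    W_head -= 0.5 * F.T @ (p - onehot) / n    # cross-entropy gradient step

train_acc = ((F @ W_head).argmax(axis=1) == y).mean()
```

In the paper's setting the same principle applies at scale: the convolutional layers keep their ImageNet-style weights and only the final layers are adapted to the radar spectrograms.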


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5421
Author(s):  
Yang Li ◽  
Yutong Liu ◽  
Yanping Wang ◽  
Yun Lin ◽  
Wenjie Shen

Compared with the commonly used lidar and visual sensors, millimeter-wave radar offers all-day, all-weather performance and more stable behavior across different scenarios. However, using millimeter-wave radar as the sensor for Simultaneous Localization and Mapping (SLAM) brings its own problems, such as sparse data, frequent outliers, and low precision, which reduce the accuracy of SLAM localization and mapping. This paper proposes a millimeter-wave radar SLAM assisted by the Radar Cross Section (RCS) feature of the target and an Inertial Measurement Unit (IMU). Using the IMU to combine consecutive radar scan point clouds into a "multi-scan" addresses the sparse-data problem. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to filter outliers from the radar data; the clustering incorporates the RCS feature of the target and uses the Mahalanobis distance to measure the similarity of radar points. Meanwhile, to mitigate the loss of SLAM positioning accuracy caused by the low precision of millimeter-wave radar data, an improved Correlative Scan Matching (CSM) method is proposed that matches the radar point cloud against a local submap of the global grid map. This "scan-to-map" point cloud matching achieves tight coupling of localization and mapping. Three groups of field data are collected to validate the proposed method both component-wise and as a whole. The experimental comparisons show that the proposed RCS- and IMU-assisted millimeter-wave radar SLAM achieves better accuracy and robustness across different scenarios.
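The RCS-aware outlier filtering can be sketched as DBSCAN over (x, y, RCS) points under a Mahalanobis distance. The per-axis scales in VI, the eps/min_pts settings, and the toy point cloud are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Minimal DBSCAN over radar points (x, y, RCS) with a Mahalanobis
# distance, mirroring the idea of folding the target's RCS into the
# clustering similarity. Points never reached from a core point keep
# the label -1 and are treated as outliers.

def dbscan_mahalanobis(points, VI, eps, min_pts):
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, VI, diff)   # squared distances
    neighbors = [np.flatnonzero(row <= eps ** 2) for row in d2]

    labels = np.full(len(points), -1)                  # -1 = outlier/noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue
        labels[i] = cluster                            # new core point
        stack = [i]
        while stack:
            j = stack.pop()
            if len(neighbors[j]) >= min_pts:           # only cores expand
                for k in neighbors[j]:
                    if labels[k] == -1:
                        labels[k] = cluster
                        stack.append(k)
        cluster += 1
    return labels

# Two tight targets plus one isolated high-RCS outlier.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0, 0, 10], 0.3, size=(20, 3)),
                 rng.normal([5, 5, 20], 0.3, size=(20, 3)),
                 [[2.5, -4.0, 40.0]]])
# Diagonal inverse covariance: assumed per-axis scales 0.5 m, 0.5 m, 5 dBsm.
VI = np.diag([1 / 0.5 ** 2, 1 / 0.5 ** 2, 1 / 5.0 ** 2])
labels = dbscan_mahalanobis(pts, VI, eps=3.0, min_pts=4)
```

A diagonal VI reduces the Mahalanobis distance to a per-axis rescaled Euclidean distance, which is the simplest way to keep the RCS axis from dominating the spatial axes.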


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1951
Author(s):  
Fahad Jibrin Abdu ◽  
Yixiong Zhang ◽  
Maozhong Fu ◽  
Yuhan Li ◽  
Zhenmiao Deng

The progress brought by deep learning over the last decade has inspired many research domains, such as radar signal processing and speech and audio recognition, to apply it to their respective problems. Most prominent deep learning models exploit data representations acquired with either lidar or camera sensors, leaving automotive radars rarely used, despite radars' vital potential in adverse weather conditions and their ability to measure an object's range and radial velocity simultaneously. Because radar signals have seen relatively little use so far, benchmark data have been scarce; recently, however, there has been growing interest in applying radar data as input to deep learning algorithms as more datasets are provided. To this end, this paper presents a survey of deep learning approaches that process radar signals to accomplish significant tasks in autonomous driving, such as detection and classification. We have organized the review by radar signal representation, as this is one of the critical choices when using radar data with deep learning models. Furthermore, we give an extensive review of recent deep learning-based multi-sensor fusion models that exploit radar signals and camera images for object detection. We then provide a summary of the available datasets containing radar data. Finally, we discuss the gaps and important innovations in the reviewed papers and highlight possible future research prospects.


Data in Brief ◽  
2020 ◽  
Vol 31 ◽  
pp. 105996
Author(s):  
Ennio Gambi ◽  
Gianluca Ciattaglia ◽  
Adelmo De Santis ◽  
Linda Senigagliesi
