A Robust Deep Learning Approach for Spatiotemporal Estimation of Satellite AOD and PM2.5

2020 ◽  
Vol 12 (2) ◽  
pp. 264 ◽  
Author(s):  
Lianfa Li

Accurate estimation of fine particulate matter with diameter ≤2.5 μm (PM2.5) at a high spatiotemporal resolution is crucial for the evaluation of its health effects. Previous studies face multiple challenges including limited ground measurements and availability of spatiotemporal covariates. Although the multiangle implementation of atmospheric correction (MAIAC) retrieves satellite aerosol optical depth (AOD) at a high spatiotemporal resolution, massive non-random missingness considerably limits its application in PM2.5 estimation. Here, a deep learning approach, i.e., bootstrap aggregating (bagging) of autoencoder-based residual deep networks, was developed to robustly impute MAIAC AOD and further estimate PM2.5 at a high spatial (1 km) and temporal (daily) resolution. The base model consisted of autoencoder-based residual networks in which residual connections were introduced to improve learning performance. Bagging of residual networks was used to generate ensemble predictions for better accuracy and uncertainty estimates. As a case study, the proposed approach was applied to impute daily satellite AOD and subsequently estimate daily PM2.5 in the Jing-Jin-Ji metropolitan region of China in 2015. The presented approach achieved competitive performance in AOD imputation (mean test R2: 0.96; mean test RMSE: 0.06) and PM2.5 estimation (test R2: 0.90; test RMSE: 22.3 μg/m³). In additional independent tests using ground AERONET AOD and PM2.5 measurements at the monitoring station of the U.S. Embassy in Beijing, this approach achieved high R2 (0.82–0.97). Compared with the state-of-the-art machine learning method, XGBoost, the proposed approach generated more reasonable spatial variation for predicted PM2.5 surfaces. Publicly available covariates used included meteorology, MERRA2 PBLH and AOD, coordinates, and elevation. Other covariates, such as cloud fractions or land use, were not used because they were unavailable. 
The results of validation and independent testing demonstrate the usefulness of the proposed approach in exposure assessment of PM2.5 using satellite AOD having massive missing values.
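The bagging scheme described above can be illustrated in a few lines. The sketch below is a toy numpy example with a simple linear base learner standing in for the authors' residual deep networks; all names and the base learner are illustrative, not the paper's implementation. It shows the two ingredients the abstract mentions: ensemble prediction (mean over bootstrap-trained models) and an uncertainty estimate (ensemble standard deviation).

```python
import numpy as np

def bagging_predict(fit_fn, predict_fn, X, y, X_new, n_models=10, seed=0):
    """Bootstrap-aggregated prediction with a simple uncertainty estimate.

    Each base model is trained on a bootstrap resample of (X, y);
    the ensemble mean is the prediction and the ensemble standard
    deviation serves as an uncertainty measure.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)          # sample with replacement
        model = fit_fn(X[idx], y[idx])
        preds.append(predict_fn(model, X_new))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy base learner: ordinary least squares line via np.polyfit.
fit = lambda X, y: np.polyfit(X, y, deg=1)
pred = lambda coef, X: np.polyval(coef, X)

X = np.linspace(0.0, 1.0, 50)
y = 2.0 * X + 1.0                                 # noiseless target line
mean, std = bagging_predict(fit, pred, X, y, np.array([0.5]))
```

On this noiseless toy data every bootstrap model recovers the same line, so the ensemble mean at x = 0.5 is 2.0 and the spread is near zero; with noisy data the spread widens where the ensemble disagrees.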

2021 ◽  
Author(s):  
Muhammad Sajid

Abstract Machine learning is proving successful across fields including medicine, automotive, planning, and engineering. In geoscience, ML has shown impressive results in seismic fault interpretation, advanced seismic attribute analysis, facies classification, and the extraction of geobodies such as channels, carbonates, and salt. One of the challenges in geoscience is the availability of labeled data, the production of which is one of the most time-consuming requirements of supervised deep learning. In this paper, an advanced learning approach is proposed for geoscience in which the machine observes seismic interpretation activities and learns simultaneously as the interpretation progresses. Initial testing showed that, with the proposed method combined with transfer learning, machine learning performance is highly effective, and the machine accurately predicts features that require only minor post-prediction filtering to be accepted as the optimal interpretation.


Lab on a Chip ◽  
2021 ◽  
Author(s):  
Xiaopeng Chen ◽  
Junyu Ping ◽  
Yixuan Sun ◽  
Chengqiang Yi ◽  
Sijian Liu ◽  
...  

Volumetric imaging of dynamic signals in a large, moving, and light-scattering specimen is extremely challenging, owing to the requirement on high spatiotemporal resolution and difficulty in obtaining high-contrast signals. Here...


2019 ◽  
Author(s):  
Jong-Hwan Jang ◽  
Junggu Choi ◽  
Hyun Woong Roh ◽  
Sang Joon Son ◽  
Chang Hyung Hong ◽  
...  

BACKGROUND Data collected by an actigraphy device worn on the wrist or waist can provide objective measurements for studies related to physical activity; however, some data may contain intervals where values are missing. In previous studies, statistical methods have been applied to impute missing values on the basis of statistical assumptions. Deep learning algorithms, however, can learn features from the data without any such assumptions and may outperform previous approaches in imputation tasks. OBJECTIVE The aim of this study was to impute missing values in data using a deep learning approach. METHODS To develop an imputation model for missing values in accelerometer-based actigraphy data, a denoising convolutional autoencoder was adopted. We trained and tested our deep learning–based imputation model with the National Health and Nutrition Examination Survey data set and validated it with the external Korea National Health and Nutrition Examination Survey and the Korean Chronic Cerebrovascular Disease Oriented Biobank data sets, which consist of daily records of activity counts. The partial root mean square error and partial mean absolute error of the imputed intervals (partial RMSE and partial MAE, respectively) were calculated using our deep learning–based imputation model (zero-inflated denoising convolutional autoencoder) as well as using other approaches (mean imputation, zero-inflated Poisson regression, and Bayesian regression). RESULTS The zero-inflated denoising convolutional autoencoder exhibited a partial RMSE of 839.3 counts and partial MAE of 431.1 counts, whereas mean imputation achieved a partial RMSE of 1053.2 counts and partial MAE of 545.4 counts, the zero-inflated Poisson regression model achieved a partial RMSE of 1255.6 counts and partial MAE of 508.6 counts, and Bayesian regression achieved a partial RMSE of 924.5 counts and partial MAE of 605.8 counts. 
CONCLUSIONS Our deep learning–based imputation model performed better than the other methods when imputing missing values in actigraphy data.
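The "partial" error metrics used above restrict the comparison to the originally missing (imputed) positions rather than the whole series. A minimal numpy sketch of this evaluation, with illustrative data (the function name and example values are mine, not from the study):

```python
import numpy as np

def partial_errors(y_true, y_imputed, missing_mask):
    """Partial RMSE and partial MAE: errors computed only over the
    positions that were originally missing and then imputed."""
    diff = y_imputed[missing_mask] - y_true[missing_mask]
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    mae = float(np.mean(np.abs(diff)))
    return rmse, mae

# Toy activity-count series with two masked-out (imputed) positions.
truth   = np.array([10.0, 0.0, 25.0, 40.0, 5.0])
imputed = np.array([10.0, 0.0, 20.0, 52.0, 5.0])   # model filled indices 2 and 3
mask    = np.array([False, False, True, True, False])
rmse, mae = partial_errors(truth, imputed, mask)
# errors on the imputed positions are -5 and +12,
# so partial MAE = 8.5 and partial RMSE = sqrt((25 + 144) / 2)
```

Positions that were observed all along (where the mask is False) do not affect either metric, which is what makes the comparison between imputation methods fair.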


10.2196/16113 ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. e16113
Author(s):  
Jong-Hwan Jang ◽  
Junggu Choi ◽  
Hyun Woong Roh ◽  
Sang Joon Son ◽  
Chang Hyung Hong ◽  
...  



2020 ◽  
Vol 54 (18) ◽  
pp. 11037-11047 ◽  
Author(s):  
Weeberb J. Requia ◽  
Qian Di ◽  
Rachel Silvern ◽  
James T. Kelly ◽  
Petros Koutrakis ◽  
...  

2019 ◽  
Vol 32 (17) ◽  
pp. 13233-13244 ◽  
Author(s):  
Adrián Sánchez-Morales ◽  
José-Luis Sancho-Gómez ◽  
Juan-Antonio Martínez-García ◽  
Aníbal R. Figueiras-Vidal

2018 ◽  
Vol 6 (3) ◽  
pp. 122-126
Author(s):  
Mohammed Ibrahim Khan ◽  
Akansha Singh ◽  
Anand Handa ◽  
...  

2020 ◽  
Vol 17 (3) ◽  
pp. 299-305 ◽  
Author(s):  
Riaz Ahmad ◽  
Saeeda Naz ◽  
Muhammad Afzal ◽  
Sheikh Rashid ◽  
Marcus Liwicki ◽  
...  

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT), which consists of complex patterns of handwritten Arabic text-lines. The paper contributes in three main aspects: (1) pre-processing, (2) a deep learning based approach, and (3) data augmentation. The pre-processing step includes pruning extra white space and de-skewing skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and other fine details. Combining data augmentation with the deep learning approach yields a promising improvement, raising the Character Recognition (CR) rate from a baseline of 75.08% to 80.02%.
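In the MDLSTM+CTC pipeline above, the network emits per-frame label distributions, and CTC decoding turns the frame-wise label sequence into a character string. A minimal sketch of greedy CTC decoding (collapse consecutive repeats, then drop blanks); this is the standard decoding rule, not the paper's specific code, and the label values are illustrative:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Greedy CTC decoding: collapse consecutive repeated labels,
    then remove blank symbols."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out

# Frames "b b - a a - b" (blank = 0) decode to the sequence [2, 1, 2]:
decoded = ctc_greedy_decode([2, 2, 0, 1, 1, 0, 2])
```

The blank symbol is what lets CTC represent genuinely doubled characters: without the intervening blank, the two trailing `2` frames would collapse into one.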

