Airborne SAR on circular trajectories to reduce layover and shadow effects of urban scenes

2016 ◽  
Author(s):  
Stephan Palm ◽  
Rainer Sommer ◽  
Nils Pohl ◽  
Uwe Stilla

Airborne SAR on small and flexible platforms enables the assessment of local damage after natural disasters and is independent of both weather and daylight. Processing circular flight trajectories can further improve the reconstruction of target scenes, especially in complex urban scenarios, as shadowing and foreshortening effects can be reduced by multiple views from different aspect angles (hyper- or full-aspect). A dataset collected with the Miranda 35 GHz radar system with 1 GHz bandwidth, mounted on a small ultralight aircraft flying a circular trajectory over an urban scene, was processed using a time-domain approach. The SAR processing chain and the effects of the navigation data for such highly nonlinear trajectories and unstable platforms are described. The generated SAR image stack over the entire trajectory consists of 240 individual SAR images, each visualizing the scene from a slightly different aspect angle. First results for the fusion of multiple aspect views into a single image with reduced shadow areas, and the possibility of finding hidden targets, are demonstrated. Further potential of such datasets, for example moving target indication, is discussed.
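The abstract does not specify how the 240 single-aspect images are fused. A common baseline for shadow reduction in multi-aspect SAR is per-pixel fusion of co-registered amplitude images, keeping the strongest return per pixel so that a region shadowed from one aspect is filled from another aspect that illuminates it. A minimal sketch in NumPy (the function name and the `"max"`/`"mean"` options are our assumptions, not the authors' method):

```python
import numpy as np

def fuse_aspect_stack(stack, method="max"):
    """Fuse a stack of co-registered single-aspect SAR amplitude images.

    stack  : array of shape (n_aspects, rows, cols), amplitude values
    method : "max" keeps the strongest return per pixel (fills shadow
             regions illuminated from at least one aspect);
             "mean" averages all aspects (reduces speckle instead).
    """
    stack = np.asarray(stack, dtype=float)
    if method == "max":
        return stack.max(axis=0)
    if method == "mean":
        return stack.mean(axis=0)
    raise ValueError(f"unknown fusion method: {method}")

# Toy example: two aspects, each with a shadowed (zero) region
# that the other aspect illuminates.
a = np.array([[1.0, 0.0], [0.8, 0.0]])
b = np.array([[0.0, 0.9], [0.0, 0.7]])
fused = fuse_aspect_stack([a, b], method="max")
```

With the `"max"` rule, no pixel of the fused toy image remains in shadow, which is the qualitative effect described for the full 240-image stack.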


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2803
Author(s):  
Rabeea Jaffari ◽  
Manzoor Ahmed Hashmani ◽  
Constantino Carlos Reyes-Aldasoro

The segmentation of power lines (PLs) from aerial images is a crucial task for the safe navigation of unmanned aerial vehicles (UAVs) operating at low altitudes. Despite the advances in deep learning-based approaches for PL segmentation, these models are still vulnerable to the class imbalance present in the data. The PLs occupy only a minimal portion (1–5%) of the aerial images as compared to the background region (95–99%). Generally, this class imbalance problem is addressed via the use of PL-specific detectors in conjunction with the popular balanced binary cross-entropy (BBCE) loss function. However, these PL-specific detectors do not work outside their application areas, and the BBCE loss requires hyperparameter tuning of the class-wise weights, which is not trivial. Moreover, the BBCE loss results in low dice scores and precision values and thus fails to achieve an optimal trade-off between dice scores, model accuracy, and precision–recall values. In this work, we propose a generalized focal loss function based on the Matthews correlation coefficient (MCC), or Phi coefficient, to address the class imbalance problem in PL segmentation while utilizing a generic deep segmentation architecture. We evaluate our loss function by improving the vanilla U-Net model with an additional convolutional auxiliary classifier head (ACU-Net) for better learning and faster model convergence. Evaluation on two PL datasets, namely the Mendeley Power Line Dataset and the Power Line Dataset of Urban Scenes (PLDU), where PLs occupy around 1% and 2% of the aerial image area, respectively, reveals that our proposed loss function outperforms the popular BBCE loss by 16% in PL dice scores on both datasets, by 19% in precision and false detection rate (FDR) values for the Mendeley PL dataset, and by 15% in precision and FDR values for the PLDU, with a minor degradation in the accuracy and recall values.
Moreover, our proposed ACU-Net outperforms the baseline vanilla U-Net on the characteristic evaluation parameters in the range of 1–10% for both PL datasets. Thus, our proposed loss function with ACU-Net achieves an optimal trade-off for the characteristic evaluation parameters without any bells and whistles. Our code is available on GitHub.
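The abstract does not reproduce the loss itself; the standard way to make the MCC differentiable is to replace the hard counts (TP, TN, FP, FN) with soft counts computed from the predicted probabilities. A minimal NumPy sketch of this base soft-MCC term (the focal modulation the paper adds on top is omitted here, and `mcc_loss` is our naming):

```python
import numpy as np

def mcc_loss(y_true, y_pred, eps=1e-7):
    """Soft (differentiable) MCC loss for binary segmentation.

    y_true : ground-truth mask with values in {0, 1}
    y_pred : predicted foreground probabilities in [0, 1]
    Returns 1 - soft MCC, so minimizing the loss maximizes the MCC.
    """
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    # Soft confusion-matrix counts over all pixels.
    tp = np.sum(y_pred * y_true)
    tn = np.sum((1.0 - y_pred) * (1.0 - y_true))
    fp = np.sum(y_pred * (1.0 - y_true))
    fn = np.sum((1.0 - y_pred) * y_true)
    numerator = tp * tn - fp * fn
    denominator = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps
    return 1.0 - numerator / denominator
```

Because every count is normalized inside the MCC itself, the loss needs no class-wise weight tuning, which is the practical advantage over BBCE that the abstract highlights.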


2021 ◽  
Vol 13 (11) ◽  
pp. 6464
Author(s):  
Chris Neale ◽  
Stephanie Lopez ◽  
Jenny Roe

It is well evidenced that exposure to natural environments increases psychological restoration as compared to non-natural settings, increasing our ability to recover from stress, low mood, and mental fatigue, and encouraging positive social interactions that cultivate social cohesion. However, very few studies have explored how the inclusion of people within a given environment (either urban or natural settings) affects restorative health outcomes. We present three laboratory-based studies examining, first, the effect of nature vs. urban scenes, and second, nature 'with' vs. 'without' people, using static and moving imagery, on psychological restoration and social wellbeing. Our third study explores differences between urban and natural settings, both with vs. without people, using video stimuli to understand potential restorative and social wellbeing effects. Outcome measures across all studies included perceived social belonging, loneliness, subjective mood, and perceived restorativeness. Studies 1 and 2 both used a within-group, randomized crossover design. Study 1 (n = 45, mean age = 20.7) explored static imagery of environmental conditions without people; findings were consistent with restorative theories, showing a positive effect of nature exposure on all outcome measures. Study 2 compared nature scenes with vs. without people (n = 47, mean age = 20.9); we found no significant differences on our outcome measures between the two social scenarios, though both generated positive wellbeing outcomes. Study 3, conducted on Amazon Mechanical Turk, employed an independent-group design with subjects randomly assigned to one of four conditions: an urban vs. nature setting, with vs. without people.
We explored the effect of moving imagery on psychological restoration (n = 200, mean age = 35.7). Our findings showed no impact on belonging, loneliness, or mood between conditions, but did show that, regardless of the inclusion of people, the nature settings were more restorative than the urban settings. There were no differences in psychological restoration between nature conditions with vs. without people. We discuss the implications for restorative environment research exploring social-environmental interactions.



Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 970
Author(s):  
Miguel Ángel Martínez-Domingo ◽  
Juan Luis Nieves ◽  
Eva M. Valero

Saliency prediction is an important and challenging task within the computer vision community. Many models exist that try to predict the salient regions of a scene from its RGB image values. New models continue to be developed, and spectral imaging techniques may overcome the limitations found when using RGB images. However, the experimental study of such models based on spectral images is difficult because of the lack of available data to work with. This article presents the first eight-channel multispectral image database of outdoor urban scenes, together with gaze data recorded with an eye tracker while several observers performed different visualization tasks. In addition, the information from this database is used to study whether the complexity of the images has an impact on the saliency maps retrieved from the observers. Results show that more complex images do not correlate with larger differences in the obtained saliency maps.
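The abstract does not name the comparison metric used for the saliency maps; a standard choice in the saliency-evaluation literature is the Pearson correlation coefficient (CC) between two maps. A minimal NumPy sketch (the function name `saliency_cc` is ours, not from the article):

```python
import numpy as np

def saliency_cc(map_a, map_b):
    """Pearson correlation coefficient (CC) between two saliency maps.

    Both maps are flattened and standardized (zero mean, unit std);
    the CC is then the mean of their elementwise product.
    Returns a value in [-1, 1]; 1 means identical spatial structure.
    """
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))
```

A study like the one described could compute this pairwise between observers' fixation-density maps per image and then test whether image complexity correlates with the spread of these CC values.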

