A Feature Integrated Saliency Estimation Model for Omnidirectional Immersive Images

Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1538 ◽  
Author(s):  
Pramit Mazumdar ◽  
Kamal Lamichhane ◽  
Marco Carli ◽  
Federica Battisti

Omnidirectional, or 360°, cameras are able to capture the surrounding space, thus providing an immersive experience when the acquired data is viewed using head mounted displays. Such an immersive experience inherently generates an illusion of being in a virtual environment. The popularity of 360° media has been growing in recent years. However, due to the large amount of data, processing and transmission pose several challenges. To this aim, efforts are being devoted to the identification of regions that can be used for compressing 360° images while guaranteeing the immersive feeling. In this contribution, we present a saliency estimation model that considers the spherical properties of the images. The proposed approach first divides the 360° image into multiple patches that replicate the positions (viewports) looked at by a subject while viewing a 360° image using a head mounted display. Next, a set of low-level features able to depict various properties of an image scene is extracted from each patch. The extracted features are combined to estimate the 360° saliency map. Finally, the estimated map is refined to account for the bias induced during image exploration and for illumination variation, yielding the final saliency map. The proposed method is evaluated using a benchmark 360° image dataset and is compared with two baselines and eight state-of-the-art approaches for saliency estimation. The obtained results show that the proposed model outperforms existing saliency estimation models.
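The viewport-splitting step can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grid size, patch size, and plain equirectangular cropping with longitude wrap-around are all assumptions.

```python
import numpy as np

def viewport_patches(equi, n_lon=8, n_lat=4, size=64):
    """Crop viewport-like patches from an equirectangular image,
    wrapping around the horizontal (longitude) axis."""
    h, w = equi.shape[:2]
    patches = []
    for i in range(n_lat):
        cy = int((i + 0.5) * h / n_lat)
        for j in range(n_lon):
            cx = int((j + 0.5) * w / n_lon)
            rows = np.clip(np.arange(cy - size // 2, cy + size // 2), 0, h - 1)
            cols = np.arange(cx - size // 2, cx + size // 2) % w  # wrap longitude
            patches.append(equi[np.ix_(rows, cols)])
    return patches

# e.g. a 256x512 equirectangular image yields 4 * 8 = 32 patches of 64x64
patches = viewport_patches(np.zeros((256, 512)))
```

Per-patch features would then be extracted and fused into a spherical saliency map.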

Author(s):  
Huao Li ◽  
Michael Lewis ◽  
Katia Sycara

Trust is an important factor in the interaction between humans and automation that can mediate the reliance of human operators. In this work, we evaluate a computational model of human trust in swarm systems, based on Sheridan's (2019) modified Kalman estimation model, using existing experimental data (Nam, Li, Li, Lewis, & Sycara, 2018). Results show that our Kalman filter model outperforms existing state-of-the-art alternatives, including dynamic Bayesian networks and inverse reinforcement learning. This work is novel in that: 1) the Kalman estimator is the first computational model to formulate human trust evolution as a combination of open-loop trust anticipation and closed-loop trust feedback; 2) the proposed model considers the operator's cognitive time lag between perceiving and processing the system display; and 3) the proposed model provides a personalized model for each individual and achieves a better fit than state-of-the-art alternatives.
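A scalar Kalman filter of this open-loop/closed-loop form can be sketched as follows; the noise parameters `q` and `r` and the initial trust `t0` are illustrative assumptions, not values from the paper:

```python
def kalman_trust(observations, q=0.01, r=0.1, t0=0.5, p0=1.0):
    """Scalar Kalman filter for trust: open-loop anticipation (predict)
    followed by closed-loop correction from noisy performance feedback."""
    t, p = t0, p0
    trace = []
    for z in observations:
        p = p + q            # predict: anticipation adds uncertainty
        k = p / (p + r)      # Kalman gain
        t = t + k * (z - t)  # update: feedback corrects the estimate
        p = (1 - k) * p
        trace.append(t)
    return trace

# consistently good swarm performance drives estimated trust upward
trace = kalman_trust([1.0] * 20)
```

A cognitive time lag could be modeled by shifting the observation sequence relative to the display events.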


2021 ◽  
pp. 1-16
Author(s):  
Ibtissem Gasmi ◽  
Mohamed Walid Azizi ◽  
Hassina Seridi-Bouchelaghem ◽  
Nabiha Azizi ◽  
Samir Brahim Belhaouari

Context-Aware Recommender System (CARS) suggests more relevant services by adapting them to the user’s specific context situation. Nevertheless, the use of many contextual factors can increase data sparsity, while few context parameters fail to introduce the contextual effects into recommendations. Moreover, several CARSs are based on similarity algorithms, such as cosine and Pearson correlation coefficients, which are not very effective on sparse datasets. This paper presents a context-aware model to integrate contextual factors into the prediction process when there are insufficient co-rated items. The proposed algorithm uses Latent Dirichlet Allocation (LDA) to learn the latent interests of users from the textual descriptions of items. Then, it integrates both the explicit contextual factors and their degree of importance into the prediction process by introducing a weighting function, and a Particle Swarm Optimization (PSO) algorithm is employed to learn and optimize the weights of these features. Results on the MovieLens 1M dataset show that the proposed model achieves an F-measure of 45.51% with a precision of 68.64%. Furthermore, the improvements in MAE and RMSE reach 41.63% and 39.69%, respectively, compared with state-of-the-art techniques.
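The weighting of explicit contextual factors in the prediction step can be sketched as follows; the mixing coefficient `alpha` and the fixed weights stand in for the PSO-learned values and are purely illustrative:

```python
def contextual_prediction(base_pred, context_scores, weights, alpha=0.7):
    """Blend a base rating prediction with explicit contextual factors,
    each weighted by its learned importance (PSO-optimized in the paper;
    fixed here for illustration). alpha is an assumed mixing coefficient."""
    assert len(context_scores) == len(weights)
    total = sum(weights)
    ctx = sum(w * s for w, s in zip(weights, context_scores)) / total
    return alpha * base_pred + (1 - alpha) * ctx

# a base prediction of 4.0 adjusted by two context factors of equal weight
rating = contextual_prediction(4.0, [3.0, 5.0], [1.0, 1.0])
```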


2021 ◽  
Vol 11 (8) ◽  
pp. 3636
Author(s):  
Faria Zarin Subah ◽  
Kaushik Deb ◽  
Pranab Kumar Dhar ◽  
Takeshi Koshiba

Autism spectrum disorder (ASD) is a complex neurodevelopmental disorder. Most existing methods utilize functional magnetic resonance imaging (fMRI) to detect ASD on very limited datasets, which yields high accuracy but poor generalization. To overcome this limitation and enhance the performance of the automated autism diagnosis model, in this paper we propose an ASD detection model using functional connectivity features of resting-state fMRI data. Our proposed model utilizes two commonly used brain atlases, Craddock 200 (CC200) and Automated Anatomical Labelling (AAL), and two less commonly used atlases, Bootstrap Analysis of Stable Clusters (BASC) and Power. A deep neural network (DNN) classifier is used to perform the classification task. Simulation results indicate that the proposed model outperforms state-of-the-art methods in terms of accuracy. The mean accuracy of the proposed model was 88%, whereas the mean accuracy of the state-of-the-art methods ranged from 67% to 85%. The sensitivity, F1-score, and area under the receiver operating characteristic curve (AUC) of the proposed model were 90%, 87%, and 96%, respectively. Comparative analysis of various scoring strategies shows the superiority of the BASC atlas over the other aforementioned atlases in classifying ASD and control subjects.
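A common way to build functional connectivity features from atlas ROI time series, which a DNN classifier would then consume, is to vectorize the upper triangle of the ROI-ROI correlation matrix. A minimal sketch follows; the use of Pearson correlation as the connectivity measure is an assumption, not a detail stated in the abstract:

```python
import numpy as np

def connectivity_features(timeseries):
    """timeseries: (n_roi, n_timepoints) array of atlas ROI signals.
    Returns the upper triangle of the Pearson correlation matrix
    (diagonal excluded) as a flat feature vector."""
    corr = np.corrcoef(timeseries)        # (n_roi, n_roi)
    iu = np.triu_indices_from(corr, k=1)  # strict upper triangle
    return corr[iu]

rng = np.random.default_rng(0)
# e.g. the CC200 atlas gives 200 ROIs -> 200 * 199 / 2 = 19900 features
feats = connectivity_features(rng.normal(size=(200, 120)))
```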


Author(s):  
Mingliang Xu ◽  
Qingfeng Li ◽  
Jianwei Niu ◽  
Hao Su ◽  
Xiting Liu ◽  
...  

Quick response (QR) codes are usually scanned in different environments, so they must be robust to variations in illumination, scale, coverage, and camera angle. Aesthetic QR codes improve visual quality, but subtle changes in their appearance may cause scanning failure. In this article, a new method to generate scanning-robust aesthetic QR codes is proposed, based on a module-based scanning probability estimation model that can effectively balance the tradeoff between visual quality and scanning robustness. Our method locally adjusts the luminance of each module by estimating the probability of successful sampling. The approach adopts a hierarchical, coarse-to-fine strategy to enhance the visual quality of aesthetic QR codes, sequentially generating three codes: a binary aesthetic QR code, a grayscale aesthetic QR code, and the final color aesthetic QR code. Our approach can also be used to create QR codes with different visual styles by adjusting some initialization parameters. User surveys and decoding experiments were used to evaluate our method against state-of-the-art algorithms, indicating that the proposed approach performs excellently in terms of both visual quality and scanning robustness.
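The per-module luminance adjustment can be sketched as follows. The threshold and step size are illustrative assumptions, and the actual model estimates the sampling probability from the module's content rather than taking it as an input:

```python
def adjust_module(luminance, target_bit, p_ok, threshold=0.9, step=0.1):
    """Nudge a module's luminance toward its required polarity until the
    estimated successful-sampling probability p_ok clears the threshold.
    Luminance is in [0, 1]; bit 1 = dark module, bit 0 = light module."""
    if p_ok >= threshold:
        return luminance  # already robust; keep the aesthetic value
    if target_bit == 1:
        return max(0.0, luminance - step)  # darken toward black
    return min(1.0, luminance + step)      # lighten toward white

# a risky dark module gets darker; a safe one is left untouched
darker = adjust_module(0.5, 1, p_ok=0.5)
kept = adjust_module(0.5, 1, p_ok=0.95)
```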


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3527
Author(s):  
Melanija Vezočnik ◽  
Roman Kamnik ◽  
Matjaz B. Juric

Inertial sensor-based step length estimation has become increasingly important with the emergence of pedestrian-dead-reckoning-based (PDR-based) indoor positioning. So far, many refined step length estimation models have been proposed to overcome inaccuracy in estimating the distance walked, yet the kinematics of the human body during walking and actual step lengths are rarely used in their derivation. Our paper presents a new step length estimation model that utilizes acceleration magnitude. To the best of our knowledge, we are the first to employ principal component analysis (PCA) to characterize the experimental data for the derivation of the model. These data were collected from anatomical landmarks on the human body during walking using a highly accurate optical measurement system. We evaluated the performance of the proposed model for four typical smartphone positions during long-term human walking and obtained promising results: the proposed model outperformed all acceleration-based models selected for the comparison, producing an overall mean absolute stride length estimation error of 6.44 cm. The proposed model was also the least affected by walking speed and smartphone position among the acceleration-based models and is unaffected by smartphone orientation. Therefore, the proposed model can be used in PDR-based indoor positioning, with the important advantage that no special care regarding orientation is needed when attaching the smartphone to a particular body segment. All the sensory data acquired by smartphones that we utilized for evaluation are publicly available and include more than 10 h of walking measurements.
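As a point of reference for acceleration-magnitude-based models of this kind, the classic Weinberg estimator uses only the acceleration range within a detected step. This is a well-known baseline of the family being compared, not the model proposed in the paper, and the gain `k` is user-calibrated:

```python
def weinberg_step_length(acc_magnitudes, k=0.5):
    """Weinberg step length model: length ~ k * (a_max - a_min)^(1/4),
    computed over one detected step from the acceleration magnitude only,
    which makes it independent of sensor orientation."""
    a_max, a_min = max(acc_magnitudes), min(acc_magnitudes)
    return k * (a_max - a_min) ** 0.25
```

Because only the magnitude is used, such models share the orientation-free property the abstract highlights.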


Author(s):  
Masoumeh Zareapoor ◽  
Jie Yang

Image-to-image translation aims to learn a mapping from a source domain to a target domain. However, three main challenges are associated with this problem and need to be dealt with: the lack of paired datasets, multimodality, and diversity. Convolutional neural networks (CNNs), despite performing well in many computer vision tasks, fail to capture the hierarchy of spatial relationships between different parts of an object and thus do not form the ideal representative model we look for. This article presents a new variation of generative models that aims to remedy this problem. We use a trainable transformer, which explicitly allows spatial manipulation of the data during training. This differentiable module can be augmented into the convolutional layers of the generative model and allows the generated distributions to be freely altered for image-to-image translation. To reap the benefits of the proposed module within a generative model, our architecture incorporates a new loss function that facilitates effective end-to-end generative learning for image-to-image translation. The proposed model is evaluated through comprehensive experiments on image synthesis and image-to-image translation, along with comparisons with several state-of-the-art algorithms.


2021 ◽  
Vol 11 (12) ◽  
pp. 5383
Author(s):  
Huachen Gao ◽  
Xiaoyu Liu ◽  
Meixia Qu ◽  
Shijie Huang

In recent studies, self-supervised learning methods have been explored for monocular depth estimation. They minimize the image reconstruction loss, instead of depth information, as the supervised signal. However, existing methods usually assume that corresponding points in different views have the same color, which leads to unreliable unsupervised signals and ultimately damages the reconstruction loss during training. Meanwhile, in low-texture regions, the disparity of pixels cannot be predicted correctly because few features can be extracted. To solve these issues, we propose a network, PDANet, that integrates perceptual consistency and data augmentation consistency, which are more reliable unsupervised signals, into a regular unsupervised depth estimation model. Specifically, we apply a reliable data augmentation mechanism to minimize the loss between the disparity maps generated from the original image and the augmented image, respectively, which enhances the robustness of the prediction to color fluctuation. At the same time, we aggregate the features of different layers extracted by a pre-trained VGG16 network to explore the higher-level perceptual differences between the input image and the generated one. Ablation studies demonstrate the effectiveness of each component, and PDANet shows high-quality depth estimation results on the KITTI benchmark, improving the state-of-the-art absolute relative error for depth estimation from 0.114 to 0.084.


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Hai Wang ◽  
Lei Dai ◽  
Yingfeng Cai ◽  
Long Chen ◽  
Yong Zhang

Traditional salient object detection models are divided into several classes based on low-level features and the contrast between pixels. In this paper, we propose a model based on a multilevel deep pyramid (MLDP), which fuses multiple features at different levels. Firstly, the MLDP uses the original image as the input to a VGG16 model to extract high-level features and form an initial saliency map. Next, the MLDP further extracts high-level features to form a saliency map based on a deep pyramid. Then, the MLDP obtains a saliency map fused with superpixels by extracting low-level features. After that, the MLDP applies background noise filtering to the superpixel-fused saliency map in order to filter out the interference of background noise and form a foreground-based saliency map. Lastly, the MLDP combines the superpixel-fused saliency map with the foreground-based saliency map, which results in the final saliency map. The MLDP is not limited to low-level features; it fuses multiple features and achieves good results when extracting salient targets. As shown in our experiment section, the MLDP outperforms seven other state-of-the-art models across three public saliency datasets. Therefore, the MLDP has superiority and wide applicability in the extraction of salient targets.
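The final fusion step, combining the superpixel-fused map with the foreground-based map, can be sketched as a normalized weighted average; the min-max normalization and the equal default weights are assumptions for illustration:

```python
import numpy as np

def fuse_saliency(maps, weights=None):
    """Min-max normalize each saliency map to [0, 1], then take a
    weighted average: a simple late-fusion scheme for merging maps
    produced by different stages of a pipeline."""
    maps = [np.asarray(m, dtype=float) for m in maps]
    norm = []
    for m in maps:
        lo, hi = m.min(), m.max()
        norm.append((m - lo) / (hi - lo) if hi > lo else np.zeros_like(m))
    weights = weights or [1.0 / len(norm)] * len(norm)
    return sum(w * m for w, m in zip(weights, norm))

superpixel_map = np.array([[0.0, 1.0], [2.0, 3.0]])
foreground_map = np.array([[3.0, 2.0], [1.0, 0.0]])
final_map = fuse_saliency([superpixel_map, foreground_map])
```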


Author(s):  
Yinfei Yang ◽  
Gustavo Hernandez Abrego ◽  
Steve Yuan ◽  
Mandy Guo ◽  
Qinlan Shen ◽  
...  

In this paper, we present an approach to learn multilingual sentence embeddings using a bi-directional dual-encoder with additive margin softmax. The embeddings are able to achieve state-of-the-art results on the United Nations (UN) parallel corpus retrieval task. In all the languages tested, the system achieves P@1 of 86% or higher. We use pairs retrieved by our approach to train NMT models that achieve similar performance to models trained on gold pairs. We explore simple document-level embeddings constructed by averaging our sentence embeddings. On the UN document-level retrieval task, document embeddings achieve around 97% on P@1 for all experimented language pairs. Lastly, we evaluate the proposed model on the BUCC mining task. The learned embeddings with raw cosine similarity scores achieve competitive results compared to current state-of-the-art models, and with a second-stage scorer we achieve a new state-of-the-art level on this task.
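The additive margin applied to the dual-encoder similarity matrix can be sketched as follows; the margin value is an illustrative assumption. During training, the margin is subtracted from the positive (diagonal) pairs before the softmax, which forces true translations to score above hard negatives by at least that margin:

```python
import numpy as np

def margin_scores(src_emb, tgt_emb, margin=0.3):
    """Cosine similarity matrix for a batch of source/target sentence
    embeddings, with an additive margin subtracted from the matching
    (diagonal) pairs, as in additive margin softmax."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    scores = s @ t.T                               # cosine similarities
    scores[np.diag_indices_from(scores)] -= margin  # penalize positives
    return scores
```

At retrieval time the raw cosine scores (without the margin) are used, matching the BUCC setup described above.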


Author(s):  
Kaixuan Chen ◽  
Lina Yao ◽  
Dalin Zhang ◽  
Bin Guo ◽  
Zhiwen Yu

Multi-modality is an important feature of sensor-based activity recognition. In this work, we consider two inherent characteristics of human activities: the spatially and temporally varying salience of features, and the relations between activities and the corresponding body part motions. Based on these, we propose a multi-agent spatial-temporal attention model. The spatial-temporal attention mechanism helps intelligently select informative modalities and their active periods, and the multiple agents in the proposed model represent activities as collective motions across body parts by independently selecting the modalities associated with single motions. With a joint recognition goal, the agents share gained information and coordinate their selection policies to learn the optimal recognition model. Experimental results on four real-world datasets demonstrate that the proposed model outperforms state-of-the-art methods.
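The modality-selection idea can be sketched as a softmax attention over per-modality salience scores; this is a minimal illustration of attention-based reweighting, not the multi-agent reinforcement learning formulation used in the paper:

```python
import numpy as np

def modality_attention(features, scores):
    """features: (n_modalities, feat_dim) array; scores: (n_modalities,)
    salience scores. Softmax the scores into attention weights and
    return the weighted sum of modality features plus the weights."""
    w = np.exp(scores - scores.max())  # stable softmax
    w = w / w.sum()
    return (features * w[:, None]).sum(axis=0), w

# three modalities, four-dimensional features, equal salience
fused, weights = modality_attention(np.ones((3, 4)), np.zeros(3))
```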

