gallery image
Recently Published Documents


TOTAL DOCUMENTS: 13 (last five years: 1)

H-INDEX: 3 (last five years: 0)

Author(s): V. V. Kniaz, P. Moshkantseva

Abstract. Object re-identification (ReID) is the task of matching a given object with its image captured in a different environment. The input for a ReID method consists of two sets of images: the probe set contains one or more images of the object that must be identified in the new environment, and the gallery set contains images that may include the object from the probe image. The difficulty of ReID arises from differences in the object's appearance between the probe and gallery sets, which may originate from changes in illumination or in the viewpoints of the multiple cameras that capture the two sets. This paper presents ThermalReID, a deep learning framework for cross-modality object ReID in thermal images, intended to provide continuous object detection and re-identification while monitoring a region from a UAV. Given a probe image captured in the visible range, ThermalReID detects objects in a thermal image and performs the ReID. We evaluate ThermalReID and two modern baselines using the IoU and mAP metrics for object detection, and cumulative matching characteristic (CMC) curves with the normalized area under the curve (nAUC) for ReID, on four object classes.
The evaluation using real and synthetic data demonstrated encouraging results: ThermalReID successfully re-identifies objects in the thermal gallery image from the color probe image and outperforms the existing baselines in ReID accuracy. Furthermore, fusing semantic data with the input thermal gallery image increases the object detection and localization scores.
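The CMC and nAUC metrics mentioned above can be computed directly from a probe-by-gallery distance matrix. The sketch below is a minimal illustration; the identity labels and distances are hypothetical toy values, not data from the paper:

```python
import numpy as np

def cmc_curve(dist, probe_ids, gallery_ids):
    """Cumulative Matching Characteristic: cmc[k] is the fraction of probes
    whose correct gallery match appears within the top-(k+1) ranked images."""
    n_probe, n_gallery = dist.shape
    hits = np.zeros(n_gallery)
    for i in range(n_probe):
        order = np.argsort(dist[i])                      # best match first
        rank = np.where(gallery_ids[order] == probe_ids[i])[0][0]
        hits[rank:] += 1                                 # counted at this rank and beyond
    return hits / n_probe

def nauc(cmc):
    """Normalized area under the CMC curve (1.0 = perfect ranking)."""
    return float(cmc.mean())

# Toy example: 3 probes against 4 gallery images, identities as integers.
dist = np.array([[0.10, 0.90, 0.80, 0.70],
                 [0.60, 0.20, 0.50, 0.90],
                 [0.40, 0.05, 0.90, 0.10]])
probe_ids = np.array([0, 1, 2])
gallery_ids = np.array([0, 1, 3, 2])
cmc = cmc_curve(dist, probe_ids, gallery_ids)
print(cmc[0])        # rank-1 accuracy
print(nauc(cmc))
```

In this toy case the third probe's correct match is ranked second, so the curve starts at 2/3 and reaches 1.0 at rank 2.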


Author(s): Yaqing Zhang, Xi Li, Zhongfei Zhang

Person re-identification (Re-ID) is typically cast as the problem of semantic representation and alignment, which requires precisely discovering and modeling the inherent spatial structure information on person images. Motivated by this observation, we propose a Key-Value Memory Matching Network (KVM-MN) model that consists of key-value memory representation and key-value co-attention matching. The proposed KVM-MN model is capable of building an effective local-position-aware person representation that encodes the spatial feature information in the form of multi-head key-value memory. Furthermore, the proposed KVM-MN model makes use of multi-head co-attention to automatically learn a number of cross-person-matching patterns, resulting in more robust and interpretable matching results. Finally, we build a setwise learning mechanism that implements a more generalized query-to-gallery-image-set learning procedure. Experimental results demonstrate the effectiveness of the proposed model against the state-of-the-art.
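The key-value co-attention matching described above can be illustrated with a minimal single-head sketch in NumPy. The slot count, feature dimension, and the mean-cosine scoring rule here are illustrative assumptions, not the actual KVM-MN architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kv_attention_match(q_keys, q_vals, g_keys, g_vals):
    """Cross-attend query memory slots to gallery memory slots via key
    similarity, then score with the mean cosine between query values and
    the attention-aligned gallery values (higher = more likely a match)."""
    attn = softmax(q_keys @ g_keys.T / np.sqrt(q_keys.shape[1]))
    aligned = attn @ g_vals                   # gallery values aligned to query slots
    num = (q_vals * aligned).sum(axis=1)
    den = np.linalg.norm(q_vals, axis=1) * np.linalg.norm(aligned, axis=1) + 1e-8
    return float((num / den).mean())

rng = np.random.default_rng(0)
feat = rng.normal(size=(6, 16))               # 6 memory slots, 16-dim keys/values
other = rng.normal(size=(6, 16))              # an unrelated person's slots
same = kv_attention_match(feat, feat, feat, feat)
diff = kv_attention_match(feat, feat, other, other)
print(same, diff)
```

A matching pair of memory sets scores close to 1, while unrelated sets score near 0, which is the property the co-attention matching exploits.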


Author(s): Rishav Singh, Ritika Singh, Aakriti Acharya, Shrikant Tiwari, Hari Om

Recently, many face recognition systems have been designed to identify individuals in semi-controlled environments where pose and illumination are controlled. In the case of newborns, however, it is not easy to capture photographs with consistent pose and illumination. In this paper, a hybrid approach combining Speeded Up Robust Features (SURF) and Local Binary Patterns (LBP) is proposed for newborn identification. The experiment is performed with a single gallery image per subject and yields improved results: the proposed method achieves 97.18% accuracy at Rank 5, an 8% improvement over LBP and an 8.6% improvement over SURF.
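The LBP half of such a hybrid can be sketched as follows. This is the textbook 3x3 LBP operator, not necessarily the exact variant or parameters used in the paper:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code comparing its eight neighbours to the centre pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, usable as a face descriptor."""
    h = np.bincount(lbp_codes(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()

flat = np.full((5, 5), 7, dtype=np.uint8)
print(lbp_codes(flat))   # on a flat patch every neighbour >= centre, so every code is 255
```

In a recognition pipeline, such histograms from image blocks are concatenated and compared between the probe and the single gallery image.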


Author(s): Lei Deng, Jing Shi, Yulong Wang

This paper presents a novel method for video-based face recognition (VFR) based on an M-estimator and image set collaborative representation. Since a video is essentially an image set, the VFR problem can be cast as a special case of the image set-based face recognition (FR) problem. To measure the distance between the query image set and the gallery image set, we develop an M-estimator-based image set collaborative representation (MISCR) model. To implement MISCR, we devise an efficient half-quadratic optimization algorithm to tackle the resulting non-trivial optimization problem and establish its convergence. Our other contribution is an MISCR-based classifier for the general image set classification problem, which includes VFR as a special case. Experiments on real-world benchmark databases demonstrate the efficacy and robustness of the proposed method for VFR.
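The combination of collaborative representation with an M-estimator can be sketched as an iteratively reweighted ridge regression. The Welsch weight function, the values of λ and σ, and the iteration count below are illustrative assumptions rather than the paper's exact MISCR formulation:

```python
import numpy as np

def robust_crc_classify(query, gallery, labels, lam=0.1, sigma=1.0, iters=5):
    """Code the query over the gallery columns with a ridge penalty,
    down-weighting large residuals with a Welsch M-estimator weight
    (one half-quadratic reweighting per iteration), then assign the
    class with the smallest class-wise reconstruction residual."""
    D = gallery                                  # d x n, one column per gallery image
    w = np.ones(D.shape[0])                      # per-dimension robust weights
    for _ in range(iters):
        Dw = D * w[:, None]                      # W @ D without forming W explicitly
        x = np.linalg.solve(D.T @ Dw + lam * np.eye(D.shape[1]), Dw.T @ query)
        r = query - D @ x
        w = np.exp(-(r / sigma) ** 2)            # Welsch: large residuals count less
    classes = sorted(set(labels))
    residuals = [np.linalg.norm(query - D[:, [i for i, l in enumerate(labels) if l == c]]
                                @ x[[i for i, l in enumerate(labels) if l == c]])
                 for c in classes]
    return classes[int(np.argmin(residuals))]

rng = np.random.default_rng(1)
v0, v1 = np.ones(12), np.tile([1.0, -1.0], 6)    # two well-separated class prototypes
gallery = np.column_stack([v0 + 0.05 * rng.normal(size=12) for _ in range(2)] +
                          [v1 + 0.05 * rng.normal(size=12) for _ in range(2)])
labels = [0, 0, 1, 1]
query = v0 + 0.05 * rng.normal(size=12)
pred = robust_crc_classify(query, gallery, labels)
print(pred)
```

The reweighting step is what makes the coding robust: dimensions corrupted by occlusion or noise receive small weights and barely influence the representation.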


Author(s): Ajay Jaiswal, Nitin Kumar, R. K. Agrawal

Pose variation leads to a significant decline in the performance of face recognition systems. In this paper, the authors propose a new approach, HLLR, based on the conjunction of hybrid-eigenfaces and local linear regression (LLR), to perform face recognition across pose. In this approach, LLR on hybrid-eigenfaces is used to generate virtual views: virtual frontal and non-frontal views are obtained from a frontal gallery image. The classification accuracy of the proposed approach is compared with that of another efficient method based on global linear regression on hybrid-eigenfaces (HGLR). The authors also investigate how the number of images used to construct the hybrid-eigenfaces affects classification accuracy. Experimental results on two well-known, publicly available face databases demonstrate the effectiveness of the proposed approach, which remains suitable even when the number of available images is small.
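At its core, generating a virtual view with linear regression amounts to learning a least-squares mapping between paired views. The sketch below illustrates the idea on whole feature vectors with synthetic data; the paper applies the regression locally on hybrid-eigenface representations, so this is a simplified assumption:

```python
import numpy as np

def learn_view_mapping(frontal, posed, lam=1e-3):
    """Ridge-regularized least squares: W maps a frontal feature vector to
    the corresponding non-frontal pose (training rows are paired views)."""
    A = frontal.T @ frontal + lam * np.eye(frontal.shape[1])
    return np.linalg.solve(A, frontal.T @ posed)

# Synthetic check: if the pose change really is linear, the mapping recovers it.
rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))                  # hidden "true" frontal-to-pose transform
frontal_train = rng.normal(size=(40, 5))
posed_train = frontal_train @ M
W = learn_view_mapping(frontal_train, posed_train)
gallery_frontal = rng.normal(size=(1, 5))
virtual_view = gallery_frontal @ W           # synthesized non-frontal view
```

The "local" in LLR means this regression is fitted per region rather than globally, which is what lets it handle the non-linear appearance changes a single global mapping misses.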


Telematika, 2012, Vol. 14 (1), pp. 1-12
Author(s): Fahmi Anwar

Technology is growing rapidly, especially communication technology, with various types of internet-based message services. One of the most popular internet-based messaging applications in Indonesia is WhatsApp Messenger, a chat application available on many platforms. Messages on WhatsApp are protected with end-to-end encryption from sender to recipient, and attached PNG images are additionally compressed according to predefined rules. This study analyzes image compression and the alpha channel in PNG by comparing PNG images before sending with PNG images that have gone through the WhatsApp sending process, using the test-driven development (TDD) method. The analysis compares images based on their RMSE, SSIM, PSNR, and MD5 hash values. When an image is sent as a gallery-image attachment, a transparent background changes to a white background, while images with a non-transparent background retain good quality, with a PSNR above 35 dB; images sent as document attachments show no change in MD5 hash value or image quality.
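The RMSE, PSNR, and MD5 comparisons used in the study are straightforward to reproduce. The sketch below uses a synthetic 8x8 grayscale image rather than actual WhatsApp transfers, and omits SSIM, which requires a windowed computation:

```python
import hashlib
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images of equal shape."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

def md5_of_bytes(data: bytes) -> str:
    """MD5 digest, used to detect any byte-level change to a file."""
    return hashlib.md5(data).hexdigest()

img = np.full((8, 8), 128, dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] = 130                             # a tiny compression-like perturbation
print(psnr(img, noisy))                       # well above the 35 dB quality threshold
print(md5_of_bytes(img.tobytes()) == md5_of_bytes(noisy.tobytes()))
```

Note the asymmetry the study relies on: MD5 flags even a single changed byte, while PSNR quantifies how perceptually significant the change is.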


Author(s): Shaokang Chen, Brian C. Lovell, Ting Shan

Recognizing faces with uncontrolled pose, illumination, and expression is a challenging task because features insensitive to one variation may be highly sensitive to the others. Existing techniques dealing with just one of these variations are very often unable to cope with the others. The problem is even more difficult in applications where only one gallery image per person is available. In this paper, we describe a recognition method, Adapted Principal Component Analysis (APCA), that can simultaneously handle large variations in both illumination and facial expression using only a single gallery image per person. We have now extended this method to handle head-pose variation in two steps. The first step applies an Active Appearance Model (AAM) to the non-frontal face image to synthesize a frontal face image; the second uses APCA for classification robust to lighting and pose. The proposed technique is evaluated on three public face databases (Asian Face, Yale Face, and FERET) with images under different lighting conditions, facial expressions, and head poses. Experimental results show that our method performs much better than other recognition methods, including PCA, FLD, PRM, and LTP. More specifically, we show that by using the AAM to synthesize frontal faces from high-pose-angle faces, the recognition rate of our APCA method increases by up to a factor of 4.
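The PCA stage underlying APCA can be illustrated with a plain eigenface pipeline. The adaptation step that gives APCA its robustness, and the AAM frontal synthesis, are beyond this sketch, so what follows is standard PCA nearest-neighbour matching on synthetic vectors:

```python
import numpy as np

def fit_pca(X, k):
    """Eigenface basis: mean face plus top-k principal axes of the
    training images (rows are flattened face images)."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recognize(probe, gallery, mu, P):
    """Nearest gallery identity in PCA space (one gallery image per person)."""
    g = (gallery - mu) @ P.T
    p = (probe - mu) @ P.T
    return int(np.argmin(np.linalg.norm(g - p, axis=1)))

rng = np.random.default_rng(3)
gallery = rng.normal(size=(3, 16))               # one image per person, 3 identities
mu, P = fit_pca(gallery, k=2)
probe = gallery[1] + 0.01 * rng.normal(size=16)  # identity 1, slightly perturbed
pred = recognize(probe, gallery, mu, P)
print(pred)
```

APCA's contribution is to reweight this PCA space so that illumination and expression variation contribute less to the distance, which plain PCA, as above, does not do.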

