Registration and Fusion of the Autofluorescent and Infrared Retinal Images

2008 ◽  
Vol 2008 ◽  
pp. 1-11 ◽  
Author(s):  
Radim Kolar ◽  
Libor Kubecka ◽  
Jiri Jan

This article deals with the registration and fusion of multimodal ophthalmologic images obtained by means of a laser scanning device (Heidelberg retina angiograph). The registration framework has been designed and tested for the combination of autofluorescent and infrared images. This process is a necessary step for subsequent pixel-level fusion and analysis utilizing information from both modalities. Two fusion methods are presented and compared.
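As a rough illustration of the pixel-level stage, the sketch below assumes the inter-modality registration reduces to a rigid translation, estimates it by phase correlation, and fuses the aligned images by simple averaging. The paper's actual registration framework and its two fusion methods are not reproduced here; all names and the synthetic data are illustrative.

import numpy as np

def phase_correlation_shift(fixed, moving):
    """Estimate the translation that maps `moving` onto `fixed` (phase correlation)."""
    F = np.fft.fft2(fixed)
    G = np.fft.fft2(moving)
    cross_power = F * np.conj(G)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return dy, dx

def fuse_average(fixed, moving):
    """Register `moving` onto `fixed` by translation, then fuse by averaging."""
    dy, dx = phase_correlation_shift(fixed, moving)
    aligned = np.roll(moving, shift=(dy, dx), axis=(0, 1))
    return 0.5 * fixed + 0.5 * aligned

# Synthetic stand-ins for the autofluorescent and infrared images:
af = np.random.rand(256, 256)
ir = np.roll(af, shift=(5, -3), axis=(0, 1)) + 0.1 * np.random.rand(256, 256)
fused = fuse_average(af, ir)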

2018 ◽  
Vol 26 (11) ◽  
pp. 631-640 ◽  
Author(s):  
Takahiro Matsuda ◽  
Shinsuke Onoe ◽  
Yoshiho Seo ◽  
Satoshi Ouchi

Rank-level fusion is one of the post-matching fusion methods used in multibiometric systems. The problem of rank information aggregation has arisen in various fields. This chapter extensively discusses the rank-level fusion methodology, starting with existing literature from the last decade in different application scenarios. Several existing biometric rank-level fusion methods, such as the plurality voting, highest rank, Borda count, logistic regression, and quality-based rank fusion methods, are discussed along with their advantages and disadvantages in the context of the current state of the art in the discipline.
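Of the methods listed above, the Borda count is simple enough to sketch: each matcher's ranked candidate list awards points by position, and the fused ranking sorts candidates by total points. The identity names below are hypothetical placeholders, not from the chapter.

from collections import defaultdict

def borda_count_fusion(rankings):
    """Fuse per-matcher identity rankings with the Borda count.

    `rankings` is a list of ranked candidate lists, best first, one per
    matcher. Each candidate earns (n_candidates - rank) points; the fused
    ranking sorts candidates by total points, highest first.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for rank, candidate in enumerate(ranking):
            scores[candidate] += n - rank
    return sorted(scores, key=scores.get, reverse=True)

# Three matchers ranking the same four enrolled identities:
rankings = [
    ["alice", "bob", "carol", "dave"],
    ["bob", "alice", "dave", "carol"],
    ["alice", "carol", "bob", "dave"],
]
print(borda_count_fusion(rankings))  # ['alice', 'bob', 'carol', 'dave']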


Author(s):  
Mina Farmanbar ◽  
Önsen Toygar

This paper proposes hybrid approaches based on both feature-level and score-level fusion strategies to provide a recognition system robust against the distortions of individual modalities. In order to compare the proposed schemes, a virtual multimodal database is formed from the FERET face and PolyU palmprint databases. The proposed hybrid systems concatenate features extracted by local and global feature extraction methods such as Local Binary Patterns, Log Gabor, Principal Component Analysis, and Linear Discriminant Analysis. Match-score-level fusion is then performed to demonstrate the effectiveness and accuracy of the proposed schemes. Experimental results on these databases show a significant improvement of the proposed schemes over unimodal systems and other multimodal face–palmprint fusion methods.
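The abstract does not spell out the score-level fusion rule, so the sketch below uses a common baseline: min-max normalization of each modality's match scores followed by a weighted sum. The weight and score values are illustrative, not taken from the paper.

import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def weighted_sum_fusion(face_scores, palm_scores, w_face=0.5):
    """Fuse normalized face and palmprint match scores with a weighted sum."""
    face = min_max_normalize(face_scores)
    palm = min_max_normalize(palm_scores)
    return w_face * face + (1.0 - w_face) * palm

# Match scores of one probe against four gallery identities:
face_scores = [0.82, 0.31, 0.57, 0.12]
palm_scores = [0.64, 0.72, 0.22, 0.09]
fused = weighted_sum_fusion(face_scores, palm_scores, w_face=0.6)
print(int(np.argmax(fused)))  # index of the best-matching identity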


Author(s):  
Zibo Meng ◽  
Shizhong Han ◽  
Min Chen ◽  
Yan Tong

Recognizing facial actions is challenging, especially when they are accompanied by speech. Instead of employing information solely from the visual channel, this work aims to exploit information from both visual and audio channels in recognizing speech-related facial action units (AUs). In this work, two feature-level fusion methods are proposed. The first is based on hand-crafted visual features; the other utilizes visual features learned by a deep convolutional neural network (CNN). For both methods, features are independently extracted from the visual and audio channels and aligned to handle the difference in time scales and the time shift between the two signals. These temporally aligned features are integrated via feature-level fusion for AU recognition. Experimental results on a new audiovisual AU-coded dataset demonstrate that both fusion methods outperform their visual-only counterparts in recognizing speech-related AUs. The improvement is more pronounced when the facial images are occluded, since occlusions do not affect the audio channel.
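A minimal sketch of the temporal alignment and feature-level fusion described above, assuming the audio features are linearly resampled to the video frame count and a fixed frame lag compensates the time shift; the feature dimensions and lag value are illustrative, not taken from the paper.

import numpy as np

def align_to_frames(audio_feats, n_frames):
    """Linearly resample an audio feature sequence to the video frame count."""
    t_audio = np.linspace(0.0, 1.0, len(audio_feats))
    t_video = np.linspace(0.0, 1.0, n_frames)
    # Interpolate each feature dimension independently.
    return np.stack(
        [np.interp(t_video, t_audio, audio_feats[:, d])
         for d in range(audio_feats.shape[1])],
        axis=1,
    )

def feature_level_fusion(visual_feats, audio_feats, audio_lag_frames=0):
    """Concatenate per-frame visual and audio features after alignment.

    `audio_lag_frames` crudely compensates a fixed time shift between the
    two signals by rolling the resampled audio features.
    """
    audio_aligned = align_to_frames(audio_feats, len(visual_feats))
    audio_aligned = np.roll(audio_aligned, audio_lag_frames, axis=0)
    return np.concatenate([visual_feats, audio_aligned], axis=1)

# 90 video frames with 128-D visual features, 300 audio windows with 13-D MFCCs:
visual = np.random.rand(90, 128)
audio = np.random.rand(300, 13)
fused = feature_level_fusion(visual, audio, audio_lag_frames=2)
print(fused.shape)  # (90, 141)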


2011 ◽  
Vol 115 (8) ◽  
pp. 1942-1954 ◽  
Author(s):  
Rubén Valbuena ◽  
Francisco Mauro ◽  
Francisco José Arjonilla ◽  
José Antonio Manzanera
