Deep neural networks for mobile person recognition with audio-visual signals

2017, pp. 97-129
Author(s): M. R. Alam, M. Bennamoun, R. Togneri, F. Sohel
2018, Vol 4 (1), pp. 61-72
Author(s): Chang Liu, Fuchun Sun, Bo Zhang

Modern computational models have leveraged advances in research on the human brain. This study addresses the problem of multimodal learning with the help of brain-inspired models. Specifically, a unified multimodal learning architecture is proposed based on deep neural networks, which are inspired by the biology of the visual cortex of the human brain. The unified framework is validated on two practical multimodal learning tasks: image captioning, which involves visual and natural-language signals, and visual-haptic fusion, which involves haptic and visual signals. Extensive experiments are conducted under the framework, and competitive results are achieved.
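The abstract does not specify the architecture, but the visual-haptic fusion task it describes is commonly handled by feature-level fusion: each modality is passed through its own encoder, and the resulting features are concatenated and mapped through a joint layer. A minimal sketch of that common pattern follows; all layer sizes, weights, and function names here are hypothetical illustrations, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def encode(x, w, b):
    # One-layer encoder: project a single modality into a shared feature space.
    return relu(x @ w + b)

# Hypothetical dimensions: 64-d visual features and 32-d haptic features,
# each projected to a 16-d space before fusion.
w_v, b_v = rng.normal(size=(64, 16)), np.zeros(16)
w_h, b_h = rng.normal(size=(32, 16)), np.zeros(16)
w_f, b_f = rng.normal(size=(32, 8)), np.zeros(8)

def fuse(visual, haptic):
    # Feature-level fusion: encode each modality separately,
    # concatenate the encodings, then apply a joint fusion layer.
    z = np.concatenate([encode(visual, w_v, b_v),
                        encode(haptic, w_h, b_h)], axis=-1)
    return relu(z @ w_f + b_f)

fused = fuse(rng.normal(size=64), rng.normal(size=32))
print(fused.shape)  # (8,)
```

In practice the encoders would be trained end-to-end (e.g. convolutional layers for the visual stream), but the fusion step itself reduces to this concatenate-and-project pattern.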


Author(s): Alex Hernández-García, Johannes Mehrer, Nikolaus Kriegeskorte, Peter König, Tim C. Kietzmann

2018
Author(s): Chi Zhang, Xiaohan Duan, Ruyuan Zhang, Li Tong

Author(s): Daniel Povey, Gaofeng Cheng, Yiming Wang, Ke Li, Hainan Xu, ...
