Late Fusion via Subspace Search With Consistency Preservation

2019 ◽  
Vol 28 (1) ◽  
pp. 518-528 ◽  
Author(s):  
Xuanyi Dong ◽  
Yan Yan ◽  
Mingkui Tan ◽  
Yi Yang ◽  
Ivor W. Tsang

2021 ◽  
pp. 1-20
Author(s):  
Tianqi Wang ◽  
Yin Hong ◽  
Quanyi Wang ◽  
Rongfeng Su ◽  
Manwa Lawrence Ng ◽  
...  

Background: Previous studies have explored noninvasive speech and language biomarkers for the detection of mild cognitive impairment (MCI). Yet most employed a single task, which might not adequately capture all aspects of cognitive function. Objective: The present study aimed to achieve state-of-the-art accuracy in detecting individuals with MCI using multiple spoken tasks and to uncover task-specific contributions with a tentative interpretation of features. Methods: Fifty patients clinically diagnosed with MCI and 60 healthy controls completed three spoken tasks (picture description, semantic fluency, and sentence repetition), from which multidimensional features were extracted to train machine learning classifiers. With a late-fusion configuration, predictions from the individual tasks were combined and correlated with the participants’ cognitive ability as assessed by the Montreal Cognitive Assessment (MoCA). Statistical analyses of pre-defined features were carried out to explore their association with the diagnosis. Results: The late-fusion configuration effectively boosted the final classification result (support vector machine, SVM: F1 = 0.95; random forest, RF: F1 = 0.96; logistic regression, LR: F1 = 0.93), outperforming each individual task classifier. Moreover, the probability estimates of MCI were strongly negatively correlated with the MoCA scores (SVM: –0.74; RF: –0.71; LR: –0.72). Conclusion: Each task tapped predominantly into distinct cognitive processes and made specific contributions to the prediction of MCI. Specifically, the picture description task characterized communication at the discourse level, while the semantic fluency task was more specific to controlled lexical retrieval processes. With its greater demands on working memory, the sentence repetition task uncovered memory deficits through modified speech patterns in the reproduced sentences.
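As a rough illustration of the late-fusion configuration described above, the sketch below averages per-task MCI probability estimates into one fused score per participant, thresholds the score for classification, and correlates it with MoCA scores. The function name, the probability values, and the MoCA scores are all invented for illustration; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def fuse_task_probabilities(prob_matrix, weights=None):
    """Late fusion: combine the MCI probability estimates produced by
    separate per-task classifiers. Rows are participants; columns are
    tasks (e.g. picture description, semantic fluency, sentence
    repetition). Defaults to an unweighted average."""
    probs = np.asarray(prob_matrix, dtype=float)
    if weights is None:
        weights = np.full(probs.shape[1], 1.0 / probs.shape[1])
    return probs @ np.asarray(weights, dtype=float)

# Hypothetical per-task probabilities for four participants.
task_probs = [
    [0.90, 0.80, 0.85],   # consistently high across tasks
    [0.20, 0.10, 0.30],
    [0.60, 0.70, 0.50],
    [0.10, 0.20, 0.15],
]
fused = fuse_task_probabilities(task_probs)
predictions = (fused >= 0.5).astype(int)  # 1 = predicted MCI

# The fused probability can then be correlated with MoCA scores
# (values invented here) via Pearson's r; the study reports strong
# negative correlations, since lower MoCA means worse cognition.
moca = np.array([18, 27, 22, 29])
r = np.corrcoef(fused, moca)[0, 1]
```

A weighted variant (passing `weights`) would let tasks that carry more diagnostic signal contribute more to the fused score.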


2021 ◽  
Vol 11 (3) ◽  
pp. 1064
Author(s):  
Jenq-Haur Wang ◽  
Yen-Tsang Wu ◽  
Long Wang

In social networks, users can easily share information and express their opinions. Given the huge amount of data posted by many users, it is difficult to search for relevant information. Beyond individual posts, it would also be useful to recommend groups of people with similar interests. Past studies on user preference learning focused on single-modal features such as review contents or demographic information, which are usually not easy to obtain in most social media without explicit user feedback. In this paper, we propose a multimodal feature fusion approach to implicit user preference prediction that combines text and image features from user posts to recommend similar users in social media. First, we use a convolutional neural network (CNN) and a TextCNN model to extract image and text features, respectively. Then, these features are combined using early and late fusion methods as a representation of user preferences. Lastly, a list of users with the most similar preferences is recommended. Experimental results on real-world Instagram data show that the best performance is achieved with late fusion of the individual classification results for images and texts, with a best average top-k accuracy of 0.491. This validates the effectiveness of fusing multimodal features with deep learning methods to represent social user preferences. Further investigation is needed to verify the performance on different types of social media.
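To make the early/late fusion distinction concrete, here is a minimal numeric sketch: early fusion concatenates per-modality feature vectors into one joint representation before classification, late fusion combines per-modality classifier scores afterward, and similar users can then be ranked by cosine similarity of their fused preference vectors. The function names and the simple weighted average are illustrative assumptions, not the paper's actual CNN/TextCNN pipeline.

```python
import numpy as np

def early_fusion(img_feat, txt_feat):
    # Early fusion: concatenate per-modality feature vectors
    # into a single joint representation before classification.
    return np.concatenate([img_feat, txt_feat], axis=-1)

def late_fusion(img_score, txt_score, w_img=0.5):
    # Late fusion: combine per-modality classifier outputs
    # (e.g. class probabilities) by a weighted average.
    return w_img * np.asarray(img_score) + (1.0 - w_img) * np.asarray(txt_score)

def recommend_similar_users(user_vecs, query_idx, k=3):
    # Rank the other users by cosine similarity of their fused
    # preference vectors and return the top-k indices.
    v = np.asarray(user_vecs, dtype=float)
    q = v[query_idx]
    sims = v @ q / (np.linalg.norm(v, axis=1) * np.linalg.norm(q) + 1e-12)
    order = [i for i in np.argsort(-sims) if i != query_idx]
    return order[:k]
```

In this framing, early fusion trains one classifier on the concatenated vector, while late fusion keeps one classifier per modality and merges only their scores, which is the configuration the paper found to perform best.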


2020 ◽  
Vol 39 (4) ◽  
Author(s):  
Chia-Hsing Chiu ◽  
Yuki Koyama ◽  
Yu-Chi Lai ◽  
Takeo Igarashi ◽  
Yonghao Yue

2021 ◽  
Author(s):  
Esaú Villatoro-Tello ◽  
S. Pavankumar Dubagunta ◽  
Julian Fritsch ◽  
Gabriela Ramírez-de-la-Rosa ◽  
Petr Motlicek ◽  
...  
