Objective Measures of Cognitive Load Using Deep Multi-Modal Learning

Author(s):  
Justin C. Wilson
Suku Nair
Sandro Scielzo
Eric C. Larson

The importance of measuring human performance objectively is hard to overstate, especially in the context of the instructor-student relationship within the process of learning. In this work, we investigate the automated classification of cognitive load, leveraging the aviation domain as a surrogate for complex task workload induction. We use a mixed virtual and physical flight environment together with a suite of biometric sensors built around the HTC Vive Pro Eye and the Empatica E4. We create and evaluate multiple models, taking advantage of advances in deep learning such as generative learning, multi-modal learning, multi-task learning, and x-vector architectures to classify multiple tasks across 40 subjects spanning three subject types: pilots, operators, and novices. Our cognitive load model automates the evaluation of cognitive load agnostic to subject, subject type, and flight maneuver (task), with an accuracy of over 80%. Further, the approach is validated with real-flight data from five test pilots collected over two test-and-evaluation flights on a C-17 aircraft.
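The abstract names the building blocks (multi-modal fusion, multi-task learning, x-vector-style embeddings) without spelling out the architecture. As a rough orientation only, the sketch below shows one way a multi-modal, multi-task classifier over pooled eye-tracking and wristband features could be wired up; the feature dimensions, layer sizes, and task counts are illustrative assumptions rather than the authors' design, and the generative and x-vector components are not reproduced.

```python
# Hypothetical PyTorch sketch of a multi-modal, multi-task classifier for
# cognitive-load estimation. All dimensions and the task list are assumptions.
import torch
import torch.nn as nn

class MultiModalCognitiveLoad(nn.Module):
    def __init__(self, eye_dim=32, physio_dim=8, embed_dim=64,
                 n_load_classes=2, n_maneuver_classes=6):
        super().__init__()
        # One encoder per modality: eye tracking vs. wristband physiology.
        self.eye_encoder = nn.Sequential(
            nn.Linear(eye_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim), nn.ReLU())
        self.physio_encoder = nn.Sequential(
            nn.Linear(physio_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim), nn.ReLU())
        # Shared trunk over the fused embedding.
        self.shared = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.ReLU())
        # Two heads: cognitive-load level and flight maneuver (multi-task).
        self.load_head = nn.Linear(embed_dim, n_load_classes)
        self.maneuver_head = nn.Linear(embed_dim, n_maneuver_classes)

    def forward(self, eye_feats, physio_feats):
        fused = torch.cat([self.eye_encoder(eye_feats),
                           self.physio_encoder(physio_feats)], dim=-1)
        shared = self.shared(fused)
        return self.load_head(shared), self.maneuver_head(shared)

# Joint training objective: sum of per-head cross-entropies on a dummy batch.
model = MultiModalCognitiveLoad()
eye = torch.randn(16, 32)     # pooled eye-tracking features (assumed shape)
physio = torch.randn(16, 8)   # pooled E4 features, e.g. EDA/HR (assumed shape)
load_logits, maneuver_logits = model(eye, physio)
loss = (nn.functional.cross_entropy(load_logits, torch.randint(0, 2, (16,))) +
        nn.functional.cross_entropy(maneuver_logits, torch.randint(0, 6, (16,))))
loss.backward()
```

Summing the per-head losses is the simplest multi-task objective; it lets the shared trunk learn features that serve both cognitive-load and maneuver prediction.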

2020
Author(s):
Eric Prince
Ros Whelan
David M. Mirsky
Todd C. Hankinson

Abstract: Modern Deep Learning (DL) networks routinely achieve classification accuracy superior to that of human experts when vast amounts of training data are available. Community focus has now shifted toward designing accurate classifiers for scenarios with limited training data. One such example is the uncommon pediatric brain tumor, Adamantinomatous Craniopharyngioma (ACP). Recent work has demonstrated the efficacy of Transfer Learning (TL) and novel loss functions for training DL networks in limited-data scenarios. This work describes a DL approach that uses TL and a state-of-the-art custom loss function to predict ACP diagnosis from radiographic data, achieving performance (CT AUPR = 0.99 ± 0.01, MRI AUPR = 0.99 ± 0.02) superior to reported human performance (0.87).
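Since the abstract credits Transfer Learning for coping with limited data, a minimal sketch of the general technique may be useful: a backbone pretrained on natural images is frozen and only a new classification head is trained on the small radiographic dataset. The ResNet-18 backbone and plain cross-entropy loss are placeholder assumptions; the paper's custom loss function is not reproduced here.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision); not the authors' code.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                           # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new trainable ACP/other head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()                     # stand-in for the custom loss

# One illustrative training step on dummy tensors shaped like RGB-converted slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
criterion(backbone(images), labels).backward()
optimizer.step()
```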


2021
Vol 132
pp. S287-S288
Author(s):
Jianling Ji
Ryan Schmidt
Westley Sherman
Ryan Peralta
Megan Roytman
...  

Author(s):  
Amira S. Ashour
Merihan M. Eissa
Maram A. Wahba
Radwa A. Elsawy
Hamada Fathy Elgnainy
...  

2021
Vol 11 (1)
Author(s):
Song-Quan Ong
Hamdan Ahmad
Gomesh Nair
Pradeep Isawasan
Abdul Hafiz Ab Majid

Abstract: Classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) by humans remains challenging. We propose a highly accessible method to develop a deep learning (DL) model and implement it for mosquito image classification, using hardware that can regulate the development process. In particular, we constructed a dataset of 4120 images of Aedes mosquitoes that were older than 12 days, an age at which common morphological features have disappeared, and we illustrate how to set up supervised deep convolutional neural networks (DCNNs) with hyperparameter adjustment. The model was first applied by deploying it externally, in real time, on three different generations of mosquitoes, and its accuracy was compared with human expert performance. Our results show that both the learning rate and the number of epochs significantly affected accuracy, and the best-performing hyperparameters achieved an accuracy of more than 98% in classifying the mosquitoes, which was not significantly different from human-level performance. We demonstrate the feasibility of this method for constructing a DCNN model that can be deployed externally, in real time, on mosquitoes.
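The abstract reports that learning rate and number of epochs were the decisive hyperparameters. As a purely illustrative sketch, the snippet below shows what a small grid search over those two settings might look like for an image-classification DCNN; the architecture, image size, and candidate values are assumptions, not taken from the paper.

```python
# Hypothetical hyperparameter sweep over learning rate and epochs (PyTorch).
import torch
import torch.nn as nn

def make_cnn(n_classes=2):
    # Small stand-in DCNN for two-class mosquito images.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 16 * 16, n_classes))

# Dummy stand-ins for Ae. aegypti / Ae. albopictus image batches (assumed 64x64).
x_train, y_train = torch.randn(64, 3, 64, 64), torch.randint(0, 2, (64,))
x_val, y_val = torch.randn(32, 3, 64, 64), torch.randint(0, 2, (32,))

results = {}
for lr in (1e-2, 1e-3, 1e-4):          # candidate learning rates (assumed)
    for epochs in (5, 10):             # candidate epoch counts (assumed)
        model = make_cnn()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            nn.functional.cross_entropy(model(x_train), y_train).backward()
            opt.step()
        with torch.no_grad():
            acc = (model(x_val).argmax(1) == y_val).float().mean().item()
        results[(lr, epochs)] = acc     # validation accuracy per setting

print(max(results, key=results.get), results)
```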

