NeckFace

Author(s):  
Tuochao Chen ◽  
Yaxuan Li ◽  
Songyun Tao ◽  
Hyunchul Lim ◽  
Mose Sakashita ◽  
...  

Facial expressions are highly informative for computers seeking to understand and interpret a person's mental and physical activities. However, continuously tracking facial expressions, especially when the user is in motion, is challenging. This paper presents NeckFace, a wearable sensing technology that can continuously track full facial expressions using a neck-piece embedded with infrared (IR) cameras. A customized deep learning pipeline called NeckNet, based on ResNet-34, is developed to map the captured IR images of the chin and face to 52 parameters representing the facial expressions. We demonstrated NeckFace on two common neck-mounted form factors, a necklace and a neckband (e.g., neck-mounted headphones), and evaluated it in a user study with 13 participants. The study results showed that NeckFace worked well when the participants were sitting, walking, or after remounting the device. We discuss the challenges and opportunities of using NeckFace in real-world applications.

Author(s):  
Ruidong Zhang ◽  
Mingyang Chen ◽  
Benjamin Steeper ◽  
Yaxuan Li ◽  
Zihan Yan ◽  
...  

This paper presents SpeeChin, a smart necklace that can recognize 54 English and 44 Chinese silent speech commands. A customized infrared (IR) imaging system is mounted on a necklace to capture images of the neck and face from under the chin. These images are first pre-processed and then fed to an end-to-end convolutional-recurrent neural network (CRNN) model to infer different silent speech commands. A user study with 20 participants (10 per language) showed that SpeeChin could recognize the 54 English and 44 Chinese silent speech commands with average cross-session accuracies of 90.5% and 91.6%, respectively. To further investigate the potential of SpeeChin in recognizing other silent speech commands, we conducted another study with 10 participants distinguishing between 72 one-syllable nonwords. Based on the results from the user studies, we further discuss the challenges and opportunities of deploying SpeeChin in real-world applications.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have exhibited biases in various ways, and we list different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope that this survey motivates researchers to tackle these issues in the near future by building on existing work in their respective fields.
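One of the fairness definitions the survey's taxonomy covers is demographic parity: the rate of favorable decisions should be equal across groups. A minimal sketch of how that gap could be measured, with purely hypothetical decision data:

```python
def demographic_parity_gap(outcomes_by_group):
    """Difference between the highest and lowest favorable-outcome
    rate across groups; 0 means perfect demographic parity."""
    rates = {g: sum(y) / len(y) for g, y in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical binary decisions (1 = favorable) for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0],   # 60% favorable
    "group_b": [1, 0, 0, 0, 0],   # 20% favorable
}
gap = demographic_parity_gap(decisions)  # 0.6 - 0.2 = 0.4
```

A large gap flags a disparity worth investigating, though the survey notes that no single definition captures all notions of fairness.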


2013 ◽  
Vol 5 (2) ◽  
pp. 1-20 ◽  
Author(s):  
Matthias Baldauf ◽  
Peter Fröhlich ◽  
Jasmin Buchta ◽  
Theresa Stürmer

Today’s smartphones provide the technical means to serve as interfaces for public displays in various ways. Even though recent research has identified several new approaches for mobile-display interaction, inter-technique comparisons of the respective methods are scarce. The authors conducted an experimental user study on four currently relevant mobile-display interaction techniques (‘Touchpad’, ‘Pointer’, ‘Mini Video’, and ‘Smart Lens’) and learned that their suitability depends strongly on the task and use case at hand. The study results indicate that mobile-display interactions based on a traditional touchpad metaphor are time-consuming but highly accurate in standard target acquisition tasks. The direct interaction techniques Mini Video and Smart Lens had comparably good completion times, and Mini Video in particular appeared to be best suited for complex visual manipulation tasks like drawing. Smartphone-based pointing turned out to be generally inferior to the other alternatives. Examples of applying these differentiated results to real-world use cases are provided.


2020 ◽  
Vol 11 (12) ◽  
pp. 709-714
Author(s):  
Janani Prabu ◽  
Sai Saranesh ◽  
Dr.S. Ajitha

The face is one of the most important human biometrics: it is used frequently in everyday human communication and, owing to some of its unique characteristics, plays a major role in conveying identity and emotion. Numerous methods have been proposed for face recognition so far, but the problem remains very challenging in real-world applications. To date, no technique equals the human ability to recognize faces despite the many variations in appearance a face can have in a scene, nor provides a robust solution for all situations.


Kybernetes ◽  
2017 ◽  
Vol 46 (4) ◽  
pp. 693-705 ◽  
Author(s):  
Yasser F. Hassan

Purpose: This paper aims to utilize machine learning and soft computing to propose a new method of rough sets using a deep learning architecture for many real-world applications.
Design/methodology/approach: The objective of this work is to propose a model for deep rough set theory that uses more than one decision table, approximating these tables into a classification system; i.e., the paper proposes a novel deep learning framework based on multiple decision tables.
Findings: The paper coordinates the local properties of individual decision tables to provide an appropriate global decision from the system.
Research limitations/implications: Rough set learning assumes the existence of a single decision table, whereas real-world decision problems involve several decisions with several different decision tables. The newly proposed model can handle multiple decision tables.
Practical implications: The proposed classification model is implemented on social networks with preferred features that are freely distributed as social entities, with an accuracy of around 91 per cent.
Social implications: Deep learning using rough set theory simulates the way the brain thinks and can address the existence of different information about the same problem in different decision systems.
Originality/value: This paper utilizes machine learning and soft computing to propose a new method of rough sets using a deep learning architecture for many real-world applications.
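The rough set machinery this abstract builds on approximates a target concept from below and above using the equivalence classes of a decision table. A minimal sketch of those two approximations, on a hypothetical partition of six objects (not the paper's multi-table framework):

```python
def approximations(equiv_classes, target):
    """Rough-set lower/upper approximations of `target` under the
    indiscernibility partition `equiv_classes`.
    lower: union of classes fully inside the target (certain members).
    upper: union of classes that overlap the target (possible members)."""
    target = set(target)
    lower, upper = set(), set()
    for cls in equiv_classes:
        cls = set(cls)
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

# Hypothetical partition of objects 1..6 by indiscernible attribute values.
classes = [{1, 2}, {3}, {4, 5}, {6}]
lower, upper = approximations(classes, {1, 2, 3, 4})
# lower == {1, 2, 3}; upper == {1, 2, 3, 4, 5}
```

The gap between the two sets (here {4, 5}) is the boundary region, the part of the concept the decision table cannot classify with certainty; the paper's contribution is coordinating such approximations across multiple tables.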


2020 ◽  
Vol 2 (2) ◽  
pp. 112-119
Author(s):  
Kawal Arora ◽  
Ankur Singh Bist ◽  
Roshan Prakash ◽  
Saksham Chaurasia

Recent advancements in the area of Optical Character Recognition (OCR) using deep learning techniques have made it possible to use OCR in real-world applications with good accuracy. In this paper we present a system named OCRXNet. Three variants, OCRXNetv1, OCRXNetv2, and OCRXNetv3, are proposed and compared on different identity documents. Image processing methods and various text detectors have been used to identify the best-fitted process for custom OCR of identity documents. We also introduce an end-to-end pipeline to implement OCR for various use cases.
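An end-to-end OCR pipeline of the kind the abstract mentions is, structurally, a chain of stages where each stage consumes the previous one's output. A minimal sketch of that structure; the stage names follow the abstract (detection, then recognition), but the toy stage bodies are hypothetical stand-ins, not the OCRXNet implementation:

```python
class OCRPipeline:
    """Chain stages so each one consumes the previous stage's output."""

    def __init__(self, *stages):
        self.stages = stages

    def __call__(self, data):
        for stage in self.stages:
            data = stage(data)
        return data

def detect(image):
    # Stand-in for a text detector: the 'image' is a plain string
    # and the detected regions are its whitespace-separated tokens.
    return image.split()

def recognize(regions):
    # Stand-in for a recognizer: normalize each detected region.
    return [region.upper() for region in regions]

pipeline = OCRPipeline(detect, recognize)
result = pipeline("name: JOHN doe id: 1234")
# result == ['NAME:', 'JOHN', 'DOE', 'ID:', '1234']
```

Keeping stages this loosely coupled is what allows swapping text detectors per use case, as the paper does when comparing its variants on different identity documents.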


2020 ◽  
Author(s):  
Sathappan Muthiah ◽  
Debanjan Datta ◽  
Mohammad Raihanul Islam ◽  
Patrick Butler ◽  
Andrew Warren ◽  
...  

Toxin classification of protein sequences is a challenging task with real-world applications in healthcare and synthetic biology. Due to an ever-expanding database of proteins and the inordinate cost of manual annotation, automated machine learning based approaches are crucial. Approaches to this task need to overcome the challenges of homology, multi-functionality, and structural diversity among proteins. We propose a novel deep learning based method, ProtTox, that aims to address some of the shortcomings of previous approaches in classifying proteins as toxins or not. Our method achieves an F1-score of 0.812, about 5% higher than the closest-performing baseline.
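The F1-score reported here is the harmonic mean of precision and recall, a standard choice for a binary task like toxin/non-toxin where the classes are imbalanced. A quick sketch of the computation, with illustrative counts that are not from the paper:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp)   # of predicted toxins, how many are toxins
    recall = tp / (tp + fn)      # of true toxins, how many were found
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion-matrix counts, chosen only for illustration.
score = f1_score(tp=81, fp=17, fn=21)  # 0.81
```

Because F1 ignores true negatives, it rewards a model for finding the rare toxin class rather than for correctly dismissing the abundant non-toxins.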


Author(s):  
Haotong Qin

Quantization is emerging as an efficient approach to promote hardware-friendly deep learning and run deep neural networks on resource-limited hardware. However, it still causes a significant decrease in network accuracy. We summarize the challenges of quantization into two categories: quantization for diverse architectures and quantization in complex scenes. Our studies focus mainly on applying quantization to various architectures and scenes, and on pushing the limit of quantization to extremely compress and accelerate networks. Comprehensive research on quantization will enable more powerful, more efficient, and more flexible hardware-friendly deep learning, making it better suited to more real-world applications.
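The core operation behind the compression the abstract describes is mapping floating-point weights onto a small integer grid. A minimal sketch of symmetric uniform quantization on a toy weight list (the accuracy drop mentioned above comes from the rounding error this introduces):

```python
def quantize(weights, bits=8):
    """Symmetric uniform quantization: floats -> signed ints -> floats."""
    qmax = 2 ** (bits - 1) - 1                   # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax  # one scale per tensor
    codes = [round(w / scale) for w in weights]  # integer codes on hardware
    dequant = [c * scale for c in codes]         # lossy reconstruction
    return codes, dequant, scale

w = [0.5, -1.27, 0.02, 1.0]
codes, w_hat, s = quantize(w, bits=8)
# codes == [50, -127, 2, 100]
```

Dropping `bits` to 4 or below shrinks storage further but coarsens the grid, which is exactly the "pushing the limit" trade-off between compression and accuracy the abstract refers to.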


Entropy ◽  
2018 ◽  
Vol 20 (12) ◽  
pp. 982 ◽  
Author(s):  
Khaled Almgren ◽  
Murali Krishnan ◽  
Fatima Aljanobi ◽  
Jeongkyu Lee

Processing and analyzing multimedia data has become a popular research topic due to the evolution of deep learning. Deep learning has played an important role in addressing many challenging problems, such as computer vision, image recognition, and image detection, which can be useful in many real-world applications. In this study, we analyzed the visual features of images to detect advertising images among scanned images of various magazines. The aim is to identify key features of advertising images and to apply them to a real-world application. The proposed work will eventually help improve marketing strategies, which require the classification of advertising images from magazines. We employed convolutional neural networks to classify scanned images as either advertisements or non-advertisements (i.e., articles). The results show that the proposed approach outperforms other classifiers and related work in terms of accuracy.
