Urban Morphological Feature Extraction and Multi-Dimensional Similarity Analysis Based on Deep Learning Approaches

2021 ◽  
Vol 13 (12) ◽  
pp. 6859
Author(s):  
Chenyi Cai ◽  
Zifeng Guo ◽  
Baizhou Zhang ◽  
Xiao Wang ◽  
Biao Li ◽  
...  

The study of urban morphology contributes to the evolution of cities and to sustainable development. Urban morphological feature extraction and similarity analysis form a practical framework, used in many studies, for interpreting the existing built environment and informing novel designs. Conventional methods represent morphological features through qualitative descriptions, symbolic interpretation, or manually selected indicators; such methods can introduce subjective bias and limit generalizability. This study proposes a hybrid data-driven approach that supports quantitative morphological description and multi-dimensional similarity analysis for urban design decision-making, and furthers morphology-related studies by exploiting the abundance of information accessible through deep learning. We constructed a dataset of 3817 residential plots with geometrical and related infrastructure information. A deep convolutional neural network, GoogLeNet, was applied to the plots’ figure–ground images, quantifying the morphological features as 2048-dimensional feature vectors. We conducted a similarity analysis of the plots by calculating the Euclidean distance between these high-dimensional feature vectors, and then performed a comparison study by retrieving cases based on the plot shape alone and on plots with buildings. The proposed method considers the overall characteristics of urban morphology together with the social infrastructure situation when assessing similarity, and is both flexible and effective. The proposed framework indicates the feasibility and potential of integrating task-oriented information to supply customized, adequate references via deep learning methods, which could support decision-making and association studies relating morphology to urban outcomes. This work could serve as a basis for further typo-morphology studies and for morphology-related ecological, social, and economic studies toward sustainable built environments.
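The retrieval step described above, ranking plots by Euclidean distance between high-dimensional feature vectors, can be sketched in a few lines. This is a toy illustration only: 4-dimensional vectors stand in for the paper's 2048-dimensional GoogLeNet features, and the data and function names are hypothetical.

```python
import numpy as np

def retrieve_similar(features, query_idx, k=3):
    """Rank plots by Euclidean distance to the query's feature vector."""
    dists = np.linalg.norm(features - features[query_idx], axis=1)
    order = np.argsort(dists)
    # Skip the query itself (distance 0) and return the k nearest plots.
    return [int(i) for i in order if i != query_idx][:k]

# Toy stand-in: 5 "plots" with 4-dimensional feature vectors
# (the paper uses 3817 plots and 2048-dimensional features).
feats = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.9, 0.1, 0.0],
    [0.5, 0.5, 0.0, 0.0],
])
print(retrieve_similar(feats, 0, k=2))  # → [1, 4]: plot 1 is closest, then plot 4
```

Because Euclidean distance treats every dimension uniformly, the same retrieval code works whether the vectors encode plot shape only or plots with buildings, which is what makes the comparison study straightforward.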

Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Changxin Lai ◽  
Shijie Zhou ◽  
Natalia Trayanova

Introduction: Deep learning (DL) has achieved promising performance in classifying common heart rhythms from the 12-lead electrocardiogram (ECG). However, two major concerns hinder its application: lack of interpretability, and overfitting caused by using the full 12-lead ECG as input. Objective: We proposed a hybrid DL model with enhanced interpretability to detect 9 common types of heart rhythms from an optimal subset of ECG leads, and to quantitatively analyze the overfitting. Methods: We used a multicenter dataset of 6,877 annotated 12-lead ECG recordings. The proposed model (Fig. 1A) consists of a feature-extraction step and a decision-making step. The feature-extraction step used 12 separate neural networks to extract features from each lead. These features were then fed into a random-forest classifier in the decision-making step to classify heart-rhythm types. The classifier was used to interpret the correlations between the heart rhythms and the ECG leads, to find an optimal subset of ECG leads, and to analyze whether using the full 12-lead ECG added unnecessary complexity to the model and undermined its generalizability. Results: The proposed model detected the correlations between the heart-rhythm types and the ECG leads (Fig. 1B) and identified an optimal ECG lead subset (leads II, aVR, V1, V4). The optimal subset was significantly better than the full 12-lead ECG (F1 = 0.776 vs. F1 = 0.767, P = 0.02) on the validation set for classifying the 9 common heart rhythms; there was no statistical difference on the test set, and no overfitting caused by the 12-lead ECG was detected in this study. Conclusion: The hybrid DL model based on an optimal 4-lead ECG can interpret rhythm types without significant loss of accuracy compared with the 12-lead ECG.
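The architecture above extracts features lead by lead and lets the downstream classifier see only a chosen lead subset. A minimal sketch follows, in which simple summary statistics stand in for the paper's 12 per-lead neural networks; the feature choice, function names, and random data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

LEADS = ["I", "II", "III", "aVR", "aVL", "aVF",
         "V1", "V2", "V3", "V4", "V5", "V6"]
OPTIMAL = ["II", "aVR", "V1", "V4"]  # subset reported in the abstract

def per_lead_features(ecg):
    """Stand-in for the 12 per-lead networks: three summary
    statistics per lead. ecg has shape (12, n_samples)."""
    return np.array([[lead.mean(), lead.std(), np.abs(lead).max()]
                     for lead in ecg])

def subset_features(ecg, leads=OPTIMAL):
    """Keep only the chosen leads' features before the classifier,
    mirroring how the optimal subset shrinks the model's input."""
    feats = per_lead_features(ecg)
    idx = [LEADS.index(name) for name in leads]
    return feats[idx].ravel()

rng = np.random.default_rng(0)
ecg = rng.standard_normal((12, 500))        # fake 12-lead recording
print(subset_features(ecg).shape)           # (12,) = 4 leads x 3 features
```

Because each lead gets its own extractor, dropping a lead simply removes its feature block; the random-forest stage can then compare subsets without retraining the extractors.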


2021 ◽  
Author(s):  
Adam Hakim ◽  
Itamar Golan ◽  
Sharon Yefet ◽  
Dino J. Levy

There is an increasing demand within consumer neuroscience (or neuromarketing) for objective neural measures to quantify consumers’ preferences and predict responses to marketing campaigns. However, the properties of EEG datasets raise various difficulties for prediction, such as small dataset sizes, high dimensionality, the need for elaborate feature extraction, intrinsic noise, and unpredictable between-subject variation. We aimed to overcome these limitations by combining unique techniques within a Deep Learning (DL) framework, while providing interpretable results for neuroscientific and decision-making insight. In this study, we developed a DL model to predict subject-specific preferences from EEG data. In each trial, 213 subjects observed a product’s image, out of 72 possible products, and then reported how much they were willing to pay (WTP) for it. The DL model used EEG recordings from product observation to predict the corresponding reported WTP values. Our results showed 75.09% accuracy in predicting high vs. low WTP, surpassing other models and a manual feature extraction approach. Meanwhile, network visualizations revealed the predictive frequencies of neural activity and their scalp distributions, shedding light on the neural mechanism involved in evaluation. In conclusion, we show that deep learning networks (DLNs) may be the superior method for EEG-based predictions, to the benefit of decision-making researchers and marketing practitioners alike.
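The prediction target above is a binary high-vs.-low WTP class derived from the continuous reported values. The exact binarization scheme is not stated in the abstract; the sketch below assumes a median split, which is one common choice, purely for illustration.

```python
import numpy as np

def wtp_labels(wtp):
    """Binarize reported willingness-to-pay into high (1) / low (0)
    classes via a median split -- an assumed labeling scheme, not
    necessarily the one used in the paper."""
    return (wtp > np.median(wtp)).astype(int)

# Toy WTP reports for six trials of one subject.
wtp = np.array([1.0, 3.5, 2.0, 8.0, 6.5, 0.5])
print(wtp_labels(wtp))  # → [0 1 0 1 1 0]
```

A median split guarantees roughly balanced classes per subject, which matters when accuracy (here 75.09%) is the headline metric, since a 50/50 baseline makes that figure directly interpretable.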


Author(s):  
Amrutha Krishnamoorthy ◽  
Vijayasimha Reddy Sindhura ◽  
Devarakonda Gowtham ◽  
C. Jyotsna ◽  
J. Amudha

Extraction of eye gaze events has been highly dependent on powerful automated software that commands exorbitant prices. The proposed open-source intelligent tool StimulEye detects and classifies eye gaze events and analyses various metrics related to those events. The eye-event detection algorithms in use today rely heavily on hand-crafted signal features and thresholding computed from the stream of raw gaze data; they leave most parametric decisions to the end user, which can result in ambiguity and inaccuracy. StimulEye uses deep learning techniques to automate eye gaze event detection, requiring neither manual decision-making nor parametric definitions. It provides an end-to-end solution that takes raw data streams from an eye tracker in text form and classifies the inputs into events, namely saccades, fixations, and blinks. It then provides the user with insights such as scanpath, fixation duration, fixation radii, etc.
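For contrast with StimulEye's learned detector, here is the kind of hand-crafted, threshold-based baseline the abstract criticizes: a velocity-threshold (I-VT-style) classifier whose key parameter is left to the user. Blink handling (gaps in the signal) is omitted for brevity; the threshold value and data are illustrative.

```python
def classify_samples(xs, ys, ts, vel_thresh=30.0):
    """Threshold-based baseline that deep models replace: label a gaze
    sample a saccade when point-to-point velocity exceeds vel_thresh,
    else a fixation. The threshold is user-chosen -- exactly the
    parametric ambiguity the abstract criticizes."""
    labels = ["fixation"]  # no velocity is defined for the first sample
    for i in range(1, len(xs)):
        dt = ts[i] - ts[i - 1]
        dist = ((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
        labels.append("saccade" if dist / dt > vel_thresh else "fixation")
    return labels

# Toy gaze stream: slow drift, a fast jump, then stillness.
xs = [0.0, 0.1, 0.2, 5.0, 10.0, 10.1]
ys = [0.0, 0.0, 0.1, 3.0, 6.0, 6.0]
ts = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05]
print(classify_samples(xs, ys, ts))
```

Running this prints three fixation samples, two saccade samples, and a final fixation; change `vel_thresh` and the segmentation changes with it, which is the parametric fragility a learned end-to-end detector avoids.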


2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Junkai Yi ◽  
Yacong Zhang ◽  
Xianghui Zhao ◽  
Jing Wan

Text clustering is an effective approach for collecting and organizing text documents into meaningful groups and mining valuable information on the Internet. However, issues such as feature extraction and dimensionality reduction remain to be tackled. To overcome these problems, we present a novel approach named the deep-learning vocabulary network. The vocabulary network is constructed from a related-word set that captures the “co-occurrence” relations of words or terms. We replace term frequency in feature vectors with the “importance” of words, computed from the vocabulary network via PageRank, which generates more precise feature vectors for representing texts in clustering. Furthermore, a sparse-group deep belief network is proposed to reduce the dimensionality of the feature vectors, and we introduce the coverage rate as the similarity measure in Single-Pass clustering. To verify the effectiveness of our work, we compare the approach with representative algorithms; experimental results show that feature vectors derived from the deep-learning vocabulary network yield better clustering performance.
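The core replacement described above, PageRank scores over a word co-occurrence graph instead of raw term frequency, can be sketched with plain power iteration. The tiny graph, the damping factor 0.85, and the iteration count are illustrative defaults, not values from the paper.

```python
import numpy as np

def pagerank(adj, d=0.85, iters=50):
    """Power-iteration PageRank over a word co-occurrence graph;
    the resulting scores replace term frequency as feature weights."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                    # guard against dangling nodes
    M = (adj / out).T                    # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r)
    return r

# Toy co-occurrence graph over 4 terms (symmetric co-occurrence edges).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
scores = pagerank(adj)
print(scores.argmax())  # → 1: the term that co-occurs with the most terms
```

A document's feature vector then carries `scores[w]` for each term `w` it contains, so a rare but well-connected term can outweigh a frequent but isolated one, which is the intuition behind swapping frequency for graph importance.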


2020 ◽  
Vol 39 (4) ◽  
pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes shortcomings of traditional online and offline teaching, but real-time feature extraction for teachers and students remains deficient. In view of this, this study uses particle swarm image recognition and deep learning technology to process intelligent classroom video, extracting classroom task features in real time and sending them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved multiple-particle-swarm strategy is proposed: the algorithm is combined with useful attributes of other algorithms to increase particle diversity, enhance the particles’ global search ability, and achieve effective feature extraction. The research indicates that the proposed method has practical effect and can provide a theoretical reference for subsequent related research.
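For context on what the multi-swarm variant improves upon, here is a minimal standard PSO minimizing a test function. The inertia and acceleration coefficients are common textbook defaults, and the sphere objective is a stand-in; the paper's actual multi-swarm diversity mechanism is not reproduced here.

```python
import random

def pso(f, dim=2, n=20, iters=100, seed=1):
    """Minimal standard PSO baseline (the paper's contribution is a
    multi-swarm variant with extra diversity; this shows only the
    single-swarm algorithm it improves on)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(sphere(best))  # a value very close to the optimum 0 at the origin
```

Premature convergence shows up here when all particles collapse onto `gbest` early; the multi-swarm strategy in the paper counters this by keeping several swarms exploring in parallel to preserve diversity.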


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data approaches are broadly helpful to the healthcare and biomedical sectors for predicting disease. For trivial symptoms, it is difficult to meet a doctor in the hospital at any time; big data can instead provide essential information about diseases on the basis of a patient’s symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions, whereas the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improvised deep learning concept. Datasets pertaining to “diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson’s disease, and Alzheimer’s disease” are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, each dataset is normalized to bring the attributes into a common range. Then, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to create large-scale deviation. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms, a “Deep Belief Network (DBN) and Recurrent Neural Network (RNN)”; as a modification to the hybrid architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Comparative evaluation of the proposed prediction against existing models certifies its effectiveness through various performance measures.


2019 ◽  
Vol 33 (3) ◽  
pp. 89-109 ◽  
Author(s):  
Ting (Sophia) Sun

SYNOPSIS This paper aims to promote the application of deep learning to audit procedures by illustrating how the capabilities of deep learning for text understanding, speech recognition, visual recognition, and structured data analysis fit into the audit environment. Based on these four capabilities, deep learning serves two major functions in supporting audit decision making: information identification and judgment support. The paper proposes a framework for applying these two deep learning functions to a variety of audit procedures in different audit phases. An audit data warehouse of historical data can be used to construct prediction models, providing suggested actions for various audit procedures. The data warehouse will be updated and enriched with new data instances through the application of deep learning and a human auditor's corrections. Finally, the paper discusses the challenges faced by the accounting profession, regulators, and educators when it comes to applying deep learning.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Steven A. Hicks ◽  
Jonas L. Isaksen ◽  
Vajira Thambawita ◽  
Jonas Ghouse ◽  
Gustav Ahlberg ◽  
...  

Deep learning-based tools may annotate and interpret medical data more quickly, consistently, and accurately than medical doctors. However, as medical doctors are ultimately responsible for clinical decision-making, any deep learning-based prediction should be accompanied by an explanation that a human can understand. We present an approach called the electrocardiogram gradient class activation map (ECGradCAM), which generates attention maps that explain the reasoning behind deep learning-based decision-making in ECG analysis. Attention maps may be used in the clinic to aid diagnosis, discover new medical knowledge, and identify novel features and characteristics of medical tests. In this paper, we showcase how ECGradCAM attention maps can unmask how a novel deep learning model measures both amplitudes and intervals in 12-lead electrocardiograms, and we show an example of how attention maps may be used to develop novel ECG features.
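The attention-map computation follows the standard Grad-CAM recipe adapted to 1-D signals: weight each convolutional feature map by its average gradient, sum, and rectify. The sketch below applies that recipe to hand-made activation and gradient arrays (no real network is involved); ECGradCAM's exact formulation may differ in detail.

```python
import numpy as np

def grad_cam_1d(activations, gradients):
    """Grad-CAM for a 1-D ECG model: weight each feature map by its
    average gradient, sum across channels, keep the positive part,
    and rescale -- yielding a per-timestep attention curve.
    Both inputs have shape (channels, timesteps)."""
    weights = gradients.mean(axis=1)                       # one weight per channel
    cam = np.maximum((weights[:, None] * activations).sum(axis=0), 0)
    return cam / cam.max() if cam.max() > 0 else cam

# Hand-made activations/gradients for 2 channels over 4 timesteps.
acts = np.array([[0.0, 1.0, 4.0, 1.0],
                 [1.0, 0.0, 2.0, 0.0]])
grads = np.array([[0.2, 0.2, 0.2, 0.2],
                  [0.1, 0.1, 0.1, 0.1]])
print(grad_cam_1d(acts, grads))  # → [0.1 0.2 1.  0.2]: attention peaks at timestep 2
```

Overlaying such a curve on the original 12-lead trace is what lets a reader verify that the model attends to the clinically relevant amplitudes and intervals rather than to artifacts.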

