Sportive Fashion Trend Reports: A Hybrid Style Analysis Based on Deep Learning Techniques

2021 ◽  
Vol 13 (17) ◽  
pp. 9530
Author(s):  
Hyosun An ◽  
Sunghoon Kim ◽  
Yerim Choi

This study aimed to use quantitative methods and deep learning techniques to report sportive fashion trends. We collected sportive fashion images from fashion collections of the past decades and utilized the multi-label graph convolutional network (ML-GCN) model to detect and explore hybrid styles. Based on the literature review, we proposed a theoretical framework to investigate sportive fashion trends. The ML-GCN was designed to classify five style categories, “street,” “retro,” “sexy,” “modern,” and “sporty,” and the predictive probabilities of the five styles were extracted for each fashion image. We statistically validated the hybrid-style results derived from the ML-GCN model and suggested an application method for deep learning-based trend reports in the fashion industry. This study reported sportive fashion trends in terms of hybrid-style dependency, forecasting, and brand clustering. We visualized the predicted probabilities of hybrid styles on a three-dimensional scale, which is expected to help designers and researchers in the field of fashion achieve digital design innovation in cooperation with deep learning techniques.
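The multi-label setup the abstract describes can be sketched in a few lines. The following is a minimal numpy illustration, not the ML-GCN itself; the per-style logits are assumed to come from some trained classifier and are invented here:

```python
import numpy as np

# The five style categories from the abstract.
STYLES = ["street", "retro", "sexy", "modern", "sporty"]

def style_probabilities(logits):
    """Convert per-style logits to independent probabilities.

    In a multi-label setup such as ML-GCN, each style gets its own
    sigmoid, so the probabilities need not sum to one and an image
    can score highly on several styles at once (a hybrid style).
    """
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return dict(zip(STYLES, probs))

# Toy logits: an image that reads as both "street" and "sporty".
hybrid = style_probabilities([2.0, -1.0, -2.0, 0.1, 1.5])
```

Because the sigmoids are independent, two or more styles can exceed any chosen threshold simultaneously, which is exactly what makes the hybrid-style analysis possible.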

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3046
Author(s):  
Shervin Minaee ◽  
Mehdi Minaei ◽  
Amirali Abdolrashidi

Facial expression recognition has been an active area of research over the past few decades, and it remains challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique to find the important facial regions for detecting different emotions, based on the classifier’s output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
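The core mechanism, reweighting convolutional features by a spatial attention mask, can be sketched in plain numpy. This is a toy illustration rather than the paper's network; the attention scores would normally come from a learned sub-network but are hand-set here:

```python
import numpy as np

def apply_spatial_attention(features, scores):
    """Weight a (H, W, C) feature map by a spatial attention mask.

    A softmax over all spatial positions turns the unnormalised
    (H, W) relevance scores into a mask that emphasises informative
    regions (e.g. eyes, mouth) and suppresses the rest.
    """
    flat = scores.ravel() - scores.max()          # numerical stability
    mask = np.exp(flat) / np.exp(flat).sum()
    mask = mask.reshape(scores.shape)[..., None]  # broadcast over channels
    return features * mask

# Toy example: all-ones features, attention peaked at one location.
feats = np.ones((4, 4, 8))
scores = np.zeros((4, 4))
scores[1, 2] = 10.0
attended = apply_spatial_attention(feats, scores)
```

After the weighting, almost all of the feature mass sits at the attended location, which is the effect the visualization technique in the abstract makes visible per emotion.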


Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 37 ◽  
Author(s):  
Luca Cappelletti ◽  
Tommaso Fontana ◽  
Guido Walter Di Donato ◽  
Lorenzo Di Tucci ◽  
Elena Casiraghi ◽  
...  

Missing data imputation has been a hot topic in the past decade, and many state-of-the-art works have presented novel, interesting solutions that have been applied in a variety of fields. In the past decade, the successful results achieved by deep learning techniques have opened the way to their application to difficult problems where human skill cannot provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing data imputation. However, most of the proposed imputation techniques have not been designed to tackle “complex data”, that is, high-dimensional data belonging to datasets with huge cardinality and describing complex problems. Specifically, they often need critical parameters to be set manually, or they exploit complex architectures and/or training phases that make their computational load impractical. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data imputation proposals, which exploit deep learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences.
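Encoder-decoder imputers of the kind mentioned above are typically trained in a denoising fashion: observed entries are artificially removed, the network reconstructs them, and the loss is scored only on the removed positions. A minimal numpy sketch of that batch construction follows; the function name and the zero-fill convention are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_imputation_batch(complete_rows, missing_rate=0.3):
    """Build (corrupted, target, mask) triples for a denoising imputer.

    The encoder-decoder sees `corrupted` (observed data with artificial
    gaps zeroed out) and is trained to reconstruct `target`; the loss
    is evaluated only where `mask` is True, i.e. on the entries that
    were artificially removed.
    """
    mask = rng.random(complete_rows.shape) < missing_rate
    corrupted = np.where(mask, 0.0, complete_rows)
    return corrupted, complete_rows, mask

X = rng.normal(size=(8, 5))          # stand-in for complete training rows
corrupted, target, mask = make_imputation_batch(X)
```

At inference time, the genuinely missing entries play the role of the artificial gaps, and the decoder's output at those positions becomes the imputed value.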


2019 ◽  
Vol 9 (18) ◽  
pp. 3698 ◽  
Author(s):  
Shanshan Liu ◽  
Xin Zhang ◽  
Sheng Zhang ◽  
Hui Wang ◽  
Weiming Zhang

Machine reading comprehension (MRC), which requires a machine to answer questions based on a given context, has attracted increasing attention with the incorporation of various deep-learning techniques over the past few years. Although research on MRC based on deep learning is flourishing, there remains a lack of a comprehensive survey summarizing existing approaches and recent trends, which motivated the work presented in this article. Specifically, we give a thorough review of this research field, covering different aspects including (1) typical MRC tasks: their definitions, differences, and representative datasets; (2) the general architecture of neural MRC: the main modules and prevalent approaches to each; and (3) new trends: some emerging areas in neural MRC as well as the corresponding challenges. Finally, considering what has been achieved so far, the survey also envisages what the future may hold by discussing the open issues left to be addressed.


2020 ◽  
Author(s):  
Haojie Wang ◽  
Limin Zhang

<p>Landslide detection is an essential component of landslide risk assessment and hazard mitigation. It can be used to produce landslide inventories, which are considered one of the fundamental auxiliary data sources for regional landslide susceptibility analysis. To achieve high landslide interpretation accuracy, visual interpretation is frequently used, but it is time-consuming and labour-intensive. Hence, an automatic landslide detection method utilizing deep learning techniques is implemented in this work to conduct high-accuracy, fast landslide interpretation. As ground characteristics and terrain features can precisely capture the three-dimensional form of landslides, a high-resolution digital terrain model (DTM) is taken as the data source for landslide detection. A case study in Hong Kong, China is conducted to validate the applicability of deep learning techniques to landslide detection. The case study takes multiple data layers derived from the DTM (e.g., elevation, slope gradient, aspect) and a local landslide inventory named the enhanced natural terrain landslide inventory (ENTLI) as its data sources, and integrates them into a database for learning. Then, a deep learning technique (a convolutional neural network) is used to train models on the database and perform landslide detection. The results of the case study show the great performance and capacity of the applied deep learning techniques, providing a valuable reference for advancing landslide detection.</p>
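As a rough illustration of the data preparation step described above, the sketch below stacks DTM-derived rasters into a multi-channel input for a CNN. The per-layer min-max normalisation is an assumption made for the sketch, not a detail taken from the study:

```python
import numpy as np

def stack_dtm_layers(elevation, slope, aspect):
    """Stack DTM-derived rasters into one multi-channel image.

    Each layer is min-max normalised independently so that channels
    with different units (metres, degrees) share a common [0, 1]
    scale before being fed to a convolutional network.
    """
    channels = []
    for layer in (elevation, slope, aspect):
        layer = np.asarray(layer, dtype=float)
        lo, hi = layer.min(), layer.max()
        scaled = (layer - lo) / (hi - lo) if hi > lo else np.zeros_like(layer)
        channels.append(scaled)
    return np.stack(channels, axis=-1)   # (H, W, 3), one channel per layer

# Toy 4x4 rasters standing in for real DTM derivatives.
elevation = np.arange(16.0).reshape(4, 4)
slope = np.linspace(0.0, 90.0, 16).reshape(4, 4)
aspect = np.linspace(0.0, 360.0, 16).reshape(4, 4)
cube = stack_dtm_layers(elevation, slope, aspect)
```

The resulting (H, W, channels) cube is the same shape a CNN expects for an RGB image, which is what lets standard image architectures be reused on terrain data.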


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 157
Author(s):  
Saidrasul Usmankhujaev ◽  
Bunyodbek Ibrokhimov ◽  
Shokhrukh Baydadaev ◽  
Jangwoo Kwon

Deep neural networks (DNNs) have proven to be efficient in computer vision and data classification, with an increasing number of successful applications. Time series classification (TSC) has been one of the challenging problems in data mining in the last decade, and significant research has proposed various solutions, including algorithm-based approaches as well as machine and deep learning approaches. This paper focuses on combining two well-known deep learning techniques, namely the Inception module and the Fully Convolutional Network. The proposed method proved to be more efficient than the previous state-of-the-art InceptionTime method. We tested our model on the univariate TSC benchmark (the UCR/UEA archive), which includes 85 time-series datasets, and showed that our network outperforms InceptionTime in terms of training time and overall accuracy on the UCR archive.
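The Inception idea the paper builds on, applying filters with several receptive-field sizes in parallel and concatenating the results as channels, can be sketched in plain numpy. The moving-average kernels below are stand-ins for learned filters, and the kernel sizes are arbitrary choices for the example:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1-D 'valid' correlation of a series with a kernel."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def inception_block(x, kernel_sizes=(3, 5, 9)):
    """Run filters of several receptive-field sizes in parallel and
    concatenate the results as channels, the idea behind the Inception
    module. Zero padding keeps every branch the same length as the
    input; moving-average kernels stand in for learned filters.
    """
    branches = []
    for k in kernel_sizes:               # odd sizes so 'same' padding works
        pad = k // 2
        padded = np.pad(x, (pad, pad))
        branches.append(conv1d_valid(padded, np.ones(k) / k))
    return np.stack(branches, axis=-1)   # (len(x), n_branches)

series = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))
out = inception_block(series)
```

Each branch sees the series at a different temporal scale; a classification head on top of the concatenated channels is what distinguishes short, sharp patterns from long, slow ones.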


2022 ◽  
Vol 3 (4) ◽  
pp. 322-335
Author(s):  
C. R. Nagarathna ◽  
M. Kusuma

Over the past decade, deep learning techniques have been widely used in research, and the objectives of various applications have been achieved using them. In the medical field, deep learning helps to discover medicines and diagnose diseases. Alzheimer’s is a physical brain disease on which much recent research has been conducted to develop an efficient model that diagnoses its early stages. In this paper, a hybrid model is proposed, which is a combination of VGG19 with additional layers and a CNN deep learning model, for detecting and classifying the different stages of Alzheimer’s, and its performance is compared with that of the CNN model. Magnetic resonance images obtained from a Kaggle dataset are used to evaluate both models. The results show that the hybrid model works efficiently in detecting and classifying the different stages of Alzheimer’s.


2021 ◽  
Vol 40 ◽  
pp. 03030
Author(s):  
Mehdi Surani ◽  
Ramchandra Mangrulkar

Over the past years, the exponential growth of social media usage has given every individual the power to share their opinions freely. This has led to numerous threats, as users exploit their freedom of speech by spreading hateful comments, using abusive language, carrying out personal attacks, and sometimes even engaging in cyberbullying. Determining abusive content is not a difficult task, and many social media platforms already have solutions available; at the same time, many are searching for more efficient ways and solutions to overcome this issue. Traditional approaches explore machine learning models to identify negative content posted on social media: shaming categories are defined, and content is labelled accordingly. Such categorization is easy to detect because the contextual language used is direct. However, the use of irony to mock or convey contempt is also a part of public shaming and must be considered while categorizing the shaming labels. In this research paper, various shaming types, namely toxic, severe toxic, obscene, threat, insult, identity hate, and sarcasm, are predicted using deep learning approaches such as CNN and LSTM. These models have been studied along with traditional models to determine which model gives the most accurate results.


2021 ◽  
Vol 12 (25) ◽  
pp. 85
Author(s):  
Giacomo Patrucco ◽  
Francesco Setragno

<p class="VARAbstract">Digitisation processes for movable heritage are becoming increasingly popular for documenting the artworks stored in our museums. A growing number of strategies for the three-dimensional (3D) acquisition and modelling of these invaluable assets have been developed in the last few years. Their objective is to respond efficiently to this documentation need and to deepen the knowledge of the masterpieces constantly investigated by researchers operating in many fields. Nowadays, one of the most effective solutions is the development of image-based techniques, usually connected to a Structure-from-Motion (SfM) photogrammetric approach. However, while image acquisition is relatively rapid, data processing is very time-consuming and requires the operator’s substantial manual involvement. Developing deep learning-based strategies can be an effective way to enhance the level of automation. In this research, carried out in the framework of the digitisation of a wooden maquette collection stored in the ‘Museo Egizio di Torino’ using a photogrammetric approach, an automatic masking strategy based on deep learning techniques is proposed to increase the level of automation and, therefore, optimise the photogrammetric pipeline. Starting from a manually annotated dataset, a neural network was trained to perform a semantic classification automatically, isolating the maquettes from the background. The proposed methodology allowed the researchers to obtain automatically segmented masks with a high degree of accuracy. The workflow is described (as regards acquisition strategies, dataset processing, and neural network training), and the accuracy of the results is evaluated and discussed.
Finally, the researchers proposed the possibility of performing a multiclass segmentation on the digital images to recognise different object categories in the images, as well as to define a semantic hierarchy to perform automatic classification of different elements in the acquired images.</p><p><strong>Highlights:</strong></p><ul><li><p>In the framework of movable heritage digitisation processes, many procedures are very time-consuming, and they still require the operator’s substantial manual involvement.</p></li><li><p>This research proposes using deep learning techniques to enhance the automatism level in the generation of exclusion masks, improving the optimisation of the photogrammetric procedures.</p></li><li><p>Following this strategy, the possibility of performing a multiclass semantic segmentation (on the 2D images and, consequently, on the 3D point cloud) is also discussed, considering the accuracy of the obtainable results.</p></li></ul>
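A minimal sketch of how such an exclusion mask is applied once the network has produced it; the function name and the zero-background convention are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def apply_exclusion_mask(image, mask):
    """Blank the background of an (H, W, 3) image using a boolean
    (H, W) segmentation mask (True where the object was detected),
    so that photogrammetric matching ignores background pixels.
    """
    return image * mask[..., None].astype(image.dtype)

# Toy example: a uniform image and a mask keeping two pixels.
img = np.full((2, 2, 3), 7, dtype=np.uint8)
mask = np.array([[True, False], [False, True]])
masked = apply_exclusion_mask(img, mask)
```

In a real SfM pipeline the mask is usually passed to the photogrammetric software alongside the image rather than baked into the pixels, but the effect, excluding background regions from feature matching, is the same.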


2022 ◽  
Vol 11 (1) ◽  
pp. 45
Author(s):  
Xuanming Fu ◽  
Zhengfeng Yang ◽  
Zhenbing Zeng ◽  
Yidan Zhang ◽  
Qianting Zhou

Deep learning techniques have been successfully applied in handwriting recognition. Oracle bone inscriptions (OBI) are the earliest hieroglyphs in China and valuable resources for studying the etymology of Chinese characters. OBI are of important historical and cultural value in China; thus, textual research surrounding the characters of OBI is a huge challenge for archaeologists. In this work, we built a dataset named OBI-100, which contains 100 classes of oracle bone inscriptions collected from two OBI dictionaries. The dataset includes more than 128,000 character samples related to the natural environment, humans, animals, plants, etc. In addition, we propose improved models based on three typical deep convolutional network structures to recognize the OBI-100 dataset. By modifying the parameters, adjusting the network structures, and adopting optimization strategies, we demonstrate experimentally that these models perform fairly well in OBI recognition. For the 100-category OBI classification task, the optimal model achieves an accuracy of 99.5%, which shows competitive performance compared with other state-of-the-art approaches. We hope that this work can provide a valuable tool for character recognition of OBI.
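As a small, standard illustration of how a figure like the reported 99.5% is computed, here is a top-1 accuracy sketch in numpy; the toy logits are invented for the example:

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Top-1 accuracy: the predicted class is the argmax over the
    class logits, compared against the ground-truth labels.
    """
    preds = np.argmax(np.asarray(logits), axis=1)
    return float(np.mean(preds == np.asarray(labels)))

# Toy 3-class example (a 100-class task like OBI-100 would simply
# use 100 logit columns instead of 3).
logits = np.array([[0.1, 2.0, -1.0],
                   [3.0, 0.0, 0.5]])
acc = top1_accuracy(logits, [1, 0])
```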


2019 ◽  
Author(s):  
Lu Liu ◽  
Ahmed Elazab ◽  
Baiying Lei ◽  
Tianfu Wang

BACKGROUND Echocardiography has a pivotal role in the diagnosis and management of cardiovascular diseases since it is real-time, cost-effective, and non-invasive. The development of artificial intelligence (AI) techniques has led to more intelligent and automatic computer-aided diagnosis (CAD) systems in echocardiography over the past few years. Automatic CAD mainly includes classification, detection of anatomical structures, tissue segmentation, and disease diagnosis, which are mainly accomplished by machine learning techniques and recently developed deep learning techniques. OBJECTIVE This review aims to provide a guide for researchers and clinicians on relevant aspects of AI, machine learning, and deep learning. In addition, we review recent applications of these methods in echocardiography and identify how echocardiography could incorporate AI in the future. METHODS This paper first gives an overview of machine learning and deep learning. Second, it reviews the current use of AI in echocardiography by searching the literature in the main databases over the past 10 years, and finally it discusses potential limitations and challenges for the future. RESULTS AI has shown promising improvements, advancing the analysis and interpretation of echocardiography to a new stage in the fields of standard-view detection, automated analysis of chamber size and function, and assessment of cardiovascular diseases. CONCLUSIONS Compared with machine learning, deep learning methods have achieved state-of-the-art performance across different applications in echocardiography. Although there are challenges, such as the large datasets required, AI can provide satisfactory results by devising various strategies. We believe AI has the potential to improve the accuracy of diagnosis, reduce time consumption, and decrease the workload of cardiologists.

