Dance Movement Recognition Based on Feature Expression and Attribute Mining

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Xianfeng Zhai

Dance movements involve complex posture changes, which lead to low accuracy in dance movement recognition. Moreover, current motion recognition methods make no use of the dancer's attributes, even though attribute features are important high-level semantic information for action recognition. Therefore, a dance movement recognition algorithm based on feature expression and attribute mining is designed to learn complicated and changeable dancer movements. First, the original image information is compressed by a time-domain fusion module so that action and posture information can be expressed completely. Next, a two-way feature extraction network is designed, which takes the image sequence as input and extracts the details of the actions along each path. Then, to enhance the expressive power of attribute features, a multibranch spatial channel attention integration module (MBSC) based on an attention mechanism is designed to extract the features of each attribute. Finally, using the semantic inference and information transfer capabilities of a graph convolutional network, the relationships between attribute features and dancer features are mined and more expressive action features are obtained; thus, high-performance dance motion recognition is realized. Test and analysis results on the data set show that the algorithm recognizes dance movements and improves recognition accuracy effectively, thereby realizing a movement correction function for the dancer.
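For intuition, the sketch below shows a generic spatial-channel attention branch in the spirit of the MBSC module described above. The class name, layer sizes, and reduction ratio are illustrative assumptions, not the author's implementation.

```python
# A minimal sketch of a spatial-channel attention branch; names and
# layer sizes are illustrative assumptions, not the paper's MBSC module.
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per-channel weights.
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: compress channels into a per-pixel weight map.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_fc(x)       # reweight channels
        return x * self.spatial_conv(x)  # reweight spatial locations

features = torch.randn(4, 64, 32, 32)    # toy batch of feature maps
attended = SpatialChannelAttention(64)(features)
```

In a multibranch design, one such branch per attribute can be applied to a shared feature map, so each attribute gets its own attention-weighted representation.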

Author(s):  
C. Sauer ◽  
F. Bagusat ◽  
M.-L. Ruiz-Ripoll ◽  
C. Roller ◽  
M. Sauer ◽  
...  

This work aims at the characterization of a modern concrete material. For this purpose, we perform two experimental series of inverse planar plate impact (PPI) tests with the ultra-high performance concrete B4Q, using two different witness plate materials. Hugoniot data in the range of particle velocities from 180 to 840 m/s and stresses from 1.1 to 7.5 GPa are derived from both series; within the experimental accuracy, the two series can be seen as one consistent data set. Moreover, we conduct corresponding numerical simulations and find a reasonably good agreement between simulated and experimentally obtained curves. From the simulated curves, we derive numerical Hugoniot results that serve as a homogenized, mean shock response of B4Q and add further consistency to the data set. Additionally, the comparison of simulated and experimentally determined results allows us to identify experimental outliers. Furthermore, we perform a parameter study which shows that a significant influence of the applied pressure-dependent strength model on the derived equation of state (EOS) parameters is unlikely. In order to compare the current results to our own partially reevaluated previous work and selected recent results from the literature, we use simulations to numerically extrapolate the Hugoniot results. Considering their inhomogeneous nature, a consistent picture emerges for the shock response of the discussed concrete and high-strength mortar materials. Hugoniot results from this and earlier work are presented for further comparisons. In addition, a full parameter set for B4Q, including validated EOS parameters, is provided for the application in simulations of impact and blast scenarios.
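For context, Hugoniot states from PPI experiments are commonly reduced via the standard Rankine–Hugoniot jump conditions together with a linear shock-velocity fit. The textbook form below is given for orientation only and is not specific to this paper's reduction:

```latex
% Stress from conservation of mass and momentum across the shock front:
\sigma = \rho_0 \, U_s \, u_p
% Common linear shock-velocity / particle-velocity Hugoniot fit:
U_s = c_0 + s \, u_p
% where \rho_0 is the initial density, U_s the shock velocity,
% u_p the particle velocity, and c_0, s are fitted EOS parameters.
```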


2021 ◽  
pp. 016555152110184
Author(s):  
Gunjan Chandwani ◽  
Anil Ahlawat ◽  
Gaurav Dubey

Document retrieval plays an important role in knowledge management as it helps us discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing is performed to remove unnecessary and redundant words from the documents. Then, the documents are indexed by the cluster-based inverted indexing algorithm, which is developed by integrating the piecewise fuzzy C-means (piFCM) clustering algorithm with inverted indexing. After the documents are indexed, query matching is performed for the user queries using the Bhattacharyya distance. Finally, query optimisation is performed using the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB and Twenty Newsgroups data sets. The analysis shows that the proposed algorithm offers high performance, with a precision of 1, recall of 0.70 and F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storing and retrieval of information.
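As a rough illustration of the query-matching step, here is a minimal sketch of the Bhattacharyya distance over normalized term-frequency vectors. Representing queries and documents as term-frequency distributions is an assumption for illustration, not the paper's exact pipeline.

```python
# A minimal sketch of query-document matching with the Bhattacharyya
# distance over normalized term-frequency distributions (illustrative only).
import numpy as np

def bhattacharyya_distance(p: np.ndarray, q: np.ndarray) -> float:
    p = p / p.sum()                      # normalize to probability vectors
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))       # distance; guard against log(0)

doc = np.array([3.0, 1.0, 0.0, 2.0])     # toy term counts for a document
query = np.array([1.0, 1.0, 0.0, 1.0])   # toy term counts for a query
print(bhattacharyya_distance(doc, query))  # smaller = better match
```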


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively relying on high-cost hardware would likely hamper the broader development of deep learning methods. In this article, we thus establish a new preprocessing method to reduce the computational complexity of the neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically into a noninteracting physical system and then treat image voxels as particle-like clusters. We then reconstruct the Fermi–Dirac distribution as a correction function for the normalization of the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of the computational time cost on a low-cost hardware architecture. Even though global histogram equalization has the lowest computational time among the employed correction functions, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
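To make the idea concrete, the sketch below applies a Fermi–Dirac-shaped (logistic) correction to voxel intensities and filters weak components. The sign convention, the pivot mu, the "temperature" T, and the threshold are all illustrative assumptions, not the paper's parameter choices.

```python
# A minimal sketch of a Fermi-Dirac-shaped intensity correction for image
# voxels; mu, T, and the filter threshold are illustrative assumptions.
import numpy as np

def fermi_dirac_correction(voxels: np.ndarray, mu: float, T: float) -> np.ndarray:
    # Logistic (Fermi-Dirac-shaped) mapping: intensities far below mu are
    # suppressed toward 0, intensities far above mu saturate toward 1.
    return 1.0 / (np.exp(-(voxels - mu) / T) + 1.0)

volume = np.random.rand(8, 64, 64) * 100.0   # toy MRI-like volume
normalized = fermi_dirac_correction(volume, mu=float(np.median(volume)), T=10.0)
# Insignificant components below a threshold can then be filtered out:
filtered = np.where(normalized > 0.05, normalized, 0.0)
```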


2018 ◽  
Vol 10 (8) ◽  
pp. 80
Author(s):  
Lei Zhang ◽  
Xiaoli Zhi

Convolutional neural networks (CNN for short) have made great progress in face detection. They mostly take computation-intensive networks as the backbone in order to obtain high precision, so they cannot achieve good detection speed without the support of high-performance GPUs (Graphics Processing Units). This limits CNN-based face detection algorithms in real applications, especially speed-dependent ones. To alleviate this problem, we propose a lightweight face detector in this paper, which takes a fast residual network as its backbone. Our method runs fast even on cheap, ordinary GPUs. To guarantee its detection precision, multi-scale features and multi-context are fully exploited in efficient ways. Specifically, feature fusion is first used to obtain semantically strong multi-scale features. Then multi-context, including both local and global context, is added to these multi-scale features without extra computational burden. The local context is added through a depthwise-separable-convolution-based approach, and the global context through simple global average pooling. Experimental results show that our method runs at about 110 fps on VGA (Video Graphics Array)-resolution images, while still maintaining competitive precision on the WIDER FACE and FDDB (Face Detection Data Set and Benchmark) datasets compared with its state-of-the-art counterparts.
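The global-context idea can be sketched very compactly: pool the whole feature map to a single descriptor and broadcast it back. The module name and the 1x1 projection are illustrative assumptions, not the paper's exact layers.

```python
# A minimal sketch of injecting global context into a feature map via
# global average pooling; names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GlobalContext(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.proj = nn.Conv2d(channels, channels, 1)  # cheap 1x1 projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the pooled global descriptor back over all locations.
        return x + self.proj(self.pool(x))

fmap = torch.randn(2, 128, 40, 40)
enriched = GlobalContext(128)(fmap)   # same shape, now with global context
```

Because the pooled branch operates on a 1x1 map, its cost is negligible, which matches the abstract's claim of adding context without extra computational burden.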


2021 ◽  
Vol 10 (7) ◽  
pp. 436
Author(s):  
Amerah Alghanim ◽  
Musfira Jilani ◽  
Michela Bertolotto ◽  
Gavin McArdle

Volunteered Geographic Information (VGI) is often collected by non-expert users. This raises concerns about the quality and veracity of such data. There has been much effort to understand and quantify the quality of VGI. Extrinsic measures, which compare VGI to authoritative data sources such as National Mapping Agencies, are common, but the cost and slow update frequency of such data hinder the task. On the other hand, intrinsic measures, which compare the data to heuristics or models built from the VGI data itself, are becoming increasingly popular. Supervised machine learning techniques are particularly suitable for intrinsic measures of quality, as they can infer and predict the properties of spatial data. In this article we are interested in assessing the quality of semantic information, such as the road type, associated with data in OpenStreetMap (OSM). We have developed a machine learning approach which utilises new intrinsic input features collected from the VGI dataset. Specifically, using our proposed novel approach we obtained an average classification accuracy of 84.12%. This result outperforms existing techniques on the same semantic inference task. The trustworthiness of the data used for developing and training machine learning models is also important. To address this issue we have developed a new trustworthiness measure using direct and indirect characteristics of OSM data, such as its edit history, along with an assessment of the users who contributed the data. An evaluation shows that data determined to be trustworthy by the new measure improves the prediction accuracy of our machine learning technique. Specifically, our results demonstrate that the classification accuracy of our model is 87.75% when applied to a trusted dataset and 57.98% when applied to an untrusted dataset. Consequently, such results can be used to assess the quality of OSM and suggest improvements to the data set.
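The overall shape of such a semantic-inference pipeline can be sketched in a few lines. The feature set, labels, and classifier below are illustrative assumptions, not the authors' actual features or model.

```python
# A minimal sketch of supervised road-type inference from intrinsic
# per-segment features; features and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Toy intrinsic features per road segment, e.g. length, connectivity,
# edit count, contributor count (placeholders, not the paper's features).
X = rng.random((1000, 4))
y = rng.integers(0, 3, size=1000)     # toy road-type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```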


2021 ◽  
Author(s):  
Leonardo Mingari ◽  
Andrew Prata ◽  
Federica Pardini

Modelling atmospheric dispersion and deposition of volcanic ash is becoming increasingly valuable for understanding the potential impacts of explosive volcanic eruptions on infrastructure, air quality and aviation. The generation of high-resolution forecasts depends on the accuracy and reliability of the input data for models. Uncertainties in key parameters, such as eruption column injection height, the physical properties of particles, or meteorological fields, represent a major source of error in forecasting airborne volcanic ash. The availability of near-real-time geostationary satellite observations with high spatial and temporal resolution provides the opportunity to improve forecasts in an operational context. Data assimilation (DA) is one of the most effective ways to reduce forecast error through the incorporation of available observations into numerical models. Here we present a new implementation of an ensemble-based data assimilation system based on the coupling between the FALL3D dispersal model and the Parallel Data Assimilation Framework (PDAF). The implementation is based on the latest release of FALL3D (version 8.x), which has been redesigned and rewritten from scratch for extreme-scale computing requirements in the framework of the EU Center of Excellence for Exascale in Solid Earth (ChEESE). The proposed methodology can be efficiently deployed in an operational environment by exploiting high-performance computing (HPC) resources. The FALL3D+PDAF system can be run in parallel and supports online-coupled DA, which allows efficient information transfer through parallel communication. Satellite-retrieved data from recent volcanic eruptions were used as input observations for the assimilation system.
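For intuition, the core of an ensemble-based DA update is the analysis step of an ensemble Kalman filter. The toy sketch below shows a stochastic EnKF update under simplified assumptions (a linear observation operator H and Gaussian errors); it is conceptual only and not the FALL3D+PDAF code.

```python
# A toy stochastic ensemble Kalman filter analysis step (conceptual sketch;
# not the actual PDAF implementation).
import numpy as np

def enkf_analysis(X: np.ndarray, y: np.ndarray, H: np.ndarray, R: np.ndarray,
                  rng: np.random.Generator) -> np.ndarray:
    # X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations.
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                     # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Perturb observations for each member (stochastic EnKF).
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                    # analysis ensemble

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 20))      # toy ash-concentration state ensemble
H = np.eye(3, 10)                  # observe first 3 state components
R = 0.1 * np.eye(3)                # observation error covariance
X_analysis = enkf_analysis(X, rng.normal(size=3), H, R, rng)
```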


2021 ◽  
Author(s):  
Oliver Stenzel ◽  
Robin Thor ◽  
Martin Hilchenbach

Orbital laser altimeters deliver a plethora of data that is used to map planetary surfaces [1] and to understand the interiors of solar system bodies [2]. The accuracy and precision of laser altimetry measurements depend on the knowledge of spacecraft position and pointing and on the instrument. Both are important for the retrieval of tidal parameters. In order to assess the quality of the altimeter retrievals, we are training and implementing an artificial neural network (ANN) to identify and exclude from the analysis scans that yield erroneous data. The implementation is based on the PyTorch framework [3]. We present our results for the MESSENGER Mercury Laser Altimeter (MLA) data set [4], but also in view of the future analysis of data from the BepiColombo Laser Altimeter (BELA), which will arrive in orbit around Mercury in 2025 on board the Mercury Planetary Orbiter [5,6]. We further explore conventional methods of error identification and compare them with the machine learning results. Short periods of large residuals, or of large variation in the residuals, are identified and used to detect erroneous measurements. Furthermore, long-period systematics, such as those caused by slow variations in instrument pointing, can be modelled by including additional parameters.
[1] Zuber, Maria T., David E. Smith, Roger J. Phillips, Sean C. Solomon, Gregory A. Neumann, Steven A. Hauck, Stanton J. Peale, et al. ‘Topography of the Northern Hemisphere of Mercury from MESSENGER Laser Altimetry’. Science 336, no. 6078 (13 April 2012): 217–20. https://doi.org/10.1126/science.1218805.
[2] Thor, Robin N., Reinald Kallenbach, Ulrich R. Christensen, Philipp Gläser, Alexander Stark, Gregor Steinbrügge, and Jürgen Oberst. ‘Determination of the Lunar Body Tide from Global Laser Altimetry Data’. Journal of Geodesy 95, no. 1 (23 December 2020): 4. https://doi.org/10.1007/s00190-020-01455-8.
[3] Paszke, Adam, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, et al. ‘PyTorch: An Imperative Style, High-Performance Deep Learning Library’. Advances in Neural Information Processing Systems 32 (2019): 8026–37.
[4] Cavanaugh, John F., James C. Smith, Xiaoli Sun, Arlin E. Bartels, Luis Ramos-Izquierdo, Danny J. Krebs, Jan F. McGarry, et al. ‘The Mercury Laser Altimeter Instrument for the MESSENGER Mission’. Space Science Reviews 131, no. 1 (1 August 2007): 451–79. https://doi.org/10.1007/s11214-007-9273-4.
[5] Thomas, N., T. Spohn, J.-P. Barriot, W. Benz, G. Beutler, U. Christensen, V. Dehant, et al. ‘The BepiColombo Laser Altimeter (BELA): Concept and Baseline Design’. Planetary and Space Science 55, no. 10 (1 July 2007): 1398–1413. https://doi.org/10.1016/j.pss.2007.03.003.
[6] Benkhoff, Johannes, Jan van Casteren, Hajime Hayakawa, Masaki Fujimoto, Harri Laakso, Mauro Novara, Paolo Ferri, Helen R. Middleton, and Ruth Ziethe. ‘BepiColombo—Comprehensive Exploration of Mercury: Mission Overview and Science Goals’. Planetary and Space Science 58, no. 1 (1 January 2010): 2–20. https://doi.org/10.1016/j.pss.2009.09.020.
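As a rough illustration of the ANN component, below is a minimal PyTorch sketch of a binary classifier that flags erroneous scans from per-scan residual statistics. The feature set, network size, and training setup are illustrative assumptions, not the authors' network.

```python
# A minimal PyTorch sketch of a scan-quality classifier trained on per-scan
# residual statistics; features and sizes are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(                 # per-scan features -> good/bad score
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 1),                  # logit: >0 means "erroneous scan"
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(256, 8)         # e.g. residual mean/std per scan (toy)
labels = torch.randint(0, 2, (256, 1)).float()
for _ in range(100):                   # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    opt.step()
```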


2021 ◽  
Vol 4 ◽  
Author(s):  
Stefano Markidis

Physics-Informed Neural Networks (PINN) are neural networks encoding the problem governing equations, such as Partial Differential Equations (PDE), as a part of the neural network. PINNs have emerged as an essential new tool to solve various challenging problems, including computing linear systems arising from PDEs, a task for which several traditional methods exist. In this work, we first evaluate the potential of PINNs as linear solvers in the case of the Poisson equation, an omnipresent equation in scientific computing. We characterize PINN linear solvers in terms of accuracy and performance under different network configurations (depth, activation functions, input data set distribution). We highlight the critical role of transfer learning. Our results show that low-frequency components of the solution converge quickly, as an effect of the F-principle; in contrast, an accurate solution of the high frequencies requires an exceedingly long time. To address this limitation, we propose integrating PINNs into traditional linear solvers. We show that this integration leads to new solvers whose accuracy and performance are on par with other high-performance solvers, such as PETSc conjugate gradient linear solvers. Overall, while accuracy and computational performance are still limiting factors for the direct use of PINN linear solvers, hybrid strategies combining traditional linear solver approaches with emerging deep-learning techniques are among the most promising methods for developing a new class of linear solvers.
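To make the PINN-as-solver idea concrete, here is a minimal sketch of a PINN residual loss for the 1D Poisson problem u''(x) = f(x) on (0,1) with homogeneous Dirichlet boundaries. The architecture, learning rate, and loss weighting are illustrative assumptions, not the paper's configuration.

```python
# A minimal PINN sketch for the 1D Poisson equation u'' = f on (0,1),
# u(0) = u(1) = 0; hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)  # exact u = sin(pi x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(500):
    x = torch.rand(128, 1, requires_grad=True)   # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde_loss = ((d2u - f(x)) ** 2).mean()        # PDE residual loss
    xb = torch.tensor([[0.0], [1.0]])
    bc_loss = (net(xb) ** 2).mean()              # Dirichlet boundary loss
    opt.zero_grad()
    (pde_loss + bc_loss).backward()
    opt.step()
```

The F-principle behaviour the abstract describes can be observed directly here: the smooth sin(pi x) mode is fitted within a few hundred steps, whereas a high-frequency right-hand side would take far longer to converge.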


2020 ◽  
pp. 865-874
Author(s):  
Enrico Santus ◽  
Tal Schuster ◽  
Amir M. Tahmasebi ◽  
Clara Li ◽  
Adam Yala ◽  
...  

PURPOSE: Literature on clinical note mining has highlighted the superiority of machine learning (ML) over hand-crafted rules. Nevertheless, most studies assume the availability of large training sets, which is rarely the case. For this reason, in the clinical setting, rules are still common. We suggest 2 methods to leverage the knowledge encoded in pre-existing rules to inform ML decisions and obtain high performance, even with scarce annotations.

METHODS: We collected 501 prostate pathology reports from 6 American hospitals. Reports were split into 2,711 core segments, annotated with 20 attributes describing the histology, grade, extension, and location of tumors. The data set was split by institutions to generate a cross-institutional evaluation setting. We assessed 4 systems, namely a rule-based approach, an ML model, and 2 hybrid systems integrating the previous methods: a Rule as Feature model and a Classifier Confidence model. Several ML algorithms were tested, including logistic regression (LR), support vector machine (SVM), and eXtreme gradient boosting (XGB).

RESULTS: When training on data from a single institution, LR lags behind the rules by 3.5% (F1 score: 92.2% v 95.7%). Hybrid models, instead, obtain competitive results, with Classifier Confidence outperforming the rules by +0.5% (96.2%). When a larger amount of data from multiple institutions is used, LR improves by +1.5% over the rules (97.2%), whereas hybrid systems obtain +2.2% for Rule as Feature (97.7%) and +2.6% for Classifier Confidence (98.3%). Replacing LR with SVM or XGB yielded similar performance gains.

CONCLUSION: We developed methods to use pre-existing handcrafted rules to inform ML algorithms. These hybrid systems obtain better performance than either rules or ML models alone, even when training data are limited.
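The "Rule as Feature" idea can be sketched compactly: the output of a hand-crafted rule is appended to the text features before training the classifier. The toy rule, vocabulary, and data below are illustrative assumptions, not the study's actual rules or reports.

```python
# A minimal sketch of the "Rule as Feature" pattern: a hand-crafted rule's
# output becomes one extra input feature for logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule(text: str) -> int:
    # Toy hand-crafted rule: flag segments mentioning a Gleason grade.
    return int("gleason" in text.lower())

segments = ["Gleason score 3+4", "no tumor seen", "gleason 4+4", "benign"]
labels = np.array([1, 0, 1, 0])        # toy attribute labels

bow_vocab = ["gleason", "tumor", "benign", "score"]
X_text = np.array([[s.lower().count(w) for w in bow_vocab] for s in segments])
X_rule = np.array([[rule(s)] for s in segments])
X = np.hstack([X_text, X_rule])        # rule output appended as a feature

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```

In the Classifier Confidence variant described above, the rule instead takes over when the classifier's predicted probability is low; both patterns let scarce annotations be supplemented by encoded domain knowledge.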


2019 ◽  
Vol 30 (3) ◽  
pp. 18-37
Author(s):  
Tawei Wang ◽  
Yen-Yao Wang ◽  
Ju-Chun Yen

This article investigates the transfer of information security breach information between breached firms and their peers. Using a large data set of information security incidents from 2003 to 2013, the results suggest that 1) the effect of information security breach information transfer exists between breached firms and non-breached firms that offer similar products and 2) the effect of information transfer is weaker when the information security breach is due to internal faults or is related to the loss of personally identifiable information. Additional tests demonstrate that the effect of information transfer exhibits consistent patterns across time and with different types of information security breaches. Finally, the effect does not depend on whether the firms are IT intensive. Implications, limitations, and future research are discussed.

