Deep learning in predictive analytics: A survey

Author(s):  
Vaibhav Kumar ◽  
M. L. Garg
Author(s):  
A. John ◽  
D. Praveen Dominic ◽  
M. Adimoolam ◽  
N. M. Balamurugan

Background: Predictive analytics draws on a multiplicity of statistical techniques from predictive modelling, data mining and machine learning. It scrutinizes current and historical data to make predictions about future or otherwise unknown events. Most predictive models are used in business analytics to reduce losses and increase profit, exploiting patterns found in historical data. Objective: Investors have long followed strategies for predicting stock values so as to invest in the more profitable stocks; incorporating such strategies into intelligent methods and tools can increase investors' profits while minimizing their risks. Prediction therefore plays a vital role in stock market gains, and it is also a very intricate and challenging process. Method: The proposed optimized strategy is a Deep Neural Network for stock prediction, trained with the back-propagation algorithm and stochastic gradient descent as the optimization strategy. Results: The experiment on stock market price prediction was conducted in Python with a visualization package. The RELIANCE.NS, TATAMOTORS.NS and TATAGLOBAL.NS datasets, downloaded from the National Stock Exchange site, were taken as input. The deep learning model is most effective when more than 100,000 data points are available for training. The proposed model is built on daily stock market prices and shows how to obtain better performance than the existing national exchange method.
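The training scheme the abstract names, back-propagation with per-sample stochastic gradient descent on windows of daily closing prices, can be sketched in plain NumPy. The synthetic random-walk price series, window size and one-hidden-layer network below are illustrative assumptions for the sketch, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily closing prices (a stand-in for the RELIANCE.NS /
# TATAMOTORS.NS / TATAGLOBAL.NS series used in the paper).
prices = np.cumsum(rng.normal(0.1, 1.0, 500)) + 100

# Build (10-day window -> next-day price) training pairs.
W = 10
X = np.array([prices[i:i + W] for i in range(len(prices) - W)])
y = prices[W:]

# Normalise inputs and targets for stable training.
mu, sd = prices.mean(), prices.std()
X, y = (X - mu) / sd, (y - mu) / sd

# One-hidden-layer network trained with back-propagation + SGD.
H = 16
W1 = rng.normal(0, 0.1, (W, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.01

for epoch in range(200):
    for i in rng.permutation(len(X)):      # stochastic: one sample at a time
        x = X[i:i + 1]
        h = np.tanh(x @ W1 + b1)           # forward pass
        pred = h @ W2 + b2
        err = pred - y[i]                  # squared-error gradient
        gW2 = h.T @ err; gb2 = err.ravel()
        gh = (err @ W2.T) * (1 - h ** 2)   # back-propagate through tanh
        gW1 = x.T @ gh; gb1 = gh.ravel()
        W2 -= lr * gW2; b2 -= lr * gb2     # SGD update
        W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y[:, None]) ** 2)
print(f"training MSE: {mse:.4f}")
```

A real pipeline would of course hold out a test period and denormalize predictions before reporting prices.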


2021 ◽  
Vol 21 (3) ◽  
pp. 1-21
Author(s):  
Francesco Piccialli ◽  
Fabio Giampaolo ◽  
Edoardo Prezioso ◽  
Danilo Crisci ◽  
Salvatore Cuomo

Nowadays, a sustainable and smart city focuses on energy efficiency and the reduction of polluting emissions through smart mobility projects and initiatives to “sensitize” infrastructure. Smart parking is one of the building blocks of intelligent mobility: innovative mobility that aims to be flexible, integrated and sustainable, and consequently integrated into a Smart City. By using Internet of Things (IoT) sensors located in parking areas or underground car parks, in combination with a mobile application that shows citizens the free places in the different areas of the city and guides them toward the chosen parking, it is possible to reduce air pollution and improve traffic flow. In this article, we present and discuss an innovative Deep Learning-based ensemble technique for forecasting parking space occupancy, reducing the search time for parking and optimizing the flow of cars in particularly congested areas, with an overall positive impact on traffic in urban centres. A genetic algorithm is also used to optimize the predictors' parameters. The main goal is to design an intelligent IoT-based service that can predict, over the next few hours, the parking space occupancy of a street. The proposed approach has been assessed on a real IoT dataset composed of more than 15M collected sensor records. The results demonstrate that our method outperforms both single predictors and the widely used averaging strategy, providing inherently robust predictions.


Author(s):  
Neha Warikoo ◽  
Yung-Chun Chang ◽  
Wen-Lian Hsu

Abstract Motivation Natural Language Processing techniques are constantly being advanced to accommodate the influx of data as well as to provide exhaustive and structured knowledge dissemination. Within the biomedical domain, relation detection between bio-entities, known as the Bio-Entity Relation Extraction (BRE) task, has a critical function in knowledge structuring. Although recent advances in deep learning-based biomedical domain embeddings have improved BRE predictive analytics, these works are often task selective or rely on external knowledge-based pre-/post-processing. In addition, deep learning-based models do not account for local syntactic contexts, which have improved data representation in many kernel classifier-based models. In this study, we propose a universal BRE model, LBERT, a Lexically aware Transformer-based Bidirectional Encoder Representation model, which exploits both local and global context representations for sentence-level classification tasks. Results This article presents one of the most exhaustive BRE studies ever conducted, covering five different bio-entity relation types. Our model outperforms state-of-the-art deep learning models in protein–protein interaction (PPI), drug–drug interaction and protein–bio-entity relation classification tasks by 0.02%, 11.2% and 41.4%, respectively. LBERT representations show a statistically significant improvement over BioBERT in detecting true bio-entity relations in large corpora such as PPI. Our ablation studies clearly indicate the contribution of the lexical features and distance-adjusted attention to prediction performance, learning additional local semantic context alongside the bi-directionally learned global context. Availability and implementation GitHub: https://github.com/warikoone/LBERT. Supplementary information Supplementary data are available at Bioinformatics online.
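Distance-adjusted attention, one of the components the ablation studies credit, can be pictured as ordinary scaled dot-product attention whose logits are penalized by token distance, biasing heads toward local syntactic context. The linear penalty and the `decay` parameter below are illustrative assumptions; LBERT's exact formulation may differ:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distance_adjusted_attention(Q, K, V, decay=0.1):
    """Scaled dot-product attention whose logits are reduced by
    decay * |i - j|, so nearby tokens receive larger weights."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    n = scores.shape[0]
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    weights = softmax(scores - decay * dist)
    return weights @ V, weights

# Toy single-head example on a 6-token sentence with 8-dim embeddings.
rng = np.random.default_rng(2)
n, d = 6, 8
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out, w = distance_adjusted_attention(Q, K, V, decay=0.5)
print(out.shape, w.shape)
```

With `decay=0` this reduces to standard attention; increasing it sharpens the locality bias that kernel-based BRE models previously captured through syntactic features.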


Author(s):  
Anandhavalli Muniasamy ◽  
Sehrish Tabassam ◽  
Mohammad A. Hussain ◽  
Habeeba Sultana ◽  
Vasanthi Muniasamy ◽  
...  

Author(s):  
Balajee Jeyakumar ◽  
M.A. Saleem Durai ◽  
Daphne Lopez

Deep learning is now a popular research domain in machine learning and pattern recognition. It has been widely successful in far-reaching application areas such as speech recognition, computer vision, natural language processing and reinforcement learning. With the sheer amount of data accessible nowadays, big data brings opportunities and transformative potential to several sectors; on the other hand, it also presents unprecedented challenges in connecting data and information. As data sizes keep growing, deep learning is set to play a vital role in big data predictive analytics solutions. In this paper, we provide a brief outline of deep learning and highlight recent research efforts and the challenges in the fields of science, medicine and water resource systems.



This paper presents a deep learning approach to emotion recognition as applied to virtual reality and music predictive analytics. Firstly, it investigates the deep parameter tuning of multi-hidden-layer neural networks, commonly referred to simply as deep networks, used for emotion detection in virtual reality (VR) electroencephalography (EEG) predictive analytics. Deep networks have been studied extensively over the last decade and have proven to be among the most accurate methods for predictive analytics in the image recognition and speech processing domains. However, most predictive analytics studies of deep networks focus on shallow parameter tuning when attempting to boost prediction accuracy, covering parameters such as the number of hidden layers, the number of hidden nodes per layer and the types of activation functions used in the hidden nodes. Much less effort has gone into tuning deep parameters such as the input dropout ratio and the L1 (lasso) and L2 (ridge) regularization parameters. The goal of this study is therefore to investigate the tuning of these deep parameters for predicting emotions in a virtual reality environment from EEG signals recorded while the user is exposed to immersive content. The results show that deep tuning of deep networks in VR-EEG can improve emotion prediction accuracy: the best accuracy improved to over 96% after deep tuning of the input dropout ratio and the L1 and L2 regularization parameters. Secondly, the paper investigates a similar approach applied to four-quadrant music emotion recognition (MER). Recent studies have characterized music by genre, and various classification techniques have been used to achieve the best accuracy rate.
Several studies on deep learning have shown outstanding results in dimensional music emotion recognition, yet there is no concrete and concise description to express music. To address this research gap, a study using more detailed metadata with two-dimensional emotion annotations based on Russell's model is conducted. Rather than feeding music genres or lyrics into a machine learning algorithm for MER, a higher-level representation of music information, acoustic features, is used. For the four-class classification problem, the AMG1608 dataset is fed into a training model built from a deep neural network. The dataset is first preprocessed to obtain full access to the variables before any machine learning is done. The classification rate is then collected by running the scripts in the R environment. The preliminary result showed a classification rate of 46.0%.
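The deep-parameter search described above, varying input dropout ratio and L1/L2 regularization strengths while the architecture stays fixed, can be sketched on a much smaller model. The toy four-class features, the single softmax layer and the grid values below are assumptions standing in for the EEG/acoustic pipelines in the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Toy four-class data standing in for the EEG / acoustic feature vectors
# (the VR-EEG recordings and AMG1608 data are not reproduced here).
W_true = rng.normal(size=(20, 4))
X = rng.normal(size=(600, 20))
y = (X @ W_true + rng.normal(0, 0.5, (600, 4))).argmax(axis=1)
Xtr, ytr, Xva, yva = X[:400], y[:400], X[400:], y[400:]

def train_eval(dropout, l1, l2, epochs=150, lr=0.1):
    """Train a softmax classifier with input dropout plus L1/L2 penalties
    (the three 'deep parameters' tuned in the study) and return
    validation accuracy."""
    W = np.zeros((20, 4))
    for _ in range(epochs):
        mask = rng.random(Xtr.shape) >= dropout       # input dropout
        Xd = Xtr * mask / (1.0 - dropout)
        logits = Xd @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(ytr)), ytr] -= 1              # cross-entropy gradient
        grad = Xd.T @ p / len(ytr) + l1 * np.sign(W) + l2 * W
        W -= lr * grad
    return ((Xva @ W).argmax(axis=1) == yva).mean()

# Grid over the deep parameters, keeping the best validation accuracy.
grid = list(itertools.product([0.0, 0.1, 0.2], [0.0, 1e-4], [0.0, 1e-3]))
best = max(grid, key=lambda cfg: train_eval(*cfg))
print("best (input dropout, L1, L2):", best)
```

The same loop generalizes to a real deep network by swapping `train_eval` for a full training run; only the evaluated configuration tuple changes.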


2020 ◽  
Vol 18 (160) ◽  
pp. 731-751
Author(s):  
Lavinia Mihaela CRISTEA ◽  

The impact of IT can be seen in every field of activity, and audit is no exception to this technological trend. Motivation: Given that professionals are progressively experimenting with new technologies, the development of Artificial Intelligence (AI), Blockchain, RPA and Machine Learning through its Deep Learning subset is a particularly interesting case, which the author brings up for debate. The objective of the article is to present the latest episode of the impact of new technologies that are reshaping the auditor's profession and the methods and tools used. The quantitative, applied and technical research method allows an analysis of the impact of emerging technologies, completing a previous specialized paper by the same author. The results of this paper propose the integration of AI, Blockchain, RPA, Deep Learning and predictive analytics into financial audit engagements. The projections, resulting from discussions with auditing and IT specialists from Big Four companies, show how the technologies presented in this paper could be applied to concrete cases, facilitating current tasks. Machine Learning and Deep Learning would enable a move toward prescriptive analytics, revolutionizing the data analytics process. Both the literature analysis and the interviews conducted recognize AI as a business solution that contributes to data analytics in an intelligent way, providing a foundation for the development of RPA.

