An Empirical Study on Customer Churn Behaviours Prediction Using Arabic Twitter Mining Approach

2021 ◽  
Vol 13 (7) ◽  
pp. 175
Author(s):  
Latifah Almuqren ◽  
Fatma S. Alrayes ◽  
Alexandra I. Cristea

With the rising growth of the telecommunication industry, the customer churn problem has grown in significance as well. One of the most critical challenges in the data and voice telecommunication service industry is retaining customers, thus reducing customer churn by increasing customer satisfaction. Telecom companies have depended on historical customer data to measure customer churn. However, historical data does not reveal current customer satisfaction or future likeliness to switch between telecom companies. The related research reveals that many studies have focused on developing churner prediction models based on historical data. These models face delay issues and lack timelines for targeting customers in real-time. In addition, these models lack the ability to tap into Arabic language social media for real-time analysis. As a result, the design of a customer churn model based on real-time analytics is needed. Therefore, this study offers a new approach to using social media mining to predict customer churn in the telecommunication field. This represents the first work using Arabic Twitter mining to predict churn in Saudi Telecom companies. The newly proposed method proved its efficiency based on various standard metrics and based on a comparison with the ground-truth actual outcomes provided by a telecom company.
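
As a rough illustration of the kind of Twitter-based churn signal the study builds on, the sketch below classifies customer tweets for churn intent and produces per-tweet risk scores. The tweets, labels, and character n-gram features are illustrative assumptions; the paper's actual Arabic sentiment model, feature set, and labelled data are not reproduced here.

```python
# Hypothetical sketch: classify customer tweets as churn-risk vs. not,
# then aggregate per-tweet scores into a customer-level churn signal.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled tweets (1 = churn intent, 0 = neutral/positive) -- illustrative only
tweets = [
    "switching to another provider, the network is terrible",
    "great coverage and fast data this month",
    "cancelling my plan tomorrow, too many billing errors",
    "customer support resolved my issue quickly",
]
labels = [1, 0, 1, 0]

# Character n-grams are a reasonable default for morphologically rich text such as Arabic
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)

new_tweets = ["thinking about moving my number to a competitor"]
churn_prob = model.predict_proba(new_tweets)[:, 1]
print(churn_prob)  # per-tweet churn-risk score, to be aggregated per customer
```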

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jishnu Bhattacharyya ◽  
Manoj Kumar Dash

Purpose: The purpose of this paper is to investigate the distinct and more common reasons that reduce customer satisfaction and act as antecedents to customer churn behavior in the telecommunication industry.
Design/methodology/approach: The study adopted the netnography approach to investigate churn behavior by utilizing online user-generated content in qualified social media communities.
Findings: The investigation revealed that "data speed issues", "ineffective relationship building", "service area coverage issues" and "billing issues" are some of the most important attributes that influence a consumer's decision to churn. Further, the churn consequence influencers model summarizes the attributes that contribute to overall dissatisfaction and finally result in churn behavior. The study demonstrates the application of the netnography approach in a quantitatively dominant research area and stands out for the insights drawn from rich qualitative data.
Practical implications: Proper clarification of customer expectations and pain points can help reduce customer churn. The study will serve as the basis for developing future churn prediction models that will contribute to an informed decision-making process.
Originality/value: Contributing to research on customer churn behavior, the study offers a novel attempt to study customer satisfaction and customer churn behavior jointly. The paper is the first attempt that contributes to the extant literature by adopting a unique qualitative approach to understand the reasons for telecommunication churn behavior in the emerging Indian market. Another contribution of this research is that the paper shifts the focus of the electronic word-of-mouth (eWOM) literature to the telecommunications industry, thus adding another block to ongoing research in eWOM communication.
Peer review: The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-02-2020-0048


Author(s):  
Heena Kousar ◽  
B.R. Prasad Babu

With the growing adoption of big data, the Internet of Things, and sensor technology, organizations increasingly provision smart, intelligent services for a wide range of applications. Processing real-time social media and sensor data has become a key research area, as these data are massive and continuous. Smart applications using sensor and social media data can be classified into three classes: 1) online processing of streaming data; 2) online processing of historical data; and 3) hybrid processing of both. Existing models are designed for either stream or batch processing. For real-time processing, state-of-the-art data inflow forecasting techniques use the MapReduce framework on Hadoop. However, Hadoop-based forecasting models do not fully utilize system resources. Agent-based MapReduce forecasting models have been adopted to use the system more efficiently, but they incur high computation overhead and therefore increase computing cost. To overcome this, this work presents an agent-based Data Inflow Forecasting (DIF) model for both stream and non-stream (historical) data using a Multivariate Gaussian Mixture (MGM) model. It also presents an Agent-based MapReduce (AMR) framework to process data in real time and utilize system resources efficiently. To provide scalability for processing social media and sensor data, the DIF-AMR model adopts a cloud computing architecture. Experiments evaluating DIF-AMR against the existing model show significant performance improvement in terms of computation time.
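
A hedged, single-node sketch of the forecasting idea: fit a multivariate Gaussian mixture over sliding windows of an inflow series and forecast the next value as the mixture's conditional mean. The synthetic series, window length, and component count are assumptions; the agent-based MapReduce framework and cloud deployment described above are not reproduced.

```python
# Hypothetical sketch of data-inflow forecasting with a multivariate Gaussian mixture.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
inflow = 100 + 10 * np.sin(np.arange(500) / 10) + rng.normal(0, 2, 500)  # synthetic inflow

k = 4  # window length used as the conditioning context
X = np.array([inflow[i:i + k + 1] for i in range(len(inflow) - k)])  # rows: [window | next value]

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)

def forecast(window):
    """Conditional mean of the next value given an observed window."""
    w = np.asarray(window)
    preds, resps = [], []
    for pi, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_w, mu_y = mu[:k], mu[k]
        S_ww, S_yw = cov[:k, :k], cov[k, :k]
        # Per-component conditional mean E[y | w]
        preds.append(mu_y + S_yw @ np.linalg.solve(S_ww, w - mu_w))
        # Responsibility of this component for the observed window
        resps.append(pi * multivariate_normal.pdf(w, mu_w, S_ww))
    resps = np.array(resps) / np.sum(resps)
    return float(np.dot(resps, preds))

print(forecast(inflow[-k:]))  # one-step-ahead inflow forecast
```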


2019 ◽  
Vol 118 (6) ◽  
pp. 97-99
Author(s):  
Arockia Jeyasheela A ◽  
Dr.S. Chandramohan

This study discusses viral marketing, one of the keys to marketing success, and outlines its techniques. Viral messages can be delivered by word of mouth and can be created both by a company's representatives and by consumers (individuals or communities). The goal is to deliver the right viral message to the right consumer at the right time. Viral marketing attracts consumers easily and is an important form of consumer advertising. It involves consumer perception, organizational contribution, blogs, social media optimization (SMO), and search engine optimization (SEO). Its principles include social profile gathering, proximity marketing, and real-time keyword density.


Author(s):  
Vijay Kumar Dwivedi ◽  
Manoj Madhava Gore

Background: Stock price prediction is a challenging task. Social, economic, political, and various other factors cause frequent and abrupt changes in the stock price. This article proposes a historical data-based ensemble system to predict the closing stock price with higher accuracy and consistency than existing stock price prediction systems. Objective: The primary objective of this article is to predict the closing price of a stock for the next trading day in a more accurate and consistent manner than the existing methods employed for stock price prediction. Method: The proposed system combines various machine learning-based prediction models using the least absolute shrinkage and selection operator (LASSO) regression regularization technique to enhance the accuracy of the stock price prediction system compared to any one of the base prediction models. Results: The analysis of results for all eleven stocks (listed under the Information Technology sector on the Bombay Stock Exchange, India) reveals that the proposed system performs best (on all defined metrics of the proposed system) for training and test datasets comprising all the stocks considered in the proposed system. Conclusion: The proposed ensemble model consistently predicts the stock price with a higher degree of accuracy than the existing methods used for prediction.
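
A minimal sketch of the combining idea, assuming a stacking setup in which diverse base regressors are weighted by a LASSO meta-learner; the feature set, base models, and toy data are illustrative and not the article's actual configuration.

```python
# Hypothetical stacked ensemble whose base predictions are combined with a LASSO meta-learner.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LassoCV, Ridge
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)
# Toy features standing in for lagged closes / technical indicators; target: next-day close
X = rng.normal(size=(300, 5))
y = 100 + X @ np.array([0.5, 0.2, -0.1, 0.05, 0.3]) + rng.normal(0, 0.1, 300)

ensemble = StackingRegressor(
    estimators=[
        ("ridge", Ridge(alpha=1.0)),
        ("svr", SVR(kernel="rbf", C=10.0)),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ],
    final_estimator=LassoCV(cv=5),  # LASSO selects and weights the base models
)
ensemble.fit(X[:250], y[:250])
pred = ensemble.predict(X[250:])
print(mean_absolute_percentage_error(y[250:], pred))
```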


2019 ◽  
Vol 33 (3) ◽  
pp. 89-109 ◽  
Author(s):  
Ting (Sophia) Sun

SYNOPSIS This paper aims to promote the application of deep learning to audit procedures by illustrating how the capabilities of deep learning for text understanding, speech recognition, visual recognition, and structured data analysis fit into the audit environment. Based on these four capabilities, deep learning serves two major functions in supporting audit decision making: information identification and judgment support. The paper proposes a framework for applying these two deep learning functions to a variety of audit procedures in different audit phases. An audit data warehouse of historical data can be used to construct prediction models, providing suggested actions for various audit procedures. The data warehouse will be updated and enriched with new data instances through the application of deep learning and a human auditor's corrections. Finally, the paper discusses the challenges faced by the accounting profession, regulators, and educators when it comes to applying deep learning.
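
As a loose illustration of the "information identification" function, the sketch below flags journal-entry descriptions that may warrant auditor attention. A shallow classifier on hypothetical labels stands in for the deep text-understanding models the paper discusses; it is not the author's framework.

```python
# Hypothetical stand-in for information identification in audit text:
# flag journal-entry descriptions that may need auditor review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

entries = [
    "routine monthly depreciation charge",
    "manual adjustment posted at year end by management override",
    "standard payroll accrual reversal",
    "round-sum transfer to related party, no supporting document",
]
needs_review = [0, 1, 0, 1]  # hypothetical labels from an audit data warehouse

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MLPClassifier(max_iter=500))
clf.fit(entries, needs_review)
print(clf.predict(["large manual adjustment with no documentation"]))
```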


2021 ◽  
pp. 193896552199308
Author(s):  
Kathryn A. LaTour ◽  
Ana Brant

Most hospitality operators use social media in their communications as a means to communicate brand image and provide information to customers. Our focus is on a two-way exchange in which the provider reacts in real time to a customer's social posting to enhance the customer's current experience. Using social media in this way is new, and the provider needs to carefully balance privacy and personalization. We describe the process by which the Dorchester Collection Customer Experience (CX) Team approached its social listening program and share lessons to identify best practices for hospitality operators wanting to delight their customers through insights gained from social listening.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Suppawong Tuarob ◽  
Poom Wettayakorn ◽  
Ponpat Phetchai ◽  
Siripong Traivijitkhun ◽  
Sunghoon Lim ◽  
...  

Abstract: The explosion of online information with the recent advent of digital technology in information processing, information storing, information sharing, natural language processing, and text mining techniques has enabled stock investors to uncover market movement and volatility from heterogeneous content. For example, a typical stock market investor reads the news, explores market sentiment, and analyzes technical details in order to make a sound decision prior to purchasing or selling a particular company’s stock. However, capturing a dynamic stock market trend is challenging owing to high fluctuation and the non-stationary nature of the stock market. Although existing studies have attempted to enhance stock prediction, few have provided a complete decision-support system for investors to retrieve real-time data from multiple sources and extract insightful information for sound decision-making. To address the above challenge, we propose a unified solution for data collection, analysis, and visualization in real-time stock market prediction to retrieve and process relevant financial data from news articles, social media, and company technical information. We aim to provide not only useful information for stock investors but also meaningful visualization that enables investors to effectively interpret storyline events affecting stock prices. Specifically, we utilize an ensemble stacking of diversified machine-learning-based estimators and innovative contextual feature engineering to predict the next day’s stock prices. Experiment results show that our proposed stock forecasting method outperforms a traditional baseline with an average mean absolute percentage error of 0.93. Our findings confirm that leveraging an ensemble scheme of machine learning methods with contextual information improves stock prediction performance. Finally, our study could be further extended to a wide variety of innovative financial applications that seek to incorporate external insight from contextual information such as large-scale online news articles and social media data.
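
A hedged sketch of the contextual-feature idea: join a daily aggregated news/social sentiment series with lagged prices before fitting an estimator, then score the next-day forecast with MAPE. The column names, sentiment source, and model are assumptions rather than the authors' pipeline.

```python
# Hypothetical contextual feature engineering: lagged prices plus a daily sentiment signal.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(2)
dates = pd.date_range("2021-01-01", periods=200, freq="D")
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 200)), index=dates, name="close")
sentiment = pd.Series(rng.uniform(-1, 1, 200), index=dates, name="news_sentiment")  # stand-in

df = pd.concat([prices, sentiment], axis=1)
df["close_lag1"] = df["close"].shift(1)
df["close_lag2"] = df["close"].shift(2)
df["target"] = df["close"].shift(-1)        # next day's close
df = df.dropna()

features = ["close_lag1", "close_lag2", "news_sentiment"]
train, test = df.iloc[:150], df.iloc[150:]
model = GradientBoostingRegressor().fit(train[features], train["target"])
mape = mean_absolute_percentage_error(test["target"], model.predict(test[features]))
print(f"MAPE: {mape:.4f}")
```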


2021 ◽  
Vol 13 (11) ◽  
pp. 2179
Author(s):  
Pedro Mateus ◽  
Virgílio B. Mendes ◽  
Sandra M. Plecha

The neutral atmospheric delay is one of the major error sources in Space Geodesy techniques such as Global Navigation Satellite Systems (GNSS), and its modeling for high accuracy applications can be challenging. Improving the modeling of the atmospheric delays (hydrostatic and non-hydrostatic) also leads to a more accurate and precise precipitable water vapor estimation (PWV), mostly in real-time applications, where models play an important role, since numerical weather prediction models cannot be used for real-time processing or forecasting. This study developed an improved version of the Hourly Global Pressure and Temperature (HGPT) model, the HGPT2. It is based on 20 years of ERA5 reanalysis data at full spatial (0.25° × 0.25°) and temporal resolution (1-h). Apart from surface air temperature, surface pressure, zenith hydrostatic delay, and weighted mean temperature, the updated model also provides information regarding the relative humidity, zenith non-hydrostatic delay, and precipitable water vapor. The HGPT2 is based on the time-segmentation concept and uses the annual, semi-annual, and quarterly periodicities to calculate the relative humidity anywhere on the Earth’s surface. Data from 282 moisture sensors located close to GNSS stations during 1 year (2020) were used to assess the model coefficients. The HGPT2 meteorological parameters were used to process 35 GNSS sites belonging to the International GNSS Service (IGS) using the GAMIT/GLOBK software package. Results show a decreased root-mean-square error (RMSE) and bias values relative to the most used zenith delay models, with a significant impact on the height component. The HGPT2 was developed to be applied in the most diverse areas that can significantly benefit from an ERA5 full-resolution model.
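
A simplified illustration of the periodicity modelling behind HGPT2: at a single grid point, a surface quantity (here, relative humidity) is fitted with annual, semi-annual, and quarterly harmonics by least squares. Synthetic data stands in for ERA5, and the real model's coefficients, time segmentation, and global grids are not reproduced.

```python
# Hypothetical harmonic fit of relative humidity with annual, semi-annual and quarterly terms.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 20 * 365.25)                      # days over ~20 years
periods = np.array([365.25, 182.625, 91.3125])     # annual, semi-annual, quarterly

# Synthetic "observed" relative humidity series (percent)
rh = 70 + 10 * np.cos(2 * np.pi * t / 365.25) + rng.normal(0, 3, t.size)

# Design matrix: mean term plus a cosine/sine pair for each periodicity
A = np.column_stack(
    [np.ones_like(t)]
    + [f(2 * np.pi * t / p) for p in periods for f in (np.cos, np.sin)]
)
coeffs, *_ = np.linalg.lstsq(A, rh, rcond=None)

def rh_model(day_of_record):
    """Relative humidity predicted from the fitted harmonic coefficients."""
    row = np.concatenate(
        [[1.0], [f(2 * np.pi * day_of_record / p) for p in periods for f in (np.cos, np.sin)]]
    )
    return float(row @ coeffs)

print(rh_model(7305.0))  # RH estimate ~20 years after the series start
```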


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
James T. H. Teo ◽  
Vlad Dinu ◽  
William Bernal ◽  
Phil Davidson ◽  
Vitaliy Oliynyk ◽  
...  

Abstract: Analyses of search engine and social media feeds have been attempted for infectious disease outbreaks, but have been found to be susceptible to artefactual distortions from health scares or keyword spamming in social media or the public internet. We describe an approach using real-time aggregation of keywords and phrases of freetext from real-time clinician-generated documentation in electronic health records to produce a customisable real-time viral pneumonia signal providing up to 4 days warning for secondary care capacity planning. This low-cost approach is open-source, is locally customisable, is not dependent on any specific electronic health record system and can provide an ensemble of signals if deployed at multiple organisational scales.
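
A minimal sketch of the keyword-aggregation approach: count configurable terms in daily clinician free text and smooth the counts into a signal. The keyword list, note source, and aggregation shown are illustrative, not the published tool's configuration.

```python
# Hypothetical keyword-aggregation signal over daily clinician notes.
import pandas as pd

KEYWORDS = ["viral pneumonia", "ground glass", "dry cough", "hypoxia"]  # illustrative list

notes = pd.DataFrame({
    "date": pd.to_datetime(["2020-03-01", "2020-03-01", "2020-03-02", "2020-03-03"]),
    "text": [
        "admitted with dry cough and fever",
        "chest ct shows ground glass changes, suspected viral pneumonia",
        "stable, no respiratory symptoms",
        "worsening hypoxia overnight",
    ],
})

def keyword_hits(text: str) -> int:
    """Number of configured keywords appearing in one note."""
    text = text.lower()
    return sum(kw in text for kw in KEYWORDS)

daily = (
    notes.assign(hits=notes["text"].map(keyword_hits))
         .groupby("date")["hits"].sum()
         .rolling(window=3, min_periods=1).mean()   # smooth counts into a daily signal
)
print(daily)
```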


2021 ◽  
pp. 016555152110077
Author(s):  
Sulong Zhou ◽  
Pengyu Kan ◽  
Qunying Huang ◽  
Janet Silbernagel

Natural disasters cause significant damage, casualties and economic losses. Twitter has been used to support prompt disaster response and management because people tend to communicate and spread information on public social media platforms during disaster events. To retrieve real-time situational awareness (SA) information from tweets, the most effective way to mine the text is natural language processing (NLP). Among advanced NLP models, supervised approaches can classify tweets into different categories to gain insight and leverage useful SA information from social media data. However, high-performing supervised models require domain knowledge to specify categories and involve costly labelling tasks. This research proposes a guided latent Dirichlet allocation (LDA) workflow to investigate temporal latent topics from tweets during a recent disaster event, the 2020 Hurricane Laura. With the integration of prior knowledge, a coherence model, LDA topic visualisation and validation from official reports, our guided approach reveals that most tweets contain several latent topics during the 10-day period of Hurricane Laura. This result indicates that state-of-the-art supervised models have not fully utilised tweet information because they assign each tweet only a single label. In contrast, our model can not only identify emerging topics during different disaster events but also provide multilabel references for the classification schema. In addition, our results can help to quickly identify and extract SA information for responders, stakeholders and the general public so that they can adopt timely response strategies and wisely allocate resources during hurricane events.
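
A hedged sketch of seed-guided LDA on tweets using gensim: the Dirichlet prior (eta) of seed words is boosted in their intended topics so inference is nudged toward them. The seed words, topic count, and toy corpus are illustrative and not the authors' Hurricane Laura setup.

```python
# Hypothetical seed-guided LDA: raise the eta prior of seed words in their target topics.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

tweets = [
    "power outage across the parish after landfall",
    "evacuation order issued for coastal zones",
    "shelters open tonight for displaced families",
    "downed lines and no power in several neighborhoods",
]
docs = [t.split() for t in tweets]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

num_topics = 2
seeds = {0: ["power", "outage"], 1: ["evacuation", "shelters"]}  # hypothetical seed words

eta = np.full((num_topics, len(dictionary)), 0.01)
for topic, words in seeds.items():
    for w in words:
        if w in dictionary.token2id:
            eta[topic, dictionary.token2id[w]] = 1.0  # stronger prior on seed words

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics,
               eta=eta, random_state=0, passes=10)
for topic_id in range(num_topics):
    print(lda.print_topic(topic_id))  # inspect learned topic-word distributions
```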

