Anticipating Cryptocurrency Prices Using Machine Learning

Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Laura Alessandretti ◽  
Abeer ElBahrawy ◽  
Luca Maria Aiello ◽  
Andrea Baronchelli

Machine learning and AI-assisted trading have attracted growing interest over the past few years. Here, we use this approach to test the hypothesis that the inefficiency of the cryptocurrency market can be exploited to generate abnormal profits. We analyse daily data for 1,681 cryptocurrencies for the period between Nov. 2015 and Apr. 2018. We show that simple trading strategies assisted by state-of-the-art machine learning algorithms outperform standard benchmarks. Our results show that nontrivial, but ultimately simple, algorithmic mechanisms can help anticipate the short-term evolution of the cryptocurrency market.
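The paper's full pipeline is not reproduced here, but the general recipe (predict the next day's movement from lagged returns, then trade on the prediction) can be sketched as follows; the data are synthetic, and the gradient-boosting model is one illustrative choice, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Synthetic daily returns for a single coin (a plain random walk).
returns = rng.normal(0.001, 0.03, 900)

# Features: the previous 5 daily returns; target: does the next day's price rise?
window = 5
X = np.lib.stride_tricks.sliding_window_view(returns[:-1], window)
y = (returns[window:] > 0).astype(int)

# Walk-forward split: train on the first 700 days, trade on the rest.
split = 700
model = GradientBoostingClassifier(random_state=0)
model.fit(X[:split], y[:split])

# Toy strategy: hold the coin only on days the model predicts a rise.
pred = model.predict(X[split:])
strategy_returns = returns[window:][split:] * pred
print(f"hit rate: {model.score(X[split:], y[split:]):.2f}")
print(f"cumulative strategy return: {np.prod(1 + strategy_returns) - 1:.2%}")
```

On purely random data no real edge exists, which is the point: the benchmark comparisons in the paper are what distinguish genuine predictability from noise.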

Author(s):  
Carlos Rodríguez-Pardo ◽  
Miguel A. Patricio ◽  
Antonio Berlanga ◽  
José M. Molina

The unprecedented growth in the amount and variety of data we can store about the behaviour of customers has been parallel to the popularization and development of machine learning algorithms. This confluence of factors has created the opportunity of understanding customer behaviour and preferences in ways that were undreamt of in the past. In this chapter, the authors study the possibilities of different state-of-the-art machine learning algorithms for retail and smart tourism applications, which are domains that share common characteristics, such as contextual dependence and the kind of data that can be used to understand customers. They explore how supervised, unsupervised, and recommender systems can be used to profile, segment, and create value for customers.
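On the unsupervised side mentioned here, a bare-bones k-means segmentation of customers on two behavioural features can illustrate the idea; the features and data are invented, and the farthest-point initialization is just one simple choice:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with a deterministic farthest-point initialization."""
    centroids = [X[0]]
    for _ in range(k - 1):
        # Next centroid: the point farthest from all centroids chosen so far.
        dists = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(X[dists.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute centroids.
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
# Invented features: monthly visits and average basket value, two customer types.
frequent = rng.normal([12.0, 20.0], 2.0, size=(100, 2))
occasional = rng.normal([2.0, 60.0], 2.0, size=(100, 2))
X = np.vstack([frequent, occasional])

labels, centroids = kmeans(X, k=2)
print(centroids.round(1))  # one centroid per customer segment
```

The recovered centroids summarize each segment, which is exactly the kind of profile a retailer or smart-tourism operator would act on.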


2020 ◽  
Vol 8 (6) ◽  
pp. 1303-1307

For the past couple of years, machine learning and AI-assisted trading have drawn growing interest. Here, this approach is used to test the hypothesis that the inefficiency of the cryptocurrency market can be exploited to produce abnormal returns. Daily data for 1,681 cryptocurrencies for the period between Nov. 2015 and Apr. 2018 were analyzed. Simple trading strategies supported by state-of-the-art machine learning algorithms are seen to outperform traditional benchmarks. The results imply that non-trivial, but fundamentally simple, algorithmic processes can help predict the short-term future of the cryptocurrency market. The popularity of cryptocurrencies skyrocketed in 2017 after several consecutive months of super-exponential growth in market capitalization. There are over 1,500 recorded cryptocurrencies actively trading today, holding more than $300 billion [2], with a total market capitalization of over $800 billion in January 2018. According to a recent survey, between 2.9 and 5.8 million private and institutional investors participate in the various investment networks, and access to the markets has become easier over time. In a number of online markets, major cryptocurrencies can be purchased with fiat currency and then used to buy less well-known cryptocurrencies. The average trading volume globally exceeds $15 billion. Since 2017, about 170 money market funds have invested in cryptocurrencies, and Bitcoin futures have been launched to satisfy the market's demand for Bitcoin trading and hedging. The main objective of this work is to predict the price of Bitcoin, one of the most popular and widely used cryptocurrencies and a source of attraction for many investors seeking profit or investment. However, the cryptocurrency market has been volatile since its inception.
The approach taken, therefore, is to use an LSTM recurrent neural network, train the model on the available dataset to reach the highest possible accuracy, and provide a real-time Bitcoin price forecast for the following days.
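The study's exact network is not specified beyond "LSTM RNN", but the recurrence such a network applies at each time step can be sketched in plain numpy; the weights, sizes, and input sequence below are invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, W, U, b, h0, c0):
    """One LSTM layer run over a sequence.
    x_seq: (T, d_in); W: (4*d_h, d_in); U: (4*d_h, d_h); b: (4*d_h,).
    Gates are stacked in the order input, forget, cell, output."""
    h, c = h0, c0
    d_h = h0.shape[0]
    for x in x_seq:
        z = W @ x + U @ h + b
        i = sigmoid(z[:d_h])          # input gate
        f = sigmoid(z[d_h:2*d_h])     # forget gate
        g = np.tanh(z[2*d_h:3*d_h])   # candidate cell state
        o = sigmoid(z[3*d_h:])        # output gate
        c = f * c + i * g             # update cell state
        h = o * np.tanh(c)            # update hidden state
    return h, c

rng = np.random.default_rng(0)
d_in, d_h, T = 1, 8, 30  # e.g. 30 days of scaled closing prices
params = (rng.normal(0, 0.1, (4 * d_h, d_in)),
          rng.normal(0, 0.1, (4 * d_h, d_h)),
          np.zeros(4 * d_h))
x_seq = rng.normal(0, 1, (T, d_in))
h, c = lstm_forward(x_seq, *params, np.zeros(d_h), np.zeros(d_h))
print(h.shape)  # the final hidden state would feed a dense layer for the forecast
```

In practice a framework such as Keras or PyTorch would handle this recurrence and its training; the sketch only shows why the gating lets the model carry information across many days of price history.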


Risks ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 4 ◽  
Author(s):  
Christopher Blier-Wong ◽  
Hélène Cossette ◽  
Luc Lamontagne ◽  
Etienne Marceau

In the past 25 years, computer scientists and statisticians developed machine learning algorithms capable of modeling highly nonlinear transformations and interactions of input features. While actuaries use GLMs frequently in practice, only in the past few years have they begun studying these newer algorithms to tackle insurance-related tasks. In this work, we aim to review the applications of machine learning to the actuarial science field and present the current state of the art in ratemaking and reserving. We first give an overview of neural networks, then briefly outline applications of machine learning algorithms in actuarial science tasks. Finally, we summarize the future trends of machine learning for the insurance industry.
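As a point of reference for the GLMs mentioned above, a Poisson frequency model with a log link can be fitted by iteratively reweighted least squares in a few lines of numpy; the rating factors and data below are synthetic, not from any cited study:

```python
import numpy as np

def fit_poisson_glm(X, y, iters=25):
    """Poisson GLM with log link via iteratively reweighted least squares (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu      # working response
        W = mu                       # IRLS weights for the Poisson/log case
        XtW = X.T * W                # X' diag(W)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(0)
# Synthetic ratemaking data: intercept plus two binary rating factors.
n = 5000
X = np.column_stack([np.ones(n),
                     rng.integers(0, 2, n),    # e.g. young-driver indicator
                     rng.integers(0, 2, n)])   # e.g. high-risk-vehicle indicator
true_beta = np.array([-2.0, 0.7, 0.4])
y = rng.poisson(np.exp(X @ true_beta))         # simulated claim counts
beta = fit_poisson_glm(X, y)
print(np.round(beta, 2))  # should land close to [-2.0, 0.7, 0.4]
```

The newer algorithms the abstract surveys relax exactly the assumption this model makes: that the log of the claim frequency is linear in the rating factors.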


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Nalindren Naicker ◽  
Timothy Adeliyi ◽  
Jeanette Wing

Educational Data Mining (EDM) is a rich research field in computer science. Tools and techniques in EDM are useful for predicting student performance, giving practitioners insights for developing intervention strategies that improve pass rates and increase retention. The performance of state-of-the-art machine learning classifiers is very much dependent on the task at hand. Support vector machines have been used extensively in classification problems; however, the extant literature shows a gap in the application of linear support vector machines as predictors of student performance. The aim of this study was to compare the performance of linear support vector machines with that of state-of-the-art classical machine learning algorithms in order to determine which algorithm would best improve the prediction of student performance. In this quantitative study, an experimental research design was used. Experiments were set up using feature selection on a publicly available dataset of 1000 alphanumeric student records. Linear support vector machines, benchmarked against ten categorical machine learning algorithms, showed superior performance in predicting student performance. The results showed that features such as race, gender, and lunch influence performance in mathematics, whilst access to lunch was the primary factor influencing reading and writing performance.
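A minimal version of such a benchmark, assuming scikit-learn and using an invented stand-in for the student dataset (the real study used feature selection and ten comparison algorithms), might look like:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Invented stand-in data: three numeric features, pass/fail label with a
# roughly linear decision boundary plus noise.
n = 1000
X = rng.normal(size=(n, 3))
y = (X @ np.array([1.5, -1.0, 0.5]) + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
results = {}
for model in (LinearSVC(), DecisionTreeClassifier(random_state=0)):
    name = type(model).__name__
    results[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {results[name]:.3f}")
```

Because the synthetic boundary is nearly linear, the linear SVM is well matched to it, which mirrors the study's finding that the right model depends on the structure of the task.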


Information ◽  
2019 ◽  
Vol 10 (3) ◽  
pp. 98 ◽  
Author(s):  
Tariq Ahmad ◽  
Allan Ramsay ◽  
Hanady Ahmed

Assigning sentiment labels to documents is, at first sight, a standard multi-label classification task. Many approaches have been used for this task, but the current state-of-the-art solutions use deep neural networks (DNNs), so it seems likely that such standard machine learning algorithms will provide an effective approach. We describe an alternative approach, involving the use of probabilities to construct a weighted lexicon of sentiment terms, then modifying the lexicon and calculating optimal thresholds for each class. We show that this approach outperforms the use of DNNs and other standard algorithms. We believe that DNNs are not a universal panacea and that paying attention to the nature of the data that you are trying to learn from can be more important than trying out ever more powerful general-purpose machine learning algorithms.
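The lexicon approach described can be sketched directly: estimate a per-class weight P(label | term) for each term from labelled documents, score a new document by summing the weights of its terms, and compare against a per-class threshold (which the authors tune; here the tiny corpus and the fixed thresholds are invented):

```python
from collections import Counter, defaultdict

# Tiny invented corpus: (tokens, set of sentiment labels) pairs.
train = [
    ("what a joyful happy day".split(), {"joy"}),
    ("happy and calm morning".split(), {"joy"}),
    ("angry shouting furious crowd".split(), {"anger"}),
    ("furious and bitter words".split(), {"anger"}),
    ("happy yet furious mixed feelings".split(), {"joy", "anger"}),
]
labels = {"joy", "anger"}

# Weighted lexicon: P(label | term) estimated from label-conditional counts.
term_total = Counter()
term_label = defaultdict(Counter)
for tokens, doc_labels in train:
    for t in tokens:
        term_total[t] += 1
        for lab in doc_labels:
            term_label[lab][t] += 1

def score(tokens, label):
    return sum(term_label[label][t] / term_total[t]
               for t in tokens if t in term_total)

# Per-class thresholds would normally be tuned on held-out data; fixed here.
thresholds = {"joy": 2.0, "anger": 2.5}
doc = "a happy calm and joyful crowd".split()
predicted = {lab for lab in labels if score(doc, lab) > thresholds[lab]}
print(predicted)  # → {'joy'}
```

Even this toy version shows the appeal the authors exploit: the model is transparent, fast, and tied directly to the vocabulary of the data.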


2019 ◽  
Author(s):  
Shufen Pan ◽  
Naiqing Pan ◽  
Hanqin Tian ◽  
Pierre Friedlingstein ◽  
Stephen Sitch ◽  
...  

Abstract. Evapotranspiration (ET) is a critical component of the global water cycle and links the terrestrial water, carbon and energy cycles. Accurate estimates of terrestrial ET are important for hydrological, meteorological, and agricultural research and applications, such as quantifying surface energy and water budgets, weather forecasting, and irrigation scheduling. However, direct measurement of global terrestrial ET is not feasible. Here, we first give a retrospective introduction to the basic theory and recent developments of state-of-the-art approaches for estimating global terrestrial ET, including remote sensing-based physical models, machine learning algorithms and land surface models (LSMs). We then use six remote sensing-based models (four physical models and two machine learning algorithms) and fourteen LSMs to analyze the spatial and temporal variations in global terrestrial ET. The results show that mean annual global terrestrial ET during 1982–2011 ranged from 50.7 × 10³ km³ yr⁻¹ (454 mm yr⁻¹) to 75.7 × 10³ km³ yr⁻¹ (679 mm yr⁻¹), with an average of 65.5 × 10³ km³ yr⁻¹ (588 mm yr⁻¹). LSMs had significant uncertainty in the ET magnitude in tropical regions, especially the Amazon Basin, while remote sensing-based ET products showed a larger inter-model range in arid and semi-arid regions than LSMs. LSMs and remote sensing-based physical models presented much larger inter-annual variability (IAV) of ET than machine learning algorithms in the southwestern U.S. and the Southern Hemisphere, particularly in Australia. LSMs suggested stronger control of precipitation on ET IAV than remote sensing-based models. The ensembles of remote sensing-based physical models and machine-learning algorithms suggested significant increasing trends in global terrestrial ET at the rate of 0.62 mm yr⁻² (p < 0.05), whereas the LSM ensemble showed no significant trend, even though most of the individual LSMs reproduced the increasing trend.
Moreover, all models suggested a positive effect of vegetation greening on ET intensification. Spatially, all methods showed that ET increased significantly in western and southern Africa, western India and northeastern Australia, but decreased severely in the southwestern U.S., southern South America and Mongolia. Discrepancies in the ET trend mainly appeared in tropical regions such as the Amazon Basin. The ensemble means of the three ET categories showed generally good consistency; however, considerable uncertainties remain in both the temporal and spatial variations in global ET estimates. These uncertainties are induced by multiple factors, including the parameterization of land processes, meteorological forcing, the lack of in situ measurements, remote sensing acquisition and scaling effects. Improvements in the representation of water stress and canopy dynamics are essential to reduce uncertainty in LSM-simulated ET. Utilization of the latest satellite sensors and deep learning methods, theoretical advancements in non-equilibrium thermodynamics, and the application of integrated methods that fuse different ET estimates or relevant key biophysical variables will improve the accuracy of remote sensing-based models.
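The two units quoted above are linked by the land surface area: a global volume flux in km³ yr⁻¹ divided by the land area gives an equivalent water depth in mm yr⁻¹. A quick sanity check of the reported pairs, assuming a land area of about 1.12 × 10⁸ km² (roughly the global land surface excluding Antarctica and Greenland):

```python
# Convert a global ET volume (km^3/yr) into an equivalent water depth (mm/yr).
LAND_AREA_KM2 = 1.12e8  # assumed land area, km^2 (excl. Antarctica/Greenland)

def volume_to_depth_mm(volume_km3):
    # km^3 / km^2 = km of water depth; 1 km = 1e6 mm
    return volume_km3 / LAND_AREA_KM2 * 1e6

for v in (50.7e3, 65.5e3, 75.7e3):
    print(f"{v:.1f} km^3/yr -> {volume_to_depth_mm(v):.0f} mm/yr")
```

The computed depths fall within a few mm yr⁻¹ of the quoted 454, 588, and 679 values, so the two unit systems in the abstract are mutually consistent under this assumed land area.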


Author(s):  
Dyapa Sravan Reddy ◽  
Lakshmi Prasanna Reddy ◽  
Kandibanda Sai Santhosh ◽  
Virrat Devaser

SEO analysts spend a lot of time finding relevant tags for their articles, and in some cases they are unaware of the content's topics. The proposed ML model recommends content-related tags so that content writers and SEO analysts get an overview of the content and spend less time on unfamiliar articles. Machine learning algorithms have a plethora of applications, and the extent of their real-life implementations can hardly be estimated. Using algorithms such as One vs Rest (OVR) and Long Short-Term Memory (LSTM), this study analyzed how machine learning can be used to suggest tags for a topic. Training the model with One vs Rest delivered more accurate results than the alternatives. This study demonstrates how One vs Rest can be used to suggest the tags needed to promote a website; further studies are required to suggest the keywords as well.
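The One vs Rest setup for multi-label tag suggestion can be sketched with scikit-learn's `OneVsRestClassifier`; the articles, tags, and base classifier below are invented for illustration and are not the study's data or model:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Invented article titles and their tags (an article may carry several tags).
docs = [
    "python pandas dataframe tutorial",
    "pandas groupby and merge examples",
    "css flexbox layout guide",
    "responsive css grid layout",
    "python flask css templates",
]
tags = [["python"], ["python"], ["css"], ["css"], ["python", "css"]]

# Bag-of-words features and a binary indicator matrix of tags.
vec = CountVectorizer()
X = vec.fit_transform(docs)
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)

# One vs Rest: one binary classifier per tag, each deciding independently
# whether its tag applies to a document.
model = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
new_doc = vec.transform(["pandas dataframe merge in python"])
suggested = mlb.inverse_transform(model.predict(new_doc))
print(suggested)
```

The per-tag decomposition is what makes OVR a natural fit here: adding a new tag to the system only means training one more binary classifier.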


Author(s):  
Andreas Tsamados ◽  
Nikita Aggarwal ◽  
Josh Cowls ◽  
Jessica Morley ◽  
Huw Roberts ◽  
...  

Abstract. Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.

