Estimating Pasture Biomass Using Sentinel-2 Imagery and Machine Learning

2021
Vol 13 (4)
pp. 603
Author(s):  
Yun Chen ◽  
Juan Guerschman ◽  
Yuri Shendryk ◽  
Dave Henry ◽  
Matthew Tom Harrison

Effective dairy farm management requires the regular estimation and prediction of pasture biomass. This study explored the suitability of high spatio-temporal resolution Sentinel-2 imagery and the applicability of advanced machine learning techniques for estimating aboveground biomass at the paddock level in five dairy farms across northern Tasmania, Australia. A sequential neural network model was developed by integrating Sentinel-2 time-series data, weekly field biomass observations and daily climate variables from 2017 to 2018. Linear least-squares regression was employed to evaluate the results for model calibration and validation. Optimal model performance was realised with an R2 of ≈0.6, a root-mean-square error (RMSE) of ≈356 kg dry matter (DM)/ha and a mean absolute error (MAE) of 262 kg DM/ha. These performance markers indicated the results were within the variability of the pasture biomass measured in the field, and therefore represent a relatively high prediction accuracy. Sensitivity analysis further revealed the impact that each farm's in situ measurements, pasture management and grazing practices have on the model's predictions. The study demonstrated the potential and feasibility of estimating biomass more cheaply and rapidly than traditional field measurement and commonly used remote-sensing methods allow. The proposed approach will help farmers and policymakers estimate the amount of pasture present, optimising grazing management and improving decision-making in dairy farming.
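The calibration/validation metrics quoted above (R2 from a linear least-squares fit, RMSE and MAE in kg DM/ha) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the function name and the exact regression setup are assumptions:

```python
import numpy as np

def evaluate_biomass(observed, predicted):
    """Score predicted pasture biomass (kg DM/ha) against field
    observations: R^2 via a linear least-squares fit of observed on
    predicted, plus RMSE and MAE of the raw predictions."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # Linear least-squares regression line through (predicted, observed)
    slope, intercept = np.polyfit(predicted, observed, 1)
    fitted = slope * predicted + intercept
    ss_res = np.sum((observed - fitted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    mae = np.mean(np.abs(observed - predicted))
    return r2, rmse, mae
```

An RMSE of ≈356 kg DM/ha from such a routine would then be compared against the spread of the field measurements themselves, which is how the study judges the accuracy "relatively high".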

2021
Vol ahead-of-print (ahead-of-print)
Author(s):  
Irfan Haider Shakri

Purpose
The purpose of this study is to compare five data-driven ML techniques for predicting the time series of Bitcoin returns, namely, the alternating model tree, random forest (RF), multiple linear regression, multi-layer perceptron regression and M5 Tree algorithms.

Design/methodology/approach
The data used to forecast Bitcoin returns range from 8 July 2010 to 30 Aug 2020. The study used several predictors of Bitcoin returns, including economic policy uncertainty, the equity market volatility index, S&P returns, USD/EUR exchange rates, and oil and gold prices, volatilities and returns. Five statistical indexes, namely, the correlation coefficient, mean absolute error, root mean square error, relative absolute error and root relative squared error, are determined. The results of these metrics are used to develop a colour-intensity ranking.

Findings
Among the machine learning (ML) techniques used in this study, the RF model showed superior predictive ability for estimating Bitcoin returns.

Originality/value
This study is the first of its kind to use and compare ML models in the prediction of Bitcoin returns. Further studies can be carried out using additional cryptocurrencies and other data-driven ML models.
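The five statistical indexes used to rank the models can be sketched as follows. The relative errors (RAE, RRSE) are measured here against a naive mean predictor, the usual convention for those metrics; that baseline choice is an assumption, since the abstract does not spell it out:

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """Compute the five ranking indexes: correlation coefficient (CC),
    mean absolute error (MAE), root mean square error (RMSE),
    relative absolute error (RAE) and root relative squared error (RRSE)."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    cc = np.corrcoef(a, p)[0, 1]
    mae = np.mean(np.abs(a - p))
    rmse = np.sqrt(np.mean((a - p) ** 2))
    # RAE and RRSE compare the model's errors with those of a
    # predictor that always outputs the mean of the actual series.
    rae = np.sum(np.abs(a - p)) / np.sum(np.abs(a - a.mean()))
    rrse = np.sqrt(np.sum((a - p) ** 2) / np.sum((a - a.mean()) ** 2))
    return {"CC": cc, "MAE": mae, "RMSE": rmse, "RAE": rae, "RRSE": rrse}
```

Values below 1 for RAE and RRSE indicate a model beating the naive mean baseline, which makes them convenient for the kind of colour-intensity ranking the study describes.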


Author(s):  
Daniela A. Gomez-Cravioto ◽  
Ramon E. Diaz-Ramos ◽  
Francisco J. Cantu-Ortiz ◽  
Hector G. Ceballos

Abstract To understand and approach the spread of the SARS-CoV-2 epidemic, machine learning offers fundamental tools. This study presents the use of machine learning techniques for projecting COVID-19 infections and deaths in Mexico. The research has three main objectives: first, to identify which function best fits the growth of the infected population in Mexico; second, to determine the feature importance of climate and mobility; third, to compare the results of a traditional time series statistical model with a modern machine learning approach. The motivation for this work is to support health care providers in their preparation and planning. The methods compared are linear, polynomial, and generalized logistic regression models to describe the growth of COVID-19 incidents in Mexico. Additionally, machine learning and time series techniques are used to identify feature importance and perform forecasting for daily cases and fatalities. The study uses the publicly available data sets from the Johns Hopkins University of Medicine in conjunction with the mobility rates obtained from Google's Mobility Reports and climate variables acquired from the Weather Online API. The results suggest that the logistic growth model fits the pandemic's behavior best, that climate and mobility variables correlate sufficiently with the disease numbers, and that a long short-term memory (LSTM) network can be exploited for predicting daily cases. Given this, we propose a model to predict daily cases and fatalities for SARS-CoV-2 using time series data, mobility, and weather variables.
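A minimal sketch of fitting the logistic growth model the study found to fit best, assuming a standard three-parameter logistic curve and SciPy's `curve_fit`; the parameterisation and initial guesses are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: cumulative cases approaching carrying capacity K
    at rate r, with inflection point at day t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def fit_logistic(days, cumulative_cases):
    """Fit the logistic curve to cumulative case counts.
    Initial guesses: K ~ last observed count, r ~ 0.1, t0 ~ midpoint."""
    p0 = [cumulative_cases[-1], 0.1, days[len(days) // 2]]
    params, _ = curve_fit(logistic, days, cumulative_cases,
                          p0=p0, maxfev=10000)
    return params  # K, r, t0
```

Once fitted, the curve's inflection point `t0` marks the day of peak daily growth, which is the kind of quantity a planner would read off such a model.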


2021
Vol 11 (1)
Author(s):  
Xian Zeng ◽  
Yaoqin Hu ◽  
Liqi Shu ◽  
Jianhua Li ◽  
Huilong Duan ◽  
...  

Abstract The quality of treatment and prognosis after pediatric congenital heart surgery remains unsatisfactory. A reliable prediction model for postoperative complications of congenital heart surgery patients is essential to enable prompt initiation of therapy and improve the quality of prognosis. Here, we develop an interpretable machine-learning-based model that integrates patient demographics, surgery-specific features and intraoperative blood pressure data for accurately predicting complications after pediatric congenital heart surgery. We used blood pressure variability and the k-means algorithm combined with a smoothed formulation of dynamic time warping to extract features from time series data. In addition, the SHAP framework was used to provide explanations of the predictions. Our model achieved the best performance in both binary and multi-label classification compared with other consensus-based risk models. Moreover, the model explains why each prediction was made, helping to improve the clinical understanding of complication risk and generate actionable knowledge in practice. The combination of performance and interpretability makes the model easy for clinicians to trust and provides insight into how they should respond before the condition worsens after pediatric congenital heart surgery.
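The feature-extraction step pairs k-means with a smoothed formulation of dynamic time warping. Below is a minimal sketch of the classic (non-smoothed) DTW distance and the k-means assignment step it would drive; the function names are illustrative and the smoothed, differentiable variant used in the study is not reproduced here:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series,
    computed by dynamic programming over an (n+1) x (m+1) cost table."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def assign_to_centroids(series, centroids):
    """k-means-style assignment step: each blood-pressure series joins
    the centroid with the smallest DTW distance."""
    return [min(range(len(centroids)),
                key=lambda k: dtw_distance(s, centroids[k]))
            for s in series]
```

Because DTW absorbs local time shifts, two pressure traces with the same shape but different pacing land in the same cluster, which is the point of using it instead of the Euclidean distance here.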


The stock market has been one of the primary revenue streams for many investors for years. It is often incalculable and uncertain, so predicting its ups and downs is an uphill task even for financial experts, who have long tried to tackle it with little success. Rapid improvements in technology, bringing better processing speed and more accurate algorithms, have now made stock market prediction possible. It is necessary to forswear the misconception that stock market prediction is only meant for people with expertise in finance; an application can be developed to guide the user about the tempo of the stock market and the risk associated with it.

Predicting stock prices is a complicated task, and various techniques are used to solve the problem; this paper investigates some of these techniques and compares the accuracy of each. Forecasting time series data is an important topic in economics, statistics, finance and business. Of the many techniques for forecasting time series data, such as the Autoregressive (AR), Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA) models, it is ARIMA that offers higher accuracy and precision than the other methods. With recent advances in processor computational power and in machine learning and deep learning, new algorithms can be built to tackle the problem of predicting the stock market. This paper investigates one such machine learning algorithm for forecasting time series data, the Long Short-Term Memory (LSTM) network, and compares it with traditional algorithms such as ARIMA to determine how superior the LSTM is to traditional methods for predicting the stock market.
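A minimal sketch of the autoregressive (AR) component that ARIMA builds on, fitted by ordinary least squares; full ARIMA adds differencing (I) and moving-average (MA) terms, and the function names here are illustrative rather than from the paper:

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model by ordinary least squares:
    x_t = c + phi_1 * x_{t-1} + ... + phi_p * x_{t-p}."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # Design matrix: row for target x[t] holds [1, x[t-1], ..., x[t-p]]
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - k - 1:n - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [c, phi_1, ..., phi_p]

def forecast_ar(series, coef, steps):
    """Roll the fitted AR model forward `steps` periods, feeding each
    prediction back in as history (how multi-step forecasts are made)."""
    hist = list(map(float, series))
    p = len(coef) - 1
    out = []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[k + 1] * hist[-1 - k] for k in range(p))
        hist.append(nxt)
        out.append(nxt)
    return out
```

An LSTM replaces the fixed linear combination of the last p values with a learned nonlinear recurrence, which is the core of the comparison the paper sets up.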


2020
Vol 24 (21)
pp. 16509-16517
Author(s):  
Irfan Ramzan Parray ◽  
Surinder Singh Khurana ◽  
Munish Kumar ◽  
Ali A. Altalbe



Entropy
2021
Vol 23 (8)
pp. 1064
Author(s):  
Michele Resta ◽  
Anna Monreale ◽  
Davide Bacciu

The biomedical field is characterized by an ever-increasing production of sequential data, which often come in the form of biosignals capturing the time-evolution of physiological processes, such as blood pressure and brain activity. This has motivated a large body of research on developing machine learning techniques for the predictive analysis of such biosignals. Unfortunately, in high-stakes decision making, such as clinical diagnosis, the opacity of machine learning models becomes a crucial aspect to be addressed in order to increase the trust and adoption of AI technology. In this paper, we propose a model-agnostic explanation method, based on occlusion, that enables the learning of the input's influence on the model predictions. We specifically target problems involving the predictive analysis of time-series data and the models typically used for data of such nature, i.e., recurrent neural networks. Our approach is able to provide two different kinds of explanations: one suitable for technical experts, who need to verify the quality and correctness of machine learning models, and one suited to physicians, who need to understand the rationale underlying the prediction to make informed decisions. Extensive experiments on different physiological data demonstrate the effectiveness of our approach in both classification and regression tasks.
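The occlusion idea can be sketched as follows: mask each window of the input series with a baseline value and record how much the model's prediction changes; time steps whose masking moves the output most are deemed most influential. The window size, baseline choice and function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def occlusion_importance(model, series, window=3, baseline=0.0):
    """Model-agnostic occlusion saliency for a 1-D time series.
    `model` is any callable mapping a series to a scalar prediction."""
    series = np.asarray(series, dtype=float)
    ref = model(series)
    importance = np.zeros(len(series))
    counts = np.zeros(len(series))
    for start in range(len(series) - window + 1):
        occluded = series.copy()
        occluded[start:start + window] = baseline   # mask one window
        delta = abs(model(occluded) - ref)          # prediction shift
        importance[start:start + window] += delta
        counts[start:start + window] += 1
    # Average over the windows covering each time step
    return importance / counts
```

Because it only queries the model as a black box, the same routine works unchanged for a recurrent network or any other regressor, which is what "model-agnostic" buys here.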

