Prediction and the aquatic sciences

2001 ◽  
Vol 58 (1) ◽  
pp. 63-72 ◽  
Author(s):  
Michael L Pace

The need for prediction is now widely recognized and frequently articulated as an objective of research programs in aquatic science. This recognition is partly the legacy of earlier advocacy by the school of empirical limnologists. This school, however, presented prediction narrowly and failed to account for the diversity of predictive approaches as well as to set prediction within its proper scientific context. Examples from time series analysis and probabilistic models oriented toward management provide an expanded view of approaches and prospects for prediction. The context and rationale for prediction is enhanced understanding. Thus, prediction is correctly viewed as an aid to building scientific knowledge, with better understanding leading to improved predictions. Experience, however, suggests that the most effective predictive models are condensed models of key features in aquatic systems. Prediction remains important for the future of the aquatic sciences. Predictions are required in the assessment of environmental concerns and for testing scientific fundamentals. Technology is driving enormous advances in the ability to study aquatic systems. If these advances are not accompanied by improvements in predictive capability, aquatic research will have failed to deliver on promised objectives. This situation should spark discomfort in aquatic scientists and foster creative approaches toward prediction.

Author(s):  
J Wang ◽  
H Liu

Predictive models for the major cutting performance measures, such as the kerf taper and depth of cut, are developed for both straight-slit cutting and profile cutting by an abrasive waterjet. The plausibility and predictive capability of the models are assessed and verified by comparing the model predictions with the corresponding experimental data. Very good correlations between the predicted and experimental results have been found, which confirm the adequacy of the models for use in process planning.


Synthese ◽  
2020 ◽  
Author(s):  
Franklin Jacoby

Abstract This paper uses several case studies to suggest that (1) two prominent definitions of data do not on their own capture how scientists use data and (2) a novel perspectival account of data is needed. It then outlines some key features of what this account could look like. Those prominent views, the relational and representational, do not fully capture what data are and how they function in science. The representational view is insensitive to the scientific context in which data are used. The relational account does not fully account for the empirical nature of data and how it is possible for data to be evidentially useful. The perspectival account surmounts these problems by accommodating a representational element to data. At the same time, data depend upon the epistemic context because they are the product of situated and informed judgements.


2022 ◽  
Author(s):  
Alexandre Perez-Lebel ◽  
Gaël Varoquaux ◽  
Marine Le Morvan ◽  
Julie Josse ◽  
Jean-Baptiste Poline

BACKGROUND As databases grow larger, it becomes harder to fully control their collection, and they frequently come with missing values: incomplete observations. These large databases are well suited to training machine-learning models, for instance for forecasting or for extracting biomarkers in biomedical settings. Such predictive approaches can use discriminative --rather than generative-- modeling, and thus open the door to new missing-values strategies. Yet existing empirical evaluations of strategies for handling missing values have focused on inferential statistics. RESULTS Here we conduct a systematic benchmark of missing-values strategies in predictive models, with a focus on large health databases: four electronic health record datasets, a population brain-imaging dataset, a health survey, and two intensive-care datasets. Using gradient-boosted trees, we compare native support for missing values with simple and state-of-the-art imputation prior to learning. We investigate prediction accuracy and computational time. For prediction after imputation, we find that adding an indicator to express which values have been imputed is important, suggesting that the data are missing not at random. Elaborate missing-value imputation can improve prediction compared with simple strategies but requires longer computational time on large data. Learning trees that model missing values --with the missing-incorporated-in-attribute strategy-- leads to robust, fast, and well-performing predictive modeling. CONCLUSIONS Native support for missing values in supervised machine learning predicts better than state-of-the-art imputation at much lower computational cost. When using imputation, it is important to add indicator columns expressing which values have been imputed.
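The recommendation about indicator columns can be sketched with plain NumPy. This is a minimal illustration of simple (mean) imputation plus missingness indicators, not the benchmark's actual pipeline, which uses gradient-boosted trees on large health databases; the toy matrix is hypothetical:

```python
import numpy as np

# toy feature matrix with missing values encoded as NaN
X = np.array([[1.0, np.nan],
              [2.0, 4.0],
              [np.nan, 6.0]])

mask = np.isnan(X)                 # indicator: which entries were missing
col_means = np.nanmean(X, axis=0)  # per-column mean, ignoring NaNs
X_imputed = np.where(mask, col_means, X)

# append one indicator column per feature so a downstream model can learn
# from missingness itself (useful when data are missing not at random)
X_full = np.hstack([X_imputed, mask.astype(float)])
```

The indicator columns carry information that imputation alone destroys: after imputation, a filled-in value is indistinguishable from an observed one unless the mask is kept.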


Author(s):  
Samuel Luoma ◽  
Lauren Muscatine

Sixteen years ago, in October 2003, San Francisco Estuary and Watershed Science (SFEWS) published its first article. An anniversary like this is a good time to remind ourselves of our history, and to ask if the journal is living up to the goals we set in 2003. And if so, are those goals consistent with today’s needs? In 2004, CDL’s eScholarship Publishing Group counted an average of 254 requests per month for SFEWS online articles. In 2010, that increased to 1,232 requests per month, and in 2014 to 1,764 per month. In the first 10 months of 2019, 4,420 articles were requested per month. Downloads have consistently been 35% to 40% of requests. From 2014 through 2017, SFEWS’s CiteScore in Scopus increased from 0.32 to 1.64; its rank is 82nd of 203 journals in the Water Science and Technology category for 2018, a remarkable climb from being ranked 120th of 179 in 2014. SFEWS is ranked fifth among 53 open-access journals in the aquatic sciences, according to the Science Journal Ranking index, and in the top 25% of all 218 aquatic science journals ranked by that index. Thus, SFEWS has grown from an outlet designed to expand access to regional science into a well-respected scientific journal in its own right. Our look back shows that SFEWS has probably grown beyond our original expectations in size, influence, and stature.


1990 ◽  
Vol 47 (9) ◽  
pp. 1788-1795 ◽  
Author(s):  
Donald A. Jackson ◽  
Harold H. Harvey ◽  
Keith M. Somers

Researchers in the aquatic sciences frequently employ empirically derived models to predict the productivity, yield, and abundance of fish. We demonstrate that predictive models employing ratios of standardized biomass and lake morphometric variables are biased by spurious correlations arising from the mathematical transformations and the use of inappropriate null models. Our findings emphasize that studies incorporating ratios such as mean depth or the morphoedaphic index require cautious interpretation. Future research should focus on more appropriate analytical approaches, such as regression-based models like the analysis of covariance. Alternatively, where ratios are employed and spurious correlations are likely, statistical evaluations must incorporate randomization tests to assess the significance of the results.
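The spurious-correlation problem the authors describe can be reproduced with a small simulation: two variables that are statistically independent become strongly correlated once both are divided by a shared third variable. The variables and distributions below are hypothetical, chosen only to make the effect visible:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# three mutually independent variables
x = rng.uniform(1, 2, n)
y = rng.uniform(1, 2, n)
z = rng.uniform(1, 2, n)  # shared divisor, e.g. a morphometric variable

r_raw = np.corrcoef(x, y)[0, 1]            # near zero: x and y are independent
r_ratio = np.corrcoef(x / z, y / z)[0, 1]  # substantially positive: spurious
```

The correlation between the ratios is induced entirely by the common denominator, which is why a null model of zero correlation is inappropriate for ratio variables and randomization tests are advised instead.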


1964 ◽  
Vol 28 (4) ◽  
pp. 27-31 ◽  
Author(s):  
Alfred A. Kuehn ◽  
Ralph L. Day

Does it make sense to classify consumers as users and nonusers? As heavy and light users? As users of brand X rather than brand Y? Do American consumers fit into such neat categories? Or is it likely that their brand choice behavior is more subtle, less susceptible to exact categorization? This article shows how traditional, static approaches are inadequate for predicting consumer behavior. It offers an alternative: the dynamic, probabilistic approach which can sharpen analysis and yield better predictive models.
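The dynamic, probabilistic view of brand choice can be illustrated with a Markov-style switching sketch, where market shares emerge from repeat-purchase and switching probabilities rather than fixed user categories; the two-brand transition matrix below is hypothetical:

```python
import numpy as np

# hypothetical repeat-purchase probabilities:
# rows = brand bought now, columns = brand bought next
P = np.array([[0.8, 0.2],   # brand X buyers: 80% repeat, 20% switch to Y
              [0.3, 0.7]])  # brand Y buyers: 30% switch to X, 70% repeat

share = np.array([0.5, 0.5])  # initial market shares
for _ in range(100):
    share = share @ P         # one purchase cycle

# long-run shares follow from the switching dynamics (here 0.6 / 0.4),
# not from a static user/nonuser classification
```

A static classification would freeze consumers at the initial 50/50 split; the probabilistic model instead predicts how shares drift as switching behavior plays out.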


2021 ◽  
Author(s):  
Tsair-Wei Chien ◽  
Willy Chou

UNSTRUCTURED The recent article published on February 24, 2021, is well written but leaves several questions requiring clarification, particularly for readers who hope to replicate the study of the 158 selected web-based COVID-19 dashboards. We designed a dashboard that includes the 158 selected COVID-19 dashboards along with other prediction-oriented dashboards that were neglected in the previous study. The 158 dashboards were drawn from Multimedia Appendix 3 of the previous study. Two additional dashboards concerning COVID-19 prediction were included to examine the characteristics of predicting COVID-19 cases with mathematical models. We observed that (1) all 158 dashboards can be overlaid on Google Maps, making the research easier to understand than in the previous study, and (2) the prediction-oriented COVID-19 dashboards can apply mathematical models that strengthen the previous study. The lack of predictive models during the pandemic likely limited the usefulness of those 158 dashboards. Predictive approaches should be incorporated into future dashboard-design studies.


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Ahmed Ali ◽  
Ahmed Fathalla ◽  
Ahmad Salah ◽  
Mahmoud Bekhit ◽  
Esraa Eldesouky

Nowadays, ocean observation technology continues to progress, resulting in a huge increase in the volume and dimensionality of marine data. This volume of data provides a golden opportunity to train predictive models: the more data there is, the better the predictive model can be. Predicting marine data such as sea surface temperature (SST) and significant wave height (SWH) is a vital task in a variety of disciplines, including marine activities, deep-sea monitoring, and marine biodiversity monitoring. The literature includes efforts to forecast such marine data; these efforts can be classified into three classes: machine learning, deep learning, and statistical predictive models. To the best of the authors’ knowledge, no study has compared the performance of these three approaches on a real dataset. This paper focuses on the prediction of two critical marine features: SST and SWH. In this work, we implemented statistical, deep learning, and machine learning models for predicting SST and SWH on a real dataset obtained from the Korea Hydrographic and Oceanographic Agency. We then compared these three predictive approaches on four different evaluation metrics. Experimental results revealed that the deep learning model slightly outperformed the machine learning models in overall performance, and that both approaches greatly outperformed the statistical predictive model.
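A rough sketch of this kind of comparison, assuming a synthetic seasonal SST series rather than the Korea Hydrographic and Oceanographic Agency data, pits a simple learned autoregression against a climatological-mean baseline on two of the usual evaluation metrics (RMSE and MAE):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(400)
# synthetic daily SST: seasonal cycle plus observation noise
sst = 15 + 8 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.05, t.size)

p, split = 3, 300  # AR order and train/test split

def make_xy(series, p):
    """Lagged design matrix: predict series[i] from the p preceding values."""
    X = np.array([series[i - p:i] for i in range(p, len(series))])
    return X, series[p:]

# learned model: linear autoregression fit by least squares
X_tr, y_tr = make_xy(sst[:split], p)
coef, *_ = np.linalg.lstsq(np.column_stack([X_tr, np.ones(len(X_tr))]),
                           y_tr, rcond=None)

X_te, y_te = make_xy(sst[split - p:], p)  # carry last p training points as context
pred_ar = np.column_stack([X_te, np.ones(len(X_te))]) @ coef

# simple statistical baseline: climatological mean of the training period
pred_base = np.full_like(y_te, sst[:split].mean())

rmse = lambda yhat, y: float(np.sqrt(np.mean((yhat - y) ** 2)))
mae = lambda yhat, y: float(np.mean(np.abs(yhat - y)))
```

On this toy series the autoregression tracks the seasonal cycle while the constant baseline cannot, so the learned model wins on both metrics; the real study's finding is analogous, with deep and machine learning models outperforming the statistical one.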

