Machine Learning Techniques for Fluid Flows at the Nanoscale

Fluids ◽  
2021 ◽  
Vol 6 (3) ◽  
pp. 96
Author(s):  
Filippos Sofos ◽  
Theodoros E. Karakasidis

Simulations of fluid flows at the nanoscale produce massive amounts of data, and machine learning (ML) techniques have been developed in recent years to leverage them, yielding unique results. This work applies ML tools to provide insight into properties across molecular dynamics (MD) simulations, filling in missing data points and predicting states not previously reached by the simulation. Taking the flow of a simple Lennard-Jones liquid in nanoscale slits as a basis, ML regression-based algorithms are exploited as an alternative route to the transport properties of fluids, e.g., the diffusion coefficient, shear viscosity, thermal conductivity, and the average velocity across the nanochannels. Through appropriate training and testing, ML-predicted values can be extracted for various input variables, such as the geometrical characteristics of the slits, the interaction parameters between particles, and the flow driving force. The proposed technique could act in parallel with simulation as a means of enriching the database of material properties, assisting coupling between scales, and accelerating data-based scientific computations.
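As an illustration of the regression idea described above, the sketch below trains a scikit-learn random forest to map (slit width, interaction strength, driving force) inputs to a transport property. The data, variable ranges, and model choice are illustrative assumptions standing in for MD output, not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
h = rng.uniform(2.0, 20.0, n)    # slit width (reduced units, assumed range)
eps = rng.uniform(0.5, 1.5, n)   # LJ interaction strength
f = rng.uniform(0.0, 0.1, n)     # flow driving force

# Toy relation standing in for the MD-computed diffusion coefficient.
D = 0.05 * h / (1.0 + eps) + 0.3 * f + rng.normal(0.0, 0.005, n)

X = np.column_stack([h, eps, f])
X_tr, X_te, y_tr, y_te = train_test_split(X, D, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.3f}")
```

Once trained, such a surrogate can be queried at input combinations never simulated, which is the "covering missing data points" role the abstract describes.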

Polymers ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 579 ◽  
Author(s):  
Yousef Mohammadi ◽  
Mohammad Saeb ◽  
Alexander Penlidis ◽  
Esmaiel Jabbari ◽  
Florian J. Stadler ◽  
...  

Nowadays, polymer reaction engineers seek robust and effective tools to synthesize complex macromolecules with well-defined and desirable microstructural and architectural characteristics. Over the past few decades, several promising approaches, such as controlled/living (co)polymerization systems and chain-shuttling reactions, have been proposed and widely applied to synthesize rather complex macromolecules with controlled monomer sequences. Despite the unique potential of these newly developed techniques, tailor-making the microstructure of macromolecules by suggesting the most appropriate polymerization recipe remains a very challenging task. In the current work, two versatile and powerful tools capable of effectively addressing this challenge are proposed and successfully put into practice. The two tools are established through the amalgamation of the Kinetic Monte Carlo simulation approach and machine learning techniques. The former, an intelligent modeling tool, models and visualizes the intricate inter-relationships between polymerization recipes/conditions (as input variables) and the microstructural features of the produced macromolecules (as responses). The latter precisely predicts optimal copolymerization conditions that simultaneously satisfy all predefined microstructural features. The effectiveness of the proposed intelligent modeling and optimization techniques for solving this extremely important ‘inverse’ engineering problem was successfully examined by investigating the possibility of tailor-making the microstructure of Olefin Block Copolymers via chain-shuttling coordination polymerization.


Author(s):  
Vidyullatha P ◽  
D. Rajeswara Rao

Curve fitting is one of the core procedures in data analysis and is helpful for prediction, showing graphically how data points relate to one another, whether in a linear or non-linear model. Often the fitted curve either traces the concentration of data points or is simply used to smooth the data and improve the appearance of the plot. Curve fitting examines the relationship between independent and dependent variables with the objective of characterizing a good-fit model, i.e., finding the mathematical equation that best fits the given data. In this paper, 150 unorganized data points of environmental variables are used to develop linear and non-linear data models, which are evaluated using the three-dimensional ‘Sftool’ and ‘Labfit’ tools. In the linear model, the best estimates of the coefficients are those for which R-squared approaches one; in the non-linear models, minimum chi-square is the criterion.
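The two goodness-of-fit criteria mentioned (R-squared for the linear model, chi-square for the non-linear one) can be sketched with SciPy on synthetic data; the functions, noise levels, and point count below are illustrative assumptions, not the paper's environmental dataset.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 150)                      # 150 synthetic points
y_lin = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, x.size)
y_exp = 3.0 * np.exp(-0.4 * x) + rng.normal(0.0, 0.05, x.size)

# Linear model: an R-squared approaching one indicates a good fit.
fit = linregress(x, y_lin)
print(f"linear R^2 = {fit.rvalue**2:.4f}")

# Non-linear model: minimum chi-square (squared residuals over the
# assumed noise variance) is the criterion.
def model(x, a, b):
    return a * np.exp(-b * x)

popt, _ = curve_fit(model, x, y_exp, p0=(1.0, 0.1))
chi2 = np.sum((y_exp - model(x, *popt)) ** 2 / 0.05**2)
print(f"non-linear chi^2 = {chi2:.1f} for {x.size - 2} degrees of freedom")
```

A chi-square close to the number of degrees of freedom indicates the model fits to within the assumed noise level.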


Algorithms ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 258
Author(s):  
Tran Dinh Khang ◽  
Manh-Kien Tran ◽  
Michael Fowler

Clustering is an unsupervised machine learning method with many practical applications that has gathered extensive research interest. It divides data elements into clusters such that elements in the same cluster are similar, using no information about element labels. However, when some knowledge of the data points is available in advance, it is beneficial to use a semi-supervised algorithm. Among the many clustering techniques available, fuzzy C-means clustering (FCM) is a common one. To make the FCM algorithm semi-supervised, prior work proposed using an auxiliary matrix to adjust the membership grades of elements and force them into certain clusters during the computation. In this study, instead of an auxiliary matrix, we propose using multiple fuzzification coefficients to implement the semi-supervision component. After deriving the proposed semi-supervised fuzzy C-means clustering algorithm with multiple fuzzification coefficients (sSMC-FCM), we demonstrate the convergence of the algorithm and validate its efficiency through a numerical example.
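A minimal sketch of the core idea, FCM with a per-point fuzzification coefficient, is shown below. Giving "supervised" points a sharper (smaller) fuzzifier is a simplified stand-in for the sSMC-FCM mechanism, not the paper's exact derivation.

```python
import numpy as np

def fcm_multi_m(X, c, m, iters=100, seed=0):
    """Fuzzy C-means where point i uses its own fuzzifier m[i]."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # random initial memberships
    for _ in range(iters):
        Um = U ** m[:, None]                       # per-point exponent
        V = Um.T @ X / Um.sum(axis=0)[:, None]     # cluster centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        ratio = d[:, :, None] / d[:, None, :]      # d_ik / d_ij, shape (n, c, c)
        U = 1.0 / (ratio ** (2.0 / (m - 1.0))[:, None, None]).sum(axis=2)
    return U, V

# Two well-separated synthetic blobs; a few points get a sharper fuzzifier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(4.0, 0.5, (50, 2))])
m = np.full(100, 2.0)
m[:5] = 1.5   # "supervised" points: smaller m -> crisper memberships
U, V = fcm_multi_m(X, 2, m)
print(U.argmax(axis=1))
```

With a uniform m this reduces to standard FCM; varying m per point is what lets prior knowledge sharpen some elements' cluster assignments.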


2018 ◽  
Vol 3 (24) ◽  
pp. eaau2489 ◽  
Author(s):  
I. M. Van Meerbeek ◽  
C. M. De Sa ◽  
R. F. Shepherd

In a step toward soft robot proprioception, and therefore better control, this paper presents an internally illuminated elastomer foam that has been trained to detect its own deformation through machine learning techniques. Optical fibers transmitted light into the foam and simultaneously received diffuse waves from internal reflection. The diffuse reflected light was interpreted by machine learning techniques to predict whether the foam was twisted clockwise, twisted counterclockwise, bent up, or bent down. Machine learning techniques were also used to predict the magnitude of each deformation. On new data points, the model predicted the type of deformation with 100% accuracy and its magnitude with a mean absolute error of 0.06°. This capability may impart soft robots with more complete proprioception, enabling them to be reliably controlled and responsive to external stimuli.
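A hypothetical sketch of such a sensing pipeline: classify the deformation type from multi-fiber intensity readings. The features, labels, and random-forest model are all illustrative assumptions; the paper does not specify its model here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
classes = ["twist_cw", "twist_ccw", "bend_up", "bend_down"]

# Assume each deformation type perturbs an 8-fiber intensity pattern differently.
patterns = rng.normal(0.0, 1.0, (4, 8))
y = rng.integers(0, 4, 400)
X = patterns[y] + rng.normal(0.0, 0.2, (400, 8))   # noisy fiber readings

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
print(f"held-out accuracy: {acc:.2f}")
print("sample prediction:", classes[int(clf.predict(X[:1])[0])])
```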



2021 ◽  
Author(s):  
Charmaine Cruz ◽  
Kevin McGuinness ◽  
Jerome O'Connell ◽  
James Martin ◽  
Philip Perrin ◽  
...  

The EU Habitats Directive (HD) requires that natural habitats are monitored every six years to assess habitat condition, extent and range. In Ireland, reporting for the HD is based on ecological field surveys. This field-based mapping and assessment methodology, while desirable, can be time-consuming, difficult, and expensive. It also covers only a sub-sample of sites due to cost. Thus, more efficient mapping approaches, such as remote sensing, should be considered to supplement these monitoring techniques.

Here we present some preliminary results from the iHabiMap project. The overall aim of iHabiMap is to develop and assess analytical approaches that use machine learning techniques to derive habitat maps from imagery acquired by Unmanned Aerial Vehicle (UAV). The project started in 2019 and, to date, twelve UAV surveys have been conducted, acquiring very high-resolution (6 cm) multispectral imagery for five selected study sites. Ecological data were collected concurrently with each UAV survey to record the actual state of the vegetation. The project focuses on assessing imagery from three habitat types in Ireland: upland blanket bog, coastal dunes, and grassland. In this abstract we focus on the coastal dunes.

The Random Forest (RF) machine learning algorithm, implemented with the Python scikit-learn library, was used to identify and map the habitats. The pixel-based RF model was calibrated using a combination of ground-truth data and several colour, band-ratio, and topographic variables derived from the UAV data. Six separate models were generated to compare how classification accuracies change with different combinations of input variables. The methodology was initially implemented to classify four sand dune Annex I habitats (2120 - Marram dunes; 2130 - Fixed dunes; 2170 - Dunes with creeping willow; 2190 - Dune slacks) at the Maharees site in Ireland.

The results were analyzed using the standard confusion matrix to calculate overall and class-specific accuracies. Preliminary results suggest that RF can classify sand dune Annex I habitats 2120, 2130, 2170, and 2190 with overall accuracies ranging from 0.80 to 0.93, depending on the input variables. The highest accuracy was achieved using the combined spectral and topographic information. Feature importance metrics calculated from RF showed that surface elevation and the Green Normalized Difference Vegetation Index (GNDVI) were the key input variables in the classification. The results obtained from the presented workflow demonstrate the potential of combining UAV imagery, machine learning techniques, and field data to characterize coastal dune environments. The classification will be expanded to explore phenological differences in the vegetation by including the temporal dimension of the data, and will be tested on the upland and grassland habitats. Moreover, an upscaling methodology will be implemented to assess the usability of UAV data for broader-scale mapping.
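The described workflow (pixel-based RF classification, confusion matrix, feature importances) can be sketched with scikit-learn as below; the synthetic pixels, class structure, and feature list are assumptions standing in for the UAV-derived layers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(42)
features = ["green", "red", "nir", "gndvi", "elevation"]
n = 2000
y = rng.integers(0, 4, n)        # four Annex I classes: 2120/2130/2170/2190
X = rng.normal(0.0, 1.0, (n, 5))
X[:, 3] += 1.5 * y               # make GNDVI informative, as reported
X[:, 4] += 2.0 * y               # make elevation the strongest variable

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:1500], y[:1500])
pred = rf.predict(X[1500:])
print("overall accuracy:", round(accuracy_score(y[1500:], pred), 2))
print(confusion_matrix(y[1500:], pred))
top2 = sorted(zip(rf.feature_importances_, features), reverse=True)[:2]
print("top features:", [name for _, name in top2])
```

On real imagery the feature matrix would be built by stacking the orthomosaic bands, band ratios, and the digital surface model, one row per pixel.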


Author(s):  
Yumeng Li ◽  
Weirong Xiao ◽  
Pingfeng Wang

Atomistic simulations play an important role in material analysis and design because they are rooted in accurate first-principles methods that are free from empirical parameters and phenomenological models. However, successful applications of MD simulations largely depend on the availability of efficient and accurate force-field potentials describing the interatomic interactions. As a powerful tool revolutionizing many areas of science and technology, machine learning techniques have gained growing attention in materials science and engineering due to their potential to accelerate materials discovery through surrogate-model-assisted material design. Despite the tremendous advantages of machine learning techniques for the development of force-field potentials compared to conventional approaches, the uncertainty involved in the machine-learning-interpolated atomic potential energy surface has not drawn much attention, although it is an important issue. In this paper, an uncertainty quantification study is performed for machine-learning-interpolated atomic potentials and applied to titanium dioxide (TiO2), an industrially relevant and well-studied material. The results indicate that quantifying uncertainties is an indispensable task that must be performed alongside the atomistic simulation process for a successful application of machine-learning-based force-field potentials.


2021 ◽  
Author(s):  
Helmut Wasserbacher ◽  
Martin Spindler

This article is an introduction to machine learning for financial forecasting, planning and analysis (FP&A). Machine learning appears well suited to support FP&A through the highly automated extraction of information from large amounts of data. However, because most traditional machine learning techniques focus on forecasting (prediction), we discuss the particular care that must be taken to avoid the pitfalls of using them for planning and resource allocation (causal inference). While the naive application of machine learning usually fails in this context, the recently developed double machine learning framework can address causal questions of interest. We review the current literature on machine learning in FP&A and illustrate in a simulation study how machine learning can be used for both forecasting and planning. We also investigate how forecasting and planning improve as the number of data points increases.
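The double machine learning idea can be sketched as a cross-fitted residual-on-residual regression: ML models predict away the influence of observed covariates on both the planning variable and the outcome, and a final regression on the residuals estimates the causal effect. The simulated variables below are illustrative, not the article's simulation study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(0.0, 1.0, (n, 5))                         # observed covariates
d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, n)    # "plan" variable
y = 2.0 * d + np.sin(X[:, 0]) + rng.normal(0.0, 1.0, n)  # outcome; true effect = 2.0

# Stage 1: cross-fitted ML predictions of the two nuisance functions.
d_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, d, cv=2)
y_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, y, cv=2)

# Stage 2: residual-on-residual regression recovers the causal effect.
theta = LinearRegression().fit((d - d_hat)[:, None], y - y_hat).coef_[0]
print(f"estimated effect: {theta:.2f}")
```

Regressing y on d directly would absorb the confounding path through X; the partialling-out step is what makes the estimate usable for resource-allocation decisions.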


Author(s):  
Mostafa A. Salama ◽  
Aboul Ella Hassanien

Euclidean calculations represent a cornerstone of many machine learning techniques, such as Fuzzy C-Means (FCM) and the Support Vector Machine (SVM). The FCM technique calculates the Euclidean distance between data points, and the SVM technique calculates the dot product of two points in Euclidean space. These calculations do not consider the degree of relevance of the selected features to the target class labels. This paper proposes a modification of the Euclidean-space calculations in the FCM and SVM techniques based on a ranking obtained by evaluating the features. The authors treat the ranking as a membership value of each feature, fuzzifying the Euclidean calculations rather than using the crisp concept of feature selection, which keeps some features and discards others. Experimental results showed that applying these fuzzy membership values to the Euclidean calculations in the FCM and SVM techniques yields better accuracy than both the ordinary calculation and simply ignoring the unselected features.
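The core modification can be sketched as a relevance-weighted Euclidean distance, with the analogous weighting applying to the SVM dot product. The relevance scores below are illustrative placeholders, whereas the paper derives them from a feature-ranking step.

```python
import numpy as np

def weighted_euclidean(a, b, relevance):
    """Euclidean distance with per-feature relevance weights in [0, 1]."""
    return np.sqrt(np.sum(relevance * (a - b) ** 2))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 5.0])
relevance = np.array([1.0, 0.1, 0.5])   # feature 3 only half-relevant to the labels

print(weighted_euclidean(a, b, relevance))    # weighted: sqrt(1*1 + 0.1*0 + 0.5*4)
print(weighted_euclidean(a, b, np.ones(3)))   # ordinary: sqrt(1 + 0 + 4)
```

Crisp feature selection is the special case where every relevance weight is exactly 0 or 1; the fuzzy version lets weakly relevant features still contribute proportionally.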


2006 ◽  
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang
