A Glass Durability Model Based on Understanding Glass Chemistry and Structural Configurations of the Glass Constituents

1996 ◽  
Vol 432 ◽  
Author(s):  
Xiangdong Feng ◽  
Todd B. Metzger

An improved structural bond strength (SBS) model has been developed to quantify the correlation between glass compositions and their chemical durabilities. The SBS model assumes that the strengths of the bonds between cations and oxygen, together with the structural roles of the individual elements in the glass, are the predominant factors controlling the composition dependence of the chemical durability of glasses. The structural roles of oxides in glass are classified as network formers, network breakers, and intermediates; these roles depend on the glass composition and on the redox state of the oxides. Al2O3, ZrO2, Fe2O3, and B2O3 are assigned as network formers only when there are sufficient alkalis to bind with these oxides. CaO can also improve durability by sharing non-bridging oxygen with alkalis, relieving SiO2 from alkalis. The binding order to alkalis is Al2O3 > ZrO2 > Fe2O3 > B2O3 > CaO > SiO2. The percolation phenomenon in glass is also taken into account: the concentration of network formers must reach a critical value for a glass to become durable. Durable glasses are sufficient in network formers and have a complete network structure, whereas poorly durable glasses are deficient in network formers and their network is incomplete and discontinuous. The SBS model correlates the 7-day product consistency test durability of 42 low-level waste glasses with their composition with an R2 of 0.87, better than the 0.81 obtained with an eight-coefficient empirical first-order mixture model on the same data set.
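The percolation idea in the abstract can be sketched numerically: classify each oxide as a network former or not, and flag a glass as potentially durable only when the mole fraction of formers exceeds a critical value. The threshold of 0.60 and the two compositions below are invented for illustration, not values from the paper.

```python
# Minimal sketch of the SBS percolation check (illustrative values only).
# Oxides are classified as network formers or not; a glass is flagged durable
# only if its former mole fraction exceeds an assumed critical threshold.

NETWORK_FORMERS = {"SiO2", "Al2O3", "ZrO2", "Fe2O3", "B2O3"}

def former_fraction(composition):
    """composition: dict of oxide -> mole fraction (summing to 1)."""
    return sum(x for oxide, x in composition.items() if oxide in NETWORK_FORMERS)

def is_durable(composition, critical_fraction=0.60):  # threshold is hypothetical
    return former_fraction(composition) >= critical_fraction

glass_a = {"SiO2": 0.55, "B2O3": 0.10, "Na2O": 0.20, "CaO": 0.15}
glass_b = {"SiO2": 0.40, "Na2O": 0.45, "CaO": 0.15}
print(is_durable(glass_a))  # True: former fraction 0.65
print(is_durable(glass_b))  # False: former fraction 0.40
```

In the full model the classification itself shifts with composition (e.g. Al2O3 counts as a former only with sufficient alkali), which this sketch deliberately omits.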

PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255630 ◽  
Author(s):  
Marcin Budka ◽  
Matthew R. Bennett ◽  
Sally C. Reynolds ◽  
Shelby Barefoot ◽  
Sarah Reel ◽  
...  

Footprints are left, or obtained, in a variety of scenarios, from crime scenes to anthropological investigations. Determining the sex of a footprint can be useful in screening such impressions, and attempts have been made to do so using single- or multi-landmark distances, shape analyses, and the density of friction ridges. Here we explore the relative importance of different components in sexing two-dimensional foot impressions, namely size, shape, and texture. We use a machine learning approach and compare this to more traditional methods of discrimination. Two datasets are used: a pilot data set collected from students at Bournemouth University (N = 196) and a larger data set collected by podiatrists at Sheffield NHS Teaching Hospital (N = 2677). Our convolutional neural network can sex a footprint with an accuracy of around 90% on a test set of N = 267 footprint images using all image components, which is better than an expert can achieve. The quality of the impressions affects this success rate, but the results are promising, and in time it may be possible to create an automated screening algorithm with which practitioners of whatever sort (medical or forensic) can obtain a first-order sexing of a two-dimensional footprint.


Author(s):  
Lakshmana Kumar Ramasamy ◽  
Seifedine Kadry ◽  
Yunyoung Nam ◽  
Maytham N. Meqdad

Sentiment analysis is an active research topic pursued with supervised and machine learning algorithms. The analysis can be performed on movie reviews, Twitter reviews, online product reviews, blogs, discussion forums, Myspace comments, and social networks. Here, a Twitter data set is analyzed using a support vector machine (SVM) classifier with various parameters. The content of each tweet is classified to determine whether it contains factual data or opinion data; deeper analysis is required to find the opinion of the tweets posted by individuals. Sentiment is classified into positive, negative, and neutral. From this classification and analysis, decisions can be made to improve productivity. The performance of the SVM radial kernel, SVM linear grid, and SVM radial grid models was compared, and the SVM linear grid was found to perform better than the other SVM models.
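The three-way kernel comparison described above can be reproduced in outline with scikit-learn. The synthetic three-class data and the small parameter grids below are placeholders for the tweet feature vectors and tuning ranges used in the study, which the abstract does not specify.

```python
# Sketch of comparing SVM variants by cross-validation (scikit-learn).
# Synthetic features stand in for the tweet feature vectors of the study.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)

models = {
    "radial": SVC(kernel="rbf"),                                  # plain RBF kernel
    "linear grid": GridSearchCV(SVC(kernel="linear"),             # grid-tuned linear
                                {"C": [0.1, 1, 10]}, cv=3),
    "radial grid": GridSearchCV(SVC(kernel="rbf"),                # grid-tuned RBF
                                {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=3),
}

results = {}
for name, model in models.items():
    results[name] = cross_val_score(model, X, y, cv=3).mean()
    print(f"{name}: {results[name]:.3f}")
```

On real tweet data the ranking reported in the abstract (linear grid best) need not hold for this synthetic set; the sketch only shows the comparison machinery.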


Author(s):  
D. E. Becker

An efficient, robust, and widely applicable technique is presented for computational synthesis of high-resolution, wide-area images of a specimen from a series of overlapping partial views. This technique can also be used to combine the results of various forms of image analysis, such as segmentation, automated cell counting, deblurring, and neuron tracing, to generate representations that are equivalent to processing the large wide-area image rather than the individual partial views. This can be a first step towards quantitation of higher-level tissue architecture. The computational approach overcomes mechanical limitations of microscope stages, such as hysteresis and backlash, and automates a procedure that is currently done manually. One application is the high-resolution visualization and/or quantitation of large batches of specimens that are much wider than the field of view of the microscope. The automated montage synthesis begins by computing a concise set of landmark points for each partial view. The type of landmarks used can vary greatly depending on the images of interest. In many cases, image analysis performed on each data set can provide useful landmarks; even when no such “natural” landmarks are available, image processing can often provide useful ones.
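The alignment step underlying montage synthesis can be illustrated with phase correlation, a standard registration technique for recovering the translational offset between two overlapping views; it is used here only as a generic illustration, not as the paper's specific landmark method.

```python
# Sketch: estimate the translation between two overlapping views by phase
# correlation. A circularly shifted random image simulates stage movement.
import numpy as np

def phase_correlation_shift(a, b):
    """Return the (row, col) shift that maps image b onto image a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalised cross-power
    dy, dx = np.unravel_index(np.argmax(r), r.shape)  # peak marks the offset
    return dy, dx

rng = np.random.default_rng(0)
view = rng.random((64, 64))
shifted = np.roll(view, (5, 12), axis=(0, 1))   # simulated stage movement
print(phase_correlation_shift(shifted, view))   # → (5, 12)
```

Real partial views overlap only partially and contain noise, so the paper's landmark-point matching is more robust than this whole-image correlation; the sketch captures the core geometric idea.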


2020 ◽  

BACKGROUND: This paper deals with the territorial distribution of alcohol- and drug-addiction mortality at the level of the districts of the Slovak Republic. AIM: The aim is to explore the relations within the administrative territorial division of the Slovak Republic, that is, between the individual districts, and thereby to reveal possibly hidden patterns in alcohol and drug mortality. METHODS: The analysis is carried out separately for females and males. The standardised mortality rate is computed through a sequence of mathematical relations. The Euclidean distance is used to compute the similarity within each pair of districts across the whole data set, and a cluster analysis is performed, with clusters formed from the mutual distances between the districts. The data are taken from the database of the Statistical Office of the Slovak Republic for all districts of the Slovak Republic, covering the years 1996 to 2015. RESULTS: The key finding is that the Slovak Republic exhibits considerable regional disparities in mortality, expressed by the standardised mortality rate computed for the diagnoses assigned to alcohol and drug addictions. The outcomes differ between the sexes, however. The Bratislava III District occupies by far the most extreme position and forms its own cluster for both sexes; the Topoľčany District holds a similarly extreme position for males. All the Bratislava districts remain notably dissimilar to one another. By contrast, the development of regional disparities among the districts over time appears markedly heterogeneous. CONCLUSIONS: There are considerable regional discrepancies among the districts of the Slovak Republic. Hence, it is necessary to create a common platform for addressing this issue.
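The two computational steps named in the Methods, direct standardisation of mortality rates and pairwise Euclidean distances between districts, can be sketched as follows. The age-band weights, death counts, and district figures are invented for illustration; the actual rates come from the Statistical Office data.

```python
# Sketch of the Methods: a directly standardised mortality rate per district,
# then pairwise Euclidean distances between districts. Numbers are invented.
import math

STANDARD_WEIGHTS = [0.25, 0.40, 0.35]          # assumed age-band weights

def standardised_rate(deaths, population):
    """Direct standardisation: weighted sum of age-specific rates per 100,000."""
    rates = [d / p for d, p in zip(deaths, population)]
    return sum(w * r for w, r in zip(STANDARD_WEIGHTS, rates)) * 100_000

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# yearly standardised rates (one value per year) for three fictitious districts
district_rates = {
    "A": [12.1, 13.4, 11.8],
    "B": [12.5, 13.0, 12.2],
    "C": [25.0, 27.1, 26.4],
}
for d1 in district_rates:
    for d2 in district_rates:
        if d1 < d2:   # each unordered pair once
            print(d1, d2, round(euclidean(district_rates[d1], district_rates[d2]), 2))
```

In the paper these distances then feed a cluster analysis; a district with an extreme rate profile (like "C" here, or Bratislava III in the study) ends up in its own cluster.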


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution has been introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five estimation methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulation on the basis of mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others; when the sample size is large enough, the ML and UMVU estimators are almost equivalent and both are more efficient than the LS, WLS, and PC estimators. Finally, a real data set is analyzed to illustrate the results.
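The simulation design, comparing estimators by Monte Carlo MSE, can be sketched generically. For brevity the sketch uses the ordinary exponential distribution with two simple estimators of its rate, not the generalized inverted exponential or the paper's five estimators.

```python
# Generic sketch of comparing estimators by Monte Carlo MSE, using the
# ordinary exponential distribution (rate lam) with two estimators:
# the ML estimator 1/mean and a median-based percentile-type estimator.
import math
import random

def mc_mse(estimator, lam=2.0, n=50, reps=2000, seed=1):
    rng = random.Random(seed)
    sq_errs = []
    for _ in range(reps):
        sample = [rng.expovariate(lam) for _ in range(n)]
        sq_errs.append((estimator(sample) - lam) ** 2)
    return sum(sq_errs) / reps

ml = lambda s: len(s) / sum(s)                       # maximum likelihood
pc = lambda s: math.log(2) / sorted(s)[len(s) // 2]  # percentile (median) based

print("ML MSE:", mc_mse(ml))   # smaller: ML is the more efficient estimator
print("PC MSE:", mc_mse(pc))
```

The same loop structure applies to the paper's setting; only the sampler and the estimator functions change.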


2020 ◽  
Vol 27 (4) ◽  
pp. 329-336 ◽  
Author(s):  
Lei Xu ◽  
Guangmin Liang ◽  
Baowen Chen ◽  
Xu Tan ◽  
Huaikun Xiang ◽  
...  

Background: Cell lytic enzymes are highly evolved proteins that can destroy the cell structure and kill bacteria. Unlike antibiotics, cell lytic enzymes do not cause the serious problem of drug resistance in pathogenic bacteria, so they are a good choice for curing bacterial infections, and the study of cell wall lytic enzymes aims at finding an efficient way to do so. Cell lytic enzymes include endolysins and autolysins, which differ in the purpose for which they break the cell wall, and identifying the type of a cell lytic enzyme is meaningful for the study of cell wall enzymes. Objective: Our motivation is to predict the type of a cell lytic enzyme. Detecting the type by experimental methods is time consuming, so an efficient computational method for predicting the type of cell lytic enzyme is proposed in this work. Method: We propose a computational method for discriminating endolysins from autolysins. First, a data set containing 27 endolysins and 41 autolysins is built. Each protein is then represented by its tripeptide composition, and the features with larger confidence degree are selected. Finally, a classifier is trained on the labelled vectors using a support vector machine, and the learned classifier is used to predict the type of a cell lytic enzyme. Results: With the proposed method, the experimental results show that the overall accuracy reaches 97.06% when 44 features are selected. Compared with Ding's method, ours improves the overall accuracy by nearly 4.5% ((97.06 − 92.9)/92.9). The performance of the proposed method is stable when the number of selected features ranges from 40 to 70. The overall accuracy with the tripeptide optimal feature set is 94.12%, while the overall accuracy of Chou's amphiphilic PseAAC method is 76.2%; thus the tripeptide optimal feature set improves the overall accuracy by nearly 18%. Conclusion: This paper proposes an efficient method, based on a support vector machine, for identifying endolysins and autolysins. The experimental results show that the overall accuracy of the proposed method is 94.12%, better than some existing methods, and that the selected 44 features improve the overall accuracy for identifying the type of cell lytic enzyme. The support vector machine performs better than other classifiers when using the selected feature set on the benchmark data set.
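The tripeptide-composition representation fed to the SVM can be sketched directly: a sequence of length L yields L − 2 overlapping tripeptides, whose normalised counts form the feature vector. The sequence below is a toy example, not a real lysin.

```python
# Sketch of the tripeptide-composition representation: every overlapping
# window of three residues is counted and normalised, giving a sparse
# feature vector over the 8000 possible tripeptides.
from collections import Counter

def tripeptide_composition(sequence):
    windows = [sequence[i:i + 3] for i in range(len(sequence) - 2)]
    total = len(windows)
    return {tp: c / total for tp, c in Counter(windows).items()}

seq = "MKTAYIAKQR"                            # toy sequence, not a real lysin
features = tripeptide_composition(seq)
print(len(features), sum(features.values()))  # 8 distinct tripeptides, sum 1.0
```

In the paper a confidence-degree criterion then keeps only 44 of these tripeptide features before SVM training; that selection step is not shown here.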


Author(s):  
Sankirti Sandeep Shiravale ◽  
R. Jayadevan ◽  
Sanjeev S. Sannakki

Text present in camera-captured scene images is semantically rich and can be used for image understanding. Automatic detection, extraction, and recognition of text are crucial in image understanding applications. Text detection in natural scene images is a tedious task due to complex backgrounds, uneven lighting conditions, and multi-coloured, multi-sized fonts. Two techniques, namely edge detection and colour-based clustering, are combined in this paper to detect text in scene images, and region properties are used to eliminate falsely generated annotations. A dataset of 1250 images was created and used for experimentation. Experimental results show that the combined approach performs better than the individual approaches.
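The edge-detection half of the combined approach can be illustrated with a plain Sobel gradient; the colour-based clustering and region-property filtering stages are omitted, and the small array below stands in for a grayscale scene image.

```python
# Sketch of the edge-detection component: Sobel gradient magnitude on a
# grayscale image, computed with array slicing (no external image library).
import numpy as np

def sobel_magnitude(img):
    # horizontal gradient: right column minus left column, weighted 1-2-1
    gx = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    # vertical gradient: bottom row minus top row, weighted 1-2-1
    gy = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    return np.hypot(gx, gy)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                  # vertical step edge between columns 3 and 4
mag = sobel_magnitude(img)
print(mag.max())                   # strongest response lies along the step
```

Text regions produce dense clusters of such high-gradient pixels, which is why edge maps are a useful first cue before colour clustering refines the candidates.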


1995 ◽  
Vol 3 (3) ◽  
pp. 133-142 ◽  
Author(s):  
M. Hana ◽  
W.F. McClure ◽  
T.B. Whitaker ◽  
M. White ◽  
D.R. Bahler

Two artificial neural network models were used to estimate the nicotine content in tobacco: (i) a back-propagation network and (ii) a linear network. The back-propagation network consisted of an input layer, one hidden layer, and an output layer; the linear network consisted of an input layer and an output layer. Both networks used the generalised delta rule for learning, and their performance was compared to calibration by the multiple linear regression (MLR) method. The nicotine content in tobacco samples was estimated for two different data sets. Data set A contained 110 near infrared (NIR) spectra, each consisting of reflected energy at eight wavelengths. Data set B consisted of 200 NIR spectra, each with 840 spectral data points; the fast Fourier transform was applied to data set B in order to compress each spectrum into 13 Fourier coefficients. For data set A, the linear regression model gave the best results, followed by the back-propagation network and then the linear network; the true performance of the linear regression model was better than that of the back-propagation and linear networks by 14.0% and 18.1%, respectively. For data set B, the back-propagation network gave the best result, followed by MLR and the linear network, with the linear network and MLR models giving almost the same results. The true performance of the back-propagation network model was better than that of the MLR and linear network models by 35.14%.
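The Fourier compression applied to data set B can be sketched with NumPy: take the real FFT of an 840-point spectrum, keep only the leading 13 coefficients, and reconstruct. The smooth synthetic spectrum below stands in for a real NIR measurement.

```python
# Sketch of Fourier compression of a spectrum: keep the first 13 real-FFT
# coefficients of an 840-point spectrum and reconstruct from them.
import numpy as np

x = np.arange(840) / 840           # one full period across the spectrum
spectrum = (0.6 + 0.3 * np.sin(2 * np.pi * 2 * x)
                + 0.1 * np.cos(2 * np.pi * 5 * x))

coeffs = np.fft.rfft(spectrum)     # 421 complex coefficients for 840 points
kept = coeffs.copy()
kept[13:] = 0                      # retain only the first 13 coefficients
reconstructed = np.fft.irfft(kept, n=840)

err = np.max(np.abs(reconstructed - spectrum))
print(f"13 coefficients, max reconstruction error {err:.2e}")
```

Because this synthetic spectrum contains only low frequencies, 13 coefficients reconstruct it almost exactly; real NIR spectra lose some high-frequency detail, which is the point of the compression before network training.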


2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽  
Author(s):  
Mohammad Ahmadian ◽  
Sohyla Reshadat ◽  
Nader Yousefi ◽  
Seyed Hamed Mirhossieni ◽  
Mohammad Reza Zare ◽  
...  

Due to the complex composition of leachate, comprehensive leachate treatment methods have not been established, and improper management of leachate can lead to many environmental problems. The aim of this study was to apply the Fenton process to reduce the major pollutants of the landfill leachate of Kermanshah city. The leachate was collected from the Kermanshah landfill site and treated by the Fenton process, and the effects of various parameters, including solution pH, Fe2+ and H2O2 dosage, Fe2+/H2O2 molar ratio, and reaction time, were investigated. The results showed that removal of COD, TOC, TSS, and colour increased with increasing Fe2+ and H2O2 dosage, Fe2+/H2O2 molar ratio, and reaction time, and that the maximum COD, TOC, TSS, and colour removal was obtained at low pH (pH 3). The kinetic data were analyzed in terms of zero-order, first-order, and second-order expressions; the first-order kinetic model described the removal of COD, TOC, TSS, and colour from leachate better than the two other kinetic models. Despite the extreme difficulty of leachate treatment, these results seem rather encouraging for the application of Fenton's oxidation.
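The first-order kinetic analysis amounts to fitting ln(C/C0) = −kt by least squares. The concentration values below are synthetic stand-ins generated from an assumed rate constant, not the study's measurements.

```python
# Sketch of the first-order kinetic fit: ln(C/C0) = -k t, with the slope -k
# found by least squares through the origin. COD values are synthetic.
import math

times = [0, 10, 20, 30, 40, 60]                 # minutes (illustrative)
c0 = 1000.0                                     # initial COD, mg/L (assumed)
k_true = 0.05                                   # assumed rate constant, 1/min
cod = [c0 * math.exp(-k_true * t) for t in times]

# least-squares slope of ln(C/C0) against t, intercept forced through zero
y = [math.log(c / c0) for c in cod]
k_fit = -sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)
print(f"fitted k = {k_fit:.3f} per minute")
```

In practice the same fit is run for each pollutant (COD, TOC, TSS, colour) and the kinetic order with the best fit quality is reported, first-order in this study.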


1994 ◽  
Vol 29 (1) ◽  
pp. 43-55 ◽  
Author(s):  
M Raoof ◽  
I Kraincanic

Using theoretical parametric studies covering a wide range of cable (and wire) diameters and lay angles, the ranges of validity of various approaches used for analysing helical cables are critically examined. Numerical results strongly suggest that for multi-layered steel strands with small wire/cable diameter ratios, the bending and torsional stiffnesses of the individual wires may safely be ignored when calculating the 2 × 2 matrix of strand axial/torsional stiffnesses. However, such bending and torsional wire stiffnesses are shown to be first-order parameters in analysing the overall axial and torsional stiffnesses of, say, seven-wire strands, especially under free-fixed end conditions with respect to torsional movements. Interwire contact deformations are shown to be of great importance in evaluating the axial and torsional stiffnesses of large diameter multi-layered steel strands; their importance diminishes as the number of wires associated with smaller diameter cables decreases. Using a modified version of a previously reported theoretical model for analysing multi-layered instrumentation cables, theoretical numerical results demonstrate the importance of allowing for the influence of contact deformations in compliant layers on overall cable characteristics such as axial or torsional stiffness. In particular, non-Hertzian contact formulations are used to obtain the interlayer compliances in instrumentation cables, in preference to a previously reported model employing Hertzian theory with its associated limitations.
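The 2 × 2 axial/torsional stiffness matrix referred to above couples axial force and torque to axial strain and twist. The sketch below evaluates it for invented coefficients to show why end conditions matter: with free ends (zero torque) the coupling terms reduce the effective axial stiffness.

```python
# Sketch of the 2x2 axial/torsional stiffness relation for a helical strand:
# [F, M] = K @ [eps, tau]. All coefficient values are invented.
import numpy as np

K = np.array([[120.0, 15.0],     # row 1: axial force F  vs (eps, tau)
              [ 15.0,  4.0]])    # row 2: torque M       vs (eps, tau)

# Fixed-end (no twist, tau = 0): effective axial stiffness is just K[0, 0].
k_fixed = K[0, 0]

# Free-end (no torque, M = 0): tau = -(K[1,0]/K[1,1]) * eps feeds back into F,
# giving the condensed stiffness K[0,0] - K[0,1]*K[1,0]/K[1,1].
k_free = K[0, 0] - K[0, 1] * K[1, 0] / K[1, 1]

print(k_fixed, k_free)   # free-end axial stiffness is lower than fixed-end
```

This condensation is why the paper distinguishes free-fixed end conditions, and why neglecting the coupling (or the wire bending/torsion terms that feed it) is safe only for certain strand constructions.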

