Application of Data Analytics Techniques to Establish Geometallurgical Relationships to Bond Work Index at the Paracatu Mine, Minas Gerais, Brazil

Minerals ◽  
2019 ◽  
Vol 9 (5) ◽  
pp. 302 ◽  
Author(s):  
Mahadi Bhuiyan ◽  
Kamran Esmaieli ◽  
Juan C. Ordóñez-Calderón

Analysis of geometallurgical data is essential to building geometallurgical models that capture physical variability in the orebody and can be used for the optimization of mine planning and the prediction of milling circuit performance. However, multivariate complexity and compositional data constraints can make this analysis challenging. This study applies unsupervised and supervised learning to establish relationships between the Bond ball mill work index (BWI) and geomechanical, geophysical and geochemical variables for the Paracatu gold orebody. The regolith and fresh rock geometallurgical domains are established from two cluster sets resulting from K-means clustering of the first three principal component (PC) scores of isometric log-ratio (ilr) coordinates of geochemical data and standardized BWI, geomechanical and geophysical data. The first PC is attributed to weathering and reveals a strong relationship between BWI and rock strength and fracture intensity in the regolith. Random forest (RF) classification of BWI in the fresh rock identifies the greater importance of geochemical ilr balances relative to geomechanical and geophysical variables.
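As a minimal sketch of the ilr coordinates this study feeds into PCA and K-means (the abstract does not specify which orthonormal basis was used), the common pivot-coordinate construction can be written in a few lines of plain Python; the sample concentrations below are invented for illustration:

```python
import math

def geomean(xs):
    """Geometric mean of a list of positive numbers."""
    return math.exp(sum(math.log(v) for v in xs) / len(xs))

def ilr(x):
    """Isometric log-ratio (pivot) coordinates of a D-part composition:
    z_i = sqrt(i/(i+1)) * ln(geomean(x_1..x_i) / x_{i+1}),
    giving D-1 coordinates free of the constant-sum constraint."""
    return [math.sqrt(i / (i + 1)) * math.log(geomean(x[:i]) / x[i])
            for i in range(1, len(x))]

# A composition is scale-invariant: ilr ignores the total, so ppm
# and re-closed proportions give identical coordinates.
sample_ppm = [120.0, 40.0, 840.0]   # hypothetical element concentrations
coords = ilr(sample_ppm)            # 2 coordinates for a 3-part composition
```

The resulting coordinates can then be standardized together with the geomechanical and geophysical variables before clustering, as the abstract describes.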

2013 ◽  
Vol 1 (1) ◽  
pp. 1 ◽  
Author(s):  
Kostalena Michelaki ◽  
Michael J. Hughes ◽  
Ronald G.V. Hancock

Since the 1970s, archaeologists have increasingly depended on archaeometric rather than strictly stylistic data to explore questions of ceramic provenance and technology, and, by extension, trade, exchange, social networks and even identity. It is accepted as obvious by some archaeometrists and statisticians that the results of the analyses of compositional data may be dependent on the format of the data used, on the data exploration method employed and, in the case of multivariate analyses, even on the number of elements considered. However, this is rarely articulated clearly in publications, making it less obvious to archaeologists. In this short paper, we re-examine compositional data from a collection of bricks, tiles and ceramics from Hill Hall, near Epping in Essex, England, as a case study to show how the method of data exploration used and the number of elements considered in multivariate analyses of compositional data can affect the sorting of ceramic samples into chemical groups. We compare bivariate data splitting (BDS) with principal component analysis (PCA) and centered log ratio-principal component analysis (CLR-PCA) of different unstandardized data formats [original concentration data and logarithmically transformed (i.e. log10) data], using different numbers of elements. We confirm that PCA, in its various forms, is quite sensitive to the numbers and types of elements used in data analysis.
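The sensitivity to the number of elements considered is rooted in closure: re-closing a subcomposition (as if some elements had never been analysed) changes every raw proportion, while log-ratios between the retained parts are exactly preserved. A short illustrative sketch with invented concentrations:

```python
import math

def close(x):
    """Close a vector to proportions summing to 1."""
    t = sum(x)
    return [v / t for v in x]

# One ceramic sample measured on four elements (hypothetical values).
a = close([55.0, 20.0, 15.0, 10.0])

# Re-close a subcomposition using only the first three elements.
a_sub = close(a[:3])

# Raw proportions of the retained parts change under re-closure ...
changed = any(abs(f - s) > 1e-9 for f, s in zip(a[:3], a_sub))

# ... but log-ratios between retained parts are exactly preserved,
# the 'subcompositional coherence' that motivates CLR-PCA.
lr_full = math.log(a[0] / a[1])
lr_sub = math.log(a_sub[0] / a_sub[1])
```

Distances computed on raw or log10 concentrations lack this coherence, which is one reason the chemical groupings can shift with the element list.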


1992 ◽  
Vol 56 (385) ◽  
pp. 469-475 ◽  
Author(s):  
H. R. Rollinson

Abstract: Compositional data—that is, data where concentrations are expressed as proportions of a whole, such as percentages or parts per million—have a number of peculiar mathematical properties which make standard statistical tests unworkable. In particular, correlation analysis can produce geologically meaningless results. Aitchison (1986) proposed a log-ratio transformation of compositional data which allows inter-element relationships to be investigated. This method was applied to two sets of geochemical data—basalts from the Kilauea Iki lava lake and granitic gneisses from the Limpopo Belt—and geologically 'sensible' results were obtained. Geochemists are encouraged to adopt the Aitchison method of data analysis in preference to the traditional but invalid approach which uses compositional data.
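The "geologically meaningless results" are easiest to see in the extreme two-part case, where the constant-sum constraint forces a correlation of exactly -1 whatever the underlying geology. A small sketch of this, together with the centred log-ratio (clr) transform at the heart of Aitchison's approach (all sample values invented):

```python
import math

def clr(x):
    """Centred log-ratio transform: ln(x_i / geometric mean of x)."""
    logs = [math.log(v) for v in x]
    g = sum(logs) / len(logs)
    return [v - g for v in logs]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

# Any two-part composition: closure alone forces corr(x1, x2) = -1.
raw = [[0.1, 0.9], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]]
r = pearson([s[0] for s in raw], [s[1] for s in raw])

# clr coordinates sum to zero within each sample and are the usual
# starting point for valid inter-element analysis.
z = clr([10.0, 30.0, 60.0])
```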


2018 ◽  
Vol 156 (07) ◽  
pp. 1111-1130 ◽  
Author(s):  
J. VERHAEGEN ◽  
G.J. WELTJE ◽  
D. MUNSTERMAN

Abstract: The field of provenance analysis has seen a revival in the last decade as quantitative data-acquisition techniques continue to develop. In the 20th century, many heavy-mineral data were collected. These data were mostly used as qualitative indications of stratigraphy and provenance, and not incorporated in a quantitative provenance methodology. Even today, such data are mostly used only in classic data tables or cumulative heavy-mineral plots as a qualitative indication of variation. The main obstacle to rigorous statistical analysis is the compositional nature of these data, which makes them unfit for standard multivariate statistics. To gain more information from legacy data, a straightforward workflow for quantitative analysis of compositional datasets is provided: (1) a centred log-ratio transformation of the data is carried out to deal with the constant-sum constraint and non-negativity of the compositional data; (2) cluster analysis is followed by (3) principal component analysis and (4) bivariate log-ratio plots; (5) proxies for the effects of sorting and weathering are included to check the provenance significance of observed variations; and finally (6) a spatial interpolation of a provenance proxy extracted from the dataset can be carried out. To test this methodology, available heavy-mineral data from the southern edge of the Miocene North Sea Basin are analysed. The results are compared with available information from the literature and are used to gain improved insight into Miocene sediment input variations in the study area.
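The first two workflow steps, centred log-ratio transformation followed by cluster analysis, can be sketched as below. This is an illustrative implementation (clr plus a tiny deterministic k-means with farthest-point seeding), not the authors' code, applied to invented heavy-mineral counts from two hypothetical sources:

```python
import math

def clr(x):
    """Centred log-ratio transform of one sample."""
    logs = [math.log(v) for v in x]
    g = sum(logs) / len(logs)
    return [v - g for v in logs]

def dist2(p, q):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means with deterministic maximin seeding:
    start from the first point, then repeatedly add the point
    farthest from the centres chosen so far."""
    centres = [list(points[0])]
    while len(centres) < k:
        centres.append(list(max(points,
            key=lambda p: min(dist2(p, c) for c in centres))))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centres[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centres[c] = [sum(col) / len(col) for col in zip(*members)]
    return labels

# Hypothetical heavy-mineral counts for six samples, two sources.
counts = [[80, 15, 5], [75, 20, 5], [78, 17, 5],
          [10, 30, 60], [12, 28, 60], [8, 32, 60]]
z = [clr(s) for s in counts]   # step (1): clr transformation
labels = kmeans(z, 2)          # step (2): cluster analysis in clr space
```

Steps (3)-(6) would then operate on the same clr coordinates and the resulting cluster structure.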


Author(s):  
Dennis te Beest ◽  
Els Nijhuis ◽  
Tim Möhlmann ◽  
Cajo ter Braak

Microbiome composition data collected through amplicon sequencing are count data on taxa in which the total count per sample (the library size) is an artifact of the sequencing platform and as a result such data are compositional. To avoid library size dependency, one common way of analyzing multivariate compositional data is to perform a principal component analysis (PCA) on data transformed with the centered log-ratio, hereafter called a log-ratio PCA. Two aspects typical of amplicon sequencing data are the large differences in library size and the large number of zeroes. In this paper we show on real data and by simulation that, applied to data that combines these two aspects, log-ratio PCA is nevertheless heavily dependent on the library size. This leads to a reduction in power when testing against any explanatory variable in log-ratio redundancy analysis. If there is additionally a correlation between the library size and the explanatory variable, then the type 1 error becomes inflated. We explore putative solutions to this problem.
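The library-size dependence can be reproduced with a toy example: two samples with identical relative abundances but a ten-fold difference in library size, one unobserved taxon, and a fixed pseudocount for zero replacement (one common choice; the paper's exact zero treatment may differ):

```python
import math

def clr(x):
    """Centred log-ratio transform (scale-invariant, so counts and
    proportions give identical coordinates)."""
    logs = [math.log(v) for v in x]
    g = sum(logs) / len(logs)
    return [v - g for v in logs]

def clr_counts(counts, pseudocount=0.5):
    """Replace zero counts by a fixed pseudocount, then clr."""
    filled = [c if c > 0 else pseudocount for c in counts]
    return clr(filled)

# Identical relative abundances, ten-fold library-size difference,
# one taxon unobserved in both samples.
shallow = [10, 0, 90]     # library size 100
deep = [100, 0, 900]      # library size 1000

z_shallow = clr_counts(shallow)
z_deep = clr_counts(deep)

# The fixed pseudocount does not scale with the library, so the clr
# coordinates differ although the true compositions are identical.
max_gap = max(abs(a - b) for a, b in zip(z_shallow, z_deep))
```

This is exactly the mechanism by which zeros plus unequal library sizes leak library-size information into log-ratio PCA.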


2021 ◽  
Author(s):  
Solveig Pospiech ◽  
Anne Taivalkoski ◽  
Yann Lahaye ◽  
Pertti Sarala ◽  
Janne Kinnunen ◽  
...  

Modern mineral exploration is required to be conducted in a sustainable, environmentally friendly and socially acceptable way. Geochemical exploration in ecologically sensitive areas poses a particular challenge, because heavy machinery or invasive methods might cause long-lasting damage to nature. One way of reducing the environmental impact of mineral exploration during its early stages is to use surface sampling media such as upper soil horizons, water, plants and, at high latitudes, snow. Of these options, snow has several advantages: sampling and analysing snow is fast and inexpensive, it has no impact on the environment, and in wintertime it is ubiquitous and available regardless of the ecosystem.

In the “New Exploration Technologies (NEXT)” project*, snow samples were collected in March-April 2019 to evaluate snow as a sampling material for mineral exploration. The test site was the Rajapalot Au-Co prospect in northern Finland, located 60 km west of Rovaniemi and operated by Mawson Oy. A stratified random sampling strategy was applied to place the sampling stations on the test site. The sampling comprised 94 snow samples and 12 field replicates. The samples were analysed at the GTK Research Laboratory using a Nu AttoM single-collector inductively coupled plasma mass spectrometer (SC-ICP-MS), which returned analytical results for 52 elements at the ppt level. After quality control, the elements Ba, Ca, Cd, Cr, Cs, Ga, Li, Mg, Rb, Sr, Tl and V showed good quality and were used in the final data analysis.

Geochemical data from drill cores were used to train a model to predict bedrock geochemistry from the 12 available element concentrations of the snow analyses. Prior to the statistical analysis, all geochemical data were transformed to log-ratio scores to ensure that results are independent of the selection of elements and to avoid spurious correlations (compositional data approach). Results show that snow data provide reasonable predictions of bedrock geochemistry for elements such as Ca, Cr, Li and Mg, but also for elements not used in the snow data, such as Mn and Na. This suggests that snow can serve as a lithogeochemical mapping tool for delineating potential geological domains. For the ore-related elements Au, Ag, Co and U, the model provided predictions with higher uncertainty. Yet the pattern of the predicted values of ore-related elements shows that snow can also be used to delineate prospective areas for continuing exploration with more sensitive methods.

*) This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 776804.


2020 ◽  
Vol 21 (1) ◽  
pp. geochem2020-054 ◽  
Author(s):  
E. C. Grunsky ◽  
D. Arne

In this study we apply multivariate statistical and predictive classification methods to interpret geochemical data from 8545 stream-sediment samples collected in southern British Columbia, Canada. Data for 35 elements were corrected for laboratory bias and adjusted for values reported below the lower limit of detection. Each sample site was attributed with the closest British Columbia MINFILE occurrence within 2.5 km. MINFILE occurrences were grouped into ‘GroupModels’ based on similarities between the British Columbia Geological Survey mineral deposit models and geochemical signatures. These data were used to create a training dataset of 474 observations, including 100 samples not attributed with a MINFILE occurrence. The training set was used to generate predictions for the mineral deposit models from which posterior probabilities were estimated for the remaining 8071 samples. The data underwent a centred log-ratio transformation and then characterization using either principal component analysis (PCA) or t-distributed stochastic neighbour embedding using 9 dimensions (t-SNE) prior to classification by random forests. The posterior probabilities generated from the t-SNE metric provide a slightly higher level of prediction accuracy compared to the posterior probabilities obtained using the PCA metric. The results are comparable to those obtained using a conventional catchment analysis approach and expert-driven model. The approach presented here provides a repeatable, consistent and defensible methodology for the identification of prospective mineralized terrains and mineral systems.


Minerals ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. 501
Author(s):  
Caterina Gozzi ◽  
Roberta Sauro Graziano ◽  
Antonella Buccianti

Nature is often characterized by systems that are far from thermodynamic equilibrium, and rivers are not an exception for the Earth’s critical zone. When the chemical composition of stream waters is investigated, it emerges that riverine systems behave as complex systems. This means that the compositions have properties that depend on the integrity of the whole (i.e., the composition with all the chemical constituents), properties that arise thanks to the innumerable nonlinear interactions between the elements of the composition. The presence of interconnections indicates that the properties of the whole cannot be fully understood by examining the parts of the system in isolation. In this work, we propose investigating the complexity of riverine chemistry by using the CoDA (Compositional Data Analysis) methodology and the performance of the perturbation operator in the simplex geometry. With riverine bicarbonate considered as a key component of regional and global biogeochemical cycles and Ca2+ considered as mostly related to the weathering of carbonatic rocks, perturbations were calculated for subsequent couples of compositions after ranking the data for increasing values of the log-ratio ln(Ca2+/HCO3−). Numerical values were analyzed by using robust principal component analysis and non-parametric correlations between compositional parts (heat map) associated with distributional and multifractal methods. The results indicate that HCO3−, Ca2+, Mg2+ and Sr2+ are more resilient, thus contributing to compositional changes for all the values of ln(Ca2+/HCO3−) to a lesser degree with respect to the other chemical elements/components. Moreover, the complementary cumulative distribution function of all the sequences tracing the compositional change and the nonlinear relationship between the Q-th moment versus the scaling exponents for each of them indicate the presence of multifractal variability, thus revealing scaling properties of the fluctuations.
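The perturbation operator applied between consecutive compositions is the "translation" of Aitchison's simplex geometry: x ⊕ y closes the component-wise product. A minimal sketch with invented stream-water compositions, showing how the perturbation describing the change between two samples is obtained:

```python
def close(x):
    """Close a vector to proportions summing to 1."""
    t = sum(x)
    return [v / t for v in x]

def perturb(x, y):
    """Aitchison perturbation: x (+) y = C(x1*y1, ..., xD*yD)."""
    return close([a * b for a, b in zip(x, y)])

def inverse(y):
    """Perturbation-inverse: C(1/y1, ..., 1/yD)."""
    return close([1.0 / b for b in y])

# Two hypothetical 3-part water compositions, adjacent in the
# ranking by ln(Ca/HCO3).
water_t1 = close([0.60, 0.25, 0.15])
water_t2 = close([0.50, 0.20, 0.30])

# The perturbation taking t1 to t2 quantifies the compositional
# change between the consecutive samples.
change = perturb(water_t2, inverse(water_t1))
recovered = perturb(water_t1, change)   # equals water_t2 by construction
```

Components whose entries in such successive perturbations stay close to the neutral value 1/D are the "resilient" ones in the sense used here.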


2021 ◽  
Author(s):  
Karel Hron ◽  
Alessandra Menafoglio ◽  
Javier Palarea-Albaladejo ◽  
Peter Filzmoser ◽  
Juan Jose Egozcue

For varied reasons, in practical analysis of geochemical (compositional) data we are often interested in adjusting the role or influence of variables on the final results. For instance, a measuring device used to analyse the chemical mixture of soil samples might not have the same accuracy for all components, particularly for those with low concentrations. This can have a severe impact on the results and interpretation of popular methods like principal component analysis, regression analysis or clustering, but also on the quality of imputation of values below the detection limit of a measurement device. In all these cases, a sensible weighting scheme for the variables would generally lead to a statistical analysis that better reflects the underlying phenomenon of interest and is less influenced by limitations or issues in the data collection process. In addition, the relative nature of geochemical data (i.e., those in units like mg/kg, proportions or percentages), where the relevant information is contained in ratios between components, needs to be taken into account for reliable statistical processing. In this contribution we propose a sensible way of weighting geochemical components using a generalisation of the log-ratio methodology for compositional data analysis, namely the Bayes space approach. We provide practical examples of such weighting and also highlight that the Bayes space approach enables one to develop a methodological framework in which it is possible to apply any weighting strategy in a controlled way.
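One simple instance of such component weighting is a clr transform centred on a weighted, rather than plain, mean of the logs, so that down-weighted (e.g. poorly measured) parts contribute less to the centring. The Bayes space framework is far more general than this sketch, and the weights below are invented:

```python
import math

def wclr(x, w):
    """Weighted clr: ln x_i minus the w-weighted mean of ln x,
    for component weights w_i > 0 with sum(w) = 1. Equal weights
    recover the ordinary clr transform."""
    logs = [math.log(v) for v in x]
    centre = sum(wi * li for wi, li in zip(w, logs))
    return [li - centre for li in logs]

x = [5.0, 0.02, 94.98]        # one part near the detection limit (wt%)
equal = [1/3, 1/3, 1/3]
down = [0.45, 0.10, 0.45]     # trust the noisy low-concentration part less

z_equal = wclr(x, equal)      # ordinary clr
z_down = wclr(x, down)        # centring dominated by the reliable parts
```

The weighted coordinates satisfy a weighted zero-sum constraint, sum(w_i * z_i) = 0, mirroring the ordinary clr's zero sum.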


Metals ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1079
Author(s):  
Victor Ciribeni ◽  
Juan M. Menéndez-Aguado ◽  
Regina Bertero ◽  
Andrea Tello ◽  
Enzo Avellá ◽  
...  

As a continuation of previous research carried out to estimate the Bond work index (wi) using a simulator based on the cumulative kinetic model (CKM), a deeper analysis was carried out to determine the link between the kinetic and energy parameters for metalliferous and non-metallic ore samples. The results evidenced a relationship between the CKM kinetic parameter k and the grindability index gbp, and also with the wi obtained following the standard procedure. An excellent correlation was obtained in both cases, supporting the definition of alternative work index estimation tests with the advantages of more straightforward and quicker laboratory procedures.

