Determination of Metallic Impurities by ICP-MS Technique in Eyeshadows Purchased in Poland. Part I

Molecules ◽  
2021 ◽  
Vol 26 (21) ◽  
pp. 6753
Author(s):  
Aleksandra Pawlaczyk ◽  
Magdalena Gajek ◽  
Martyna Balcerek ◽  
Małgorzata I. Szynkowska-Jóźwik

Eye shadows, products willingly and frequently used by women and even children, have been reported in the literature to contain toxic metals. In this work, a total of 94 eye shadow samples available on the Polish market were collected. The products were selected to cover several parameters important from the point of view of a typical consumer: product type (matte/pearl), consumer group (adults and children), price range (very cheap, medium-priced, expensive and very expensive), color (twelve different colors were tested), manufacturer (eight brands were investigated) and country of production (four countries were included). The concentrations of selected metals (Ag, Ba, Bi, Cd, Pb, Sr, Tl) were determined by the ICP-MS technique after sample extraction with a mixture of nitric acid and hydrogen peroxide in a closed microwave system. For Ag, Cd and Tl, some results were below the established limit of quantification of the employed technique. The presence of strontium, barium, lead and bismuth was confirmed in all studied samples. The results obtained for the analyzed elements were, in general, quite comparable with the data reported by other authors. The small number of samples exceeding the permissible values (two samples were beyond the limit value for Cd of 0.5 mg/kg and one exceeded the acceptable concentration for Pb of 10 mg/kg) also indicates a relatively good condition of the Polish cosmetics market and suggests insubstantial risk for potential consumers. However, the results for some of the eye shadows intended for children turned out to be alarmingly high, in particular for Cd: the highest concentration of Cd reached almost 4 mg/kg, while that of Pb amounted to 16 mg/kg. Statistically significant differences were confirmed for all included parameters except the color of the eye shadow. Considering the results for Cd and Pb with respect to the country of origin, the cosmetics least contaminated by metallic impurities appear to be those produced in Canada, while those presenting the highest health risk among all studied eye shadows are make-up cosmetics originating from Poland and Italy. Multivariate analysis of a large data set using cluster analysis (CA) and principal component analysis (PCA) provided valuable information on dependencies between variables and objects.
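The authors' multivariate workflow is not published with the abstract; the following is a minimal sketch, on invented data, of what a CA/PCA screening of element concentrations can look like (sklearn and scipy stand in for whatever software the authors actually used):

```python
# Minimal sketch (not the authors' code): PCA and hierarchical cluster
# analysis (CA) of metal concentrations in eye shadow samples.
# All data values below are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: samples; columns: concentrations (mg/kg) of Ba, Bi, Cd, Pb, Sr.
elements = ["Ba", "Bi", "Cd", "Pb", "Sr"]
X = np.array([
    [120.0, 0.8, 0.05,  1.2, 15.0],  # hypothetical sample 1
    [340.0, 2.1, 0.40,  9.8, 42.0],  # hypothetical sample 2
    [ 95.0, 0.5, 0.02,  0.7, 11.0],  # hypothetical sample 3
    [410.0, 3.0, 0.55, 12.4, 50.0],  # hypothetical sample 4
])

# Standardize so that high-concentration elements do not dominate.
Xs = StandardScaler().fit_transform(X)

# PCA: how much variance the leading components explain.
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Hierarchical CA (Ward linkage), cut into two clusters.
Z = linkage(Xs, method="ward")
print("cluster labels:", fcluster(Z, t=2, criterion="maxclust"))
```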

2016 ◽  
Vol 42 (4) ◽  
pp. 637-660 ◽  
Author(s):  
Germán Kruszewski ◽  
Denis Paperno ◽  
Raffaella Bernardi ◽  
Marco Baroni

Logical negation is a challenge for distributional semantics, because predicates and their negations tend to occur in very similar contexts, and consequently their distributional vectors are very similar. Indeed, it is not even clear what properties a “negated” distributional vector should possess. However, when linguistic negation is considered in its actual discourse usage, it often performs a role that is quite different from straightforward logical negation. If someone states, in the middle of a conversation, that “This is not a dog,” the negation strongly suggests a restricted set of alternative predicates that might hold true of the object being talked about. In particular, other canids and middle-sized mammals are plausible alternatives, birds are less likely, skyscrapers and other large buildings virtually impossible. Conversational negation acts like a graded similarity function, of the sort that distributional semantics might be good at capturing. In this article, we introduce a large data set of alternative plausibility ratings for conversationally negated nominal predicates, and we show that simple similarity in distributional semantic space provides an excellent fit to subject data. On the one hand, this fills a gap in the literature on conversational negation, proposing distributional semantics as the right tool to make explicit predictions about potential alternatives of negated predicates. On the other hand, the results suggest that negation, when addressed from a broader pragmatic perspective, far from being a nuisance, is an ideal application domain for distributional semantic methods.
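The article's central measure is similarity in distributional space. As a toy sketch (the four vectors below are invented, not trained embeddings), ranking alternatives to a conversationally negated predicate reduces to cosine similarity:

```python
# Toy sketch of the similarity measure behind conversational negation:
# rank candidate alternatives to a negated predicate ("This is not a
# dog") by cosine similarity of distributional vectors. Vectors are
# made-up stand-ins for real distributional embeddings.
import numpy as np

vectors = {
    "dog":        np.array([0.9, 0.8, 0.1, 0.0]),
    "wolf":       np.array([0.8, 0.7, 0.2, 0.1]),
    "bird":       np.array([0.4, 0.3, 0.7, 0.1]),
    "skyscraper": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

negated = vectors["dog"]
ranked = sorted(
    ((w, cosine(negated, v)) for w, v in vectors.items() if w != "dog"),
    key=lambda pair: pair[1], reverse=True,
)
for word, sim in ranked:
    print(f"{word}: {sim:.2f}")  # wolf > bird > skyscraper, as expected
```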


2014 ◽  
Vol 931-932 ◽  
pp. 1353-1359
Author(s):  
Sutheetutt Vacharaskunee ◽  
Sarun Intakosum

Processing of a large data set, known today as big data processing, is a problem that does not yet have a well-defined solution. The data can be both structured and unstructured. For the structured part, eXtensible Markup Language (XML) is a major tool that freely allows document owners to describe and organize their data using their own markup tags. One major problem behind this freedom, however, lies in the big data retrieval process. The same or similar information, when described using different tags or different structures, may not be retrieved if the query statements contain keywords different from the ones used in the markup tags. The best way to solve this problem is to specify a standard set of markup tags for each problem domain. Creating such a standard set manually requires a lot of hard work and is a time-consuming process; in addition, it may be hard to define terms that are acceptable to all people. This research proposes a model for a new technique, XML Tag Recommendation (XTR), that aims to solve this problem. The technique applies the idea of Case-Based Reasoning (CBR) by collecting the most used tags in each domain as a case. These tags come from the collection of related words in WordNet, and WordCount, a web site that reports the frequency of words, is applied to choose the most used one. The input (problem) to the XTR system is an XML document containing the tags specified by the document owner. The solution is a set of recommended tags, the most used ones for the problem domain of the document. Document owners are free to change or keep the tags in their documents and can provide feedback to the XTR system.
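The paper does not include an implementation; a minimal sketch of the underlying idea, mapping an owner-chosen tag to the most frequently used member of its WordNet synonym sets, might look like this (NLTK's SemCor-based lemma counts stand in for the WordCount web site named in the abstract):

```python
# Sketch of the tag-normalization idea in XTR (not the authors' code):
# map a document owner's tag to the most frequently used synonym found
# in WordNet. NLTK's lemma.count() (SemCor frequencies) is used here
# in place of the WordCount web site.
# Requires: pip install nltk; then nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def recommend_tag(tag: str) -> str:
    """Return the most frequent WordNet synonym of `tag`, or `tag` itself."""
    candidates = {}
    for synset in wn.synsets(tag):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", "-")
            candidates[name] = max(candidates.get(name, 0), lemma.count())
    if not candidates:
        return tag
    return max(candidates, key=candidates.get)

# e.g. an owner's tag <automobile> would be normalized to "car",
# the most frequently used synonym in that domain.
print(recommend_tag("automobile"))
```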


2012 ◽  
Vol 12 (1) ◽  
pp. 817-868 ◽  
Author(s):  
J. K. Carman ◽  
D. L. Rossiter ◽  
D. Khelif ◽  
H. H. Jonsson ◽  
I. C. Faloona ◽  
...  

Abstract. Aircraft sampling of the stratocumulus-topped boundary layer (STBL) during the Physics of Stratocumulus Top (POST) experiment was primarily achieved using sawtooth flight patterns, during which the atmospheric layer 100 m above and below cloud top was sampled at a frequency of once every 2 min. The large data set that resulted from each of the 16 flights documents the complex structure and variability of this interfacial region in a variety of conditions. In this study, we first describe some properties of the entrainment interface layer (EIL), where strong gradients in turbulent kinetic energy (TKE), potential temperature and moisture can be found. We find that defining the EIL by the first two properties tends to yield similar results, but that moisture can be a misleading tracer of the EIL. These results are consistent with studies using large-eddy simulations. We next utilize the POST data to shed light on and constrain processes relevant to entrainment, a key process in the evolution of the STBL that to date is not well represented even by high-resolution models. We define "entrainment efficiency" as the ratio of the TKE consumed by entrainment to that generated within the STBL (primarily by cloud-top cooling). We find values for the entrainment efficiency that vary by 1.5 orders of magnitude, which is even greater than the one order of magnitude that previous modeling results have suggested. Our analysis also demonstrates that the entrainment efficiency depends on the strength of the stratification of the EIL, but not on the TKE in the cloud-top region. The relationships between entrainment efficiency and other STBL parameters serve as novel observational constraints for simulations of entrainment in such systems.
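The abstract's key quantity is a simple ratio of TKE budget terms; a hedged sketch, with placeholder values rather than POST measurements, makes the definition concrete:

```python
# Illustrative sketch of the "entrainment efficiency" defined in the
# abstract: TKE consumed by entrainment divided by TKE generated within
# the STBL. The numerical values below are placeholders, not POST data.

def entrainment_efficiency(tke_consumed: float, tke_generated: float) -> float:
    """Ratio of TKE consumed by entrainment to TKE generated in the STBL."""
    if tke_generated <= 0.0:
        raise ValueError("TKE generation rate must be positive")
    return tke_consumed / tke_generated

# Hypothetical flight-averaged budget terms (m^2 s^-3):
consumed = 2.0e-4   # TKE sink from entrainment across the EIL
generated = 1.5e-3  # TKE production, mainly cloud-top radiative cooling
print(f"entrainment efficiency: {entrainment_efficiency(consumed, generated):.2f}")
```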


Author(s):  
Christian Gollier

This chapter shows how the probability distribution of economic growth is subject to parametric uncertainty. There is only a limited data set for the dynamics of economic growth, and the absence of a sufficiently large data set for estimating the long-term growth process of the economy implies that its parameters are uncertain and subject to learning in the future. This problem is particularly acute when those parameters are unstable, or when the dynamic process entails low-probability extreme events: the rarer the event, the less precise the estimate of its likelihood. This builds a bridge between the problem of parametric uncertainty and that of extreme events.
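A hedged numerical illustration of the closing point (not taken from the chapter): if an event of true probability p is estimated from n independent observations, the relative standard error of the frequency estimate is sqrt((1-p)/(n*p)), which blows up as p shrinks:

```python
# Illustration (not from the chapter): precision of estimating an event
# probability p from n i.i.d. observations. The frequency estimate
# p_hat = k/n has standard error sqrt(p(1-p)/n), so its *relative*
# error sqrt((1-p)/(n*p)) grows without bound as the event gets rarer.
import math

n = 100  # e.g. roughly one century of annual growth observations

for p in (0.5, 0.1, 0.01, 0.001):
    rel_err = math.sqrt((1 - p) / (n * p))
    print(f"p = {p:<6} relative std. error of p_hat: {rel_err:.1%}")
# A p = 0.001 event is estimated with ~316% relative error here:
# "the rarer the event, the less precise the estimate of its likelihood".
```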


The Auk ◽  
2004 ◽  
Vol 121 (2) ◽  
pp. 380-390
Author(s):  
Shandelle M. Henson ◽  
James L. Hayward ◽  
Christina M. Burden ◽  
Clara J. Logan ◽  
Joseph G. Galusha

Abstract Seabirds move throughout the day in changing, patchy environments as they engage in various behaviors. We studied the diurnal abundance dynamics of Glaucous-winged Gulls (Larus glaucescens) in a habitat patch dedicated to loafing in the Strait of Juan de Fuca, Washington. We constructed three differential equation models as alternative hypotheses and then used model selection techniques to choose the one that most accurately described the system. We validated the model on an independent data set, made a priori model predictions, and conducted a field test of the predictions. Clear dynamic patterns emerged in the abundance of loafing gulls, even though individuals moved in and out of the loafing area more or less continuously throughout the day. Temporal patterns in aggregate loafing behavior are predicted by three environmental factors: day of the year, height of the tide, and solar elevation. This result is important for several reasons: (1) it reduces the aggregate behavior of complicated vertebrates to a simple mathematical equation, (2) it gives an example of a field system in which animal abundances are determined largely by low dimensional exogenous forces, and (3) it provides an example of accurate quantitative prediction of animal numbers in the field. From the point of view of conservation biology and resource management, the result is important because of the pervasive need to explain and predict numbers of organisms in time and space.
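The paper's three candidate models are not given in the abstract; the following hypothetical sketch (all coefficients and forcing functions invented) only illustrates the general form of such a model, i.e. patch abundance driven by tide height and solar elevation:

```python
# Hypothetical sketch, not one of the paper's three models: loafing-gull
# abundance N(t) driven by tide height and solar elevation, integrated
# with scipy. All coefficients and forcing functions are made up.
import numpy as np
from scipy.integrate import solve_ivp

def tide(t_hours):
    # Toy semidiurnal tide (m), period ~12.4 h.
    return 1.5 * np.sin(2 * np.pi * t_hours / 12.4)

def solar_elevation(t_hours):
    # Toy solar elevation proxy, peaking at midday.
    return max(0.0, np.sin(np.pi * (t_hours - 6.0) / 12.0))

def dN_dt(t, N, arrival=50.0, departure=0.4):
    # Arrivals rise with low tide and high sun; departures are per capita.
    forcing = arrival * (1.0 - tide(t) / 2.0) * solar_elevation(t)
    return [forcing - departure * N[0]]

sol = solve_ivp(dN_dt, t_span=(0.0, 24.0), y0=[10.0], max_step=0.1)
print("peak loafing abundance:", sol.y[0].max())
```

Model selection among several such alternatives, as in the paper, would then compare how well each fitted equation reproduces the observed counts.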


2012 ◽  
Vol 12 (22) ◽  
pp. 11135-11152 ◽  
Author(s):  
J. K. Carman ◽  
D. L. Rossiter ◽  
D. Khelif ◽  
H. H. Jonsson ◽  
I. C. Faloona ◽  
...  

Abstract. Aircraft sampling of the stratocumulus-topped boundary layer (STBL) during the Physics of Stratocumulus Top (POST) experiment was primarily achieved using sawtooth flight patterns, during which the atmospheric layer 100 m above and below cloud top was sampled at a frequency of once every 2 min. The large data set that resulted from each of the 16 flights documents the complex structure and variability of this interfacial region in a variety of conditions. In this study, we first describe some properties of the entrainment interface layer (EIL), where strong gradients in turbulent kinetic energy (TKE), potential temperature and moisture can be found. We find that defining the EIL by the first two properties tends to yield similar results, but that moisture can be a misleading tracer of the EIL. These results are consistent with studies using large-eddy simulations. We next utilize the POST data to shed light on and constrain processes relevant to entrainment, a key process in the evolution of the STBL that to date is not well represented even by high-resolution models. We define "entrainment efficiency" as the ratio of the TKE consumed by entrainment to that generated within the STBL (primarily by cloud-top cooling). We find values for the entrainment efficiency that vary by 1.5 orders of magnitude, which is even greater than the one order of magnitude that previous modeling results have suggested. Our analysis also demonstrates that the entrainment efficiency depends on the strength of the stratification of the EIL, but not on the TKE in the cloud-top region. The relationships between entrainment efficiency and other STBL parameters serve as novel observational constraints for simulations of entrainment in such systems.


2019 ◽  
Vol 2 (S1) ◽  
Author(s):  
Stephan Balduin ◽  
Martin Tröschel ◽  
Sebastian Lehnhoff

Abstract Surrogate models are used to reduce the computational effort required to simulate complex systems. The power grid can be considered such a complex system, with a large number of interdependent inputs. With artificial neural networks and deep learning, it is possible to build high-dimensional approximation models; however, a large data set is required for the training process. This paper presents an approach to sample input data and create a deep learning surrogate model for a low voltage grid. Challenges are discussed and the model is evaluated under different conditions. The results show that the model performs well from a machine learning point of view, but has domain-specific weaknesses.
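The paper's grid simulator and sampling scheme are not shown in the abstract; a minimal sketch of the surrogate idea (learn a cheap neural approximation of an expensive simulator; everything below is a stand-in) might be:

```python
# Minimal surrogate-model sketch (not the paper's implementation): fit a
# neural network to input/output samples of an expensive simulator. The
# "simulator" here is a stand-in function, not a low voltage grid model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def expensive_simulator(loads):
    # Placeholder for a power-flow solver: maps nodal loads to a
    # voltage-like response. A real study would call the grid simulator.
    return np.tanh(loads @ np.array([0.5, -0.3, 0.8])) + 1.0

X = rng.uniform(0.0, 1.0, size=(2000, 3))  # sampled grid inputs (loads)
y = expensive_simulator(X)                 # simulator outputs

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)
print("held-out R^2:", surrogate.score(X_test, y_test))
```

Once trained, the surrogate replaces the simulator in inner loops where thousands of evaluations would otherwise be too costly.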


Paleobiology ◽  
1987 ◽  
Vol 13 (1) ◽  
pp. 100-107 ◽  
Author(s):  
Carl F. Koch

Few paleontological studies of species distribution in time and space have adequately considered the effects of sample size. Most species occur very infrequently, and therefore sample size effects may be large relative to the faunal patterns reported. Examination of 10 carefully compiled large data sets (each more than 1,000 occurrences) reveals that the species-occurrence frequency distribution of each fits the log series distribution well and therefore sample size effects can be predicted. Results show that, if the materials used in assembling a large data set are resampled, as many as 25% of the species will not be found a second time even if both samples are of the same size. If the two samples are of unequal size, then the larger sample may have as many as 70% unique species and the smaller sample no unique species. The implications of these values are important to studies of species richness, origination, and extinction patterns, and biogeographic phenomena such as endemism or province boundaries. I provide graphs showing the predicted sample size effects for a range of data set size, species richness, and relative data size. For data sets that do not fit the log series distribution well, I provide example calculations and equations which are usable without a large computer. If these graphs or equations are not used, then I suggest that species which occur infrequently be eliminated from consideration. Studies in which sample size effects are not considered should include sample size information in sufficient detail that other workers might make their own evaluation of observed faunal patterns.
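Koch derives his graphs analytically from the log series; a hedged Monte Carlo sketch of the underlying effect (all parameter values invented) reproduces the flavor of the result, i.e. the share of species that vanish when an assemblage of occurrence data is resampled at the same size:

```python
# Monte Carlo sketch (not Koch's analytical method): how many species
# from a log-series assemblage are missed when the occurrence data are
# resampled at the same size? All parameter values are invented.
import numpy as np

rng = np.random.default_rng(42)

# Build a species pool whose occurrence counts follow a log series.
n_species = 300
counts = rng.logseries(0.95, size=n_species)  # occurrences per species
occurrences = np.repeat(np.arange(n_species), counts)

# Draw two independent samples of equal size from the pooled occurrences.
n = occurrences.size
sample_a = set(rng.choice(occurrences, size=n, replace=True))
sample_b = set(rng.choice(occurrences, size=n, replace=True))

missed = len(sample_a - sample_b) / len(sample_a)
print(f"species in sample A missing from sample B: {missed:.0%}")
# With many singleton species, values of the same order as Koch's ~25%
# appear even though the two samples are equally large.
```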


2019 ◽  
Vol 30 (2) ◽  
pp. 109-122
Author(s):  
Aleksandar Bulajić ◽  
Miomir Despotović ◽  
Thomas Lachmann

Abstract. The article discusses the emergence of a functional literacy construct and the rediscovery of illiteracy in industrialized countries during the second half of the 20th century. It offers a short explanation of how the construct evolved over time. In addition, it explores how functional (il)literacy is conceived differently by research discourses of cognitive and neural studies, on the one hand, and by prescriptive and normative international policy documents and adult education, on the other hand. Furthermore, it analyses how literacy skills surveys such as the Level One Study (leo.) or the PIAAC may help to bridge the gap between cognitive and more practical and educational approaches to literacy, the goal being to place the functional illiteracy (FI) construct within its existing scale levels. It also sheds more light on the way in which FI can be perceived in terms of different cognitive processes and underlying components of reading. By building on the previous work of other authors and previous definitions, the article brings together different views of FI and offers a perspective for a needed operational definition of the concept, which would be an appropriate reference point for future educational, political, and scientific utilization.


2006 ◽  
Vol 27 (4) ◽  
pp. 199-207 ◽  
Author(s):  
Peter Hartmann

Spearman's Law of Diminishing Returns (SLODR) with regard to age was tested in two different databases from the National Longitudinal Survey of Youth. The first database consisted of 6,980 boys and girls aged 12–16 from the 1997 cohort (NLSY 1997). The subjects were tested with a computer-administered adaptive format (CAT) of the Armed Services Vocational Aptitude Battery (ASVAB) consisting of 12 subtests. The second database consisted of 11,448 male and female subjects aged 15–24 from the 1979 cohort (NLSY 1979). These subjects were tested with the older 10-subtest version of the ASVAB. The hypothesis was tested by dividing the sample into Young and Old age groups while keeping IQ fairly constant by a method similar to the one developed and employed by Deary et al. (1996). The two age groups were subsequently factor-analyzed separately. The eigenvalue of the first principal component (PC1), the eigenvalue of the first principal axis factor (PAF1), and the average intercorrelation of the subtests were used as estimates of g saturation and compared across groups. There were no significant differences in g saturation across age groups for either sample, thereby providing no support for this aspect of Spearman's "Law of Diminishing Returns."
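As a hedged illustration of the comparison (toy correlation matrix, not NLSY data), the g-saturation estimates reduce to the leading eigenvalue of the subtest correlation matrix and the mean off-diagonal correlation:

```python
# Toy illustration (not the NLSY analysis): estimate "g saturation" for
# one group as the first eigenvalue of the subtest correlation matrix
# and the average subtest intercorrelation. Matrix values are invented.
import numpy as np

# Hypothetical 4-subtest correlation matrix for one age group.
R = np.array([
    [1.00, 0.55, 0.48, 0.50],
    [0.55, 1.00, 0.52, 0.47],
    [0.48, 0.52, 1.00, 0.45],
    [0.50, 0.47, 0.45, 1.00],
])

eigenvalues = np.linalg.eigvalsh(R)    # ascending order
pc1 = eigenvalues[-1]                  # first principal component
off_diag = R[~np.eye(len(R), dtype=bool)]
print(f"PC1 eigenvalue: {pc1:.2f} ({pc1 / len(R):.0%} of total variance)")
print(f"average intercorrelation: {off_diag.mean():.2f}")
# SLODR would predict a lower PC1 share of variance in the older group;
# the study found no significant difference across age groups.
```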

