Efficiency of TMTD Vulcanization

1958 ◽  
Vol 31 (3) ◽  
pp. 559-561
Author(s):  
E. M. Bevilacqua

Abstract Tetramethylthiuram disulfide (TMTD), together with a sufficient amount of zinc oxide, is the simplest and most efficient vulcanizing agent known for introducing sulfur crosslinks into rubber. No other combination approaches it in efficiency over a wide range of temperatures, although under limited conditions sulfur cures can be obtained with approximately as high a crosslink density per mole of vulcanizing agent. The stoichiometry of vulcanization with this curative has been explored in some detail in recent work by Scheele and coworkers. They have confirmed Jarrijon's observation that during vulcanization dithiocarbamate is formed in an amount equivalent to nearly two-thirds of the TMTD taken. The generality of this result was established in an exhaustive series of experiments, covering a range of thiuram disulfide and polyolefin structures, of temperatures, and of reactant concentrations.

2020 ◽  
Author(s):  
Tamar Johnson ◽  
Kexin Gao ◽  
Kenny Smith ◽  
Hugh Rabagliati ◽  
Jennifer Culbertson

Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman & Malouf (2013) argue that there is one dimension on which languages do not differ widely: predictive structure. Predictive structure in a paradigm describes the extent to which forms predict each other, and is called i-complexity. Ackerman & Malouf (2013) show that although languages differ according to a measure of surface paradigm complexity, called e-complexity, they tend to have low i-complexity. They conclude that morphological paradigms have evolved under a pressure for low i-complexity, such that even paradigms with very high e-complexity are relatively easy to learn so long as they have low i-complexity. While this would potentially explain why languages are able to maintain large paradigms, recent work by Johnson et al. (submitted) suggests that both neural networks and human learners may actually be more sensitive to e-complexity than to i-complexity. Here we build on this work, reporting a series of experiments under more realistic learning conditions which confirm that, across a range of paradigms that vary in either e- or i-complexity, neural networks (LSTMs) are sensitive to both, but show a larger effect of e-complexity (and of other measures associated with the size and diversity of forms). In human learners, we fail to find any effect of i-complexity at all. Further, analysis of a large number of randomly generated paradigms shows that e- and i-complexity are negatively correlated: paradigms with high e-complexity necessarily show low i-complexity. These findings suggest that the observations made by Ackerman & Malouf (2013) for natural language paradigms may stem from the nature of these measures rather than from learning pressures specially attuned to i-complexity.
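The conditional-entropy idea behind i-complexity can be made concrete with a short sketch. The following is a minimal illustration on an invented toy paradigm; it is a simplified version of the measure, not Ackerman & Malouf's (2013) exact formulation, and all data and names are assumptions for illustration only:

```python
from collections import Counter
import math

# Toy paradigm: rows are inflection classes, columns are paradigm cells
# (e.g. singular and plural suffixes). Invented data for illustration.
paradigm = [
    ["-a", "-as"],  # class 1
    ["-a", "-as"],  # class 2
    ["-o", "-os"],  # class 3
    ["-o", "-i"],   # class 4
]

def cond_entropy(paradigm, known, target):
    """H(target | known): remaining uncertainty (in bits) about the form in
    the target cell once the form in the known cell has been observed."""
    total = len(paradigm)
    joint = Counter((row[known], row[target]) for row in paradigm)
    marginal = Counter(row[known] for row in paradigm)
    h = 0.0
    for (k, _), count in joint.items():
        p_joint = count / total       # P(known form, target form)
        p_cond = count / marginal[k]  # P(target form | known form)
        h -= p_joint * math.log2(p_cond)
    return h

# i-complexity: average conditional entropy over all ordered cell pairs.
cells = range(len(paradigm[0]))
pairs = [(a, b) for a in cells for b in cells if a != b]
i_complexity = sum(cond_entropy(paradigm, a, b) for a, b in pairs) / len(pairs)

# e-complexity grows with the surface diversity of forms; a crude proxy
# here is simply the number of distinct forms in the paradigm.
e_complexity_proxy = len({form for row in paradigm for form in row})
```

In this toy paradigm the plural fully determines the singular (0 bits of remaining uncertainty), while the singular leaves half a bit of uncertainty about the plural, so the average i-complexity is 0.25 bits even though five distinct surface forms are in play.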


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Tamar Johnson ◽  
Kexin Gao ◽  
Kenny Smith ◽  
Hugh Rabagliati ◽  
Jennifer Culbertson

Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman & Malouf (2013) argue that there is one dimension on which languages do not differ widely: predictive structure. Predictive structure in a paradigm describes the extent to which forms predict each other, and is called i-complexity. Ackerman & Malouf (2013) show that although languages differ according to a measure of surface paradigm complexity, called e-complexity, they tend to have low i-complexity. They conclude that morphological paradigms have evolved under a pressure for low i-complexity, such that even paradigms with very high e-complexity are relatively easy to learn so long as they have low i-complexity. While this would potentially explain why languages are able to maintain large paradigms, recent work by Johnson et al. (submitted) suggests that both neural networks and human learners may actually be more sensitive to e-complexity than to i-complexity. Here we build on this work, reporting a series of experiments under more realistic learning conditions which confirm that, across a range of paradigms that vary in either e- or i-complexity, neural networks (LSTMs) are sensitive to both, but show a larger effect of e-complexity (and of other measures associated with the size and diversity of forms). In human learners, we fail to find any effect of i-complexity at all. Further, analysis of a large number of randomly generated paradigms shows that e- and i-complexity are negatively correlated: paradigms with high e-complexity necessarily show low i-complexity. These findings suggest that the observations made by Ackerman & Malouf (2013) for natural language paradigms may stem from the nature of these measures rather than from learning pressures specially attuned to i-complexity.


Author(s):  
Svitlana Lobchenko ◽  
Tetiana Husar ◽  
Viktor Lobchenko

The article presents the results of studies of spermatozoa viability at different incubation times, at different concentrations, and with different diluents. Concentrated and unconcentrated spermatozoa were diluted: 1) with their native plasma; 2) with Medium 199; 3) with a mixture of equal volumes of plasma and Medium 199. The experiment was designed to generate samples with sperm concentrations prepared according to the method, namely 0.2, 0.1, 0.05, and 0.025 billion/ml. The sperm were evaluated after 2, 4, 6 and 8 hours. Such a study has significant prospects and makes it possible to investigate various aspects of the subject over a wide range; accordingly, a series of experiments was conducted in this area. The data obtained were statistically processed and allow us to highlight the results relating to each stage of the study. In particular, this article identifies some regularities between sperm viability, the type of diluent and the dilution rate, as evidenced by the data presented in the tables. Over the course of incubation, spermatozoa viability tends to remain highest when sperm are diluted to a concentration of 0.1 billion/ml, regardless of the diluent used. At this concentration, Medium 199 is no better at maintaining sperm viability than native plasma, or than a mixture of Medium 199 with an equal volume of plasma, for any length of incubation. Most often it is at this concentration that viability shows the lowest coefficient of variation, regardless of the diluent used, which may indicate the greatest stability of the result under these conditions. The viability of spermatozoa at a concentration of 0.1 billion/ml decreases statistically significantly only after 6 or even 8 hours of incubation.
If the sperm are incubated for only 2 hours, the tested sperm concentrations do not affect viability, regardless of the diluent used. Key words: boar, spermatozoa, sperm plasma, concentration, incubation, Medium 199, activity, viability, dilution.
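The stepwise halving of the concentration series (0.2, 0.1, 0.05, 0.025 billion/ml) follows ordinary dilution arithmetic (C1·V1 = C2·V2). A minimal sketch, with the function name and example volumes purely illustrative and not taken from the study:

```python
def diluent_volume_ml(v_sperm_ml, c_initial, c_target):
    """Volume of diluent (native plasma, Medium 199, or their 1:1 mixture)
    to add to v_sperm_ml of sperm suspension so that the concentration
    drops from c_initial to c_target (both in billion/ml).
    Derived from C1 * V1 = C2 * (V1 + Vd)."""
    if c_target >= c_initial:
        raise ValueError("target concentration must be below the initial one")
    return v_sperm_ml * (c_initial / c_target - 1.0)

# Halving the concentration at each step means adding one volume of
# diluent per volume of sample:
step = diluent_volume_ml(1.0, 0.2, 0.1)      # 1.0 ml of diluent
full = diluent_volume_ml(1.0, 0.2, 0.025)    # 7.0 ml to reach the lowest step
```

This makes explicit why each successive concentration in the series can be prepared either directly from the stock or by repeated 1:1 dilution.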


Nanoscale ◽  
2021 ◽  
Author(s):  
Keonwon Beom ◽  
Jimin Han ◽  
Hyun-Mi Kim ◽  
Tae-Sik Yoon

Wide range synaptic weight modulation with a tunable drain current was demonstrated in thin-film transistors (TFTs) with a hafnium oxide (HfO2−x) gate insulator and an indium-zinc oxide (IZO) channel layer...


1987 ◽  
Vol 60 (3) ◽  
pp. 381-416 ◽  
Author(s):  
B. S. Nau

Abstract The understanding of the engineering fundamentals of rubber seals of all the various types has been developing gradually over the past two or three decades, but there is still much to understand; Tables V–VII summarize the state of the art. In the case of rubber-based gaskets, the field of high-temperature applications has scarcely been touched, although there are plans to initiate work in this area both in the U.S.A., at PVRC, and in the U.K., at BHRA. In the case of reciprocating rubber seals, a broad basis of theory and experiment has been developed, yet it is still not possible to design such a seal from first principles. Indeed, in a comparative series of experiments run recently on seals from a single batch, tested in different laboratories around the world to the same test procedure, under the aegis of an ISO working party, a very wide range of values was reported for leakage and friction. The explanation for this has still to be ascertained. In the case of rotary lip seals, theories and supporting evidence have been brought forward to support alternative hypotheses for lubrication and sealing mechanisms. None can be said to have become generally accepted, and it remains to crystallize a unified theory.


2008 ◽  
Vol 1087 ◽  
Author(s):  
Marco Palumbo ◽  
Simon J. Henley ◽  
Thierry Lutz ◽  
Vlad Stolojan ◽  
David Cox ◽  
...  

Abstract Recent results in the use of zinc oxide (ZnO) nano/submicron crystals in fields as diverse as sensors, UV lasers, solar cells, piezoelectric nanogenerators and light-emitting devices have reinvigorated the interest of the scientific community in this material. To fully exploit the wide range of properties offered by ZnO, a good understanding of the crystal growth mechanism and the related defect chemistry is necessary. However, a full picture of the interrelation between defects, processing and properties has not yet been completed, especially for the ZnO nanostructures that are now being synthesized. Furthermore, achieving good control over the shape of the crystal is also very desirable, given the strong correlation between shape and properties in nanoscale materials. In this paper, the synthesis of ZnO nanostructures via two alternative aqueous solution methods, sonochemical and hydrothermal, will be presented, together with the influence that the addition of citrate anions or variations in the concentration of the initial reactants have on the shape of the ZnO crystals. Foreseen applications might be in the field of sensors, transparent conductors and large-area electronics, possibly via ink-jet printing techniques or self-assembly methods.


2013 ◽  
Vol 19 (S4) ◽  
pp. 103-104
Author(s):  
C.B. Garcia ◽  
E. Ariza ◽  
C.J. Tavares

Zinc oxide is a wide-band-gap compound semiconductor that has been used in optoelectronic and photovoltaic applications due to its good electrical and optical properties. Aluminium has been an efficient n-type dopant for ZnO, producing low-resistivity films with high transparency to visible light. In addition, the improvement of these properties also depends on the morphology, crystalline structure and deposition parameters. In this work, ZnO:Al films were produced by d.c. pulsed magnetron sputtering from a ZnO ceramic target (2.0 wt% Al2O3) on glass substrates, at a temperature of 250 ºC. The crystallographic orientation of the aluminium-doped zinc oxide (ZnO:Al) thin films has been studied by the Electron Backscatter Diffraction (EBSD) technique. EBSD coupled with Scanning Electron Microscopy (SEM) is a powerful tool for the microstructural and crystallographic characterization of a wide range of materials. The investigation of such films by EBSD presents some challenges, since the analysis requires a flat and smooth surface. This is necessary to avoid any shadowing effects during the experiments, which are performed under high-tilt conditions (70º). It is also essential to ensure good control of the three-dimensional projection of the crystalline axes onto the geometrical references of the sample. Crystalline texture is described by the inverse pole figure (IPF) maps (Figure 1). Through EBSD analysis it was observed that the external surface of the film presents a strong texture on the basal-plane orientation (grains highlighted in red). Furthermore, it was possible to verify that the grain size strongly depends on the deposition time (Figure 1 (a) and (b)).
The electrical and optical properties of the films improve with increasing grain size, which can be mainly attributed to the decrease in grain-boundary scattering, which in turn leads to an increase in carrier mobility (Figure 2). The authors kindly acknowledge the financial support from the Portuguese Foundation for Science and Technology (FCT) scientific program for the National Network of Electron Microscopy (RNME), EDE/1511/RME/2005.


2019 ◽  
Vol 81 (1) ◽  
pp. 118-128
Author(s):  
V. V. Balandin ◽  
V. V. Balandin ◽  
V. V. Parkhachev

Investigating the impact interaction of solid and deformable bodies with obstacles of various physical natures requires developing experimental methodologies for registering the parameters of the interaction process. In experimental investigations of the impact interaction of solids, it is common practice to measure the displacement of strikers as a function of time, as well as their velocity and deceleration. To determine the displacement and velocity of a striker, a radio-interferometric methodology for registering the displacement of its rear end is proposed. In contrast with registration methods based on high-speed filming and pulsed X-ray photography, the method using a millimeter-range radio-interferometer provides continuous, high-accuracy registration of the displacement of the rear end of a striker over a wide range of displacement values. To test the effectiveness of the methodology, a series of experiments was conducted on registering the motion of a cylindrical striker of an aluminum alloy, fired from a 20 mm-diameter gas gun. The displacement of the striker was also monitored using high-speed filming. The results obtained with the two methodologies differ within the limits of measurement error. Based on the results of these experiments, it has been concluded that the methodology of determining the displacement and velocity of strikers in a ballistic experiment using a mm-range radio-interferometer makes it possible to measure large displacements (100 mm and greater) practically continuously and with reliable accuracy. The present methodology can be used for measuring the displacement and velocity of the rear end of a striker interacting with obstacles of various physical natures (metals, ceramics, soils, concretes, etc.).
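The fringe-counting principle behind such an interferometer is simple to sketch: in reflection geometry, the received phase advances by one full fringe for every half-wavelength the striker's rear end moves. A minimal illustration, where the 94 GHz operating frequency is an assumption for the sake of the example, not a value taken from the paper:

```python
# Converting interferometer fringe counts to rear-end displacement.
# One full fringe (2*pi of phase) corresponds to lambda/2 of motion
# toward or away from the antenna.
C = 299_792_458.0  # speed of light in vacuum, m/s

def displacement_from_fringes(n_fringes, freq_hz):
    """Displacement (m) of the reflecting rear end implied by n_fringes
    full interference fringes at the given carrier frequency."""
    wavelength = C / freq_hz
    return n_fringes * wavelength / 2.0

# At an assumed 94 GHz carrier (wavelength ~3.19 mm), a 100 mm striker
# displacement sweeps through roughly 63 fringes, so large displacements
# are resolved in fine, quasi-continuous steps.
per_fringe_m = displacement_from_fringes(1, 94e9)
fringes_per_100mm = 0.1 / per_fringe_m
```

This is what makes the method competitive with high-speed filming for large displacements: the resolution per fringe stays fixed at half a wavelength no matter how far the striker travels.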


2021 ◽  
Author(s):  
Anne M Luescher ◽  
Julian Koch ◽  
Wendelin J Stark ◽  
Robert N Grass

Aerosolized particles play a significant role in human health and environmental risk management. The global importance of aerosol-related hazards, such as the circulation of pathogens and high levels of air pollutants, has led to a surging demand for suitable surrogate tracers to investigate the complex dynamics of airborne particles in real-world scenarios. In this study, we propose a novel approach using silica particles with encapsulated DNA (SPED) as a tracing agent for measuring aerosol distribution indoors. In a series of experiments with a portable setup, SPED were successfully aerosolized, re-captured and quantified using quantitative polymerase chain reaction (qPCR). Position dependency and ventilation effects within a confined space could be shown in a quantitative fashion, achieving detection limits below 0.1 ng of particles per m³ of sampled air. In conclusion, SPED show promise for a flexible, cost-effective and low-impact characterization of aerosol dynamics in a wide range of settings.
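The back-calculation from a qPCR readout to an airborne tracer mass concentration reduces to two divisions: by a calibration factor relating DNA copies to particle mass, and by the sampled air volume. A hedged sketch of that arithmetic; the function name, calibration factor, and sample values below are purely illustrative assumptions, not figures from the study:

```python
def sped_ng_per_m3(qpcr_copies, copies_per_ng, air_volume_m3):
    """Mass concentration of SPED tracer in sampled air (ng/m^3).
    copies_per_ng is the calibration factor (DNA copies detected per ng
    of particles) obtained from a standard curve; illustrative only."""
    return qpcr_copies / copies_per_ng / air_volume_m3

# Example: 5e4 copies measured, an assumed calibration of 1e5 copies/ng,
# and 5 m^3 of air drawn through the sampler.
conc = sped_ng_per_m3(5e4, 1e5, 5.0)  # ng of particles per m^3
```

Under these assumed numbers the measurement sits right at the 0.1 ng/m³ scale the abstract quotes as the detection limit, which illustrates why a sensitive qPCR assay translates directly into a low mass detection limit.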

