19 years of tilt data on Mt. Vesuvius: state of the art and future perspectives

2013
Vol 56 (4)
Author(s):
Ciro Ricco
Ida Aquino
Sven Ettore P. Borgstrom
Carlo Del Gaudio

Mt. Vesuvius, located along the SW border of the Campania Plain graben, is one of the most studied volcanoes worldwide from the volcanological, geophysical, geochemical and geodetic points of view. In order to better understand its dynamics, the deformation of the volcano has been studied since the early 1970s, first by setting up levelling lines and, a few years later, through trilateration networks, whereas ground tilt monitoring started in 1993. Tilt variations were recorded by an automatic surface station set up at the Osservatorio Vesuviano (O.V.) bunker (OVO), and the recorded data were transmitted to the O.V. Surveillance Centre in Naples. In 1996 two more identical stations were set up close to Torre del Greco (CMD) and close to Trecase (TRC). In 2002 the data acquisition system was replaced, while at the end of 2011 a Lily borehole sensor was installed at 26 m depth, replacing the old TRC tilt station. The paper describes in detail the tilt network of Mt. Vesuvius, its development over time and the data processing procedure; moreover, the ground deformation pattern is discussed, as inferred from the study of 19 years of data, together with its changes during the seismic crises of 1995-1996 and 1999-2000. The information obtained from tilt monitoring reveals a complex deformation pattern, strongly dependent on the position of the sites where the sensors were installed with respect to the morphology of the volcanic edifice and its structural outlines. Considering the signals as recorded, after correction for the influence of thermo-elastic strain on the sensors, the tilting occurs mainly in the SW direction, with rates of about 11 µradians/year on both the western and eastern flanks and of about 13 µradians/year on the southern one. Because the long-term tilt vectors point outward from the summit and towards the subsiding area, this supports the hypothesis of subsidence of the southern area, consistent with a spreading effect of Vesuvius, taking into account geological, structural, geophysical and geodetic (optical levelling, InSAR) data. The SW tilting is, however, irregular and shows seasonal components consistent with solar thermal radiation; their removal by a statistical procedure outlines a different but equally interesting deformation field, which shows interruptions, with changes in both trend and amplitude, during the two periods of strong seismic activity that affected Mt. Vesuvius in 1995-1996 and late 1999-2000, marked by an average rate of energy release at least one order of magnitude greater than in the previous and following periods. A further change in the intensity and direction of the deformation detected by the tiltmeters since 2000, connected with variations of the phase shift between the tilt components and the recorded temperature compared to previous years, occurs during a strong decrease of the energy released by Vesuvius earthquakes.
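The seasonal correction mentioned in the abstract lends itself to a brief illustration. The following is a minimal sketch, on synthetic data, of removing a linear trend plus an annual thermal cycle from one tilt component by ordinary least squares; it is not the processing chain used at the Osservatorio Vesuviano, and the function name, rates and amplitudes are illustrative assumptions only.

```python
import numpy as np

def remove_annual_cycle(t_days, tilt_urad):
    """Fit and subtract an offset, a linear trend and an annual sinusoid
    (least squares); return the residual tilt and the fitted coefficients."""
    omega = 2.0 * np.pi / 365.25          # annual angular frequency [rad/day]
    G = np.column_stack([
        np.ones_like(t_days),             # offset
        t_days,                           # linear trend [urad/day]
        np.sin(omega * t_days),           # annual sine
        np.cos(omega * t_days),           # annual cosine
    ])
    coeffs, *_ = np.linalg.lstsq(G, tilt_urad, rcond=None)
    return tilt_urad - G @ coeffs, coeffs

# Synthetic example: ~19 years of daily samples with an 11 urad/yr drift,
# a 3 urad annual cycle and white noise (all values hypothetical).
t = np.arange(19 * 365, dtype=float)
tilt = (11.0 / 365.25) * t + 3.0 * np.sin(2 * np.pi * t / 365.25) \
       + 0.2 * np.random.randn(t.size)
residual, c = remove_annual_cycle(t, tilt)
print(f"fitted secular rate: {c[1] * 365.25:.1f} urad/yr")
```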

Author(s):  
R. R. Palmer

In 1792, the French Revolution became a thing in itself, an uncontrollable force that might eventually spend itself but which no one could direct or guide. The governments set up in Paris in the following years all faced the problem of holding together against forces more revolutionary than themselves. This chapter distinguishes two such forces for analytical purposes. There was a popular upheaval, an upsurge from below, sans-culottisme, which occurred only in France. Second, there was the “international” revolutionary agitation, which was not international in any strict sense, but only concurrent within the boundaries of various states as then organized. From the French point of view these were the “foreign” revolutionaries or sympathizers. The most radical of the “foreign” revolutionaries were seldom more than advanced political democrats. Repeatedly, however, from 1792 to 1799, these two forces tended to converge into one force in opposition to the French government of the moment.


The theory of the vibrations of the pianoforte string put forward by Kaufmann in a well-known paper has figured prominently in recent discussions on the acoustics of this instrument. It proceeds on lines radically different from those adopted by Helmholtz in his classical treatment of the subject. While recognising that the elasticity of the pianoforte hammer is not a negligible factor, Kaufmann set out to simplify the mathematical analysis by ignoring its effect altogether, and treating the hammer as a particle possessing only inertia without spring. The motion of the string following the impact of the hammer is found from the initial conditions and from the functional solutions of the equation of wave-propagation on the string. On this basis he gave a rigorous treatment of two cases: (1) a particle impinging on a stretched string of infinite length, and (2) a particle impinging on the centre of a finite string, neither of which cases is of much interest from an acoustical point of view. The case of practical importance treated by him is that in which a particle impinges on the string near one end. For this case, he gave only an approximate theory from which the duration of contact, the motion of the point struck, and the form of the vibration-curves for various points of the string could be found. There can be no doubt of the importance of Kaufmann’s work, and it naturally becomes necessary to extend and revise his theory in various directions. In several respects, the theory awaits fuller development, especially as regards the harmonic analysis of the modes of vibration set up by impact, and the detailed discussion of the influence of the elasticity of the hammer and of varying velocities of impact. Apart from these points, the question arises whether the approximate method used by Kaufmann is sufficiently accurate for practical purposes, and whether it may be regarded as applicable when, as in the pianoforte, the point struck is distant one-eighth or one-ninth of the length of the string from one end. Kaufmann’s treatment is practically based on the assumption that the part of the string between the end and the point struck remains straight as long as the hammer and string remain in contact. Primâ facie, it is clear that this assumption would introduce error when the part of the string under reference is an appreciable fraction of the whole. For the effect of the impact would obviously be to excite the vibrations of this portion of the string, which continue so long as the hammer is in contact, and would also influence the mode of vibration of the string as a whole when the hammer loses contact. A mathematical theory which is not subject to this error, and which is applicable for any position of the striking point, thus seems called for.
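For readers who want the problem in symbols, the following is a sketch, in our own notation, of the setup Kaufmann analyses: a stretched string governed by the wave equation, struck at a point x_0 by a hammer treated as a point mass with inertia but no spring. This is a generic textbook statement of the problem, not a reproduction of Kaufmann's equations.

```latex
% String of tension T and linear density \mu; hammer of mass M and
% initial velocity V strikes at x = x_0 (notation ours, for illustration).
\[
  \frac{\partial^2 y}{\partial t^2} = c^2 \frac{\partial^2 y}{\partial x^2},
  \qquad c^2 = \frac{T}{\mu},
\]
\[
  M \left.\frac{\partial^2 y}{\partial t^2}\right|_{x = x_0}
  = T\!\left[\left.\frac{\partial y}{\partial x}\right|_{x_0^{+}}
           - \left.\frac{\partial y}{\partial x}\right|_{x_0^{-}}\right]
  \quad \text{while hammer and string remain in contact,}
\]
\[
  y(x,0) = 0, \qquad
  \frac{\partial y}{\partial t}(x,0) =
  \begin{cases} V, & x = x_0,\\[2pt] 0, & x \neq x_0. \end{cases}
\]
% Contact ends when the transverse force exerted by the string on the
% hammer first vanishes; the subsequent free vibration of the string is
% fixed by its displacement and velocity at that instant.
```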


Genetics
2004
Vol 166 (2)
pp. 797-806
Author(s):  
James D Fry

Abstract High rates of deleterious mutations could severely reduce the fitness of populations, even endangering their persistence; these effects would be mitigated if mutations synergize each other's effects. An experiment by Mukai in the 1960s gave evidence that in Drosophila melanogaster, viability-depressing mutations occur at the surprisingly high rate of around one per zygote and that the mutations interact synergistically. A later experiment by Ohnishi seemed to support the high mutation rate, but gave no evidence for synergistic epistasis. Both of these studies, however, were flawed by the lack of suitable controls for assessing viability declines of the mutation-accumulation (MA) lines. By comparing homozygous viability of the MA lines to simultaneously estimated heterozygous viability and using estimates of the dominance of mutations in the experiments, I estimate the viability declines relative to an appropriate control. This approach yields two unexpected conclusions. First, in Ohnishi’s experiment as well as in Mukai’s, MA lines showed faster-than-linear declines in viability, indicative of synergistic epistasis. Second, while Mukai’s estimate of the genomic mutation rate is supported, that from Ohnishi’s experiment is an order of magnitude lower. The different results of the experiments most likely resulted from differences in the starting genotypes; even within Mukai’s experiment, a subset of MA lines, which I argue probably resulted from a contamination event, showed much slower viability declines than did the majority of lines. Because different genotypes may show very different mutational behavior, only studies using many founding genotypes can determine the average rate and distribution of effects of mutations relevant to natural populations.
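The control-by-heterozygotes logic described above can be made concrete with a small numerical sketch. The function below solves the simple relation (1 − D)/(1 − hD) = R for the homozygous viability decline D, given the observed ratio R of homozygous to heterozygous MA-line viability and an assumed dominance h; this illustrates the general idea only and is not the estimator actually used in the paper.

```python
def homozygous_decline(viability_ratio, dominance):
    """Infer the homozygous viability decline D from the observed ratio
    R = V_hom / V_het of MA-line viabilities, assuming the heterozygous
    decline is approximately dominance * D, i.e. (1 - D)/(1 - h*D) = R."""
    R, h = viability_ratio, dominance
    return (1.0 - R) / (1.0 - R * h)

# Hypothetical numbers: a 10% decline measured against heterozygotes,
# with dominance h = 0.3, implies a ~13.7% decline against a true control.
print(homozygous_decline(0.90, 0.30))   # -> 0.1369...
```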


2009
Vol 66 (7)
pp. 2107-2115
Author(s):
Cegeon J. Chan
R. Alan Plumb

Abstract In simple GCMs, the time scale associated with the persistence of one particular phase of the model’s leading mode of variability can often be unrealistically large. In a particularly extreme example, the time scale in the Polvani–Kushner model is about an order of magnitude larger than in the observed atmosphere. From the fluctuation–dissipation theorem, one implication is that the responses of these simple models are exaggerated, since such setups are overly sensitive to any external forcing. Although the model’s equilibrium temperature is set up to represent perpetual Southern Hemisphere winter solstice, it is found that the tropospheric eddy-driven jet has a preference for two distinct regions: the subtropics and midlatitudes. Because of this bimodality, the jet persists in one region for thousands of days before “switching” to the other. As a result, the time scale associated with the intrinsic variability is unrealistic. In this paper, the authors systematically vary the model’s tropospheric equilibrium temperature profile, one configuration being identical to that of Polvani and Kushner. Modest changes to the tropospheric state on either side of this point in parameter space removed the bimodality in the zonal-mean zonal jet’s spatial distribution and significantly reduced the time scale associated with the model’s internal mode. Consequently, the tropospheric response to the same stratospheric forcing is significantly weaker than in the Polvani and Kushner case.
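As a concrete illustration of the persistence time scale discussed above, the sketch below estimates an e-folding decorrelation time from the lagged autocorrelation of a synthetic annular-mode index. The red-noise test series and the 1/e criterion are illustrative assumptions, not the diagnostics used by the authors.

```python
import numpy as np

def efolding_timescale(index, dt_days=1.0):
    """Return the lag (in days) at which the autocorrelation of the
    zero-mean index first drops below 1/e."""
    x = index - index.mean()
    n = x.size
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.var(x) * n)
    below = np.where(acf < 1.0 / np.e)[0]
    return below[0] * dt_days if below.size else np.inf

# Synthetic AR(1) ("red noise") index with a nominal 30-day memory.
rng = np.random.default_rng(0)
tau, n = 30.0, 20000
x = np.zeros(n)
for i in range(1, n):
    x[i] = (1.0 - 1.0 / tau) * x[i - 1] + rng.standard_normal()
print(efolding_timescale(x))   # roughly 30 days for this synthetic series
```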


1998
Vol 538
Author(s):  
F. Cleri

Abstract The validity and predictive capability of continuum models of fracture rest on basic information whose origin lies at the atomic scale. Examples of such crucial information are the explicit form of the cohesive law in the Barenblatt model and the shear-displacement relation in the Rice-Peierls-Nabarro model. Modern approaches to incorporating atomic-level information into fracture modelling require increasing the size of atomic-scale models to millions of atoms and more; connecting atomistic and macroscopic (e.g., finite-element) models directly; or passing information from atomistic to continuum models in the form of constitutive relations. A main drawback of the atomistic methods is the complexity of the simulation results, which can be rather difficult to rationalize in the framework of classical, continuum fracture mechanics. We critically discuss the main issues in the atomistic simulation of fracture problems (and, to some extent, dislocations); our objective is to indicate how to set up atomistic simulations that represent well-posed problems also from the point of view of continuum mechanics, so as to ease the connection between atomistic information and macroscopic models of fracture.
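For orientation, the two atomistically informed inputs named above can be written in generic textbook form; the expressions below are standard illustrative forms (notation ours), not the specific laws adopted in the paper.

```latex
% Barenblatt-type cohesive law: traction as a function of the crack-opening
% displacement \delta, vanishing beyond a critical opening \delta_c, with the
% work of separation equal to the fracture energy G_c.
\[
  \sigma = \sigma(\delta), \qquad \sigma(\delta \ge \delta_c) = 0, \qquad
  \int_0^{\delta_c} \sigma(\delta)\,\mathrm{d}\delta = G_c .
\]
% Peierls-Nabarro model: sinusoidal (Frenkel) shear stress versus slip \delta
% across the glide plane, with shear modulus \mu, Burgers vector b and
% interplanar spacing d.
\[
  \tau(\delta) = \frac{\mu b}{2\pi d}\,\sin\!\left(\frac{2\pi\delta}{b}\right).
\]
```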


2019
Author(s):  
Darian Jancowicz-Pitel

The paper presented here aims to explore the translation process. A translator or interpreter needs equipment or tools so that the objectives of a translation can be achieved. If a translator needs a pencil, paper, headphones, and a microphone, then an interpreter needs even more tools. The tools required include both conventional and modern ones. Meanwhile, the approach needed in research on translation may be qualitative or quantitative, depending on the research objectives. If the aim is to find a correlation between a translator's translation experience and the quality or type of translation errors, a quantitative method is needed. This method is also very appropriate for research in the field of translation teaching, for example relating students' aptitude to the quality of, or errors in, their translations. The other approach is used when the research deals with translation errors, procedures, and the like; in that case qualitative methods are more appropriate. In view of this, part-time translators can switch to the third type of translator, namely freelance translators, because they become aware that they can make a living from translation. These translators set up their own translation businesses involving multiple languages.


1972
Vol 2 (1)
pp. 33-36
Author(s):  
W. L. F. Brinkmann

Abstract: Spherical ceramic bulbs were set up as weekly water-loss integrators in a clearing and below a 2-year-old Cecropia community at Km 18 of the Manaus-Itacoatiara Road. The instruments worked well in distinguishing the particular responses of individual sites to the impact of atmospheric agents such as solar radiation, air temperature, air humidity and wind. Water loss was primarily dependent on the order of magnitude of the weekly total of solar radiation and on the presence or absence of a standing crop. Even sparse secondary growth considerably reduces the weekly amount of water lost to the atmosphere. Shelter-wood, if introduced to tropical agriculture with the specific demands of the crop taken into account, would therefore provide favourable conditions as far as the impact of atmospheric controls on the tropical environment is concerned.


1961
Vol 3 (30)
pp. 1133-1151
Author(s):  
R. Haefeli

Abstract Starting from Glen’s flow law for ice and from a series of assumptions based in part on observations in Greenland and at the Jungfraujoch, the velocity distribution (horizontal velocity component) and the surface configuration are derived for a strip-shaped ice sheet in a stationary state. For a choice of the exponent n = 3–4 in the power-law flow relation, there is extensive agreement between the theoretically calculated surface profile and the east-west profile measured through “Station Centrale” by Expéditions Polaires Françaises. The corresponding theoretical solution for a circular ice sheet is also given. As a first application of this theory, an attempt is made to calculate the average rate of accumulation in Antarctica from its surface profile (assumed circular in plan) and from the flow-law parameters derived from the Greenland Ice Sheet. It is also shown that a change in accumulation has only a small influence on the total ice thickness of an ice sheet. A method of calculating approximately the age of ice in an ice sheet, based on the foregoing theory, is illustrated by applying it to the Greenland Ice Sheet. After comparing the present theory with that of Nye, a general expression for the surface profile of an ice sheet with constant accumulation is set up and discussed by comparison with two profiles through Antarctica.
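For reference, the standard ingredients of such a model can be written compactly. The steady-state profile shown below is the familiar Vialov-type solution for a strip-shaped sheet, analogous to, though not necessarily identical with, the profile derived in the paper; the notation is ours.

```latex
% Glen's flow law relating strain rate to shear stress.
\[
  \dot{\varepsilon} = A\,\tau^{\,n},
\]
% Basal shear stress in the shallow-ice approximation, for ice of density
% \rho, thickness h(x) and surface slope dh/dx.
\[
  \tau_b = \rho g\, h \left|\frac{\mathrm{d}h}{\mathrm{d}x}\right|,
\]
% Resulting steady-state surface profile for a strip-shaped sheet of
% half-width L and divide thickness H (Vialov-type form); H follows from
% the flow parameter A, the accumulation rate and L.
\[
  h(x) = H\left[\,1 - \left(\frac{x}{L}\right)^{\tfrac{n+1}{n}}\right]^{\tfrac{n}{2n+2}}.
\]
```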


Author(s):  
Epaminondas Kapetanios

In this article, the author explores the notion of Collective Intelligence (CI) as an emerging computing paradigm. The article is meant to provide a historical and contextual view of CI through the lenses of as many related disciplines as possible (biology, sociology, natural and environmental sciences, physics) in conjunction with the computer science point of view. During this explorative journey, the article also aims at pinpointing the current strengths and weaknesses of CI-related computational and system engineering methodologies for the design and implementation of CI-based systems. A non-exhaustive list of case studies sets the stage for CI applications as well as for challenging research questions. These can be particularly directed towards the Social Web, as a very prominent example of synergistic interactions of a group of people with diverse cultural and professional backgrounds, and its potential to become a platform for the emergence of truly CI-based systems.


The detonation of a cartridge of a high explosive is started by firing a detonator, which consists of a small metal cylinder containing a compound or mixture which is itself readily detonated when it is heated. The manner in which detonators thus function is not thoroughly understood, and the methods used for measuring their "efficiency" are, in consequence, diverse. By some methods only the total blow given by the detonator, or its crushing and shattering effect, is measured; the nail test and sand test are the crudest forms. The lead plate test gives a similar measure, and the efficiency of a detonator is judged not only by the depth of the impression produced, but also by the number and appearance of radial grooves in the lead plate produced by the disrupted metal casing. More precise physical methods have been adopted, such as the Hopkinson pressure-bar, which gives a measure of the time of action of the impulsive blow. A more logical method of measuring efficiency would appear to be to examine the ease with which the detonator will set up detonation in a standard explosive or in a series of standard explosives. Such a method is the Esop test, in which measurement is made of the maximum amount of olive or cotton seed oil which can be mixed with picric acid without preventing its detonation by the detonator embedded in the mixture. Of the same type is the gap test, in which the detonator and a standard explosive are separated and the maximum distance is measured at which detonation of the explosive can be established. The efficiency of a detonator is of considerable technical importance. The more rapidly a detonator can set up detonation in a cartridge of explosive, the greater will be the proportion of the cartridge which will detonate and the greater, therefore, will be the efficiency of the explosive, though once detonation is effectively set up it will be independent of the strength of detonator used. The use of an inefficient detonator may result in portions of cartridges remaining undetonated and becoming a source of danger during the subsequent handling of the material that has been blasted. With the desensitized explosives that are used in coal mines, the efficiency of the detonator may influence the safety of the explosive from the point of view of its ability to ignite firedamp. The present investigation has been carried out for that reason.

