Lunisolar Atmospheric Tides: A New Approach

1988 ◽  
Vol 41 (6) ◽  
pp. 807 ◽  
Author(s):  
R Brahde

In records of the atmospheric pressure in Oslo, at 60° latitude, a one-day oscillation caused by the lunisolar tide has been detected. The amplitude has a mean value of 0.17 mb. This oscillation appears during intervals when the declination of the Moon has high numerical values. When the Moon passes through the equator, the one-day oscillation disappears and only the half-day mode continues. If a maximum coincides with upper culmination, it reappears during the next fortnight at lower culmination. This means that the phase changes by approximately 180° or 12 h every time the Moon crosses the equator, and this is the main reason why it has not been detected by means of traditional harmonic analysis of the atmospheric pressure oscillations. By means of the correlation between the pressure variation and the magnitude of the tidal acceleration, it was possible to separate the dynamic one-day oscillation from terms of thermal origin.
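The cancellation mechanism described above can be illustrated numerically. The sketch below uses a toy signal, not the Oslo data; the 13.66-day interval between equator crossings and the 24.84-hour lunar day are standard lunar values, and the 0.17 mb amplitude is taken from the abstract. A harmonic fit over the whole record nearly cancels a phase-flipping diurnal wave, while fortnight-by-fortnight analysis recovers its amplitude:

```python
import numpy as np

A = 0.17                    # mb, mean amplitude reported in the abstract
lunar_day_h = 24.84         # length of the lunar day in hours
half_month_h = 13.66 * 24   # hours between successive equator crossings

t = np.arange(0, 365 * 24, 2.0)              # one year, sampled every 2 h
flip = (-1.0) ** np.floor(t / half_month_h)  # phase flips 180 deg each fortnight
signal = A * flip * np.cos(2 * np.pi * t / lunar_day_h)

omega = 2 * np.pi / lunar_day_h
basis = np.exp(-1j * omega * t)

# Naive harmonic analysis over the whole record: the alternating phase
# makes the fitted diurnal amplitude nearly vanish.
naive_amp = 2 * np.abs(np.mean(signal * basis))

# Segment-wise analysis between equator crossings recovers the amplitude.
n_seg = int(t[-1] // half_month_h)
seg_amps = []
for k in range(n_seg):
    m = (t >= k * half_month_h) & (t < (k + 1) * half_month_h)
    seg_amps.append(2 * np.abs(np.mean(signal[m] * basis[m])))
seg_amp = np.mean(seg_amps)
```

The contrast between `naive_amp` (near zero) and `seg_amp` (near 0.17 mb) is the abstract's point: the oscillation is invisible to a single long-baseline harmonic analysis.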

1989 ◽  
Vol 42 (4) ◽  
pp. 439 ◽  
Author(s):  
R Brahde

In an earlier paper (Brahde 1988) it was shown that series of measurements of the atmospheric pressure in Oslo contained information about a one-day oscillation with mean amplitude 0.17 mb. The data consisted of measurements every second hour during the years 1957-67, 1969 and 1977. In the present paper the intervening years plus 1978 and 1979 have been included, increasing the basis from 13 to 23 years. In addition the phase shift occurring when the Moon crosses the celestial equator has been defined precisely, thus making it possible to include all the data.


Author(s):  
Xiaoyi Shen ◽  
Chang-Qing Ke ◽  
Bin Cheng ◽  
Wentao Xia ◽  
Mengmeng Li ◽  
...  

In August 2018, a remarkable polynya was observed off the north coast of Greenland, a perennial ice zone where thick sea ice cover persists. In order to investigate the formation process of this polynya, satellite observations, a coupled ice-ocean model, ocean profiling data, and atmospheric reanalysis data were applied. We found that the thinnest sea ice cover in August since 1978 (mean value of 1.1 m, compared to the average value of 2.8 m during 1978–2017) and the modest southerly wind caused by a positive North Atlantic Oscillation (mean value of 0.82, compared to the climatological value of −0.02) were responsible for the formation and maintenance of this polynya. The opening mechanism of this polynya differs from the one formed in February 2018 in the same area, which was caused by persistent anomalously high wind. Sea ice drift patterns have become more responsive to atmospheric forcing due to thinning of the sea ice cover in this region.


Author(s):  
Yuko Komuro ◽  
Yuji Ohta

Conventionally, the strength of toe plantar flexion (STPF) is measured in a seated position, in which not only the target toe joints but also the knee and particularly the ankle joints are usually restrained. We have developed an approach for the measurement of STPF which does not involve restraint and considers the interactions of adjacent joints of the lower extremities. This study aimed to evaluate this new approach and compare it with the seated approach. A thin, lightweight, rigid plate was attached to the sole of the foot in order to immobilize the toe area. Participants were 13 healthy young women (mean age: 24 ± 4 years). For measurement of STPF with the new approach, participants were instructed to stand, raise the device-wearing leg slightly, plantar flex the ankle, and push the sensor sheet with the toes to exert STPF. The sensor sheet of the F-scan II system was inserted between the foot sole and the plate. For measurement with the seated approach, participants were instructed to sit and push the sensor with the toes. They were required to maintain the hip, knee, and ankle joints at 90°. The mean values of maximum STPF of the 13 participants obtained with each approach were compared. There was no significant difference in mean value of maximum STPF between the two approaches (new: 59 ± 23 N, seated: 47 ± 33 N). The coefficient of variation of maximum STPF was smaller for data obtained with the new approach (new: 39%, seated: 70%). Our simple approach enables measurement of STPF without the need for the restraints that are required for the conventional seated approach. These results suggest that the new approach is a valid method for measurement of STPF.
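The reproducibility comparison above rests on the coefficient of variation, CV = SD / mean. A minimal sketch with hypothetical force values (not the study's raw data):

```python
import statistics

# Coefficient of variation: standard deviation relative to the mean.
# A smaller CV means less between-participant spread relative to the mean,
# which is the abstract's argument in favour of the standing approach.
def coefficient_of_variation(values):
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical maximum STPF values in newtons, invented for illustration.
new_approach = [55, 62, 48, 71, 59]
seated = [30, 80, 45, 20, 60]

cv_new = coefficient_of_variation(new_approach)
cv_seated = coefficient_of_variation(seated)
```

With these invented samples the standing data have a similar mean but a much smaller relative spread, mirroring the 39% vs 70% contrast reported in the abstract.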


Author(s):  
Ajay Andrew Gupta

The widespread proliferation of and interest in bracket pools that accompany the National Collegiate Athletic Association Division I Men’s Basketball Tournament have created a need to produce a set of predicted winners for each tournament game by people without expert knowledge of college basketball. Previous research has addressed bracket prediction to some degree, but not nearly on the level of the popular interest in the topic. This paper reviews relevant previous research, and then introduces a rating system for teams using game data from the season prior to the tournament. The ratings from this system are used within a novel, four-predictor probability model to produce sets of bracket predictions for each tournament from 2009 to 2014. This dual-proportion probability model is built around the constraint of two teams with a combined 100% probability of winning a given game. This paper also performs Monte Carlo simulation to investigate whether modifications are necessary from an expected value-based prediction system such as the one introduced in the paper, in order to have the maximum bracket score within a defined group. The findings are that selecting one high-probability “upset” team for one to three late-round games is likely to outperform other strategies, including one with no modifications to the expected value, as long as the upset choice overlaps a large minority of competing brackets while leaving the bracket some distinguishing characteristics in the late rounds.
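The dual-proportion constraint and the expected-value baseline can be sketched for a toy four-team field. The win probabilities and the scoring scheme (1 point per semifinal, 2 for the final) below are invented for illustration, not the paper's fitted model:

```python
import random

random.seed(0)

# p is the probability that the first-listed team wins; the opponent's
# probability is 1 - p, so each game's two probabilities sum to 100%.
semis = {("A", "B"): 0.65, ("C", "D"): 0.55}
# Hypothetical final-game probabilities, conditional on the matchup.
final_p = {("A", "C"): 0.6, ("A", "D"): 0.7, ("B", "C"): 0.45, ("B", "D"): 0.55}

def play(pair, p):
    return pair[0] if random.random() < p else pair[1]

def simulate():
    w1 = play(("A", "B"), semis[("A", "B")])
    w2 = play(("C", "D"), semis[("C", "D")])
    champ = play((w1, w2), final_p[(w1, w2)])
    return w1, w2, champ

def score(bracket, outcome):
    w1, w2, champ = outcome
    return (bracket[0] == w1) + (bracket[1] == w2) + 2 * (bracket[2] == champ)

ev_bracket = ("A", "C", "A")      # pick the favourite in every game
upset_bracket = ("A", "C", "C")   # one late-round "upset" pick

n = 20000
ev_mean = sum(score(ev_bracket, simulate()) for _ in range(n)) / n
up_mean = sum(score(upset_bracket, simulate()) for _ in range(n)) / n
```

In this solo comparison the all-favourites bracket scores higher on average; the paper's point is that within a pool a well-chosen upset differentiates a bracket from its competitors, a pool-level effect this solo expected-score comparison does not capture.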


The most precise way of estimating the dissipation of tidal energy in the oceans is by evaluating the rate at which work is done by the tidal forces and this quantity is completely described by the fundamental harmonic in the ocean tide expansion that has the same degree and order as the forcing function. The contribution of all other harmonics to the work integral must vanish. These harmonics have been estimated for the principal M2 tide using several available numerical models and despite the often significant difference in the detail of the models, in the treatment of the boundary conditions and in the way dissipating forces are introduced, the results for the rate at which energy is dissipated are in good agreement. Equivalent phase lags, representing the global ocean-solid Earth response to the tidal forces and the rates of energy dissipation have been computed for other tidal frequencies, including the atmospheric tide, by using available tide models, age of tide observations and equilibrium theory. Orbits of close Earth satellites are periodically perturbed by the combined solid Earth and ocean tide and the delay of these perturbations compared with the tide potential defines the same terms as enter into the tidal dissipation problem. They provide, therefore, an independent estimate of dissipation. The results agree with the tide calculations and with the astronomical estimates. The satellite results are independent of dissipation in the Moon and a comparison of astronomical, satellite and tidal estimates of dissipation permits a separation of energy sinks in the solid Earth, the Moon and in the oceans. A precise separation is not yet possible since dissipation in the oceans dominates the other two sinks: dissipation occurs almost exclusively in the oceans and neither the solid Earth nor the Moon are important energy sinks.
Lower limits to the Q of the solid Earth can be estimated by comparing the satellite results with the ocean calculations and by comparing the astronomical results with the latter. They result in Q > 120. The lunar acceleration ṅ, the Earth's tidal acceleration Ω̇_T and the total rate of energy dissipation Ė estimated by the three methods are:

                              ṅ (10⁻²³ rad s⁻²)   ṅ (″ cy⁻²)   Ω̇_T (10⁻²² rad s⁻²)   Ė (10¹⁹ erg s⁻¹)
astronomical based estimate        −1.36            −28 ± 3        −7.2 ± 0.7             4.1 ± 0.4
satellite based estimate           −1.03            −24 ± 5        −6.4 ± 1.5             3.6 ± 0.8
numerical tide model               −1.49            −30 ± 3        −7.5 ± 0.8             4.5 ± 0.5

The mean value for Ω̇_T corresponds to an increase in the length of day of 2.7 ms cy⁻¹. The non-tidal acceleration of the Earth is (1.8 ± 1.0) × 10⁻²² s⁻², resulting in a decrease in the length of day of 0.7 ± 0.4 ms cy⁻¹, and is barely significant. This quantity remains the most unsatisfactory of the accelerations. The nature of the dissipating mechanism remains unclear, but whatever it is it must also control the phase of the second-degree harmonic in the ocean expansion. It is this harmonic that permits the transfer of angular momentum from the Earth to the Moon, but the energy dissipation occurs at frequencies at the other end of the tide's spatial spectrum. The efficacy of the break-up of the second-degree term into the higher modes governs the amount of energy that is eventually dissipated. It appears that the break-up is controlled by global ocean characteristics such as the ocean-continent geometry and sea-floor topography. Friction in a few shallow seas does not appear to be as important as previously thought: new estimates for dissipation in the Bering Sea are almost an order of magnitude smaller than earlier estimates. If bottom friction is important then it must be more uniformly distributed over the world's continental shelves. Likewise, if turbulence provides an important dissipation mechanism it must be fairly uniformly distributed along, for example, coastlines or continental margins.
Such a global distribution of the dissipation makes it improbable that there has been a change in the rate of dissipation during the last few millennia, as there is no evidence of changes in ocean volume, ocean geometry or sea level beyond a few metres. It also suggests that the time-scale problem can be resolved if past ocean-continent geometries led to a less efficient breakdown of the second-degree harmonic into higher-degree harmonics.
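The quoted length-of-day figures can be checked with a short back-of-envelope computation (a sketch using standard values of the Earth's rotation rate, not part of the original analysis):

```python
import math

OMEGA = 7.292115e-5        # Earth's rotation rate, rad/s
CENTURY = 3.1557e9         # seconds in a Julian century

# Mean of the three tidal estimates of Omega-dot_T, in rad/s^2.
omega_dot_tidal = -7.0e-22
# l.o.d. = 2*pi/Omega, so d(l.o.d.)/dt = -2*pi*Omega_dot/Omega**2.
tidal_ms_per_cy = -2 * math.pi * omega_dot_tidal / OMEGA**2 * CENTURY * 1e3
# -> about +2.6 ms per century, consistent with the quoted 2.7 ms/cy increase.

# The non-tidal acceleration of +1.8e-22 s^-2 works the other way:
omega_dot_nontidal = 1.8e-22
nontidal_ms_per_cy = -2 * math.pi * omega_dot_nontidal / OMEGA**2 * CENTURY * 1e3
# -> about -0.7 ms per century, the quoted barely significant decrease.
```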


2021 ◽  
Author(s):  
Alwin Förster ◽  
Lars Panning-von Scheidt

Turbomachines experience a wide range of different types of excitation during operation. On the structural mechanics side, periodic or even harmonic excitations are usually assumed. For this type of excitation there is a variety of methods, both for linear and nonlinear systems. Stochastic excitation, whether in the form of Gaussian white noise or narrow-band excitation, is rarely considered. As in the deterministic case, the calculation of the vibrational behavior due to stochastic excitation is further complicated by nonlinearities, which can either be unintentionally present in the system or can be used intentionally for vibration mitigation. Regardless of the origin of the nonlinearity, there are methods in the literature that are suitable for calculating the vibration response of nonlinear systems under random excitation. In this paper, the method of equivalent linearization is used to determine a linear equivalent system, whose response can be calculated in place of that of the nonlinear system. The method is applied to different multi-degree-of-freedom nonlinear systems subject to narrow-band random excitation, including an academic turbine blade model. In order to identify multiple and possibly ambiguous solutions, an efficient procedure is shown that integrates the method into a path continuation scheme. With this approach, it is possible to track jump phenomena or the influence of parameter variations even in the case of narrow-band excitation. The results of the performed calculations are the stochastic moments, i.e. the mean value and variance.
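The core fixed-point idea behind equivalent (statistical) linearization can be sketched for the textbook case of a single-degree-of-freedom Duffing oscillator m·x″ + c·x′ + k·x + eps·x³ = w(t) under Gaussian white noise of two-sided spectral density S0. The paper treats multi-degree-of-freedom systems under narrow-band excitation inside a path continuation scheme; this broadband, single-DOF sketch with made-up parameters only illustrates the self-consistency loop:

```python
import math

m, c, k, eps = 1.0, 0.1, 1.0, 0.5   # hypothetical system parameters
S0 = 0.01                            # two-sided white-noise spectral density

# For a Gaussian response, E[x^4] = 3*sigma^4, so the equivalent linear
# stiffness is k_eq = k + 3*eps*sigma2. The linear SDOF system's stationary
# variance under white noise is sigma2 = pi*S0 / (c*k_eq). Iterating these
# two relations yields the self-consistent fixed point.
sigma2 = math.pi * S0 / (c * k)      # start from the underlying linear system
for _ in range(100):
    k_eq = k + 3 * eps * sigma2
    sigma2_new = math.pi * S0 / (c * k_eq)
    if abs(sigma2_new - sigma2) < 1e-12:
        sigma2 = sigma2_new
        break
    sigma2 = sigma2_new
```

For this hardening nonlinearity the fixed point lowers the response variance relative to the linear system, since the effective stiffness grows with the response level.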


2012 ◽  
Vol 2 (2) ◽  
pp. 10 ◽  
Author(s):  
Michael Karl Sachs ◽  
Ya-Ting Lee ◽  
Donald Turcotte ◽  
James R. Holliday ◽  
John B. Rundle

The Regional Earthquake Likelihood Models (RELM) test was the first competitive comparison of prospective earthquake forecasts. The test was carried out over 5 years from 1 January 2006 to 31 December 2010 over a region that included all of California. The test area was divided into 7682 0.1°×0.1° spatial cells. Each submitted forecast gave the predicted numbers of earthquakes <em>N<sub>emi</sub></em> larger than <em>M</em>=4.95 in 0.1 magnitude bins for each cell. In this paper we present a method that separates the forecast of the number of test earthquakes from the forecast of their locations. We first obtain the number <em>N<sub>em</sub></em> of forecast earthquakes in magnitude bin <em>m</em>. We then determine the conditional probability <em>λ<sub>emi</sub></em>=<em>N<sub>emi</sub>/</em><em>N<sub>em</sub></em> that an earthquake in magnitude bin <em>m</em> will occur in cell <em>i</em>. The summation of <em>λ<sub>emi</sub></em> over all 7682 cells is unity. A random (no skill) forecast gives equal values of <em>λ<sub>emi</sub></em> for all spatial cells and magnitude bins. The <em>skill</em> of a forecast, in terms of the location of the earthquakes, is measured by the success in assigning large values of <em>λ<sub>emi</sub></em> to the cells in which earthquakes occur and low values of <em>λ<sub>emi</sub></em> to the cells where earthquakes do not occur. Thirty-one test earthquakes occurred in 27 different combinations of spatial cells <em>i</em> and magnitude bins <em>m</em>; an ideal forecast would have had the highest value of <em>λ<sub>emi</sub></em> in each of these <em>mi</em> cells. We evaluate the performance of eleven submitted forecasts in two ways. First, we determine the number of <em>mi</em> cells for which the forecast <em>λ<sub>emi</sub></em> was the largest; the best forecast is the one with the highest number. Second, we determine the mean value of <em>λ<sub>emi</sub></em> for the 27 <em>mi</em> cells for each forecast. 
The best forecast has the highest mean value of <em>λ<sub>emi</sub></em>. The success of a forecast during the test period is dependent on the allocation of the probabilities <em>λ<sub>emi</sub></em> between the <em>mi</em> cells, since the sum over the <em>mi</em> cells is unity. We illustrate the forecast distributions of <em>λ<sub>emi</sub></em> and discuss their differences. We conclude that the RELM test was successful in illustrating the choices required when a forecast of the location of a future earthquake is made.
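The decomposition at the heart of the method can be sketched in a few lines. The expected counts below are invented for illustration (4 cells, 2 magnitude bins), not actual RELM submissions:

```python
# Forecast counts N_emi, keyed by (magnitude bin m, spatial cell i).
forecast = {
    (0, 0): 0.8, (0, 1): 0.1, (0, 2): 0.05, (0, 3): 0.05,
    (1, 0): 0.2, (1, 1): 0.1, (1, 2): 0.6,  (1, 3): 0.1,
}

# Total forecast rate N_em per magnitude bin.
N_em = {}
for (m, i), n in forecast.items():
    N_em[m] = N_em.get(m, 0.0) + n

# Conditional location probability lambda_emi = N_emi / N_em.
lam = {(m, i): n / N_em[m] for (m, i), n in forecast.items()}

# lambda_emi sums to one over the cells within each magnitude bin.
for m in N_em:
    assert abs(sum(lam[m, i] for i in range(4)) - 1.0) < 1e-12

# Score a forecast by the mean lambda_emi over the (m, i) cells where test
# earthquakes actually occurred, as in the paper's second measure.
observed = [(0, 0), (1, 2)]
mean_lambda = sum(lam[mi] for mi in observed) / len(observed)
```

A skilled forecast concentrates λ on the cells that later host earthquakes, pushing `mean_lambda` well above the no-skill value of 1/(number of cells).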


2012 ◽  
Vol 43 (6) ◽  
pp. 833-850 ◽  
Author(s):  
Ziqi Yan ◽  
Lars Gottschalk ◽  
Irina Krasovskaia ◽  
Jun Xia

The long-term mean value of runoff is the basic descriptor of available water resources. This paper focuses on the accuracy that can be achieved when mapping this variable across space and along main rivers for a given stream gauging network. Three stochastic interpolation schemes for estimating average annual runoff across space are evaluated and compared. Two of the schemes first interpolate runoff to a regular grid net and then integrate the grid values along rivers. One of these schemes includes a constraint to account for the lateral water balance along the rivers. The third scheme interpolates runoff directly to points along rivers. A drainage basin in China with 20 gauging sites is used as a test area. In general, all three approaches reproduce the sample discharges along rivers, with postdiction errors along main river branches of around 10%. Using more objective cross-validation results, it was found that the two schemes based on basin integration, and especially the one with a constraint, performed significantly better than the one with direct interpolation to points along rivers. The analysis did not allow identification of a possible influence of surface water use.
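The cross-validation used to compare such schemes can be sketched generically: leave each gauging site out, interpolate runoff there from the remaining sites, and aggregate the errors. Inverse-distance weighting below stands in for the paper's stochastic (kriging-type) interpolation, and the site coordinates and runoff values are invented:

```python
# Hypothetical gauging sites (x, y) and mean annual runoff (mm/yr).
coords = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (0.5, 0.5)]
runoff = [300.0, 320.0, 310.0, 335.0, 350.0, 318.0]

def idw(x, y, pts, vals, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from known points."""
    num = den = 0.0
    for (px, py), v in zip(pts, vals):
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return v          # exact hit on a known site
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den

# Leave-one-out cross-validation: predict each site from the others.
errors = []
for j in range(len(coords)):
    pts = coords[:j] + coords[j + 1:]
    vals = runoff[:j] + runoff[j + 1:]
    pred = idw(*coords[j], pts, vals)
    errors.append(abs(pred - runoff[j]) / runoff[j])

mean_rel_error = sum(errors) / len(errors)
```

Comparing `mean_rel_error` across competing interpolation schemes on the same network is the "more objective" basis the abstract refers to, as opposed to postdiction at the fitted sites.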


2007 ◽  
Vol 79 (2) ◽  
pp. 195-208 ◽  
Author(s):  
Gil C. Marques ◽  
Dominique Spehler

Based on a new approach to symmetries of the fundamental interactions, we deal in this paper with the electroweak interactions of leptons. We show that the coupling constants, arising in the way leptons are coupled to intermediate bosons, can be understood as parameters associated with the breakdown of SU(2) and parity symmetries. The breakdown of both symmetries is characterized by a new parameter (the asymmetry parameter) of the electroweak interactions, which gives a measure of the strength of the symmetry breakdown. We analyse the behaviour of the theory for three values of this parameter. The most relevant value is the one for which only the electromagnetic interactions do not break parity (the maximally allowed left-right asymmetric theory). Maximally allowed parity asymmetry is a requirement that is met for a value of the Weinberg angle that is quite close to its experimental value.


The discussions of tide observations which the author has hitherto at various times laid before the Society were instituted with reference to the transit of the Moon immediately preceding the time of high-water; from which the laws of the variation in the interval between the moon’s transit and the time of high-water have been deduced. But the discussion of nineteen years’ observations of the tides at the London Docks, which is given in the present paper, has been made with reference to the moon’s transit two days previously, and proves very satisfactorily that the laws to which the phenomena are subject accord generally with the views propounded long since by Bernoulli. The relations which the author points out between the height of high-water and the atmospheric pressure as indicated by the barometer are particularly interesting and important. The influence of the wind is also considered; and such corrections indicated as are requisite in consequence of the employment by several observers of solar instead of mean time.

