Adaptive curvelet-domain primary-multiple separation

Geophysics ◽  
2008 ◽  
Vol 73 (3) ◽  
pp. A17-A21 ◽  
Author(s):  
Felix J. Herrmann ◽  
Deli Wang ◽  
Dirk J. (Eric) Verschuur

In many exploration areas, successful separation of primaries and multiples greatly determines the quality of seismic imaging. Despite major advances made by surface-related multiple elimination (SRME), amplitude errors in the predicted multiples remain a problem. When these errors vary for each type of multiple in different ways (as a function of offset, time, and dip), they pose a serious challenge for conventional least-squares matching and for the recently introduced separation by curvelet-domain thresholding. We propose a data-adaptive method that corrects amplitude errors, which vary smoothly as a function of location, scale (frequency band), and angle. With this method, the amplitudes can be corrected by an elementwise curvelet-domain scaling of the predicted multiples. We show that this scaling leads to successful estimation of primaries, despite amplitude, sign, timing, and phase errors in the predicted multiples. Our results on synthetic and real data show distinct improvements over conventional least-squares matching in terms of better suppression of multiple energy and high-frequency clutter and better recovery of estimated primaries.
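The core operation, elementwise scaling of the predicted-multiple coefficients followed by thresholding of the total-data coefficients, can be illustrated on toy vectors. The sketch below abstracts the curvelet transform away entirely and all numbers are hypothetical; it shows only the scaled-thresholding mechanics, not the paper's adaptive estimation of the scaling.

```python
# Toy sketch of elementwise coefficient-domain scaling for primary/multiple
# separation. The curvelet transform itself is abstracted away: we work
# directly on hypothetical coefficient vectors. All numbers are illustrative.

def soft_threshold(c, t):
    """Soft-thresholding: shrink coefficient c toward zero by threshold t."""
    if c > t:
        return c - t
    if c < -t:
        return c + t
    return 0.0

def estimate_primaries(total, predicted_multiples, scale=1.0):
    """Elementwise separation: threshold each total-data coefficient with a
    level proportional to the (scaled) predicted-multiple coefficient."""
    return [soft_threshold(d, scale * abs(m))
            for d, m in zip(total, predicted_multiples)]

# Coefficients of total data = primaries + multiples (toy values).
primaries = [0.0, 2.0, 0.0, -1.5, 0.0]
multiples = [3.0, 0.0, -2.0, 0.0, 1.0]
total = [p + m for p, m in zip(primaries, multiples)]

# Predicted multiples with a smooth amplitude error (here: 20% too weak);
# the scale factor plays the role of the adaptive amplitude correction.
predicted = [0.8 * m for m in multiples]
est = estimate_primaries(total, predicted, scale=1.25)
print(est)   # recovers the primary coefficients
```

With a well-chosen scaling the thresholds match the true multiple amplitudes and the primaries pass through unchanged; this is the intuition behind correcting amplitude errors per location, scale, and angle.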

2004 ◽  
Vol 20 (3) ◽  
pp. 361-367 ◽  
Author(s):  
Mats Lundström ◽  
Eva Wendel

Objectives: To study the impact on public health, in terms of utility, of various proportions of first-eye and second-eye cataract surgery. Methods: A model was used to study the impact on a population of a fixed cataract surgical rate (9,250 operations/1,000,000 people) with varying proportions of first-eye and second-eye cataract operations. The study population was the County of Blekinge, with a known incidence of previous cataract surgery. The prevalence of cataract, the estimated need for cataract surgery, and the utility values were taken from the literature. The population was grouped by disability stage of cataract and previous cataract surgery, in accordance with prevalence studies and data from a large national database on cataract surgery and patients' self-assessed visual function. The mortality rate was taken from real data for the study population. Results: Given a fixed cataract surgical rate over a period of five years, a high percentage of second-eye cataract surgery (42 percent) resulted in a mean utility of 0.82239 in the population forty years of age and older; the corresponding figure for a low percentage of second-eye cataract surgery (25 percent) was 0.82253. A high percentage of second-eye surgeries resulted in 421 more individuals who were well compared with a low percentage of second-eye surgeries. On the other hand, a low percentage of second-eye surgeries resulted in 152 fewer individuals with disability and 118 fewer individuals with dependence compared with a high percentage of second-eye surgeries. Conclusions: A high proportion of first-eye rather than second-eye cataract surgeries affects more individuals and yields the greater improvement in population utility. This should be recommended when the cataract surgical rate falls well short of need. If the cataract surgical rate is high, more second-eye surgeries should be performed to optimize quality of life for as many people as possible.
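The bookkeeping behind such a model reduces to a weighted mean of utilities over population groups. The sketch below uses entirely hypothetical utility values and group sizes (not the study's figures) merely to show how shifting a fixed surgical volume between first- and second-eye operations moves the population mean utility.

```python
# Minimal sketch of the utility bookkeeping behind the model. All utility
# values, group sizes and the surgical volume are hypothetical.

def mean_utility(groups):
    """Population mean utility; groups is a list of (count, utility) pairs."""
    total = sum(n for n, _ in groups)
    return sum(n * u for n, u in groups) / total

# Hypothetical utility values: untreated cataract, after first-eye surgery,
# after second-eye surgery, and people without cataract.
U_CATARACT, U_FIRST, U_SECOND, U_WELL = 0.70, 0.80, 0.85, 0.90

def scenario(operations, share_second_eye,
             waiting_first=5000, waiting_second=3000, pop_well=88000):
    """Split a fixed surgical volume between first- and second-eye cases."""
    second = int(operations * share_second_eye)
    first = operations - second
    groups = [
        (pop_well, U_WELL),
        (waiting_first - first, U_CATARACT),           # still untreated
        ((waiting_second - second) + first, U_FIRST),  # one eye operated
        (second, U_SECOND),                            # both eyes operated
    ]
    return mean_utility(groups)

low_share = scenario(4000, 0.25)    # 25% second-eye operations
high_share = scenario(4000, 0.42)   # 42% second-eye operations
print(low_share, high_share)
```

Under these assumed utilities the first-eye gain (0.10) exceeds the second-eye gain (0.05), so the low second-eye share yields the slightly higher mean utility, the same direction as the study's result.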


2005 ◽  
Vol 120 (1) ◽  
pp. 40-49
Author(s):  
Stanisław POLANOWSKI

This paper presents the possibilities of processing indicator diagrams by means of the moving approximation objects developed by the author, which are based on the least squares method. The rules for creating approximating objects with the use of spline knots are discussed: glued, riveted and broken knots, as well as multiple approximation. Using the example of processing indicator diagrams of a medium-speed marine engine, the quality of the curve approximation is compared for several types of approximating objects. Examples of curve smoothing, determination of derivatives, and separation of high-frequency noise and disturbances caused by gas channels are presented as well.
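The author's specific approximating objects (glued, riveted and broken knots) are beyond a short example, but their underlying building block, a least-squares polynomial fitted over a moving window, can be sketched as follows. The quadratic degree and window width below are illustrative choices, not the paper's configuration.

```python
# Minimal sketch of moving least-squares smoothing: at each sample a
# quadratic is fitted over a symmetric window and evaluated at the window
# centre, which also yields a smoothed first derivative. Plain Python.

def moving_ls_quadratic(y, half_width, h=1.0):
    """Return (smoothed, derivative) lists; endpoints are left unsmoothed."""
    k = half_width
    t = list(range(-k, k + 1))
    n = len(t)
    S2 = sum(ti * ti for ti in t)
    S4 = sum(ti ** 4 for ti in t)
    smoothed = list(y)
    deriv = [0.0] * len(y)
    for i in range(k, len(y) - k):
        w = y[i - k:i + k + 1]
        Sy = sum(w)
        Sty = sum(ti * wi for ti, wi in zip(t, w))
        St2y = sum(ti * ti * wi for ti, wi in zip(t, w))
        # Normal equations (odd-power sums vanish on a symmetric window):
        #   n*a0 + S2*a2 = Sy,   S2*a1 = Sty,   S2*a0 + S4*a2 = St2y
        a1 = Sty / S2
        det = n * S4 - S2 * S2
        a0 = (Sy * S4 - S2 * St2y) / det
        smoothed[i] = a0
        deriv[i] = a1 / h
    return smoothed, deriv

# A quadratic signal is reproduced exactly by a quadratic moving fit.
signal = [0.5 * x * x for x in range(11)]
sm, d = moving_ls_quadratic(signal, half_width=2)
print(sm[5], d[5])   # exact value 12.5 and derivative 5.0 at x = 5
```

The same normal-equation machinery extends to knotted (spline-constrained) windows, which is where the glued and riveted variants come in.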


Author(s):  
S. R. Rakhmanov

In some cases, the processes of piercing or expanding pipe blanks involve the use of high-frequency active vibrations. However, because they are insufficiently understood, these processes are not widely used in the practice of seamless pipe production. In particular, the problems of increasing the efficiency of piercing or expanding a pipe blank on a piercing press using high-frequency vibrations are being solved without proper research and, as a rule, by experiment. The development of modern technological processes for the production of seamless pipes using high-frequency vibrations is directly related to the choice of rational modes of metal deformation, the prediction of tool durability, and the reliability of equipment operation. The creation of a mathematical model of the vibrating piercing (expansion) of an axisymmetric pipe blank on the piercing press of a pipe press facility is therefore a relevant task. A calculation scheme for the process of piercing a pipe blank has been elaborated. A dependence was obtained characterizing the propagation speed of the plastic-deformation front as a function of the penetration speed of a vibrated axisymmetric mandrel into the pipe workpiece being pierced. The dynamic characteristics of the wave phenomena arising in the pierced metal under the influence of a vibrated tool have been determined, which significantly extends previously known ideas about the stress-strain state of the metal in the deformation zone. The deformation fields in the disturbed region of the deformation zone were established, taking into account the high-frequency vibrations of the tool. It has been established that the choice of rational parameters (amplitude-frequency characteristics) of the vibration piercing process results in a significant increase in the efficiency of the process, the durability of the tool, and the quality of the pierced blanks.


1998 ◽  
Vol 2 ◽  
pp. 115-122
Author(s):  
Donatas Švitra ◽  
Jolanta Janutėnienė

In the practice of processing metals by cutting, it is necessary to overcome vibration of the cutting tool, the processed part and the units of the machine tool. In many cases these vibrations are an obstacle to increasing the productivity and quality of machining on metal-cutting machine tools. Vibration in metal cutting is a very diverse phenomenon, in both its nature and the form of the oscillatory motion. The most general classification divides cutting vibrations into forced vibrations and autovibrations. The most difficult to remove, and the most poorly investigated, are the autovibrations, i.e. vibrations arising in the absence of external periodic forces. The autovibrations caused by the cutting process on metal-cutting machines are of two types: low-frequency autovibrations and high-frequency autovibrations. When low-frequency autovibrations appear, the cutting process ought to be stopped and the cause of the vibrations eliminated; otherwise there is a danger of breakage of both machine and tool. In the case of high-frequency vibration the machine operates apparently quietly, but the processed surface features fine-scale roughness. The frequency of autovibrations can reach 5000 Hz or more.


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, the estimation of the probability density function and the cumulative distribution function of this distribution is considered with five different estimation methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS) and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulation, on the basis of the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS and PC estimators. Finally, the results are illustrated on a real data set.
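The distribution and one of the five estimators compared (ML) can be sketched in plain Python. The CDF is taken as F(x) = 1 - (1 - exp(-lam/x))**alpha; the closed form for alpha given lam, the grid search, and the parameter values below are illustrative simplifications, not the paper's procedure.

```python
import math
import random

# Sketch: inverse-transform sampling from the generalized inverted
# exponential distribution, then maximum-likelihood fitting with alpha
# profiled out in closed form. Parameter values are made up.

def sample_gied(n, alpha, lam, rng):
    """Inverse of F(x) = 1 - (1 - exp(-lam/x))**alpha applied to uniforms."""
    return [-lam / math.log(1.0 - (1.0 - rng.random()) ** (1.0 / alpha))
            for _ in range(n)]

def fit_ml(xs, lam_grid):
    """Profile likelihood: for each lam on the grid, the ML alpha is
    alpha_hat = -n / sum(log(1 - exp(-lam/x)))."""
    n = len(xs)
    slogx = sum(math.log(x) for x in xs)
    sinv = sum(1.0 / x for x in xs)
    best = None
    for lam in lam_grid:
        s = sum(math.log(1.0 - math.exp(-lam / x)) for x in xs)
        alpha_hat = -n / s
        # log-likelihood of f(x) = a*l*x**-2 * exp(-l/x) * (1-exp(-l/x))**(a-1)
        ll = (n * math.log(alpha_hat) + n * math.log(lam)
              - 2.0 * slogx - lam * sinv + (alpha_hat - 1.0) * s)
        if best is None or ll > best[0]:
            best = (ll, lam, alpha_hat)
    return best[1], best[2]

rng = random.Random(1)
xs = sample_gied(4000, alpha=2.0, lam=1.5, rng=rng)
lam_hat, alpha_hat = fit_ml(xs, [l / 100.0 for l in range(50, 301)])
print(lam_hat, alpha_hat)
```

Repeating this over many simulated samples and averaging the squared errors of the fitted pdf gives exactly the kind of MSE comparison the paper reports.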


1996 ◽  
Vol 33 (9) ◽  
pp. 101-108 ◽  
Author(s):  
Agnès Saget ◽  
Ghassan Chebbo ◽  
Jean-Luc Bertrand-Krajewski

The first flush phenomenon of urban wet weather discharges is presently a controversial subject. Scientists disagree about its reality and about its influence on the sizing of treatment works. These disagreements mainly result from the lack of a clear definition of the phenomenon. The objective of this article is first to provide a simple and clear definition of the first flush, and then to apply it to real data to obtain results about its frequency of occurrence. The data originate from the French database on the quality of urban wet weather discharges. We use 80 events from 7 separately sewered basins and 117 events from 7 combined sewered basins. The main result is that the first flush phenomenon is very scarce, in any case too scarce to be used as the basis of a treatment strategy against pollution generated by urban wet weather discharges.
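One common way to make such a definition operational (used here purely for illustration; the paper states its own exact criterion) is a mass/volume threshold: an event exhibits a first flush if, say, 80% of the pollutant mass is carried by the first 30% of the runoff volume. A minimal sketch with hypothetical event data:

```python
# Sketch of a mass/volume first-flush criterion. The 30%/80% thresholds
# and the event data are illustrative, not the paper's values.

def first_flush(volumes, masses, vol_frac=0.30, mass_frac=0.80):
    """volumes, masses: per-time-step runoff volume and pollutant mass.
    Returns True if mass_frac of the mass arrives within vol_frac of the
    volume."""
    tot_v, tot_m = sum(volumes), sum(masses)
    cum_v = cum_m = 0.0
    for v, m in zip(volumes, masses):
        cum_v += v
        cum_m += m
        if cum_v / tot_v >= vol_frac:
            return cum_m / tot_m >= mass_frac
    return False

# Hypothetical event with the pollutant load concentrated at the start.
vols = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
loads = [50, 30, 10, 3, 2, 1, 1, 1, 1, 1]
print(first_flush(vols, loads))   # True: 90% of mass in first 30% of volume
```

Counting how many of the 197 recorded events pass such a test is what yields the appearance frequency discussed in the abstract.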


2020 ◽  
Vol 16 (35) ◽  
pp. 2997-3013
Author(s):  
Kentaro Kogushi ◽  
Michael LoPresti ◽  
Shunya Ikeda

Background: Synovial sarcoma (SS) is a rare, aggressive soft tissue sarcoma with a poor prognosis after metastasis. The objective of this study was to conduct a systematic review of the clinical evidence for therapeutic options for adults with metastatic or advanced SS. Materials & methods: Relevant databases were searched with predefined keywords. Results: Thirty-nine publications reported clinical data for systemic treatment and other interventions. Data on survival outcomes varied but were generally poor (progression-free survival: 1.0–7.7 months; overall survival: 6.7–29.2 months) for adults with metastatic and advanced SS. A high frequency of neutropenia with systemic treatment and low quality of life post-progression were reported. Conclusion: Reported evidence suggests poor outcomes in adults with metastatic and advanced SS and the need for the development of new treatment modalities.


2021 ◽  
Vol 5 (1) ◽  
pp. 59
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

Terrestrial laser scanners (TLS) capture a large number of 3D points rapidly, with high precision and spatial resolution. These scanners are used for applications as diverse as modeling architectural or engineering structures and high-resolution mapping of terrain. The noise of the observations cannot be assumed to be white: besides being heteroscedastic, correlations between observations are likely to appear due to the high scanning rate. Unfortunately, while the variance can sometimes be modeled on physical or empirical grounds, the correlations are more often neglected. Trustworthy knowledge of the noise structure is, however, mandatory to avoid overestimating the precision of the point cloud and, potentially, failing to detect deformation between scans recorded at different epochs using statistical testing strategies. TLS point clouds can be approximated with parametric surfaces, such as planes, using the Gauss–Helmert model, or with the newly introduced T-splines surfaces. In both cases, the goal is to minimize the squared distance between the observations and the approximating surface in order to estimate parameters such as the normal vector or the control points. In this contribution, we show how the residuals of the surface approximation can be used to derive the correlation structure of the noise of the observations. We estimate the correlation parameters by Whittle maximum likelihood and use simulations and real data to validate our methodology. Using the least-squares adjustment as a “filter of the geometry” paves the way for the determination of a correlation model for many sensors recording 3D point clouds.
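The central idea, reading the noise correlation off the least-squares residuals, can be illustrated with a plane fit in plain Python. The AR(1) noise, the point grid and the lag-1 autocorrelation estimate below are illustrative stand-ins for the paper's Whittle-likelihood machinery.

```python
import math
import random

# Sketch: scan-order points on a plane receive AR(1) noise, the plane is
# re-estimated by least squares, and the lag-1 autocorrelation of the
# residuals recovers the correlation parameter. Purely illustrative.

def fit_plane(pts):
    """Least-squares fit of z = a*x + b*y + c via the normal equations."""
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sz = sum(p[2] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); syy = sum(p[1] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    sxz = sum(p[0] * p[2] for p in pts); syz = sum(p[1] * p[2] for p in pts)
    # Solve the 3x3 normal system by Gaussian elimination.
    m = [[sxx, sxy, sx, sxz], [sxy, syy, sy, syz], [sx, sy, n, sz]]
    for i in range(3):
        for j in range(i + 1, 3):
            f = m[j][i] / m[i][i]
            m[j] = [mj - f * mi for mj, mi in zip(m[j], m[i])]
    c = m[2][3] / m[2][2]
    b = (m[1][3] - m[1][2] * c) / m[1][1]
    a = (m[0][3] - m[0][2] * c - m[0][1] * b) / m[0][0]
    return a, b, c

rng = random.Random(7)
phi, pts, e = 0.5, [], 0.0
for i in range(2000):                   # points in scan order
    x, y = i % 50, i // 50
    e = phi * e + rng.gauss(0.0, 0.01)  # AR(1) measurement noise
    pts.append((x, y, 0.2 * x - 0.1 * y + 3.0 + e))

a, b, c = fit_plane(pts)
r = [z - (a * x + b * y + c) for x, y, z in pts]
mr = sum(r) / len(r)
num = sum((r[i] - mr) * (r[i - 1] - mr) for i in range(1, len(r)))
den = sum((ri - mr) ** 2 for ri in r)
print(a, b, num / den)                  # lag-1 autocorrelation near phi
```

This is the "filter of the geometry" idea in miniature: the surface fit removes the deterministic part, and what remains carries the stochastic structure of the sensor noise.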


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Camilo Broc ◽  
Therese Truong ◽  
Benoit Liquet

Abstract Background The increasing number of genome-wide association studies (GWAS) has revealed several loci that are associated with multiple distinct phenotypes, suggesting the existence of pleiotropic effects. Highlighting these cross-phenotype genetic associations could help to identify and understand the common biological mechanisms underlying some diseases. Common approaches test the association between genetic variants and multiple traits at the SNP level. In this paper, we propose a novel gene- and pathway-level approach for the case where several independent GWAS on independent traits are available. The method is based on a generalization of sparse group Partial Least Squares (sgPLS) that takes into account groups of variables, with a Lasso penalization that links all the independent data sets. This method, called joint-sgPLS, is able to convincingly detect signal at both the variable level and the group level. Results Our method has the advantage of proposing a globally readable model while coping with the architecture of the data. It can outperform traditional methods and provides wider insight in terms of a priori information. We compared the performance of the proposed method with benchmark methods on simulated data and gave an example of application on real data, with the aim of highlighting common susceptibility variants to breast and thyroid cancers. Conclusion The joint-sgPLS shows interesting properties for detecting a signal. As an extension of PLS, the method is suited to data with a large number of variables. The Lasso penalization copes with architectures of groups of variables and sets of observations. Furthermore, although the method has been applied to a genetic study, its formulation is adapted to any data with a large number of variables and an explicit a priori architecture, in other application fields as well.
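The group-level selection at the heart of sparse group PLS-type methods rests on penalties that zero out whole groups of coefficients at once. The sketch below shows the proximal operator of a plain group-lasso penalty (block soft-thresholding) on toy coefficients; it is one simplified ingredient of such methods, not the joint-sgPLS algorithm itself.

```python
import math

# Sketch of group-wise soft-thresholding (the proximal operator of the
# group-lasso penalty): each index group of coefficients is either shrunk
# jointly or zeroed out jointly. Groups and numbers are illustrative.

def group_soft_threshold(beta, groups, lam):
    """Apply block soft-thresholding to each index group of beta."""
    out = list(beta)
    for idx in groups:
        norm = math.sqrt(sum(beta[i] ** 2 for i in idx))
        shrink = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in idx:
            out[i] = shrink * beta[i]
    return out

# Two hypothetical "genes" (groups of SNP coefficients): one with strong
# joint signal, one with weak signal that the penalty eliminates entirely.
beta = [3.0, 4.0, 0.1, 0.2]
groups = [[0, 1], [2, 3]]
print(group_soft_threshold(beta, groups, lam=1.0))
```

Because the shrinkage acts on each group's Euclidean norm, a weak group is removed as a unit while a strong group survives with all members intact; this is what makes the output readable at the gene or pathway level.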


2021 ◽  
Vol 9 (5) ◽  
pp. 465
Author(s):  
Angelos Ikonomakis ◽  
Ulrik Dam Nielsen ◽  
Klaus Kähler Holst ◽  
Jesper Dietz ◽  
Roberto Galeazzi

This paper examines the statistical properties and the quality of the speed through water (STW) measurement, based on data extracted from almost 200 container ships of Maersk Line’s fleet over 3 years of operation. The analysis uses high-frequency sensor data along with additional data sources from external providers. The interest of the study stems from the role of the STW measurement as the most important parameter in ship performance analysis. The paper contains a thorough analysis of the measurements assumed to be related to the STW error, along with a descriptive decomposition of the main variables by sea region, including sea state, vessel class, vessel IMO number and the manufacturer of the speed log installed in each ship. The paper suggests a semi-empirical method using a threshold to identify potential error in a ship’s STW measurement. The study revealed that the sea region is the most influential factor for STW accuracy and that 26% of the ships in the dataset’s fleet warrant further investigation.
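The flavor of a threshold-based consistency check can be sketched as follows. The relation used here (STW should roughly equal speed over ground minus the along-track current), the 0.5 kn threshold, and the data are all hypothetical stand-ins for the paper's semi-empirical method.

```python
# Sketch of a threshold check on the speed-through-water (STW) signal.
# Threshold value, variable names and the log extract are illustrative.

def stw_suspect(stw, sog, current_along_track, threshold=0.5):
    """Flag records whose STW deviates from SOG minus the along-track
    current component by more than the threshold (knots)."""
    flags = []
    for v_w, v_g, c in zip(stw, sog, current_along_track):
        flags.append(abs(v_w - (v_g - c)) > threshold)
    return flags

# Hypothetical log extract (knots); the third record has a biased STW.
stw = [18.2, 18.0, 16.1, 18.1]
sog = [18.5, 18.4, 18.3, 18.4]
current = [0.4, 0.5, 0.4, 0.3]
print(stw_suspect(stw, sog, current))
```

A ship whose records are flagged persistently, rather than sporadically, would be the kind of candidate the paper marks for further investigation of its speed log.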

