MULTICHANNEL Z‐TRANSFORMS AND MINIMUM‐DELAY

Geophysics ◽  
1966 ◽  
Vol 31 (3) ◽  
pp. 482-500 ◽  
Author(s):  
Enders A. Robinson

In the standard deterministic model of water reverberation generation, the reverberation pulse-train resulting from a deep reflection is minimum-delay. Even in the more complex physical situations encountered in the field, there is evidence that in many cases the reverberation pulse-train waveforms are minimum-delay, or at least approximately so. The reason for this minimum-delay property is that a pulse-train waveform results from multiple reflections and transmissions within the layered earth; because reflection coefficients are less than unity in magnitude, the concentration of energy in a pulse-train must appear at its beginning rather than its end, and this early concentration of energy is precisely the condition that the pulse-train waveform be minimum-delay. Each deep reflection horizon contributes a minimum-delay reverberation pulse-train waveform to a seismic trace. If we let a spike series represent the deep horizons, in the sense that the timing of a spike represents the direct arrival time of a reflection and the amplitude of the spike represents the strength of the reflection, then the seismic trace may be considered as the convolution of the spike series with the reverberation pulse-train waveform. Because the reverberation pulse-train waveform is minimum-delay, and because the deep-horizon spike series is at least approximately a statistically uncorrelated series, the two conditions required for the application of the method of predictive deconvolution (Robinson, 1954, 1957) are met, and hence this method can be used as a practical digital data processing method to eliminate water reverberations on field seismic traces. The concept of minimum-delay is therefore an important link in chaining together the deterministic approach and the statistical approach to seismic record analysis in the single-channel case. The concept of minimum-delay can be extended to the multichannel case.
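The single-channel model above can be sketched numerically. The snippet below is a minimal illustration, not Robinson's full treatment: it assumes a one-coefficient water-layer reverberation train (reflection coefficient c = 0.5, an illustrative value) and a white reflectivity series, estimates the trace autocorrelation, and solves the normal equations for a prediction-error filter that deconvolves the trace.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
reflectivity = rng.normal(size=n)          # uncorrelated deep-horizon spike series

# Minimum-delay water reverberation train (1, -c, c^2, -c^3, ...):
# reflection coefficients below unity concentrate energy at the front.
c = 0.5
wavelet = (-c) ** np.arange(30)
trace = np.convolve(reflectivity, wavelet)[:n]   # convolutional trace model

# Predictive deconvolution: estimate the trace autocorrelation, solve the
# Toeplitz normal equations for a length-L prediction filter, and form the
# prediction-error filter.
L = 10
r = np.correlate(trace, trace, mode="full")[n - 1 : n + L]   # lags 0..L
R = np.array([[r[abs(i - j)] for j in range(L)] for i in range(L)])
a = np.linalg.solve(R, r[1 : L + 1])       # prediction filter
pef = np.concatenate(([1.0], -a))          # prediction-error filter
recovered = np.convolve(trace, pef)[:n]    # approximates the reflectivity
```

Because the wavelet is minimum-delay and the reflectivity is uncorrelated, the prediction-error output approximates the original spike series; both conditions named in the abstract are needed for this to work.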
The theory of multichannel digital filters can be regarded as the matrix-valued counterpart of single-channel digital filter theory. A reflection seismogram consists of many traces, and these traces are interrelated. A multichannel filter operates simultaneously on all these traces, and thus it can take advantage of the seismogram structure between traces as well as along a single trace. An important objective of seismogram analysis is to increase the resolution of overlapping waveforms by deconvolution. This goal can be accomplished through the use of inverse multichannel digital filters. Without proper design, an inverse multichannel filter can have the undesirable property that its impulse response function is unstable. When the number of input channels equals the number of output channels, each of the coefficients of a multichannel digital filter may be regarded as a square matrix, and the z-transform of the filter coefficients is a matrix-valued polynomial in z. The determinant of this matrix-valued z-transform plays a central role in the classification of the delay properties of such a multichannel filter. This determinant is a scalar-valued polynomial in z. If the coefficients of this polynomial represent a single-channel minimum-delay filter, then the original multichannel filter is also minimum-delay; if they represent a single-channel maximum-delay filter, then the multichannel filter is also maximum-delay; if they represent a single-channel mixed-delay filter, then the multichannel filter is also mixed-delay. A minimum-delay multichannel digital filter has an inverse which is a stable memory function. A maximum-delay multichannel digital filter, on the other hand, has an inverse that consists of a stable anticipation function. A mixed-delay multichannel digital filter has a stable inverse made up of a memory component and an anticipation component.
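The determinant test can be sketched for a hypothetical two-channel filter with 2×2 matrix coefficients, A(z) = A0 + A1·z. The sketch adopts Robinson's convention that z is the unit-delay variable, so a scalar polynomial is minimum-delay when all of its zeros lie outside the unit circle; the matrices below are illustrative, not taken from the paper.

```python
import numpy as np

def delay_class(det_coeffs, tol=1e-9):
    """Classify a filter from the zeros of its determinant polynomial
    a0 + a1*z + ... + aN*z^N, with z the unit-delay variable (Robinson's
    convention): minimum-delay means all zeros lie outside the unit circle."""
    mags = np.abs(np.roots(det_coeffs[::-1]))  # np.roots wants highest power first
    if np.all(mags > 1 + tol):
        return "minimum-delay"
    if np.all(mags < 1 - tol):
        return "maximum-delay"
    return "mixed-delay"

# Two-channel filter with 2x2 matrix coefficients: A(z) = A0 + A1*z.
A0 = np.array([[1.0, 0.2], [0.1, 1.0]])
A1 = np.array([[0.3, 0.0], [0.0, 0.2]])

# Expand det(A0 + A1*z) into the scalar polynomial d0 + d1*z + d2*z^2.
d0 = A0[0, 0] * A0[1, 1] - A0[0, 1] * A0[1, 0]
d1 = (A0[0, 0] * A1[1, 1] + A1[0, 0] * A0[1, 1]
      - A0[0, 1] * A1[1, 0] - A1[0, 1] * A0[1, 0])
d2 = A1[0, 0] * A1[1, 1] - A1[0, 1] * A1[1, 0]

label = delay_class([d0, d1, d2])   # the multichannel filter's delay class
```

For this choice of A0 and A1 both determinant zeros fall outside the unit circle, so the multichannel filter is minimum-delay and has a stable, purely memory-type inverse.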

Energies ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1488
Author(s):  
Damian Trofimowicz ◽  
Tomasz P. Stefański

In this paper, novel methods for the evaluation of digital-filter stability are investigated. The methods are based on phase analysis of a complex function in the characteristic equation of a digital filter, which allows stability to be evaluated even when the characteristic equation is not a polynomial. The methods operate by sampling the unit circle on the complex plane and extracting the phase quadrant of the function value at each sample. By calculating function-phase quadrants, regions in the immediate vicinity of unstable roots (i.e., zeros), called candidate regions, are determined. In these regions, both the real and imaginary parts of the complex-function values change signs. The candidate regions are then explored, and when their sizes are reduced below an assumed accuracy, filter instability is verified with the use of the discrete Cauchy argument principle. Three unit-circle sampling algorithms are benchmarked: the global complex roots and poles finding (GRPF) algorithm, the multimodal genetic algorithm with phase analysis (MGA-WPA), and multimodal particle swarm optimization with phase analysis (MPSO-WPA). The algorithms are compared in four benchmarks for integer- and fractional-order digital filters and systems. Each algorithm demonstrates slightly different properties. GRPF is very fast and efficient; however, it requires an initial number of nodes large enough to detect all the roots. MPSO-WPA prevents missing roots through stochastic space exploration by successive swarms. MGA-WPA converges very effectively by generating a small number of individuals and by limiting the final population size. The conducted research leads to the conclusion that stochastic methods such as MGA-WPA and MPSO-WPA are more likely to detect system instability, especially when they are run multiple times. If computing time is not critical for the user, MPSO-WPA is the right choice, because it is the least likely to miss roots.
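The final verification step mentioned above, the discrete Cauchy argument principle, can be sketched as follows. This is not the paper's GRPF/MGA-WPA/MPSO-WPA machinery, only an illustration: the winding number of the characteristic function around the origin, accumulated along a sampled unit circle, counts the zeros it encloses, and the count works even when the characteristic equation is not a polynomial.

```python
import numpy as np

def winding_number(f, n_samples=4096):
    """Discrete Cauchy argument principle: the winding number of f around
    the origin along |z| = 1 equals the number of zeros of f inside the
    unit circle (for f analytic there, with no zeros on the circle)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples + 1)
    phase = np.unwrap(np.angle(f(np.exp(1j * theta))))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

# Characteristic polynomial of a stable second-order filter: both roots
# (z = 0.2 and z = 0.3) lie inside the unit circle.
stable = lambda z: (z - 0.2) * (z - 0.3)
# Move one root outside the circle and the count drops, exposing instability.
unstable = lambda z: (z - 1.2) * (z - 0.3)

n_stable = winding_number(stable)      # equals the degree: all roots enclosed
n_unstable = winding_number(unstable)  # smaller than the degree: a root escaped
```

Comparing the winding count against the expected number of roots is what flags an unstable filter; the paper's candidate-region search is about doing this reliably and cheaply.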


Author(s):  
E. A. Romaniuk ◽  
V. Yu. Rumiantsev ◽  
Yu. V. Rumiantsev ◽  
A. A. Dziaruhina

Digital filters based on the discrete Fourier transform are used in most microprocessor protections, both domestically produced and foreign. When the input signal frequency deviates from the value to which these filters are tuned, their output signal exhibits amplitude oscillations proportional to the deviation of the signal frequency from the specified one. The article proposes an algorithm for compensating the oscillations of the orthogonal components of the output signals of digital filters implemented on the basis of the discrete Fourier transform when the input signal frequency deviates from the nominal one. A mathematical model of the proposed digital filter with the oscillation-compensation algorithm, together with a signal model for reproducing input effects, is implemented in the MatLab-Simulink dynamic modeling environment. The digital filter model is provided with two channels, viz. a current channel and a voltage channel, which makes it possible to simulate operation in protections that use one or two input quantities, for example, current and distance protection. The functioning of the digital filter model with compensation for fluctuations in its output signal was verified with two types of test inputs, viz. a sinusoidal signal with a frequency of 48–51 Hz (an idealized input), and inputs close to the real secondary signals of measuring current transformers and voltage transformers during short circuits accompanied by a decrease in frequency. Computational experiments with the frequency deviating from the nominal value revealed the presence of undamped oscillations at the output of standard digital Fourier filters and their almost complete absence in the proposed digital filters. This makes it possible to recommend digital filters based on the discrete Fourier transform, supplemented by the algorithm for compensating the amplitude fluctuations of the output signals, for use in microprocessor protection.
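The oscillation the article compensates for can be reproduced with a toy sliding full-cycle Fourier filter. The compensation algorithm itself is the article's contribution and is not reproduced here; the sampling rate and window length below are illustrative assumptions.

```python
import numpy as np

f_nom = 50.0        # nominal power frequency, Hz (assumed)
N = 24              # samples per nominal cycle (assumed)
fs = f_nom * N      # sampling rate

def fourier_phasor(x):
    """Sliding full-cycle Fourier filter: the orthogonal (cosine/sine)
    components over the latest nominal cycle, returned as a complex
    phasor for every window position."""
    n = np.arange(N)
    wc = np.cos(2 * np.pi * n / N)
    ws = np.sin(2 * np.pi * n / N)
    return np.array([(2.0 / N) * (x[k:k + N] @ wc - 1j * (x[k:k + N] @ ws))
                     for k in range(len(x) - N + 1)])

t = np.arange(int(0.5 * fs)) / fs   # half a second of signal

amp_on = np.abs(fourier_phasor(np.sin(2 * np.pi * 50.0 * t)))
amp_off = np.abs(fourier_phasor(np.sin(2 * np.pi * 48.0 * t)))

ripple_on = amp_on.max() - amp_on.min()     # ~0: steady amplitude at 50 Hz
ripple_off = amp_off.max() - amp_off.min()  # undamped oscillation at 48 Hz
```

At the nominal 50 Hz the amplitude estimate is flat; at 48 Hz it oscillates, with a ripple proportional to the frequency deviation, which is exactly the effect the proposed compensation removes.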


1993 ◽  
Vol 17 ◽  
pp. 386-390 ◽  
Author(s):  
Sonia C. Gallegos ◽  
Jeffrey D. Hawkins ◽  
Chiu Fu Cheng

A cloud screening method initially developed to mask cloud-contaminated pixels over the ocean in visible/infrared imagery has been revised and adapted to detect clouds over Arctic regions, with encouraging results. Although the method is quite successful in eliminating very cold clouds, it underestimates low-level clouds. However, this does not appear to interfere with monitoring of ice-related features such as leads or the ice edge in Advanced Very High Resolution Radiometer (AVHRR) scenes. The method uses a multiple-band approach to produce signatures not readily available in single-channel data; an edge detection/dilation technique to locate features in the clouds and to join isolated edges; and a polygon identification technique to remove noise in the form of isolated pixels and to separate clear regions from cloud-contaminated areas. The method has been tested over a limited set of data with consistent results. Initial evaluation of the usefulness of this cloud-detection algorithm in data-fusion experiments indicates its potential for locating cloud-contaminated areas in AVHRR data, which could yield a far superior representation of the ice features if replaced with data from a different sensor such as the Special Sensor Microwave/Imager (SSM/I).
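The multiple-band and dilation steps can be sketched as follows; the bands, thresholds, and synthetic scene are purely illustrative assumptions, not the method's actual values, and the polygon-identification step is omitted.

```python
import numpy as np

def binary_dilate(mask, iters=1):
    """3x3 binary dilation via array shifts (note: np.roll wraps at the
    image borders, which is adequate for this interior-feature sketch)."""
    out = mask.copy()
    for _ in range(iters):
        grown = out.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(out, dy, axis=0), dx, axis=1)
        out = grown
    return out

# Hypothetical two-band scene: visible reflectance and infrared brightness
# temperature (K); a cloud patch is bright in the visible and cold in the IR.
rng = np.random.default_rng(1)
vis = rng.uniform(0.02, 0.10, size=(64, 64))   # dark ocean background
ir = rng.uniform(270.0, 275.0, size=(64, 64))  # warm sea surface
vis[20:30, 20:30] = 0.6                        # bright, cold cloud patch
ir[20:30, 20:30] = 230.0

# Multiple-band test: the joint bright-and-cold condition is a signature
# not available from either channel alone.
seed = (vis > 0.3) & (ir < 250.0)

# Dilation joins isolated cloud edges, as in the edge-detection/dilation step.
mask = binary_dilate(seed, iters=2)
```

In the full method the dilated mask would then be cleaned by polygon identification, removing isolated noisy pixels and separating clear regions from cloud-contaminated ones.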


2012 ◽  
Vol 565 ◽  
pp. 656-661
Author(s):  
Hirotaka Ojima ◽  
Kazutaka Nonomura ◽  
Li Bo Zhou ◽  
Jun Shimizu ◽  
Teppei Onuki

The underlying data form of a wafer is a matrix of length (or height) measurements. In the presence of noise, evaluation parameters are normally biased: the expected values of parameters such as peak-to-valley and GBIR (global backside ideal range) are systematically larger than the "true" values. Correcting and compensating for this bias analytically requires a large population of measurements to estimate both the bias and its uncertainty. In this study, the approach to obtaining the true value is instead to extract a "true" profile by filtering the noise from the measured data. In a previous paper, a digital filter based on the wavelet transform (WT) was proposed and shown to remove noise efficiently; however, it introduced the pseudo-Gibbs effect. We therefore propose a digital filter based on a new total-variation (TV) algorithm. In this paper, the new TV algorithm is presented, and the resulting digital filter is shown to remove noise without the pseudo-Gibbs effect. The WT and new-TV digital filters are applied to sample data from an actual measurement system to investigate their noise-reduction performance.
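As a minimal sketch of TV filtering (a standard formulation, not the paper's specific new algorithm), the following denoises a step profile with the lagged-diffusivity fixed-point iteration for the total-variation functional; unlike wavelet shrinkage, it preserves the step without ringing.

```python
import numpy as np

def tv_denoise_1d(y, lam=1.0, n_iter=30, eps=1e-8):
    """1-D total-variation denoising by the lagged-diffusivity fixed point:
    repeatedly solve (I + lam * D^T W D) x = y, where D is the forward
    difference operator and W reweights each difference by 1/|gradient|.
    TV keeps step edges sharp instead of ringing around them."""
    n = len(y)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # (n-1) x n difference matrix
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(np.diff(x) ** 2 + eps)
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)
    return x

# Step profile (an idealized wafer height trace) plus measurement noise.
rng = np.random.default_rng(2)
true = np.concatenate([np.zeros(100), np.ones(100)])
noisy = true + 0.1 * rng.normal(size=200)
denoised = tv_denoise_1d(noisy)   # flat segments, sharp step, no ringing
```

The absence of overshoot at the step is the point: it is exactly the pseudo-Gibbs artifact that the wavelet-based filter introduced and that TV filtering avoids.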


2016 ◽  
Vol 26 (02) ◽  
pp. 1750033
Author(s):  
Tian-Bo Deng

Guaranteeing stability is one of the most critical issues in designing a variable recursive digital filter. In this paper, we first present an odd-order recursive variable model (transfer function) for designing an odd-order variable-magnitude (VM) digital filter, and then replace the original denominator coefficients of the odd-order transfer function with a set of new parameters. These new parameters can take arbitrary values without incurring instability of the designed odd-order VM filter. To make the VM filter coefficients variable, we express all of them as polynomial functions of the tuning parameter, which involves two phases. The first phase designs a set of recursive digital filters with fixed coefficients (constant filters), and the second phase uses a curve-fitting scheme to represent each coefficient as a polynomial function. As a result, the VM filter coefficients become variable, and the parameter-substitution-based denominator coefficients ensure filter stability. This is the most important contribution of the parameter-substitution-based design scheme. A fifth-order example is used to verify the stability guarantee as well as the design accuracy of the obtained fifth-order VM filter.
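The idea of a parameter substitution that makes stability unconditional can be illustrated for a second-order denominator section; the tanh mapping below is a standard stability-triangle parameterization used here as an assumption, not necessarily the substitution of the paper.

```python
import numpy as np

def stable_biquad_denominator(t1, t2):
    """Map two unconstrained real parameters onto denominator coefficients
    (a1, a2) of 1 + a1*z^-1 + a2*z^-2 inside the stability triangle
    |a2| < 1, |a1| < 1 + a2, so the poles stay strictly inside the unit
    circle for any real parameter values."""
    a2 = np.tanh(t2)
    a1 = np.tanh(t1) * (1.0 + a2)
    return a1, a2

# Sweep some arbitrary parameter values: the resulting poles never leave
# the unit circle, so no stability check is needed while tuning.
max_pole_mag = 0.0
for t1, t2 in [(0.0, 0.0), (2.0, -2.0), (-3.0, 3.0), (1.5, 0.5)]:
    a1, a2 = stable_biquad_denominator(t1, t2)
    poles = np.roots([1.0, a1, a2])
    max_pole_mag = max(max_pole_mag, float(np.abs(poles).max()))
```

In the two-phase scheme described above, each unconstrained parameter (rather than the raw coefficient) would then be curve-fitted as a polynomial in the tuning parameter, e.g. with np.polyfit, preserving the stability guarantee at every tuned value.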


2020 ◽  
Vol 10 (24) ◽  
pp. 9052
Author(s):  
Pavel Lyakhov ◽  
Maria Valueva ◽  
Georgii Valuev ◽  
Nikolai Nagornov

This paper proposes a new digital filter architecture based on a modified multiply-accumulate (MAC) unit called the truncated MAC (TMAC), with the aim of increasing the performance of digital filtering. The paper provides a theoretical analysis of the proposed TMAC units and their hardware simulation. The theoretical analysis demonstrated that replacing conventional MAC units with TMAC units as the basis for implementing digital filters can theoretically reduce the filtering time by 29.86%. Hardware simulation showed that TMAC units increased the performance of digital filters by up to 10.89% compared to digital filters using conventional MAC units, at the cost of increased hardware resources. The results of this research can be used in digital signal processing to solve practical problems such as noise reduction, amplification and suppression of parts of the frequency spectrum, interpolation, decimation, equalization, and many others.
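The trade-off behind truncated multiply-accumulation can be sketched in fixed point: dropping low-order bits of each partial product before accumulation allows shorter accumulator adders at the cost of a bounded truncation error. The bit widths below are illustrative assumptions, not the TMAC design of the paper.

```python
import numpy as np

FRAC = 12   # fractional bits of the quantized coefficients (assumed)
TRUNC = 6   # low-order product bits dropped by the truncated MAC (assumed)

def fir_full_mac(x_q, h_q):
    """Reference FIR: every partial product is accumulated at full
    precision; the accumulator is rescaled once at the end."""
    return np.convolve(x_q, h_q) >> FRAC

def fir_truncated_mac(x_q, h_q):
    """Truncated MAC: the TRUNC low-order bits of each partial product
    are discarded before accumulation, so the hardware adders can be
    shorter, at the price of a small, bounded truncation error."""
    y = np.zeros(len(x_q) + len(h_q) - 1, dtype=np.int64)
    for k, h in enumerate(h_q):
        y[k : k + len(x_q)] += (h * x_q) >> TRUNC
    return y >> (FRAC - TRUNC)

rng = np.random.default_rng(3)
win = np.hamming(16)
h_q = np.round(win / win.sum() * (1 << FRAC)).astype(np.int64)  # lowpass taps
x_q = rng.integers(-2048, 2048, size=256).astype(np.int64)      # 12-bit input

y_ref = fir_full_mac(x_q, h_q)
y_tr = fir_truncated_mac(x_q, h_q)
max_err = int(np.max(np.abs(y_ref - y_tr)))   # bounded truncation error
```

With these widths the per-tap truncation errors sum to less than one output least-significant bit, illustrating why truncated units can be faster and smaller while staying accurate.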


Geophysics ◽  
1984 ◽  
Vol 49 (9) ◽  
pp. 1559-1560
Author(s):  
Mark Lane ◽  
Tad Ulrych

The recent note by Jin and Rogers (1983) presented examples of the failure of the homomorphic transform to invert properly. Since this transform is not only of interest in geophysics, but has also found applications in other fields (Oppenheim and Schafer, 1975), these results are of concern. We consequently attempted to reproduce Jin and Rogers’ results. We failed to do so. In fact, in our experience, the transform has always inverted successfully. Our results using the first example of Jin and Rogers are shown in Figure 1. We used the algorithm of Tribolet (1977) with a modified Goertzel algorithm (Bonzanigo, 1978) for phase unwrapping. The figure is arranged as in Jin and Rogers’ paper. Figure 1a shows the input: impulses separated by 20 samples, of magnitude 2000 and 1999. Figure 1b shows its complex cepstrum. We have set the zero‐quefrency point to zero since this represents a scale factor and can dominate the plotting. Note the minimum delay cepstrum with a small amount of aliasing. The sequence returned by the inverse transform is shown in Figure 1c, demonstrating a successful inversion. The effect of noise is also shown. Noise with a standard deviation of 5 was added to the sequence of Figure 1a. This is shown in Figure 1d. Note that our noise realization is undoubtedly different from that of Jin and Rogers. The noise has changed the relative magnitude of the original spikes such that they are maximum delay. This is reflected in the cepstrum (Figure 1e). Figure 1f shows the returned sequence, again demonstrating the successful inversion.
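The round trip described above can be reproduced with a few FFTs. This is a plain NumPy sketch of the complex cepstrum (log spectrum with unwrapped phase) and its inverse, applied to the two-impulse example; it does not implement Tribolet's adaptive unwrapping, since simple unwrapping suffices for this minimum-delay input.

```python
import numpy as np

def complex_cepstrum(x, n_fft=1024):
    """Complex cepstrum via the FFT with phase unwrapping (cf. Oppenheim
    and Schafer, 1975); n_fft large relative to len(x) limits cepstral
    aliasing."""
    X = np.fft.fft(x, n_fft)
    log_X = np.log(np.abs(X)) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

def inverse_complex_cepstrum(xhat):
    """Invert the homomorphic transform: exponentiate the spectrum of the
    cepstrum, then transform back to the time domain."""
    return np.fft.ifft(np.exp(np.fft.fft(xhat))).real

# The first example above: impulses 20 samples apart with magnitudes
# 2000 and 1999 -- a minimum-delay pair.
x = np.zeros(64)
x[0], x[20] = 2000.0, 1999.0

xhat = complex_cepstrum(x)            # minimum-delay input: causal cepstrum
x_rec = inverse_complex_cepstrum(xhat)[:len(x)]
err = np.max(np.abs(x_rec - x))       # successful round-trip inversion
```

For this minimum-delay pair the cepstrum is essentially causal and the inversion returns the original sequence, consistent with the successful inversions reported above.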


1979 ◽  
Vol 23 ◽  
pp. 125-131
Author(s):  
L. A. Rayburn

One of the uncertain aspects in the analysis of x-ray spectra is the determination of the proper background to subtract from the raw data. In those cases where the background is a smoothly varying function of the x-ray energy, the application of a digital filter to the raw data will effectively remove the background, leaving only the filtered peak information. These filtered peaks can then be fit using a non-linear least-squares method in conjunction with a suitably chosen mathematical model of the peak structure.
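A background-removing digital filter of the kind alluded to can be sketched as follows; the kernel shape and the synthetic spectrum are illustrative assumptions. A zero-sum, symmetric kernel annihilates any locally linear background while passing narrow peaks.

```python
import numpy as np

def zero_area_filter(spectrum, w=4):
    """Convolve with a zero-sum, symmetric 'top-hat' kernel: a positive
    centre of width w flanked by negative lobes of half the amplitude.
    Any locally linear background sums to zero under the kernel, while
    narrow peaks pass through as filtered peak information."""
    kernel = np.concatenate([-0.5 * np.ones(w), np.ones(w), -0.5 * np.ones(w)])
    return np.convolve(spectrum, kernel, mode="same")

# Synthetic x-ray spectrum: one Gaussian peak on a smooth linear background.
ch = np.arange(400)
background = 50.0 + 0.1 * ch
peak = 200.0 * np.exp(-0.5 * ((ch - 180) / 3.0) ** 2)
raw = background + peak

filtered = zero_area_filter(raw)
# Away from the peak the background cancels; the peak survives near ch 180.
```

The surviving filtered peak can then be fit by non-linear least squares against a peak-shape model, as described above, without any explicit background estimate.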

