A Pipeline for Adaptive Filtering and Transformation of Noisy Left-Arm ECG to Its Surrogate Chest Signal

Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 866
Author(s):  
Farzad Mohaddes ◽  
Rafael da Silva ◽  
Fatma Akbulut ◽  
Yilu Zhou ◽  
Akhilesh Tanneeru ◽  
...  

The performance of a low-power single-lead armband in generating electrocardiogram (ECG) signals from the chest and left arm was validated in real time against a BIOPAC MP160 benchtop system. The filtering performance of three adaptive filtering algorithms, namely least mean squares (LMS), recursive least squares (RLS), and extended kernel RLS (EKRLS), in removing white (W), power line interference (PLI), electrode movement (EM), muscle artifact (MA), and baseline wander (BLW) noise from the chest and left-arm ECG was evaluated with respect to the mean squared error (MSE). The filter parameters of each algorithm were tuned to ensure optimal filtering performance. LMS was found to be the most effective adaptive filtering algorithm, removing all noise types with minimum MSE. However, for removing PLI with a maximal signal-to-noise ratio (SNR), RLS showed lower MSE values than LMS when the step size was set to 1 × 10⁻⁵. We propose a transformation framework to convert the denoised left-arm and chest ECG signals to low-MSE, high-SNR surrogate chest signals. With wide applications in wearable technologies, the proposed pipeline was found to be capable of establishing a baseline for comparing left-arm signals with original chest signals, a step toward using the left-arm ECG in clinical cardiac evaluations.
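As an illustration of the adaptive noise cancellation evaluated in the paper, here is a minimal LMS sketch in NumPy; the tap count, step size, and signal names are illustrative choices, not the paper's settings:

```python
import numpy as np

def lms_filter(d, x, n_taps=8, mu=1e-3):
    """Adaptive LMS noise canceller (illustrative sketch).

    d : noisy primary signal (e.g. left-arm ECG)
    x : reference input correlated with the interference
    Returns the error signal e, which serves as the denoised estimate.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ u                   # filter output (noise estimate)
        e[n] = d[n] - y             # error = denoised signal
        w += 2 * mu * e[n] * u      # steepest-descent weight update
    return e
```

A typical use is to feed the noisy ECG as `d` and a correlated noise reference (e.g. a power-line pickup electrode) as `x`.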

Author(s):  
SeungJae Lee ◽  
Soo-Yong Kim

We propose an electrocardiogram (ECG) signal-based algorithm to estimate the respiratory rate, a significant indicator of a patient's physiological state. Consecutive ECG signals carry information about respiration because inhalation and exhalation vary the transthoracic impedance. The proposed algorithm extracts the respiration-related signal by finding the commonality between the frequency and amplitude features in the ECG pulse train. The respiration rate can then be calculated from the principal components obtained by singular spectrum analysis. We achieved a root-mean-squared error of 1.7569 breaths per minute and a standard deviation of 1.7517 with a 32-second signal window on the Capnobase dataset, a notable improvement over conventional autoregressive-model-based estimation methods.
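The paper's estimator is built on singular spectrum analysis; as a simpler illustration of the underlying idea (respiration modulates beat-to-beat ECG amplitude), the sketch below recovers a breathing rate from a series of R-peak amplitudes by spectral peak picking. The resampling rate and the 0.1–0.5 Hz search band are assumptions, not the authors' method:

```python
import numpy as np

def resp_rate_from_beat_amplitudes(amps, beat_times):
    """Estimate respiration rate (breaths/min) from an ECG-derived
    respiration series: R-peak amplitudes sampled at beat times."""
    fs = 4.0                                  # uniform resampling rate, Hz
    t = np.arange(beat_times[0], beat_times[-1], 1 / fs)
    edr = np.interp(t, beat_times, amps)      # resample to a uniform grid
    edr -= edr.mean()                         # remove DC before the FFT
    spec = np.abs(np.fft.rfft(edr))
    freqs = np.fft.rfftfreq(len(edr), 1 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)    # plausible breathing band
    f_resp = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_resp
```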


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Ahmad Abdo ◽  
Claude D’Amours

In this article, we analyze the performance of adaptive filtering in the context of dual-polarization coherent optical flexible bit-rate transceivers. We investigate the ability of different adaptive algorithms to track fast state-of-polarization (SOP) transients in the presence of colored noise. Colored noise exists due to the concatenation of Wavelength Selective Switches (WSSs) and polarization dependent loss (PDL) which can be considered as spatially dependent noise. We consider the use of different modulation formats, and the practical limitation of error signal feedback delay in decision-directed adaptive filters is also taken into account. The back-to-back required signal-to-noise ratio (RSNR) penalty that can be tolerated determines the maximum SOP rate of change that can be tracked by the adaptive filters as well as the filter’s adaptive step size. We show that the recursive least squares algorithm, using the covariance matrix as an aggressive “step size,” has a much better convergence speed compared to the least mean squares (LMS) and normalized LMS (NLMS) algorithms in the presence of colored noise in the fiber. However, the three algorithms have similar tracking capabilities in the absence of colored noise.
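For contrast with a fixed scalar step size, a minimal RLS update is sketched below; the inverse-covariance matrix P acts as the direction-dependent "step size" the article refers to, which is why RLS converges faster than (N)LMS under colored noise. Variable names and the forgetting factor are illustrative:

```python
import numpy as np

def rls_step(w, P, u, d, lam=0.99):
    """One recursive-least-squares update (illustrative sketch).

    w : current weight vector     P : inverse-covariance estimate
    u : input (regressor) vector  d : desired sample
    """
    Pu = P @ u
    k = Pu / (lam + u @ Pu)          # Kalman-style gain vector
    e = d - w @ u                    # a-priori estimation error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Pu)) / lam  # inverse-covariance update
    return w, P, e
```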


Author(s):  
Yunfeng Wu ◽  
Rangaraj M. Rangayyan

The authors propose an unbiased linear adaptive filter (ULAF) to eliminate high-frequency random noise in electrocardiographic (ECG) signals. The ULAF does not contain a bias in its summation unit, and the filter coefficients are normalized. During the adaptation process, the normalized coefficients are updated with the steepest-descent algorithm in order to achieve efficient filtering of noisy ECG signals. The authors tested the ULAF with ECG signals recorded from 16 subjects, and compared its performance with that of the least-mean-squares (LMS) and recursive-least-squares (RLS) adaptive filters. The filtering performance was quantified in terms of the root-mean-squared error (RMSE), normalized correlation coefficient (NCC), and filtered noise entropy (FNE). A template derived from each ECG signal was used as the reference to compute the measures of filtering performance. The results indicated that the ULAF was able to provide noise-free ECG signals with an average RMSE of 0.0287, lower than the second-best RMSE (0.0365) obtained with the LMS filter. With respect to waveform fidelity, the proposed ULAF provided the highest average NCC (0.9964) among the three filters studied. In addition, the ULAF removed more noise, as measured by FNE, than the LMS and RLS filters in most of the ECG signals tested.
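The paper's exact ULAF update is not reproduced here; the sketch below only illustrates the stated idea of a bias-free filter whose coefficients are renormalized (to sum to one) after each steepest-descent step, using a template as the adaptation reference:

```python
import numpy as np

def ulaf_filter(x, ref, n_taps=5, mu=0.01):
    """Bias-free adaptive filter with normalized coefficients
    (illustrative sketch of the ULAF idea, not the paper's algorithm).

    x   : noisy input signal
    ref : template used as the adaptation reference
    """
    w = np.ones(n_taps) / n_taps             # start as a moving average
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]    # current and past samples
        y[n] = w @ u
        e = ref[n] - y[n]
        w += mu * e * u                      # steepest-descent update
        w /= w.sum()                         # keep coefficients summing to 1
    return y
```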


2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three methods of image enhancement, GHE, LHE, and the proposed DSIHE, that improve the visual quality of images. A comparative evaluation of these techniques is carried out using objective and subjective image quality measures, e.g., peak signal-to-noise ratio (PSNR), entropy (H), and mean squared error (MSE), to quantify the quality of the enhanced gray-scale images. For gray-level images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image to the middle of the gray-level range, limiting their suitability for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method overcomes this disadvantage, as it tends to preserve both brightness and contrast. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error than the global and local histogram-based equalization methods.
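A minimal sketch of the DSIHE idea: split the image at its median gray level and equalize each sub-image into its own half of the output range, which preserves mean brightness better than global HE. The rank-based empirical-CDF implementation is an illustrative choice, not the paper's:

```python
import numpy as np

def dsihe(img):
    """Dualistic sub-image histogram equalization (illustrative sketch)."""
    med = np.median(img)
    out = np.empty_like(img, dtype=np.uint8)

    def equalize(mask, lo, hi):
        vals = img[mask].astype(np.float64)
        if vals.size == 0:
            return
        # empirical CDF via ranks, stretched onto [lo, hi]
        ranks = np.searchsorted(np.sort(vals), vals, side="right")
        out[mask] = lo + (hi - lo) * ranks / vals.size

    equalize(img <= med, 0, int(med))        # lower sub-image -> [0, med]
    equalize(img > med, int(med) + 1, 255)   # upper sub-image -> (med, 255]
    return out
```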


2012 ◽  
Vol 16 (S3) ◽  
pp. 355-375 ◽  
Author(s):  
Olena Kostyshyna

An adaptive step-size algorithm [Kushner and Yin, Stochastic Approximation and Recursive Algorithms and Applications, 2nd ed., New York: Springer-Verlag (2003)] is used to model time-varying learning, and its performance is illustrated in the environment of Marcet and Nicolini [American Economic Review 93 (2003), 1476–1498]. The resulting model gives qualitatively similar results to those of Marcet and Nicolini, and performs quantitatively somewhat better, based on the criterion of mean squared error. The model generates increasing gain during hyperinflations and decreasing gain after hyperinflations end, which matches findings in the data. An agent using this model behaves cautiously when faced with sudden changes in policy, and is able to recognize a regime change after acquiring sufficient information.
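A toy version of the qualitative mechanism described above, with a gain that jumps when forecast errors are large (tracking a regime change) and decays otherwise, can be sketched as follows; the thresholds and bounds are invented for illustration and this is not the Kushner–Yin algorithm itself:

```python
def adaptive_gain_estimate(data, g_min=0.01, g_max=0.2, thresh=1.0):
    """Recursive estimation with a time-varying gain (toy sketch)."""
    belief, gain, t = 0.0, g_max, 1
    gains = []
    for y in data:
        err = y - belief
        belief += gain * err            # recursive belief update
        t += 1
        if abs(err) > thresh:
            gain = g_max                # large surprise: boost the gain
        else:
            gain = max(g_min, 1.0 / t)  # calm period: decreasing gain
        gains.append(gain)
    return belief, gains
```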


2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Wenpeng Wei ◽  
Hussein Dourra ◽  
Guoming Zhu

Abstract The transfer case clutch is crucial in determining the traction torque distribution between the front and rear tires of four-wheel-drive (4WD) vehicles. Estimating the time-varying clutch surface friction coefficient is critical for traction torque control, since it is proportional to the clutch output torque. This paper therefore proposes a real-time adaptive lookup table strategy to provide the time-varying clutch surface friction coefficient. Specifically, the clutch-parameter-dependent (e.g., clutch output torque and clutch touchpoint distance) friction coefficient is first estimated with available low-cost vehicle sensors (e.g., wheel speed and vehicle acceleration); a clutch-parameter-independent representation of the friction coefficient is then maintained in a one-dimensional lookup table whose nodes are adaptively updated by a fast recursive least-squares (RLS) algorithm. The effectiveness of the adaptive lookup table is demonstrated by comparing the clutch torque estimated from the table with that estimated from vehicle dynamics, achieving a 14.8 N·m absolute mean squared error (AMSE) and a 2.66% relative mean squared error (RMSE).
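The node-update idea can be sketched as an RLS estimate whose regressor is the pair of linear-interpolation weights at the operating point, so each measurement adjusts only the two neighboring table nodes. This illustrates the general technique, not the authors' implementation:

```python
import numpy as np

def lut_update(nodes, values, P, x, y_meas, lam=0.98):
    """One adaptive lookup-table update (illustrative sketch).

    nodes  : fixed 1-D grid of operating points
    values : current table entries (e.g. friction coefficients)
    P      : RLS inverse-covariance matrix over the table entries
    """
    i = int(np.clip(np.searchsorted(nodes, x) - 1, 0, len(nodes) - 2))
    a = (x - nodes[i]) / (nodes[i + 1] - nodes[i])
    phi = np.zeros(len(values))
    phi[i], phi[i + 1] = 1.0 - a, a          # interpolation weights
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # RLS gain
    e = y_meas - values @ phi                # table prediction error
    return values + k * e, (P - np.outer(k, Pphi)) / lam
```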


2018 ◽  
pp. 1940-1954
Author(s):  
Suma K. V. ◽  
Bheemsain Rao

Reduction in capillary density in the nailfold region is frequently observed in patients suffering from hypertension (Feng J, 2010). Loss of capillaries results in avascular regions, which have been well characterized in many diseases (Mariusz, 2009). Nailfold capillary images need to be pre-processed so that noise can be removed, the background can be separated, and the useful parameters can be computed using image processing algorithms. Smoothing filters such as Gaussian, median, and adaptive median filters are compared using mean squared error and peak signal-to-noise ratio. Otsu's thresholding is employed for segmentation. A connected component labeling algorithm is applied to calculate the number of capillaries per mm. This capillary density is used to identify rarefaction of capillaries and also the severity of rarefaction. Avascular regions are detected by determining the Euclidean distance between the peaks of adjacent capillaries. Detection of capillary rarefaction and avascular regions can be used as a diagnostic tool for hypertension and various other diseases.
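The segmentation steps (Otsu's threshold followed by connected-component counting) can be sketched as follows; converting the component count into capillaries per mm would additionally require the image's spatial calibration, which is omitted here:

```python
import numpy as np
from scipy import ndimage

def count_capillaries(img):
    """Otsu threshold + connected-component count (illustrative sketch)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class probability
    mu = np.cumsum(p * np.arange(256))       # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    t = int(np.nanargmax(sigma_b))           # Otsu threshold
    mask = img > t
    _, n = ndimage.label(mask)               # connected components
    return t, n
```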


2016 ◽  
Vol 5 (2) ◽  
pp. 73-86
Author(s):  
Suma K. V. ◽  
Bheemsain Rao

Reduction in capillary density in the nailfold region is frequently observed in patients suffering from hypertension (Feng J, 2010). Loss of capillaries results in avascular regions, which have been well characterized in many diseases (Mariusz, 2009). Nailfold capillary images need to be pre-processed so that noise can be removed, the background can be separated, and the useful parameters can be computed using image processing algorithms. Smoothing filters such as Gaussian, median, and adaptive median filters are compared using mean squared error and peak signal-to-noise ratio. Otsu's thresholding is employed for segmentation. A connected component labeling algorithm is applied to calculate the number of capillaries per mm. This capillary density is used to identify rarefaction of capillaries and also the severity of rarefaction. Avascular regions are detected by determining the Euclidean distance between the peaks of adjacent capillaries. Detection of capillary rarefaction and avascular regions can be used as a diagnostic tool for hypertension and various other diseases.


2015 ◽  
Vol 8 (4) ◽  
pp. 32
Author(s):  
Sabarish Sridhar

Steganography, watermarking, and encryption are widely used in image processing and communication. A common practice is to use them independently or in combination of two, e.g., data hiding with encryption, or steganography alone. This paper aims to combine the features of watermarking, image encryption, and image steganography to provide reliable and secure data transmission. The basics of data hiding and encryption are explained. The first step involves inserting the required watermark into the image at the optimum bit plane. The second step is to encrypt the image using RSA. The final step involves obtaining a cover image and hiding the encrypted image within it. A set of metrics is used to evaluate the effectiveness of the digital watermarking, including mean squared error, peak signal-to-noise ratio, and feature similarity.
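The hiding step can be illustrated with a simple bit-plane embedding; this is a generic LSB sketch, not the paper's exact scheme, and the watermarking and RSA stages are assumed to have run beforehand:

```python
import numpy as np

def embed_lsb(cover, payload_bits, plane=0):
    """Hide a bit sequence in a chosen bit plane of a gray-scale cover
    image (generic sketch of the hiding step)."""
    flat = cover.flatten().astype(np.uint8)           # copy of the cover
    bits = np.asarray(payload_bits, dtype=np.uint8)
    flat[: len(bits)] &= np.uint8(~(1 << plane) & 0xFF)  # clear the plane
    flat[: len(bits)] |= bits << plane                   # write the bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits, plane=0):
    """Recover the first n_bits from the chosen bit plane."""
    return (stego.flatten()[:n_bits] >> plane) & 1
```

Embedding in plane 0 changes each carrier pixel by at most 1 gray level, which is why the distortion metrics listed above stay small.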


Author(s):  
SONALI R. MAHAKALE ◽  
NILESHSINGH V. THAKUR

This paper presents a comparative study of research work in the field of image filtering. Different noises affect an image in different ways. Although various denoising solutions are available, a detailed study of the research is required in order to design a filter that satisfies the desired aspects while handling most image filtering issues. An output image should be judged on the basis of image quality metrics such as peak signal-to-noise ratio (PSNR), mean squared error (MSE), mean absolute error (MAE), and execution time.
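The metrics listed above can be computed directly; the `max_val = 255` default assumes 8-bit images:

```python
import numpy as np

def image_metrics(ref, out, max_val=255.0):
    """MSE, MAE, and PSNR between a reference image and a filtered one."""
    ref = ref.astype(np.float64)
    out = out.astype(np.float64)
    mse = np.mean((ref - out) ** 2)
    mae = np.mean(np.abs(ref - out))
    psnr = float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)
    return mse, mae, psnr
```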

