Towards to Optimal Wavelet Denoising Scheme—A Novel Spatial and Volumetric Mapping of Wavelet-Based Biomedical Data Smoothing

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5301
Author(s):  
Ladislav Stanke ◽  
Jan Kubicek ◽  
Dominik Vilimek ◽  
Marek Penhaker ◽  
Martin Cerny ◽  
...  

Wavelet transformation is one of the most frequently used procedures for data denoising, smoothing, decomposition, feature extraction, and related tasks. Performing such tasks requires selecting appropriate wavelet settings, including the particular wavelet, the decomposition level and other parameters that shape the wavelet transformation outputs. Selecting these parameters is challenging because no versatile tool exists for recommending suitable wavelet settings. In this paper, we propose a versatile recommendation system for predicting a suitable wavelet selection for data smoothing. The proposed system generates a spatial response matrix for the selected wavelets and decomposition levels; this response maps selected evaluation parameters that determine the efficacy of the wavelet settings. The system also tracks the dynamic influence of noise on wavelet efficacy by means of a volumetric response. We test the system on computed tomography (CT) and magnetic resonance (MR) image data and on EMG signals, mostly of the musculoskeletal system, to demonstrate its usability for clinical data processing. The experimental evaluation uses the parameters MSE (mean squared error), ED (Euclidean distance) and Corr (correlation index). We also provide a statistical analysis of the results based on the Mann-Whitney test, which reveals statistically significant differences between individual wavelets for data corrupted with salt-and-pepper and Gaussian noise.
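A minimal sketch of the kind of wavelet smoothing and efficacy evaluation described above. The wavelet, decomposition level and thresholding rule are illustrative choices, not the settings recommended by the system; only the three evaluation measures (MSE, ED, Corr) follow the abstract.

```python
# Illustrative wavelet smoothing plus the MSE / ED / Corr efficacy measures;
# wavelet, level and threshold rule are assumptions, not the paper's recommendation.
import numpy as np
import pywt

def wavelet_smooth(signal, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate (assumed rule)
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def evaluate(clean, denoised):
    """Mean squared error, Euclidean distance and correlation index."""
    mse = np.mean((clean - denoised) ** 2)
    ed = np.linalg.norm(clean - denoised)
    corr = np.corrcoef(clean, denoised)[0, 1]
    return mse, ed, corr

if __name__ == "__main__":
    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t)
    noisy = clean + 0.2 * np.random.randn(t.size)             # placeholder noisy signal
    print(evaluate(clean, wavelet_smooth(noisy)))
```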

2014 ◽  
Vol 16 (3) ◽  
pp. 150-169 ◽  
Author(s):  
Kamran Munir ◽  
Saad Liaquat Kiani ◽  
Khawar Hasham ◽  
Richard McClatchey ◽  
Andrew Branson ◽  
...  

Purpose – The purpose of this paper is to provide an integrated analysis base to facilitate computational neuroscience experiments, following a user-led approach to provide access to the integrated neuroscience data and to enable the analyses demanded by the biomedical research community. Design/methodology/approach – The design and development of the N4U analysis base and related information services address the existing research and practical challenges by offering an integrated medical data analysis environment with the building blocks neuroscientists need to optimally exploit neuroscience workflows, large image data sets and algorithms in conducting analyses. Findings – The provision of an integrated e-science environment for computational neuroimaging can enhance the prospects, speed and utility of the data analysis process for neurodegenerative diseases. Originality/value – The N4U analysis base enables biomedical data analyses by indexing and interlinking the neuroimaging and clinical study data sets stored on the grid infrastructure, together with algorithms, scientific workflow definitions and their associated provenance information.


2016 ◽  
Vol 138 (12) ◽  
Author(s):  
Alisdair R. MacLeod ◽  
Hannah Rose ◽  
Harinderjit S. Gill

Synthetic biomechanical test specimens are frequently used for preclinical evaluation of implant performance, often in combination with numerical modeling, such as finite-element (FE) analysis. Commercial and freely available FE packages are widely used, with three FE packages in particular gaining popularity: abaqus (Dassault Systèmes, Johnston, RI), ansys (ANSYS, Inc., Canonsburg, PA), and febio (University of Utah, Salt Lake City, UT). To the best of our knowledge, no study has yet compared these three commonly used solvers. Additionally, despite the femur being the most extensively studied bone in the body, no freely available validated model exists. The primary aim of the study was to compare mesh convergence and strain prediction between the three solvers (abaqus, ansys, and febio) and to provide validated open-source models of a fourth-generation composite femur for use with all three FE packages. Second, we evaluated the geometric variability around the femoral neck region of the composite femurs. Experimental testing was conducted using fourth-generation Sawbones® composite femurs instrumented with strain gauges at four locations. A generic FE model and four specimen-specific FE models were created from CT scans. The study found that the three solvers produced excellent agreement, with strain predictions within an average of 3.0% for all solvers (r2 > 0.99) and 1.4% for the two commercial codes. The average root mean squared error against the experimental results was 134.5% (r2 = 0.29) for the generic model and 13.8% (r2 = 0.96) for the specimen-specific models. It was found that composite femurs had variations in cortical thickness around the neck of the femur of up to 48.4%. For the first time, an experimentally validated finite-element model of the femur is presented for use in all three solvers. This model is freely available online along with all the supporting validation data.
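A hedged sketch of the agreement metrics mentioned above (percentage RMSE against experimental strains and r2); how the percentage is normalized is an assumption here, and the strain values are placeholders, not data from the study.

```python
# Percentage RMSE and r^2 between FE-predicted and gauge-measured strains;
# the normalization by mean absolute measured strain is an assumed convention.
import numpy as np

def rmse_percent(predicted, measured):
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    return 100.0 * rmse / np.mean(np.abs(measured))

def r_squared(predicted, measured):
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot

measured = np.array([812.0, -455.0, 623.0, -390.0])    # microstrain, illustrative only
predicted = np.array([798.0, -470.0, 610.0, -402.0])
print(rmse_percent(predicted, measured), r_squared(predicted, measured))
```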


Author(s):  
Obed Appiah ◽  
James Benjamin Hayfron-Acquah ◽  
Michael Asante

For computer vision systems to effectively perform diagnosis, identification, tracking, monitoring and surveillance, image data must be devoid of noise. Various types of noise, such as salt-and-pepper (impulse), Gaussian, shot, quantization, anisotropic, and periodic noise, corrupt images and make it difficult to extract relevant information from them. This has led to many proposed denoising algorithms. Among them, the median filter has been successful in handling salt-and-pepper noise while preserving edges in images. However, its moderate to high running time and poor performance when images are corrupted with high noise densities have led to various proposed modifications of the median filter. The challenge observed with all these modifications is the trade-off between efficient running time and the quality of the denoised images. This paper proposes an algorithm that delivers quality denoised images in low running time. Two state-of-the-art algorithms are combined into one, and a technique called Mid-Value-Decision-Median is introduced into the proposed algorithm to deliver high-quality denoised images in real time. The proposed algorithm, the High-Performance Modified Decision Based Median Filter (HPMDBMF), runs about 200 times faster than the state-of-the-art Modified Decision Based Median Filter (MDBMF) while generating equivalent output.
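A minimal sketch of the decision-based median filtering idea the paper builds on: only pixels flagged as impulse noise are replaced, and the median is taken over uncorrupted neighbours. This is a generic illustration, not the authors' HPMDBMF or MDBMF implementation, and the explicit loops are written for clarity rather than speed.

```python
# Generic decision-based median filter for salt-and-pepper noise (illustrative only).
import numpy as np

def decision_based_median(img, window=3):
    """Replace only pixels that equal 0 or 255 with the median of the
    uncorrupted neighbours inside the window."""
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] in (0, 255):                       # decision step: likely impulse
                patch = padded[i:i + window, j:j + window].ravel()
                good = patch[(patch != 0) & (patch != 255)]
                # fall back to the full-patch median if every neighbour is corrupted
                out[i, j] = np.median(good) if good.size else np.median(patch)
    return out
```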


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Tuan Anh Tran ◽  
Tien Dung Cao ◽  
Vu-Khanh Tran ◽  
◽  

Biomedical image processing, such as human organ segmentation and disease analysis, is a modern field in medicine development and patient treatment. Beyond the many kinds of image formats, the diversity and complexity of biomedical data remain a major issue for researchers in their applications. Deep learning offers successful and effective solutions to this problem. Unet and LSTM are two general approaches that cover most cases of medical image data. While Unet helps a machine learn from each image together with its labelled information, LSTM helps to remember states across many slices of images over time. Unet gives us the segmentation of tumors and other abnormalities in biomedical images, and the LSTM then supports an effective diagnosis of the patient's disease. In this paper, we show several scenarios of using Unet and LSTM to segment and analyze many kinds of human organ images, with results for brain, retinal, skin, lung and breast segmentation.
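A minimal U-Net-style segmentation model in PyTorch, given as an illustrative sketch of the encoder-decoder-with-skip-connection idea the paper relies on; the layer sizes and the single down/up level are assumptions, not the authors' architecture, and the LSTM stage is omitted.

```python
# Tiny U-Net-like network: one encoder level, one decoder level, one skip connection.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)                  # 16 skip channels + 16 upsampled
        self.head = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                                # features kept for the skip connection
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return torch.sigmoid(self.head(d))             # per-pixel segmentation mask

mask = TinyUNet()(torch.randn(1, 1, 64, 64))           # -> tensor of shape (1, 1, 64, 64)
```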


In this paper, we analyze the performance of a Recurrent Neural Network (RNN) for image recall with training sets improved by the Discrete Wavelet Transform (DWT). The DWT decomposes each image into four parts for low-level feature extraction, builds the pattern information, and encodes the patterns. Once the patterns of these four training sets are encoded, they are given as input to the RNN and its performance is analyzed in terms of successful and correct recall of the images by the hybrid DWT-RNN scheme. We then introduce salt-and-pepper noise so that distorted feature vectors are presented to the network. This yields a prototype pattern of the noisy image and requires filtering of the training set, which leads to a recalled network output that produces the pattern information for each part of the image. Integration is finally achieved with the inverse discrete wavelet transform (IDWT), which amalgamates the recalled outputs corresponding to each part of the image so that the final image is recognized.
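A sketch of the DWT/IDWT workflow described above: decompose an image into its four subbands, corrupt the input with salt-and-pepper noise, and reassemble with the inverse transform. The RNN recall stage itself is omitted, and the Haar wavelet and noise density are illustrative assumptions.

```python
# Four-subband DWT decomposition, salt-and-pepper corruption, and IDWT reassembly.
import numpy as np
import pywt

def salt_and_pepper(img, density=0.05, rng=np.random.default_rng(0)):
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0.0          # pepper pixels
    noisy[mask > 1 - density / 2] = 1.0      # salt pixels
    return noisy

img = np.random.rand(64, 64)                 # placeholder image with values in [0, 1]
cA, (cH, cV, cD) = pywt.dwt2(salt_and_pepper(img), "haar")    # the four subbands
# ... each subband would be encoded and recalled by the RNN at this point ...
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), "haar")         # amalgamation via IDWT
```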


2019 ◽  
Vol 8 (4) ◽  
pp. 11151-11157

Nowadays, a major type of biomedical data required for diagnosing disease is the neurons of the nerve cells. Only a brief time after the neuron was recognized as the basic unit of the nervous system, the first attempts were made to estimate the number of neurons in its various parts. Over the past century, a great number of techniques have been used to make such estimates. Although the most widely used and accepted method is direct counting under the microscope, other approaches, including photographic, projection, homogenate, automatic, and visual strategies, have been devised. In this project we take a brain tissue image as input and count the number of neurons that are in an active state during the first 24 hours, then again at 48 hours, and finally at 72 hours. In this way we observe how neurons respond after information is given to the body: the information flows through the nerves, reaches the neurons in the brain, and the neurons react to it, and we record how many neurons respond to that stimulus. By counting the neurons that respond, we can estimate which neurons are alive and which are dead, and thereby assess the mental status of a person. We count the neurons with a neural network method implemented in MATLAB, and we created a MATLAB interface so that an input image can be supplied and the code reports the number of neurons.
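A hedged illustration of counting responsive neurons in a tissue image by thresholding and labelling connected bright regions. The authors use a neural network approach in MATLAB, so this scikit-image sketch only mirrors the counting idea; the threshold rule and minimum region size are assumptions.

```python
# Count bright, connected regions in a tissue image as a stand-in for active neurons.
from skimage import filters, measure, morphology

def count_active_neurons(image, min_size=20):
    """Count connected regions above an automatic (Otsu) threshold."""
    mask = image > filters.threshold_otsu(image)
    mask = morphology.remove_small_objects(mask, min_size=min_size)   # drop small artifacts
    return int(measure.label(mask).max())

# e.g. counts_24h = count_active_neurons(img_24h), repeated for the 48 h and 72 h images
```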


2014 ◽  
Vol 1030-1032 ◽  
pp. 1713-1716
Author(s):  
Xin Wang ◽  
He Pan

A classic problem with fractal image coding is that the encoding time is too long. This paper proposes a fast encoding algorithm that combines wavelets and fractals, exploiting the characteristics of wavelet decomposition. The method effectively reduces the amount of image data to be compressed, shortens the coding time and improves the encoding quality of the image.


2009 ◽  
Vol 48 (03) ◽  
pp. 225-228 ◽  
Author(s):  
C. Combi ◽  
A. Tucker ◽  
N. Peek

Summary Objective: To introduce the special topic of Methods of Information in Medicine on data mining in biomedicine, with selected papers from two workshops on Intelligent Data Analysis in bioMedicine (IDAMAP) held in Verona (2006) and Amsterdam (2007). Methods: Defining the field of biomedical data mining. Characterizing current developments and challenges for researchers in the field. Reporting on current and future activities of IMIA’s working group on Intelligent Data Analysis and Data Mining. Describing the content of the selected papers in this special topic. Results and Conclusions: In the biomedical field, data mining methods are used to develop clinical diagnostic and prognostic systems, to interpret biomedical signal and image data, to discover knowledge from biological and clinical databases, and in biosurveillance and anomaly detection applications. The main challenges for the field are i) dealing with very large search spaces in a manner that is both computationally efficient and statistically valid, ii) incorporating and utilizing medical and biological background knowledge in the data analysis process, iii) reasoning with time-oriented data and temporal abstraction, and iv) developing end-user tools for interactive presentation, interpretation, and analysis of large datasets.


Author(s):  
S. A. Azeem Farhan

Abstract: The recommendation problem involves predicting a set of items that maximize utility for users. As a solution to this problem, a recommender system is an information filtering system that seeks to predict the rating a user would give to an item. There are three types of recommendation systems, namely content-based, collaborative and hybrid recommendation systems. Collaborative filtering is further classified into user-based and item-based collaborative filtering. Collaborative filtering (CF) based recommendation systems are capable of grasping the interaction or correlation between the users and items under consideration. We have explored most of the existing collaborative-filtering-based research on the popular TMDB movie dataset and found that some key features were ignored by most previous work. Our work gives significant weight to the 'movie overviews' available in the dataset. We experimented with typical statistical methods such as TF-IDF; because TF-IDF makes the dimensionality of our corpus (overviews and other text features) explode, which creates problems, we tackled those problems with a dimensionality reduction technique named Singular Value Decomposition (SVD). After this preprocessing, the preprocessed data is used to build the models. We evaluated the performance of different machine learning algorithms such as Random Forest and a deep-neural-network-based BiLSTM. The experimental results provide a reliable model in terms of MAE (mean absolute error) and RMSE (root mean squared error); the Bi-LSTM turns out to be the better model, with an MAE of 0.65 and an RMSE of 1.04, and it generates more personalized movie recommendations than the other models. Keywords: Recommender system, item-based collaborative filtering, Natural Language Processing, Deep learning.
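A minimal sketch of the preprocessing step described above: TF-IDF on the movie overviews followed by truncated SVD to shrink the sparse corpus matrix before it is fed to the rating models. The overview strings and the number of components are placeholders, not values from the paper.

```python
# TF-IDF on overview text, then SVD-based dimensionality reduction (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

overviews = [
    "A retired hitman is pulled back in for one last job.",
    "Two astronauts fight to return home after a disaster in orbit.",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(overviews)  # sparse, high-dimensional
svd = TruncatedSVD(n_components=2, random_state=0)                      # tame the exploding dimensions
features = svd.fit_transform(tfidf)                                     # dense inputs for the rating models
```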


2013 ◽  
Vol 457-458 ◽  
pp. 1232-1235
Author(s):  
Yi Liu ◽  
Ji He Zhou ◽  
An Yang

To ensure the reliability of biomechanical image analysis, the original image data should be processed mathematically so that noise is removed as far as possible while the real, valid information is preserved; this is so-called data smoothing. In recent years, few scholars have conducted deep comparative studies in this area. The present study therefore tackles this rarely addressed issue and fills the gap in deep comparative studies of different smoothing methods. This paper aims to test and compare two such methods in their ability to process human movement images. The results are obtained by applying theoretical analysis and conducting experiments. The study draws on the mathematical principles of interpolation and filtering methods, and compares the merits and demerits, ranges of application and effects of different interpolation and filter combinations. An experimental method is then adopted to validate the hypothesis and reach our conclusion. We conclude that it is better to use low-pass filtering to remove high-frequency noise, the method used by most scholars; however, IIR filtering gives results that are closer to the original values and performs better when processing smoothly changing data. Henceforth, new processing methods from other areas, such as wavelet analysis and the integrated use of multistage filtering, can be tentatively introduced to improve the precision of data processing.
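A brief sketch of the low-pass IIR smoothing approach discussed above, using a Butterworth filter applied forward and backward; the sampling rate, cut-off frequency and trajectory data are illustrative assumptions, not values taken from the study.

```python
# Zero-phase Butterworth (IIR) low-pass smoothing of a movement trajectory.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # assumed capture rate of the motion data (Hz)
cutoff = 6.0                                 # assumed cut-off for human movement data (Hz)
b, a = butter(N=4, Wn=cutoff / (fs / 2), btype="low")

t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)   # placeholder trajectory
smoothed = filtfilt(b, a, raw)               # forward-backward filtering avoids phase lag
```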

