Image Retrieval Techniques, Analysis and Interpretation for Leukemia Data Sets

Author(s):  
Shobana Rajendran
Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3406
Author(s):  
Jie Jiang ◽  
Yin Zou ◽  
Lidong Chen ◽  
Yujie Fang

Precise localization and pose estimation in indoor environments are required by a wide range of applications, including robotics, augmented reality, and navigation and positioning services. Such applications can be served by visual localization against a pre-built 3D model. The growth of the search space in large scenes can be overcome by first retrieving candidate images and then estimating the pose. However, most current deep-learning-based image retrieval methods require labeled data, which increases annotation costs and complicates data acquisition. In this paper, we propose an unsupervised hierarchical indoor localization framework that integrates an unsupervised variational autoencoder (VAE) with a visual Structure-from-Motion (SfM) approach to extract global and local features. During localization, global features are applied for image retrieval at the level of the scene map to obtain candidate images, after which local features are used to estimate the pose from 2D-3D matches between query and candidate images. Only RGB images are used as input to the proposed localization system, which is both convenient and challenging. Experimental results show that the proposed method localizes images within 0.16 m and 4° on the 7-Scenes data sets and achieves 32.8% within 5 m and 20° on the Baidu data set. Furthermore, the proposed method achieves higher precision than advanced methods.
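The retrieval stage of such a hierarchical pipeline can be sketched without any deep-learning framework: given global descriptors (e.g. VAE latent vectors) for a database of mapped images, candidates are simply the nearest descriptors to the query. The function names and toy two-dimensional descriptors below are illustrative assumptions, not the paper's implementation.

```python
import math

def cosine(a, b):
    # Cosine similarity between two global descriptors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_candidates(query, database, k=2):
    # Rank database images by descriptor similarity and keep the top k;
    # a pose estimator would then match the query against only these
    # candidates instead of the whole scene map.
    ranked = sorted(database.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

db = {"img_a": [1.0, 0.0], "img_b": [0.9, 0.1], "img_c": [0.0, 1.0]}
print(retrieve_candidates([1.0, 0.05], db))  # img_a and img_b are closest
```

In a real system the descriptors would be high-dimensional and the linear scan would be replaced by an approximate-nearest-neighbor index.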


2013 ◽  
Vol 427-429 ◽  
pp. 1606-1609 ◽  
Author(s):  
Tao Chen ◽  
Hui Fang Deng

In this paper, we propose a novel method for image retrieval based on multi-instance learning with relevance feedback. The method proceeds in three main steps. First, it segments each image into a number of regions, treating images as bags and regions as instances. Second, it constructs an objective function of multi-instance learning from the query images, which is used to rank the images of a large digital repository by the distance between the nearest region vector of each image and the maximum of the objective function. Third, based on the users' relevance feedback, several rounds may be needed to refine the output images and their ranks. Finally, a satisfying set of images is returned to the users. Experimental results on the COREL image data sets demonstrate the effectiveness of the proposed approach.
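The core ranking step can be illustrated with a minimal sketch: each image is a bag of region vectors, and its score is the distance from its nearest region to a target point standing in for the maximum of the learned objective function. The bag names and two-dimensional region vectors are invented for illustration.

```python
import math

def dist(u, v):
    # Euclidean distance between two region feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank_images(bags, target):
    # Score each image (bag) by the distance of its nearest region
    # (instance) to the target point, then rank ascending by score.
    scores = {name: min(dist(r, target) for r in regions)
              for name, regions in bags.items()}
    return sorted(scores, key=scores.get)

bags = {
    "img1": [[0.1, 0.2], [0.9, 0.8]],
    "img2": [[0.5, 0.5]],
    "img3": [[0.95, 0.9]],
}
print(rank_images(bags, [1.0, 1.0]))  # img3's region is nearest the target
```

Relevance feedback would then adjust the target (or the objective function) and repeat the ranking for another round.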


2014 ◽  
Author(s):  
Adrin Jalali ◽  
Nico Pfeifer

Motivation: Molecular measurements from cancer patients, such as gene expression and DNA methylation, are usually very noisy. Furthermore, cancer types can be very heterogeneous. Therefore, one of the main assumptions of machine learning, that the underlying unknown distribution is the same for all samples, might not be completely fulfilled. We introduce a method that can estimate this bias on a per-feature level and incorporate the calculated feature confidences into a weighted combination of classifiers with disjoint feature sets. Results: The new method achieves state-of-the-art performance on many different cancer data sets with measured DNA methylation or gene expression. Moreover, we show how to visualize the learned classifiers to find interesting associations with the target label. Applied to a leukemia data set, we find several ribosomal proteins associated with leukemia risk group that might be interesting targets for follow-up studies, and we support the hypothesis that the ribosomes are a new frontier in gene regulation. Availability: The method is available under the GPLv3+ license at https://github.com/adrinjalali/Network-Classifier.
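The combination step can be sketched as a confidence-weighted vote: each classifier, trained on a disjoint feature set, contributes its prediction weighted by the estimated confidence of its features. This is a minimal stand-in for the paper's method; the threshold and toy numbers are assumptions.

```python
def weighted_ensemble(predictions, confidences):
    # Combine per-classifier scores (each classifier owns a disjoint
    # feature set) into one decision, weighting each prediction by the
    # estimated per-feature-set confidence.
    total = sum(confidences)
    score = sum(p * c for p, c in zip(predictions, confidences)) / total
    return 1 if score >= 0.5 else 0

# Three classifiers vote; the two confident ones outweigh the noisy one.
print(weighted_ensemble([1, 1, 0], [0.9, 0.8, 0.1]))  # -> 1
```

Down-weighting low-confidence feature sets is what protects the ensemble from the per-feature bias that noisy molecular measurements introduce.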


2020 ◽  
Vol 17 (12) ◽  
pp. 5550-5562
Author(s):  
R. Inbaraj ◽  
G. Ravi

Content-Based Image Retrieval (CBIR) is a recent yet broadly recognized method for identifying images in massive, unannotated image databases. As network and multimedia technologies become increasingly popular, users are no longer content with conventional information retrieval techniques, so CBIR now serves as a fast and accurate retrieval source. Lately, various strategies have been developed to improve CBIR performance. Data clustering is an unsupervised method for extracting hidden patterns from large data blocks, but with large data sets high dimensionality becomes likely, and models face challenges in both numerical accuracy and efficiency on multidimensional data. Rich, information-dense medical image data sets pose further problems of retrieval and handling, and more medical images are converted to digital format every day. This work therefore applies these data to manage and index a novel approach to content-based medical image retrieval using hybrid clustering (MHC). The work is implemented in four levels, each improving retrieval effectiveness. The results of this work are compared with existing works discussed in its literature review. Classification and learned features are used to retrieve medical images from a database. The proposed retrieval system performs better than the traditional approach; the precision, recall, F-measure, and accuracy of the proposed method are 97.29%, 95.023%, 4.36%, and 98.55%, respectively. The recommended approach is most appropriate for retrieving clinical images of various parts of the body.
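The evaluation measures reported above are the standard retrieval metrics, which can be computed per query as follows. The example retrieved/relevant sets are invented for illustration.

```python
def retrieval_metrics(retrieved, relevant):
    # Standard CBIR evaluation for one query: precision (fraction of
    # retrieved images that are relevant), recall (fraction of relevant
    # images that were retrieved), and their harmonic mean, the F-measure.
    tp = len(set(retrieved) & set(relevant))
    precision = tp / len(retrieved)
    recall = tp / len(relevant)
    f = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f

p, r, f = retrieval_metrics(["a", "b", "c", "d"], ["a", "b", "e"])
print(round(p, 2), round(r, 2), round(f, 2))
```

Note that since the F-measure is the harmonic mean of precision and recall, it always lies between them for a single query.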


2020 ◽  
Vol 79 (37-38) ◽  
pp. 26995-27021
Author(s):  
Lorenzo Putzu ◽  
Luca Piras ◽  
Giorgio Giacinto

Abstract Given the great success of Convolutional Neural Networks (CNNs) for image representation and classification tasks, we argue that Content-Based Image Retrieval (CBIR) systems could also leverage CNN capabilities, mainly when Relevance Feedback (RF) mechanisms are employed: on the one hand, to improve the performance of CBIR systems, which is strictly related to the effectiveness of the descriptors used to represent an image, as they aim at providing the user with images similar to an initial query image; on the other hand, to reduce the semantic gap between the similarity perceived by the user and the similarity computed by the machine, by exploiting an RF mechanism where the user labels the returned images as relevant or not to her interests. Consequently, in this work, we propose a CBIR system based on transfer learning from a CNN trained on a vast image database, thus exploiting the generic image representation that it has already learned. The pre-trained CNN is then fine-tuned using the RF supplied by the user to reduce the semantic gap. In particular, after the user's feedback, we propose to tune and re-train the CNN according to the labelled set of relevant and non-relevant images. We then suggest different strategies to exploit the updated CNN for returning a novel set of images that are expected to be relevant to the user's needs. Experimental results on different data sets show the effectiveness of the proposed mechanisms in improving the representation power of the CNN with respect to the user's concept of image similarity. Moreover, the pros and cons of the different approaches are clearly pointed out, thus providing clear guidelines for implementation in production environments.
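The feedback loop itself can be sketched without a deep-learning framework. Below, a classical Rocchio-style query update stands in for the CNN fine-tuning step described above: it moves the query descriptor toward images the user marked relevant and away from non-relevant ones. The function name, weights, and two-dimensional descriptors are illustrative assumptions.

```python
def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    # Rocchio-style relevance feedback: shift the query descriptor toward
    # the centroid of relevant images and away from the centroid of
    # non-relevant ones. A stand-in for fine-tuning the CNN itself.
    def mean(vecs):
        return [sum(c) / len(vecs) for c in zip(*vecs)] if vecs else [0.0] * len(query)
    mr, mn = mean(relevant), mean(nonrelevant)
    return [alpha * q + beta * r - gamma * n for q, r, n in zip(query, mr, mn)]

q = rocchio_update([0.5, 0.5], [[1.0, 0.0]], [[0.0, 1.0]])
print(q)  # the query moves toward the relevant image
```

Fine-tuning the CNN goes further than this: rather than moving one query point, it reshapes the descriptor space itself so that relevant images cluster together for future queries as well.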


2015 ◽  
Vol 14 ◽  
pp. CIN.S22371 ◽  
Author(s):  
Ali Anaissi ◽  
Madhu Goyal ◽  
Daniel R. Catchpoole ◽  
Ali Braytee ◽  
Paul J. Kennedy

Background The process of retrieving similar cases in a case-based reasoning system is considered a big challenge for gene expression data sets. The huge number of gene expression values generated by microarray technology leads to complex data sets and similarity measures for high-dimensional data are problematic. Hence, gene expression similarity measurements require numerous machine-learning and data-mining techniques, such as feature selection and dimensionality reduction, to be incorporated into the retrieval process. Methods This article proposes a case-based retrieval framework that uses a k-nearest-neighbor classifier with a weighted-feature-based similarity to retrieve previously treated patients based on their gene expression profiles. Results The herein-proposed methodology is validated on several data sets: a childhood leukemia data set collected from The Children's Hospital at Westmead, as well as the Colon cancer, the National Cancer Institute (NCI), and the Prostate cancer data sets. Results obtained by the proposed framework in retrieving patients of the data sets who are similar to new patients are as follows: 96% accuracy on the childhood leukemia data set, 95% on the NCI data set, 93% on the Colon cancer data set, and 98% on the Prostate cancer data set. Conclusion The designed case-based retrieval framework is an appropriate choice for retrieving previous patients who are similar to a new patient, on the basis of their gene expression data, for better diagnosis and treatment of childhood leukemia. Moreover, this framework can be applied to other gene expression data sets using some or all of its steps.
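The retrieval core described in the Methods section, a k-nearest-neighbor search under a weighted-feature similarity, can be sketched as follows. The patient names, two-gene profiles, and weights are invented for illustration; the real framework operates on thousands of genes with learned weights.

```python
import math

def weighted_distance(a, b, w):
    # Weighted Euclidean distance: genes with high weight dominate the
    # similarity measure, so uninformative genes barely contribute.
    return math.sqrt(sum(wi * (x - y) ** 2 for wi, x, y in zip(w, a, b)))

def retrieve_knn(query, patients, weights, k=1):
    # Return the k previously treated patients whose gene expression
    # profiles are most similar to the query patient's profile.
    ranked = sorted(patients, key=lambda name: weighted_distance(query, patients[name], weights))
    return ranked[:k]

patients = {"p1": [1.0, 5.0], "p2": [1.1, 0.2], "p3": [4.0, 0.1]}
weights = [1.0, 0.01]  # the second gene is treated as mostly noise
print(retrieve_knn([1.0, 0.0], patients, weights, k=2))
```

With uniform weights, p1's large value in the noisy second gene would push it far from the query; the weighting is what makes the retrieval robust to such features.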


2018 ◽  
Vol 7 (4.5) ◽  
pp. 87
Author(s):  
P. Nalini ◽  
Dr B. L. Malleswari

Medical image retrieval is mainly meant to enhance the healthcare system by helping physicians coordinate and interact with computing machines. It helps doctors and radiologists understand a case and leads toward automatic medical image annotation. The choice of image attributes plays a crucial role in retrieving similar-looking images of various anatomic regions. In this paper we present an empirical analysis of an X-ray image retrieval system using intensity features, statistical features, DFT- and DWT-transformed coefficients, and eigenvalues obtained via Singular Value Decomposition (SVD) as parameters. We computed these features by dividing the images into five different regular and irregular zones. In our previous work we showed that analyzing an image with local attributes yields better retrieval efficiency, so in this paper we also computed the attributes by dividing the image into 64 regular and irregular zones. The experiments were carried out on the IRMA 2008 and IRMA 2009 X-ray image data sets. We conclude that wavelet-based textural attributes, intensity features, and eigenvalues extracted from regular zones retrieve images better than the same features computed over irregular zones. We also determined which set of image features, under which form of zoning, results in excellent retrieval of similar-looking X-ray images for different anatomical regions.
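The zoning idea can be sketched concretely: split the image into regular zones and compute one feature per zone, so that the descriptor captures local rather than global structure. The sketch below uses mean intensity as the per-zone feature; the real system would also extract DWT coefficients and SVD eigenvalues per zone.

```python
def zone_features(image, zones_per_side):
    # Split a square grayscale image into regular zones and compute the
    # mean intensity of each zone, concatenated row by row into a
    # local-attribute feature vector.
    n = len(image)
    step = n // zones_per_side
    feats = []
    for zr in range(zones_per_side):
        for zc in range(zones_per_side):
            vals = [image[r][c]
                    for r in range(zr * step, (zr + 1) * step)
                    for c in range(zc * step, (zc + 1) * step)]
            feats.append(sum(vals) / len(vals))
    return feats

img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [20, 20, 30, 30],
       [20, 20, 30, 30]]
print(zone_features(img, 2))  # one mean intensity per 2x2 zone
```

A global mean over this image would be a single number (15.0) and could not distinguish it from many other images; the four zone means preserve where the intensity lives.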


Author(s):  
Mohamed Ibrahim ◽  
Haitham Yousof

In this work we propose a new lifetime Weibull-type model, called the transmuted Topp-Leone Weibull, and study its properties. We derive some new bivariate and multivariate transmuted Topp-Leone Weibull versions using the Farlie-Gumbel-Morgenstern (FGM) copula, the modified FGM copula, the Clayton copula, and Renyi's entropy copula. The estimation of its unknown parameters is carried out by considering different methods of estimation. The statistical performance of all methods is studied on two real data sets and via a numerical Monte Carlo simulation. The Cramer-von Mises method is the best method for modeling the carbon fibers data. The maximum likelihood method is the best method for modeling the leukemia data, though all other methods also perform well.
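The construction can be sketched as follows. A transmuted-G family applies a quadratic-rank map to a base distribution function, here the Topp-Leone Weibull; the Weibull parameterization below is an assumption, as the abstract does not state it.

```latex
% Assumed baseline Weibull CDF
W(x) = 1 - e^{-\beta x^{\gamma}}, \qquad x > 0,\ \beta, \gamma > 0
% Topp-Leone-Weibull CDF: G(x) = \left[1 - (1 - W(x))^2\right]^{\alpha}
G(x) = \left(1 - e^{-2\beta x^{\gamma}}\right)^{\alpha}
% Transmuted map with |\lambda| \le 1 gives the transmuted Topp-Leone Weibull
F(x) = (1 + \lambda)\, G(x) - \lambda\, G(x)^{2}
```

Setting \(\lambda = 0\) recovers the Topp-Leone Weibull itself, so the transmutation parameter adds one degree of flexibility in the shape of the distribution.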


2021 ◽  
Vol 10 (4) ◽  
pp. 249
Author(s):  
Hongwei Zhao ◽  
Jiaxin Wu ◽  
Danyang Zhang ◽  
Pingping Liu

To fully describe images' semantic information, image retrieval tasks increasingly use deep convolutional features trained by neural networks. However, to form a compact feature representation, the obtained convolutional features must be further aggregated, and the quality of this aggregation affects retrieval performance. To obtain better image descriptors for image retrieval, we propose two modules. The first, generalized regional maximum activation of convolutions (GR-MAC), pays more attention to global information at multiple scales. The second, saliency joint weighting, uses nonparametric saliency weighting and channel weighting to focus feature maps on the salient region without discarding overall information. Finally, we fuse the two modules to obtain more representative image feature descriptors that not only consider the global information of the feature map but also highlight the salient region. We conducted experiments on multiple widely used retrieval data sets, such as ROxford5k, to verify the effectiveness of our method. The experimental results show that our method is more accurate than state-of-the-art methods.
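The regional-maximum aggregation that GR-MAC generalizes can be sketched on a toy single-channel activation map: max-pool each region, then sum the regional maxima into one descriptor value. Real R-MAC-style descriptors do this per channel over multi-scale region grids; the map and regions below are invented for illustration.

```python
def regional_max_aggregate(fmap, regions):
    # R-MAC-style aggregation: take the maximum activation inside each
    # region of the feature map, then sum the regional maxima into a
    # single descriptor value (one channel shown for brevity).
    desc = 0.0
    for r0, r1, c0, c1 in regions:
        desc += max(fmap[r][c] for r in range(r0, r1) for c in range(c0, c1))
    return desc

fmap = [[0.1, 0.4],
        [0.9, 0.2]]
# Two overlapping column-regions covering the whole 2x2 map.
regions = [(0, 2, 0, 1), (0, 2, 1, 2)]
print(regional_max_aggregate(fmap, regions))  # 0.9 + 0.4
```

Because each region contributes its own maximum, a strong local activation is preserved in the descriptor even when the rest of the map is flat, which is what a single global max-pool would lose across multiple salient regions.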

