Improved CNN-Based Hashing for Encrypted Image Retrieval

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Wenyan Pan ◽  
Meimin Wang ◽  
Jiaohua Qin ◽  
Zhili Zhou

As more and more image data are stored in encrypted form in cloud computing environments, efficiently retrieving images in the encryption domain has become an urgent problem. Recently, Convolutional Neural Network (CNN) features have achieved promising performance in image retrieval, but their high dimensionality causes low retrieval efficiency, and they cannot be directly applied to retrieval in the encryption domain. To solve these issues, this paper proposes an improved CNN-based hashing method for encrypted image retrieval. First, the input image size is increased to improve the representation ability of the CNN. Then, a lightweight module replaces part of the modules in the CNN to reduce the parameters and computational cost. Finally, a hash layer is added to generate a compact binary hash code. In the retrieval process, the hash code is used for encrypted image retrieval, which greatly improves retrieval efficiency. The experimental results show that the scheme enables effective and efficient retrieval of encrypted images.
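The final step of the pipeline, the hash layer, maps a high-dimensional CNN feature vector to a compact binary code that is then compared by Hamming distance. A minimal numpy sketch of that step (the layer shapes and the tanh relaxation are illustrative assumptions, not the authors' exact architecture):

```python
import numpy as np

def hash_layer(features, W, b):
    """Project a CNN feature vector to k dimensions and binarize it.
    tanh yields a relaxed code in (-1, 1); thresholding at 0 gives bits."""
    relaxed = np.tanh(features @ W + b)
    return (relaxed > 0).astype(np.uint8)

def hamming_distance(a, b):
    """Retrieval in the encryption domain compares codes by Hamming distance."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
feature = rng.standard_normal(512)          # a 512-D CNN feature vector
W = rng.standard_normal((512, 48)) * 0.1    # hash layer weights, k = 48 bits
b = np.zeros(48)
code = hash_layer(feature, W, b)            # compact 48-bit binary code
```

Because the database stores only these short codes, a linear scan over millions of images reduces to cheap bitwise comparisons, which is where the efficiency gain over raw CNN features comes from.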

Mathematics ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 1019
Author(s):  
Wentao Ma ◽  
Jiaohua Qin ◽  
Xuyu Xiang ◽  
Yun Tan ◽  
Zhibin He

Recently, searchable encrypted image retrieval in cloud environments has been widely studied. However, inappropriate encryption mechanisms and single-feature descriptions make it hard to achieve the expected effects. A major challenge of encrypted image retrieval is therefore how to extract and fuse multiple efficient features to improve performance. To this end, this paper proposes searchable encrypted image retrieval based on multi-feature adaptive late fusion in a cloud environment. First, image encryption is performed by designing encryption functions over the RGB color channels, bit planes and pixel positions of the image. Second, the encrypted images are uploaded to the cloud server, and a convolutional neural network (CNN) is fine-tuned to build a semantic feature extractor. Then, low-level features and semantic features are extracted. Finally, the similarity score curve of each feature is calculated, and adaptive late fusion is performed according to the area under the curve. Extensive experiments on public datasets validate the effectiveness of our method.
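The late-fusion step described above can be sketched as follows: each feature's similarity scores are sorted into a descending curve, and the area under that curve sets the feature's fusion weight. This is a rough numpy illustration of the idea; the paper's exact weighting scheme may differ.

```python
import numpy as np

def auc(curve):
    """Trapezoidal area under a 1-D curve sampled at unit spacing."""
    curve = np.asarray(curve, dtype=float)
    return float(np.sum((curve[:-1] + curve[1:]) / 2.0))

def fusion_weights(score_lists):
    """Weight each feature by the area under its descending-sorted
    similarity-score curve (normalized so the weights sum to 1)."""
    areas = [auc(np.sort(np.asarray(s))[::-1]) for s in score_lists]
    total = sum(areas)
    return [a / total for a in areas]

def late_fuse(score_lists):
    """Fuse per-feature similarity scores with the adaptive weights."""
    w = fusion_weights(score_lists)
    return sum(wi * np.asarray(s, dtype=float) for wi, s in zip(w, score_lists))

semantic = [0.9, 0.2, 0.7, 0.1, 0.4]    # scores from the fine-tuned CNN
low_level = [0.6, 0.3, 0.5, 0.2, 0.4]   # scores from a low-level feature
fused = late_fuse([semantic, low_level])
```

The appeal of a late (score-level) fusion is that weights adapt per query: a feature whose score curve separates matches from non-matches sharply contributes more to the fused ranking.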


Author(s):  
R. Krishnamoorthi ◽  
S. Sathiya Devi

The exponential growth of digital image data has created a great demand for effective and efficient schemes and tools for browsing, indexing and retrieving images from large image databases. To address this demand, this paper proposes a new content-based image retrieval technique based on an orthogonal polynomials model. The proposed model extracts texture features that represent the dominant directions, gray-level variations and frequency spectrum of the image under analysis, and the resulting texture feature vector is rotation and scale invariant. A new distance measure in the frequency domain, called Deansat, is proposed as a similarity measure that uses the proposed feature vector for efficient image retrieval. The efficiency of the proposed retrieval technique is evaluated on the standard Brodatz, USC-SIPI and VisTex databases and compared with Discrete Cosine Transform (DCT), Tree-Structured Wavelet Transform (TWT) and Gabor-filter-based retrieval schemes. The experimental results reveal that the proposed method outperforms these schemes at lower computational cost.


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yu Zhao

A new document image retrieval algorithm is proposed to address the inefficient retrieval of information resources in digital libraries. First, to accurately characterize texture and enhance the ability to differentiate images, this paper proposes a statistical feature method based on the dual-tree complex wavelet. Second, according to this statistical feature method, combined with the visual characteristics of the human eye, the edge information in the document image is extracted. On this basis, meaningful texture features are constructed and used to define descriptors of document images. Taking the descriptor as a clue, the content characteristics of the document image are combined organically, and appropriate similarity measurement criteria are used for efficient retrieval. Experimental results show that the algorithm not only achieves high retrieval efficiency but also reduces the complexity of traditional document image retrieval algorithms.
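The idea of "statistical features of wavelet subbands" can be sketched compactly. The toy below uses a one-level Haar decomposition as a stand-in for the dual-tree complex wavelet in the paper (which yields six oriented, shift-tolerant subbands); the per-subband statistics (mean absolute value and standard deviation) are assumptions chosen for illustration.

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar decomposition of an even-sized grayscale image,
    returning the three detail subbands (horizontal, vertical, diagonal)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return lh, hl, hh

def texture_descriptor(img):
    """Statistical features per detail subband -- mean absolute coefficient
    and standard deviation -- concatenated into a 6-D descriptor."""
    return np.array([stat
                     for s in haar_subbands(img)
                     for stat in (np.abs(s).mean(), s.std())])

img = np.arange(64, dtype=float).reshape(8, 8)   # toy "document image"
desc = texture_descriptor(img)
```

Retrieval then reduces to comparing such fixed-length descriptors with a chosen similarity measure, rather than comparing images pixel by pixel.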


Author(s):  
Xingbo Liu ◽  
Xiushan Nie ◽  
Quan Zhou ◽  
Xiaoming Xi ◽  
Lei Zhu ◽  
...  

Hashing can compress high-dimensional data into compact binary codes while preserving similarity, which facilitates efficient retrieval and storage. However, when retrieving with an extremely short hash code learned by existing methods, performance cannot be guaranteed because of severe information loss. To address this issue, this study proposes a novel supervised short-length hashing (SSLH). In the proposed SSLH, mutual reconstruction between the short-length hash codes and the original features is performed to reduce semantic loss. Furthermore, to enhance the robustness and accuracy of the hash representation, a robust estimator term is added to fully utilize the label information. Extensive experiments conducted on four image benchmarks demonstrate the superior performance of the proposed SSLH with short hash codes. In addition, the proposed SSLH outperforms existing methods with long hash codes. To the best of our knowledge, this is the first linear-based hashing method that focuses on both short and long hash codes to maintain high precision.
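The "mutual reconstruction" idea can be written as a two-term objective: original features should be recoverable from the codes, and the codes from the features. A minimal numpy sketch (the matrix names P and Q and the balance weight are hypothetical; the actual SSLH objective also includes a robust-estimator label term, omitted here):

```python
import numpy as np

def mutual_reconstruction_loss(X, B, P, Q, lam=1.0):
    """Mutual reconstruction between original features X (n x d) and short
    hash codes B (n x k, entries +/-1): X should be recoverable from B via
    P (k x d), and B from X via Q (d x k)."""
    return (np.linalg.norm(X - B @ P) ** 2
            + lam * np.linalg.norm(B - X @ Q) ** 2)

# toy case where both reconstructions are exact, so the loss is zero
B = np.array([[1.0, 1.0], [1.0, -1.0]])   # 2 samples, 2-bit codes
X = B.copy()                              # features happen to equal the codes
P = Q = np.eye(2)
```

Forcing reconstruction in both directions is what limits information loss at very short code lengths: the k bits must retain enough of X to rebuild it, not merely separate classes.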


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 991
Author(s):  
Yuta Nakahara ◽  
Toshiyasu Matsushima

In information theory, lossless compression of general data is based on an explicit assumption of a stochastic generative model of the target data. In lossless image compression, however, researchers have mainly focused on the coding procedure that outputs the coded sequence from the input image, and the assumption of the stochastic generative model is implicit. In such studies, it is difficult to discuss the difference between the expected code length and the entropy of the stochastic generative model. We resolve this difficulty for a class of images that exhibit non-stationarity among segments. In this paper, we propose a novel stochastic generative model of images by redefining the stochastic generative model implicit in a previous coding procedure. Our model is based on the quadtree, so it effectively represents variable-block-size segmentations of images. We then construct the Bayes code optimal for the proposed stochastic generative model. This requires summing over all possible quadtrees, weighted by their posterior probabilities. In general, the computational cost of this summation grows exponentially with the image size. However, we introduce an efficient algorithm that calculates it in time polynomial in the image size without loss of optimality. As a result, the derived algorithm achieves a better average coding rate than JBIG.
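The polynomial-time summation over all quadtrees follows a context-tree-weighting-style recursion: each square block mixes its probability as a single unsplit segment with the product of its four children's weighted probabilities, so the exponential sum collapses into one pass over the blocks. A toy sketch, using a Krichevsky-Trofimov estimate for binary blocks in place of the paper's actual segment model (the split prior g is an assumption):

```python
import numpy as np
from math import exp, lgamma, log, pi

def kt_log_prob(block):
    """log of the Krichevsky-Trofimov probability of an i.i.d. binary block."""
    n = block.size
    a = int(block.sum())          # number of ones
    b = n - a                     # number of zeros
    return (lgamma(a + 0.5) + lgamma(b + 0.5)
            - lgamma(n + 1.0) - log(pi))

def weighted_prob(block, g=0.5):
    """Sum over all quadtree segmentations of a square power-of-two block:
    (1 - g) * P(unsplit) + g * product of the four children's weighted P.
    Recursing on quarters keeps the total cost polynomial in the image size."""
    if block.shape[0] == 1:
        return exp(kt_log_prob(block))
    h = block.shape[0] // 2
    children = (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:])
    prod = 1.0
    for c in children:
        prod *= weighted_prob(c, g)
    return (1.0 - g) * exp(kt_log_prob(block)) + g * prod

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [1, 1, 0, 0],
                [1, 1, 0, 0]])
p = weighted_prob(img)            # Bayes mixture over every segmentation
```

An arithmetic coder driven by this mixture achieves the Bayes-optimal expected code length for the model, which is the sense in which the paper's construction is optimal.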


Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 285
Author(s):  
Wenjing Yang ◽  
Liejun Wang ◽  
Shuli Cheng ◽  
Yongming Li ◽  
Anyu Du

Recently, deep learning to hash has been extensively applied to image retrieval because of its low storage cost and fast query speed. However, when existing hashing methods use a convolutional neural network (CNN) to extract semantic features, the extracted features are insufficient and imbalanced: they include no contextual information and lack relevance among themselves. Furthermore, relaxing the hash code during training leads to an inevitable quantization error. To solve these problems, this paper proposes deep hashing with improved dual attention for image retrieval (DHIDA), whose main contributions are as follows: (1) an improved dual attention mechanism (IDA), built on a pre-trained ResNet18 module and consisting of a position attention module and a channel attention module, is introduced to extract the feature information of the image; (2) when calculating the spatial and channel attention matrices, the average and maximum values of the columns of the feature map matrix are integrated to promote the feature representation ability and fully exploit the features at each position; and (3) to reduce quantization error, a new piecewise function is designed to directly guide the discrete binary code. Experiments on CIFAR-10, NUS-WIDE and ImageNet-100 show that the DHIDA algorithm achieves better performance.
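The channel-attention half of contribution (2) can be sketched in a few lines: an affinity matrix is built from per-channel statistics that integrate both the average and the maximum over spatial positions, then applied to re-weight the feature map. The exact formulation in DHIDA may differ; this is an illustrative numpy version.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(F):
    """F: (C, N) feature map, C channels flattened over N spatial positions.
    The channel affinity integrates both the mean and the max over positions,
    echoing the paper's use of column averages and maxima."""
    stat = np.stack([F.mean(axis=1), F.max(axis=1)], axis=1)   # (C, 2)
    A = softmax(stat @ stat.T, axis=-1)                        # (C, C) attention
    return A @ F                                               # re-weighted map

rng = np.random.default_rng(1)
F = rng.standard_normal((8, 16))        # 8 channels, 4x4 positions flattened
out = channel_attention(F)
```

Each output channel is thus a convex combination of all channels, which is how attention injects the cross-feature relevance the plain CNN features lack.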


2021 ◽  
Vol 11 (2) ◽  
pp. 813
Author(s):  
Shuai Teng ◽  
Zongchao Liu ◽  
Gongfa Chen ◽  
Li Cheng

This paper compares the crack detection performance (in terms of precision and computational cost) of YOLO_v2 with 11 feature extractors, providing a basis for fast and accurate crack detection on concrete structures. Cracks on concrete structures are an important indicator for assessing their durability and safety, and real-time crack detection is an essential task in structural maintenance. Object detection algorithms, especially the YOLO series of networks, have significant potential in crack detection, and the feature extractor is the most important component of YOLO_v2. Hence, this paper employs 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection. The results confirm that different feature extractor models lead to different detection results: the AP value is 0.89, 0, and 0 for 'resnet18', 'alexnet', and 'vgg16', respectively, while 'googlenet' (AP = 0.84) and 'mobilenetv2' (AP = 0.87) demonstrate comparable AP values. In terms of computing speed, 'alexnet' takes the least computational time, with 'squeezenet' and 'resnet18' ranked second and third, respectively; therefore, 'resnet18' is the best feature extractor model in terms of precision and computational cost. Additionally, a parametric study of the training epoch, feature extraction layer, and testing image size shows that these parameters indeed affect the detection results. It is demonstrated that excellent crack detection results can be achieved by the YOLO_v2 detector when an appropriate feature extractor model, training epoch, feature extraction layer, and testing image size are chosen.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Xi-Yan Li ◽  
Xia-Bing Zhou ◽  
Qing-Lei Zhou ◽  
Shi-Jing Han ◽  
Zheng Liu

With the development of cloud computing, high-capacity reversible data hiding in encrypted images (RDHEI) has attracted increasing attention. The main idea of RDHEI is that an image owner encrypts a cover image, and then a data hider embeds secret information in the encrypted image. With the information hiding key, a receiver can extract the embedded data from the hidden image; with the encryption key, the receiver reconstructs the original image. The proposed method can embed data in the form of random bits or scanned documents. It takes full advantage of the spatial correlation in the original image to vacate room for embedding information before image encryption. By jointly using Sudoku and Arnold chaos encryption, the encrypted images retain the vacated room. Before the data hiding phase, the secret information is preprocessed by halftone, quadtree, and S-box transformations. The experimental results show that the proposed method not only achieves high-capacity reversible data hiding in encrypted images but also reconstructs the original image completely.
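One building block named above, the Arnold chaos map, permutes pixel positions and is exactly invertible, which is what makes this kind of scrambling compatible with reversible data hiding. A minimal numpy sketch of the map and its inverse (the Sudoku-based step and the bit-plane/channel encryption are not shown):

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Arnold cat map on an N x N image: pixel (x, y) moves to
    (x + y, x + 2y) mod N. The map is area-preserving and invertible."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(img, iterations=1):
    """Inverse map: pixel (x, y) moves back to (2x - y, y - x) mod N."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        restored = np.empty_like(out)
        restored[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = restored
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy cover image
enc = arnold_scramble(img, iterations=3)
```

Because the transform matrix [[1, 1], [1, 2]] has determinant 1, every iteration is a bijection on pixel positions, so the receiver with the key can recover the original image exactly.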


2019 ◽  
Vol 9 (15) ◽  
pp. 3097
Author(s):  
Diego Renza ◽  
Jaime Andres Arango ◽  
Dora Maria Ballesteros

This paper addresses a problem in the field of audio forensics. With the aim of supporting Chain of Custody (CoC) processes, we propose an integrity verification system that includes capture (mobile based), hash code calculation and cloud storage. When the audio is recorded, a hash code is generated in situ by the capture module (an application) and sent immediately to the cloud. Later, the integrity of the audio recording given as evidence can be verified against the information stored in the cloud. To validate the properties of the proposed scheme, we conducted several tests to evaluate whether two different inputs could generate the same hash code (collision resistance) and how much the hash code changes when small changes occur in the input (sensitivity analysis). According to the results, all selected audio signals produce different hash codes, and these values are very sensitive to small changes in the recorded audio. In terms of computational cost, less than 2 s per minute of recording is required to calculate the hash code. With these results, our system is useful for verifying the integrity of audio recordings that may be relied on as digital evidence.
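The verification workflow reduces to hashing at capture time and re-hashing at verification time. A minimal stdlib sketch, with SHA-256 standing in for whichever hash function the capture module actually uses:

```python
import hashlib

def audio_hash(pcm_bytes):
    """Digest computed in situ at capture time and sent to the cloud."""
    return hashlib.sha256(pcm_bytes).hexdigest()

def verify_integrity(pcm_bytes, stored_digest):
    """Later, the evidence is re-hashed and compared with the cloud copy."""
    return audio_hash(pcm_bytes) == stored_digest

recording = bytes(range(256)) * 4          # stand-in for raw PCM samples
digest = audio_hash(recording)             # reference stored in the cloud

tampered = bytearray(recording)
tampered[100] ^= 1                         # flip a single bit
```

A cryptographic hash gives both properties the paper tests for: collisions are computationally infeasible, and flipping even one bit of the recording changes the digest completely.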

