A Method for Image Forgery Detection Based on Error Level Analysis (ELA) Technique

Author(s): Emanuele Morra, Roberto Revetria, Danilo Pecorino, Gabriele Galli, Andrea Mungo, ...

In recent years, digital imaging techniques have grown rapidly, and their applications have become pivotal in many critical scenarios. Hand in hand with this technological boost, image forgeries have also multiplied, along with their level of precision. Digital tools that verify the integrity of a given image are therefore essential. Insurance is one field that relies extensively on images for filing claim requests, so robust forgery detection is crucial there. This paper proposes a fully automated system for identifying potential splicing frauds in images of car license plates, overcoming traditional problems by means of artificial neural networks (ANNs). Classic fraud-detection algorithms are impossible to fully automate, whereas modern deep learning approaches require vast training datasets that are rarely available. The method developed in this paper uses Error Level Analysis (ELA) performed on car license plates as the input to a trained model that classifies each plate as either original or forged.
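The core idea behind ELA can be illustrated without an image library. Real ELA re-saves an image as JPEG at a known quality and subtracts the re-save from the original; regions with a different compression history (e.g., a spliced-in patch) stand out. The sketch below is a toy model, assuming uniform quantization as a stand-in for JPEG compression; it is not the authors' implementation, only the principle.

```python
# Error Level Analysis (ELA), illustrated with a toy lossy "compressor".
# Uniform quantization stands in for JPEG saving at a fixed quality.

def lossy_resave(pixels, step=16):
    """Toy stand-in for saving at a fixed JPEG quality: quantize values."""
    return [step * round(p / step) for p in pixels]

def error_level(pixels, step=16):
    """Absolute difference between the image and its lossy re-save."""
    resaved = lossy_resave(pixels, step)
    return [abs(a - b) for a, b in zip(pixels, resaved)]

# An "image" that was already compressed once: its values sit on the
# quantization grid, so re-saving changes almost nothing...
original = lossy_resave([23, 87, 140, 201, 66, 90], step=16)

# ...except where a forger pasted in fresh, never-compressed content.
tampered = list(original)
tampered[2:4] = [141, 203]          # spliced region, off the grid

ela = error_level(tampered, step=16)
print(ela)                          # spliced pixels show nonzero error
```

In the real method, the per-pixel error image (rather than this 1-D toy) is what the ANN consumes to decide original vs. forged.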

Author(s): Ida Bagus Kresna Sudiatmika, Fathur Rahman, Trisno Trisno, Suyoto Suyoto

Author(s): Wina Permana Sari, Hisyam Fahmi

Digital image modification, or image forgery, is easy to perform today. Verifying the authenticity of an image has therefore become important for protecting its integrity against misuse. Error Level Analysis (ELA) can detect modifications in an image by lowering its quality and comparing the resulting error levels. Deep learning is the state of the art for image classification tasks. This study examines the effect of adding an ELA extraction step to image forgery detection with a deep learning approach. A Convolutional Neural Network (CNN), a representative deep learning method, is used to perform the forgery detection. The impacts of applying different ELA compression levels, namely 10, 50, and 90 percent, were also compared. According to the results, adopting the ELA feature increases validation accuracy by about 2.7% and yields better test accuracy; however, ELA slows down processing time by about 5.6%.
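The compression level comparison above has a simple intuition: a lower re-save quality produces a coarser approximation and hence a larger error signal everywhere, which changes how well tampered regions separate from the background. A minimal sketch, again using uniform quantization as a toy stand-in for JPEG, and with a purely illustrative (hypothetical) mapping from "quality" to quantization step:

```python
# How the ELA re-save quality affects the error signal (toy model).

def quantize(pixels, step):
    return [step * round(p / step) for p in pixels]

def mean_error_level(pixels, quality):
    # Lower quality -> coarser quantization -> larger re-save error.
    step = max(1, (100 - quality) // 2)   # hypothetical quality-to-step map
    resaved = quantize(pixels, step)
    return sum(abs(a - b) for a, b in zip(pixels, resaved)) / len(pixels)

pixels = [23, 87, 141, 203, 66, 90, 12, 250]
for quality in (10, 50, 90):              # the levels compared in the study
    print(quality, mean_error_level(pixels, quality))
```

The error level shrinks monotonically as quality rises, which is why the choice of ELA compression level is a tuning decision worth comparing empirically, as the study does.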


2019, Vol 277, pp. 02024
Author(s): Lincan Li, Tong Jia, Tianqi Meng, Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in cardiovascular ultrasound images. Firstly, a Fully Convolutional Network (FCN) named U-Net is used to segment the original Intravascular Optical Coherence Tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background from the original images. Secondly, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six-scale anchors (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. First, we present three problems in cardiovascular vulnerable plaque diagnosis; then we demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the Recall rate, Precision rate, IoU (Intersection-over-Union) rate, and Total score are 0.94, 0.885, 0.913, and 0.913 respectively, higher than those of the first-place team of the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of the proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
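The IoU rate reported above is the standard overlap metric for detection boxes, and the six anchor scales are simply squared side lengths. A minimal sketch of both (the box coordinates are illustrative, not from the paper):

```python
# Intersection-over-Union (IoU) for axis-aligned boxes, plus the six
# anchor areas (12^2 ... 256^2) used instead of one- or three-scale sets.

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2); returns intersection area / union area."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

anchor_areas = [s * s for s in (12, 16, 32, 64, 128, 256)]  # six scales

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1 / 7 ≈ 0.1429
```

Covering areas from 144 up to 65,536 pixels is what lets a single region proposal network match both small and large plaques, which one- or three-scale anchor sets miss.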


Entropy, 2020, Vol 22 (2), pp. 249
Author(s): Weiguo Zhang, Chenggang Zhao, Yuxing Li

The quality and efficiency of generating face-swap images have been markedly strengthened by deep learning. For instance, the face-swap manipulations by DeepFake are so realistic that it is tricky to distinguish authenticity through automatic or manual detection. To improve the efficiency of distinguishing face-swap images generated by DeepFake from real facial images, a novel counterfeit feature extraction technique was developed based on deep learning and error level analysis (ELA). It is related to entropy and information theory, such as the cross-entropy loss function in the final softmax layer. The DeepFake algorithm can only generate limited resolutions. Therefore, this algorithm results in two different image compression ratios between the fake face area as the foreground and the original area as the background, which leaves distinctive counterfeit traces. Through the ELA method, we can detect whether different image compression ratios are present. A convolutional neural network (CNN), one of the representative technologies of deep learning, can then extract the counterfeit feature and detect whether images are fake. Experiments show that the training efficiency of the CNN model can be significantly improved by the ELA method. In addition, the proposed technique accurately extracts the counterfeit feature and therefore outperforms direct detection methods in simplicity and efficiency. Specifically, without loss of accuracy, the amount of computation can be significantly reduced (the required floating-point computing power is reduced by more than 90%).
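The cross-entropy loss in the final softmax layer mentioned above is standard: softmax turns the CNN's two logits (real vs. fake) into probabilities, and cross-entropy penalizes the probability assigned to the true class. A self-contained sketch with made-up logit values:

```python
# Softmax + cross-entropy for a two-class (real vs. fake) output layer.
import math

def softmax(logits):
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, true_class):
    return -math.log(softmax(logits)[true_class])

probs = softmax([2.0, -1.0])              # confident "real" prediction
loss_good = cross_entropy([2.0, -1.0], 0) # correct label -> small loss
loss_bad = cross_entropy([2.0, -1.0], 1)  # wrong label -> large loss
print(probs, loss_good, loss_bad)
```

Because the ELA pre-processing hands the CNN a sparse error map rather than the full image, the same loss converges with far fewer floating-point operations, which is the source of the >90% computation reduction claimed above.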


Proceedings, 2019, Vol 46 (1), pp. 29
Author(s): Weiguo Zhang, Chenggang Zhao

New developments in artificial intelligence (AI) have significantly improved the quality and efficiency of generating fake face images; for example, the face manipulations by DeepFake are so realistic that it is difficult to distinguish their authenticity, either automatically or by humans. In order to enhance the efficiency of distinguishing facial images generated by AI from real facial images, a novel model has been developed based on deep learning and error level analysis (ELA) detection, which is related to entropy and information theory, such as the cross-entropy loss function in the final softmax layer, normalized mutual information in image preprocessing, and some applications of an encoder based on information theory. Due to the limitations of computing resources and production time, the DeepFake algorithm can only generate limited resolutions, resulting in two different image compression ratios between the fake face area as the foreground and the original area as the background, which leaves distinctive artifacts. By using the error level analysis detection method, we can detect the presence or absence of different image compression ratios and then use a convolutional neural network (CNN) to detect whether the image is fake. Experiments show that the training efficiency of the CNN model can be significantly improved by using the ELA method, and the detection accuracy rate can reach more than 97% with the CNN architecture of this method. Compared to state-of-the-art models, the proposed model has advantages such as fewer layers, shorter training time, and higher efficiency.
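The normalized mutual information mentioned in the preprocessing step measures how much two discrete label maps agree, scaled into [0, 1]. A minimal sketch of the quantity itself (the label arrays are illustrative; the paper's exact preprocessing use is not reproduced here):

```python
# Normalized mutual information: NMI = 2*I(X;Y) / (H(X) + H(Y)).
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def mutual_information(xs, ys):
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

def nmi(xs, ys):
    hx, hy = entropy(xs), entropy(ys)
    return 2 * mutual_information(xs, ys) / (hx + hy) if hx + hy else 1.0

print(nmi([0, 0, 1, 1], [0, 0, 1, 1]))   # identical labelings -> 1.0
print(nmi([0, 0, 1, 1], [0, 1, 0, 1]))   # independent labelings -> 0.0
```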


2020, Vol 10 (1)
Author(s): Kai Kiwitz, Christian Schiffer, Hannah Spitzer, Timo Dickscheid, Katrin Amunts

Abstract: The distribution of neurons in the cortex (cytoarchitecture) differs between cortical areas and constitutes the basis for structural maps of the human brain. Deep learning approaches provide a promising alternative to overcome the throughput limitations of currently used cytoarchitectonic mapping methods, but typically lack insight into the extent to which they follow cytoarchitectonic principles. We therefore investigated to what extent the internal structure of deep convolutional neural networks trained for cytoarchitectonic brain mapping reflects traditional cytoarchitectonic features, and compared them to features of the current grey level index (GLI) profile approach. The networks consisted of a 10-block deep convolutional architecture trained to segment the primary and secondary visual cortex. Filter activations of the networks served to analyse resemblances to traditional cytoarchitectonic features and to enable comparisons with the GLI profile approach. Our analysis revealed resemblances to cellular, laminar, and cortical-area-related cytoarchitectonic features. The networks learned filter activations that reflect the distinct cytoarchitecture of the segmented cortical areas, with special regard to their laminar organization, and compared well to statistical criteria of the GLI profile approach. These results confirm the incorporation of relevant cytoarchitectonic features into the deep convolutional neural networks and mark them as valid support for high-throughput cytoarchitectonic mapping workflows.
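The GLI profile the networks are compared against is, in essence, a depth-resolved cell-body density curve. A minimal sketch, assuming a synthetic binary patch (rows = cortical depth, 1 = cell body) in place of the curved traverses through stained sections used in practice:

```python
# A toy grey level index (GLI) profile: fraction of tissue covered by
# cell bodies at each cortical depth of a binary image patch.

def gli_profile(patch):
    """Mean cell-body coverage per depth row of a binary patch."""
    return [sum(row) / len(row) for row in patch]

# Synthetic patch with a cell-dense layer between sparser layers, the
# kind of laminar structure the network filters are compared against.
patch = [
    [0, 0, 1, 0],   # sparse superficial layer
    [1, 1, 1, 1],   # dense layer
    [1, 0, 1, 1],
    [0, 0, 0, 1],   # sparse deep layer
]
print(gli_profile(patch))   # [0.25, 1.0, 0.75, 0.25]
```

Peaks and troughs in such a profile correspond to cortical layers, which is the laminar signal the study finds echoed in the learned filter activations.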


The increasing occurrence of brain diseases and the need for early diagnosis of ailments such as tumors, Alzheimer's, epilepsy, and Parkinson's have riveted the attention of researchers. Machine learning practices, specifically deep learning, are considered beneficial diagnostic tools. Deep learning approaches to neuroimaging will assist computer-aided analysis of neurological diseases. Feature extraction from neuroimages using artificial neural networks leads to better diagnoses. In this study, these brain diseases are revisited to consolidate the methodologies presented by various authors in the literature.


2019, Vol 8 (6), pp. 258
Author(s): Yu Feng, Frank Thiemann, Monika Sester

Cartographic generalization is a problem that poses interesting challenges to automation. Whereas plenty of algorithms have been developed for the different sub-problems of generalization (e.g., simplification, displacement, aggregation), there are still cases that are not generalized adequately or in a satisfactory way. The main problem is the interplay between different operators. In those cases, the human operator, who is able to design an aesthetic and correct representation of physical reality, remains the benchmark. Deep learning methods have shown tremendous success on interpretation problems for which algorithmic methods have deficits. A prominent example is the classification and interpretation of images, where deep learning approaches outperform traditional computer vision methods. In both domains, computer vision and cartography, humans are able to produce good solutions. A prerequisite for the application of deep learning is the availability of many representative training examples for the situation to be learned. As this is given in cartography (there are many existing map series), the idea in this paper is to employ deep convolutional neural networks (DCNNs) for cartographic generalization tasks, especially the task of building generalization. Three network architectures, namely U-net, residual U-net, and generative adversarial network (GAN), are evaluated both quantitatively and qualitatively in this paper. They are compared based on their performance on this task at target map scales of 1:10,000, 1:15,000, and 1:25,000, respectively. The results indicate that deep learning models can successfully learn cartographic generalization operations implicitly in one single model. The residual U-net outperformed the others and achieved the best generalization performance.
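As a concrete example of the hand-crafted operators the learned models are meant to subsume, the classic Douglas-Peucker algorithm handles the "simplification" sub-problem: it drops polyline vertices that lie within a tolerance of the simplified baseline. A minimal sketch (the sample coordinates are illustrative):

```python
# Douglas-Peucker line simplification, one classic generalization operator.
import math

def perpendicular_distance(pt, start, end):
    (px, py), (sx, sy), (ex, ey) = pt, start, end
    dx, dy = ex - sx, ey - sy
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - sx, py - sy)
    return abs(dx * (py - sy) - dy * (px - sx)) / length

def douglas_peucker(points, epsilon):
    """Drop vertices closer than epsilon to the simplified baseline."""
    if len(points) < 3:
        return list(points)
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= epsilon:
        return [points[0], points[-1]]      # everything in between is noise
    left = douglas_peucker(points[:i + 1], epsilon)   # recurse on halves
    right = douglas_peucker(points[i:], epsilon)
    return left[:-1] + right                # drop the duplicated split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 0)]
simplified = douglas_peucker(line, epsilon=1.0)
print(simplified)   # small wiggles removed, the sharp spike kept
```

Chaining such operators by hand is exactly the interplay problem described above; the DCNN approaches learn the combined effect end-to-end from existing map series instead.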

