Digital Image Watermarking Processor Based on Deep Learning

Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1183
Author(s):  
Jae-Eun Lee ◽  
Ji-Won Kang ◽  
Woo-Suk Kim ◽  
Jin-Kyum Kim ◽  
Young-Ho Seo ◽  
...  

Much research and development effort has gone into implementing deep neural networks in hardware for various purposes. In this work, we implement a deep learning algorithm on a dedicated processor. Watermarking technology for ultra-high-resolution digital images and videos needs to be implemented in hardware for real-time or high-speed operation. We propose an optimization methodology for implementing a deep learning-based watermarking algorithm in hardware; it includes algorithm and memory optimization. Next, we analyze a fixed-point number system suitable for implementing neural networks in hardware for watermarking. Using these, a hardware structure for a dedicated deep learning-based watermarking processor is proposed and implemented as an application-specific integrated circuit (ASIC).
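The fixed-point analysis step can be illustrated with a minimal sketch of Q-format quantization; the word lengths below are illustrative assumptions, not the ones chosen in the paper:

```python
def to_fixed(x, frac_bits=8, total_bits=16):
    """Quantize a real-valued weight to two's-complement fixed point (saturating)."""
    q = round(x * (1 << frac_bits))
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, q))

def from_fixed(q, frac_bits=8):
    """Recover the real value represented by a fixed-point integer."""
    return q / (1 << frac_bits)

# Rounding error is bounded by half an LSB, i.e. 2**-(frac_bits + 1);
# analyzing this error across layers guides the choice of word length.
w = 0.7071
err = abs(from_fixed(to_fixed(w)) - w)
```

Choosing `frac_bits` per layer so that this error stays below the network's noise floor is what makes a hardware implementation practical without retraining.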

2020 ◽  
Vol 10 (7) ◽  
pp. 2361
Author(s):  
Fan Yang ◽  
Wenjin Zhang ◽  
Laifa Tao ◽  
Jian Ma

As we enter the era of big data, we must face data generated by industrial systems that are massive, diverse, high-speed, and variable. Deep learning technology has been widely used to deal effectively with big data possessing these characteristics. However, existing methods require considerable human involvement, depend heavily on domain expertise, and may thus be non-representative and biased when moved from one task to a similar one. For the wide variety of prognostic and health management (PHM) tasks, how to apply an already-developed deep learning algorithm to a similar task, so as to reduce development effort and data-collection costs, has therefore become an urgent problem. Based on the idea of transfer learning and the structure of deep learning PHM algorithms, this paper proposes two transfer strategies that transfer different elements of a deep learning PHM algorithm, analyzes the transfer scenarios that arise in practical applications, and identifies the strategy applicable in each scenario. Finally, a CNN-based deep learning algorithm for bearing fault diagnosis is transferred with the proposed method, both across different working conditions and across different objects. The experiments verify the value and effectiveness of the proposed method and indicate the best choice of transfer strategy.
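The idea of transferring different elements of a trained model can be sketched schematically. The dictionary-based model representation and strategy names below are illustrative assumptions, not the authors' code:

```python
def build_target_model(source_model, strategy):
    """Sketch of two transfer strategies for a deep learning PHM model.

    "features":     reuse the learned feature extractor, re-initialize the
                    task-specific head, and freeze the transferred layers.
    "architecture": reuse only the network structure; retrain all weights
                    on the target task's data.
    """
    if strategy == "features":
        return {
            "features": dict(source_model["features"]),  # copied learned weights
            "head": {"fc": "reinitialized"},             # new classifier for target task
            "frozen": ["features"],
        }
    if strategy == "architecture":
        return {
            "features": {k: "reinitialized" for k in source_model["features"]},
            "head": {"fc": "reinitialized"},
            "frozen": [],
        }
    raise ValueError(f"unknown strategy: {strategy}")

# toy source model trained on one working condition
source = {"features": {"conv1": [0.1, 0.2], "conv2": [0.3]}, "head": {"fc": [0.5]}}
t1 = build_target_model(source, "features")
t2 = build_target_model(source, "architecture")
```

Which strategy wins in practice depends on how similar the source and target tasks are, which is exactly the scenario analysis the paper performs.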


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1579
Author(s):  
Dongqi Wang ◽  
Qinghua Meng ◽  
Dongming Chen ◽  
Hupo Zhang ◽  
Lisheng Xu

Automatic detection of arrhythmia is of great significance for the early prevention and diagnosis of cardiovascular disease. Traditional feature engineering methods based on expert knowledge lack the ability to abstract and represent data from multiple dimensions and views, so traditional pattern recognition research on arrhythmia detection has not achieved satisfactory results. Recently, with the rise of deep learning technology, automatic feature extraction from ECG data with deep neural networks has been widely discussed. To exploit the complementary strengths of different schemes, in this paper we propose an arrhythmia detection method based on a multi-resolution representation (MRR) of ECG signals. The method uses four different state-of-the-art deep neural networks as four channel models for learning ECG vector representations. These deep learning-based representations, together with hand-crafted ECG features, form the MRR, which is the input to the downstream classification strategy. Experimental results for multi-label classification on a large ECG dataset confirm that the F1 score of the proposed method is 0.9238, which is 1.31%, 0.62%, 1.18%, and 0.6% higher than that of each individual channel model. Architecturally, the proposed method is highly scalable and can serve as a template for arrhythmia recognition.
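The fusion step, concatenating the four channel models' learned representations with hand-crafted features into one MRR vector, can be sketched as follows; the embedding sizes and feature values are made up for illustration:

```python
def build_mrr(channel_embeddings, handcrafted):
    """Concatenate per-channel deep representations with hand-crafted
    ECG features into one multi-resolution representation (MRR)."""
    mrr = []
    for emb in channel_embeddings:
        mrr.extend(emb)          # one learned embedding per channel model
    mrr.extend(handcrafted)      # expert-designed features appended last
    return mrr

# four channel models, each yielding a toy embedding, plus hand-crafted features
channels = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
handcrafted = [72.0, 0.12]  # e.g. heart rate, RR-interval variability (illustrative)
mrr = build_mrr(channels, handcrafted)
```

The downstream classifier then sees one flat vector, which is what makes the scheme easy to extend with additional channel models.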


Author(s):  
A. F. Chernyavsky ◽  
A. A. Kolyada ◽  
S. Yu. Protasenya

The article is devoted to the problem of creating high-speed neural networks (NNs) for calculating the interval-index characteristics of a minimally redundant modular code. The functional base of the proposed solution is an advanced class of finite-ring neural networks. These neural networks perform position-to-modular code transformations of scalable numbers using a modified reduction technology. The developed neural network has a uniform parallel structure, is easy to implement, and requires time expenditures of the order of (3⌈log₂b⌉ + ⌈log₂k⌉ + 6)·t_sum, close to the lower theoretical estimate. Here b and k are the average bit capacity and the number of modules, respectively; t_sum is the duration of the two-place operation of adding integers. Abandoning normalization of the numbers of the modular code reduces the required set of finite-ring NNs by (k − 1) components. At the same time, the non-normalized configuration of minimally redundant modular coding requires, on average, a k-fold increase in the interval-index module (relative to the remaining bases of the modular number system), which leads to a corresponding increase in hardware expenses for this module. Besides, the transition from normalized to non-normalized coding reduces the homogeneity of the structure of the NN for calculating interval-index characteristics. The possibility of reducing the structural complexity of the proposed NN by using non-normalized interval-index characteristics is investigated.
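The position-to-modular transformation underlying such a processor can be illustrated with a plain residue-number-system encode/decode; the moduli below are an illustrative toy set, and the paper's minimally redundant code and neural implementation are not reproduced here:

```python
MODULI = (7, 11, 13, 15)  # pairwise coprime moduli of a toy modular number system

def to_residues(x, moduli=MODULI):
    """Position-to-modular code conversion: x -> (x mod m_1, ..., x mod m_k)."""
    return tuple(x % m for m in moduli)

def from_residues(residues, moduli=MODULI):
    """Recover x from its residues via the Chinese remainder theorem."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(a, -1, m): modular inverse (Python 3.8+)
    return x % M
```

The appeal for hardware is that arithmetic on the residues is carry-free across channels, which is what the finite-ring NNs exploit; the interval-index characteristic is the extra machinery needed for the operations (comparison, overflow detection) that residues alone cannot express.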


Deep learning technology can accurately predict the presence of diseases and pests on agricultural farms. Building on such a machine learning algorithm, we can even accurately predict the likelihood of future disease and pest attacks. When it comes to spraying the correct amount of fertilizer or pesticide to eliminate the host, ordinary human monitoring cannot accurately estimate the total amount and extent of pest and disease attack on a farm. For the specified target area, the artificial perceptron reports the value accurately and gives corrective measures along with the amount of fertilizer or pesticide to be sprayed.
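A single artificial perceptron of the kind mentioned above can be sketched in a few lines; the toy training data stands in for real pest/disease measurements:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron learning rule on linearly separable data."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                 # 0 when correct; ±1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# toy features: (humidity score, leaf-damage score); label 1 = spraying needed
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

A real deployment would of course use a deeper network and continuous outputs (e.g. the quantity to spray), but the decision boundary idea is the same.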


BMC Genomics ◽  
2019 ◽  
Vol 20 (S9) ◽  
Author(s):  
Yang-Ming Lin ◽  
Ching-Tai Chen ◽  
Jia-Ming Chang

Abstract Background: Tandem mass spectrometry allows biologists to identify and quantify protein samples in the form of digested peptide sequences. When performing peptide identification, spectral library search is more sensitive than traditional database search but is limited to peptides that have been previously identified. An accurate tandem mass spectrum prediction tool is thus crucial for expanding the peptide space and increasing the coverage of spectral library search. Results: We propose MS2CNN, a non-linear regression model based on deep convolutional neural networks. The features for our model are amino acid composition, predicted secondary structure, and physical-chemical features such as isoelectric point, aromaticity, helicity, hydrophobicity, and basicity. MS2CNN was trained with five-fold cross-validation on a three-way data split of the large-scale human HCD MS2 dataset of Orbitrap LC-MS/MS runs downloaded from the National Institute of Standards and Technology. It was then evaluated on a publicly available independent test dataset of human HeLa cell lysate from LC-MS experiments. On average, our model shows better cosine similarity and Pearson correlation coefficient (0.690 and 0.632) than MS2PIP (0.647 and 0.601) and is comparable with pDeep (0.692 and 0.642). Notably, for the more complex MS2 spectra of 3+ peptides, MS2CNN is significantly better than both MS2PIP and pDeep. Conclusions: We showed that MS2CNN outperforms MS2PIP for 2+ and 3+ peptides and pDeep for 3+ peptides. This implies that MS2CNN generates highly accurate MS2 spectra for LC-MS/MS experiments on Orbitrap machines, which can be of great help in protein and peptide identification. The results also suggest that incorporating more data into the deep learning model may improve performance.
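The two evaluation metrics used above, cosine similarity and Pearson correlation between predicted and observed spectra, are straightforward to compute; the intensity vectors below are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two intensity vectors (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pearson(a, b):
    """Pearson correlation coefficient between two intensity vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

predicted = [0.0, 0.5, 1.0, 0.2]   # toy fragment-ion intensities
observed  = [0.1, 0.4, 0.9, 0.25]
cos = cosine_similarity(predicted, observed)
r = pearson(predicted, observed)
```

Cosine similarity is insensitive to overall intensity scaling, while Pearson additionally removes a constant offset, which is why spectrum-prediction papers typically report both.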


2021 ◽  
pp. 26-34
Author(s):  
Yuqian Li ◽  
Weiguo Xu

Abstract: Architects usually develop design ideation and conception by hand-sketching. Sketching is a direct expression of the architect's creativity, but 2D sketches are often vague and even ambiguous. In research on sketch-based modeling, making the computer recognize the sketches is the most difficult part. With the development of artificial intelligence, especially deep learning technology, convolutional neural networks (CNNs) have shown clear advantages in feature extraction and matching, and generative adversarial networks (GANs) have made great breakthroughs in architectural generation, which has made image-to-image translation increasingly popular. Since building images gradually develop from the original sketches, in this research we develop a system that maps sketches to images of buildings using the CycleGAN algorithm. The experiment demonstrates that this method can achieve the mapping from sketches to images, and the results show that the sketches' features can be recognized in the process. Through the learning and training process of sketch reconstruction, features of the images are also mapped back to the sketches, which strengthens the architectural relationships in the sketch; the original sketch can thus gradually approach the building image, making sketch-based modeling technology attainable.
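The core of CycleGAN training is the cycle-consistency loss: mapping a sketch to an image and back should reproduce the sketch. A toy numeric version is below; the stand-in generators `G` and `F` are simple invertible functions purely for illustration, whereas real CycleGAN generators are deep networks learned jointly with adversarial losses:

```python
def l1_loss(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def cycle_consistency_loss(x, G, F):
    """||F(G(x)) - x||_1 : sketch -> image -> sketch should recover x."""
    return l1_loss(F(G(x)), x)

# stand-in "generators": G maps sketch-space to image-space, F maps back
G = lambda v: [2.0 * u + 1.0 for u in v]
F = lambda v: [(u - 1.0) / 2.0 for u in v]

sketch = [0.2, 0.5, 0.9]
loss = cycle_consistency_loss(sketch, G, F)   # near 0 because F inverts G
```

In training, this loss is what forces image features to be "mapped back" to the sketches, the effect the abstract describes.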


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Huiying Zhang ◽  
Jinjin Guo ◽  
Guie Sun

High-dimensional deep learning is now applied in all walks of life, and its most representative application is logistics path optimization combining multimedia with high-dimensional deep learning. Using multimedia logistics to explore and operate the best path can help the whole logistics industry innovate and leap forward. How to use high-dimensional deep learning for visual logistics operation management is both an opportunity and a problem currently facing the whole logistics industry. The application of high-dimensional deep learning technology can help logistics enterprises improve their management level, realize intelligent decision-making, and enable accurate prediction. Starting from the total amount of logistics, regional layout, enterprise scale, and a high-dimensional deep learning algorithm, this paper analyzes the current situation of China's logistics development through multi-weight analysis and explores the best path for multimedia logistics.
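Best-path exploration of the kind described ultimately rests on shortest-path computation over a weighted logistics network; the toy graph, node names, and edge weights below are illustrative assumptions, not data from the paper:

```python
import heapq

def best_path(graph, src, dst):
    """Dijkstra's algorithm: cheapest route through a weighted logistics network."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# toy network; each weight could be a multi-weight score fusing cost, time, distance
network = {
    "depot": [("hub_a", 4.0), ("hub_b", 1.0)],
    "hub_b": [("hub_a", 2.0), ("customer", 5.0)],
    "hub_a": [("customer", 1.0)],
}
route, cost = best_path(network, "depot", "customer")
```

A learned model's role in such a pipeline would be to predict the edge weights (e.g. travel time from multimedia data); the routing itself stays classical.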

