IoT-Blockchain Enabled Optimized Provenance System for Food Industry 4.0 Using Advanced Deep Learning

Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2990 ◽  
Author(s):  
Prince Waqas Khan ◽  
Yung-Cheol Byun ◽  
Namje Park

Agriculture and livestock play a vital role in social and economic stability. Food safety and transparency in the food supply chain are significant concerns for many people. The Internet of Things (IoT) and blockchain are gaining attention due to their success in versatile applications. They generate a large amount of data that can be optimized and used efficiently by advanced deep learning (ADL) techniques. From the viewpoint of supply chain management, such innovations are significant for processes such as broadened visibility, provenance, digitalization, disintermediation, and smart contracts. This article takes the secure IoT–blockchain data of Industry 4.0 in the food sector as its research object. Using ADL techniques, we propose a hybrid model based on recurrent neural networks (RNN): long short-term memory (LSTM) and gated recurrent units (GRU) serve as the prediction model, and genetic algorithm (GA) optimization is used jointly to tune the parameters of the hybrid model. We select the optimal training parameters by GA and finally cascade the LSTM with the GRU. We evaluated the performance of the proposed system for different numbers of users. This paper aims to help supply chain practitioners take advantage of state-of-the-art technologies and to help the industry make policies according to the predictions of ADL.
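The cascaded LSTM–GRU forecaster described above can be sketched in a few lines of Keras. This is a minimal sketch, not the authors' implementation: the layer sizes, learning rate, and input shape stand in for the GA-selected hyperparameters, and the data are synthetic placeholders for the IoT–blockchain records.

```python
# Hypothetical sketch of a cascaded LSTM -> GRU forecaster whose
# hyperparameters would be chosen by a genetic algorithm (here fixed
# to example values for illustration).
import numpy as np
import tensorflow as tf

def build_hybrid(timesteps, n_features, lstm_units=64, gru_units=32, lr=1e-3):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, n_features)),
        tf.keras.layers.LSTM(lstm_units, return_sequences=True),  # first stage
        tf.keras.layers.GRU(gru_units),                            # cascaded second stage
        tf.keras.layers.Dense(1),                                  # one-step-ahead prediction
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

# Toy data standing in for the IoT-blockchain transaction records.
X = np.random.rand(256, 24, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model = build_hybrid(timesteps=24, n_features=8)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```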

Energies ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 3004
Author(s):  
Khadijeh Alibabaei ◽  
Pedro D. Gaspar ◽  
Tânia M. Lima

Deep learning has already been used successfully in the development of decision support systems in various domains, so there is an incentive to apply it to other important domains such as agriculture. Fertilizers, electricity, chemicals, human labor, and water are the components of total energy consumption in agriculture. Yield estimates are critical for food security, crop management, irrigation scheduling, and estimating labor requirements for harvesting and storage; estimating product yield can therefore reduce energy consumption. Two deep learning models, Long Short-Term Memory and Gated Recurrent Units, have been developed for the analysis of time-series data such as agricultural datasets. In this paper, the capabilities of these models and their extensions, called Bidirectional Long Short-Term Memory and Bidirectional Gated Recurrent Units, to predict end-of-season yields are investigated. The models use historical data, including climate data, irrigation scheduling, and soil water content, to estimate end-of-season yield. The application of this technique was tested for tomato and potato yields at a site in Portugal. The Bidirectional Long Short-Term Memory outperformed the Gated Recurrent Units network, the Long Short-Term Memory network, and the Bidirectional Gated Recurrent Units network on the validation dataset. The model was able to capture the nonlinear relationship between irrigation amount, climate data, and soil water content and predicted yield with an MSE of 0.017 to 0.039. The performance of the Bidirectional Long Short-Term Memory on the test set was compared with the most commonly used deep learning method, the Convolutional Neural Network, and with machine learning methods including a Multi-Layer Perceptron model and Random Forest Regression. The Bidirectional Long Short-Term Memory outperformed the other models, with an R2 score between 0.97 and 0.99. The results show that analyzing agricultural data with the Long Short-Term Memory model improves model accuracy. The Convolutional Neural Network model achieved the second-best performance. Deep learning models therefore show a remarkable ability to predict yield at the end of the season.
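A bidirectional LSTM regressor of the kind described can be sketched as follows; the season length, the five input features, and the layer sizes are assumptions, and the training data are synthetic placeholders for the climate, irrigation, and soil-water series.

```python
# Minimal sketch of a bidirectional LSTM regressor for end-of-season
# yield from daily climate, irrigation, and soil-water-content series.
# Sequence length, feature count, and layer sizes are assumptions.
import numpy as np
import tensorflow as tf

season_days, n_features = 120, 5   # e.g. Tmax, Tmin, rain, irrigation, soil water
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(season_days, n_features)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),      # end-of-season yield (normalized)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

X = np.random.rand(100, season_days, n_features).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model.fit(X, y, epochs=2, verbose=0)
```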


2021 ◽  
Vol 11 (13) ◽  
pp. 5853
Author(s):  
Hyesook Son ◽  
Seokyeon Kim ◽  
Hanbyul Yeon ◽  
Yejin Kim ◽  
Yun Jang ◽  
...  

A deep learning model delivers different predictions depending on its input; in particular, the characteristics of the input can affect the output. When predicting data that are measured by sensors in multiple locations, it is necessary to train a deep learning model with the spatiotemporal characteristics of the data. In addition, since not all data measured together increase the accuracy of the deep learning model, the correlation characteristics between data features need to be exploited. However, it is difficult to interpret the deep learning output with respect to the input characteristics. It is therefore necessary to analyze how the input characteristics affect prediction results in order to interpret deep learning models. In this paper, we propose a visualization system for analyzing deep learning models with air pollution data. The proposed system visualizes the predictions according to the input characteristics. The input characteristics include space-time and data features, and we apply temporal prediction networks, including gated recurrent units (GRU) and long short-term memory (LSTM), and spatiotemporal prediction networks (convolutional LSTM) as deep learning models. We interpret the output according to the characteristics of the input to show the effectiveness of the system.
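A convolutional LSTM of the type applied in the system can be sketched for a gridded pollutant field; the grid size, history length, and filter counts below are assumptions, and the data are synthetic.

```python
# Illustrative ConvLSTM2D network for gridded air-pollution prediction;
# the grid size, history length, and filter counts are assumptions.
import numpy as np
import tensorflow as tf

frames, height, width, channels = 8, 16, 16, 1   # past frames of a pollutant grid
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(frames, height, width, channels)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                               return_sequences=False),
    tf.keras.layers.Conv2D(1, kernel_size=1),     # next-step concentration map
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(32, frames, height, width, channels).astype("float32")
y = np.random.rand(32, height, width, 1).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
```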


Author(s):  
Quanzhong Liu ◽  
Jinxiang Chen ◽  
Yanze Wang ◽  
Shuqin Li ◽  
Cangzhi Jia ◽  
...  

DNA N4-methylcytosine (4mC) is an important epigenetic modification that plays a vital role in regulating DNA replication and expression. However, detecting 4mC sites through experimental methods is challenging, time-consuming, and costly. Computational tools that can identify 4mC sites would therefore be very useful for understanding the mechanism of this important type of DNA modification. Several machine learning-based 4mC predictors have been proposed in the past three years, but their performance remains unsatisfactory. Deep learning is a promising technique for developing more accurate 4mC site predictions. In this work, we propose a deep learning-based approach, called DeepTorrent, for improved prediction of 4mC sites from DNA sequences. It combines four different feature encoding schemes to encode raw DNA sequences and employs multi-layer convolutional neural networks with an inception module, integrated with bidirectional long short-term memory, to effectively learn higher-order feature representations. Dimension reduction and concatenation of the feature maps from filters of different sizes are then applied in the inception module. In addition, an attention mechanism and transfer learning techniques are employed to train a robust predictor. Extensive benchmarking experiments demonstrate that DeepTorrent significantly improves the performance of 4mC site prediction compared with several state-of-the-art methods.
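A simplified sketch of the kind of inception-style CNN plus bidirectional LSTM described above follows; the 41-nt window, the single one-hot encoding, and the filter counts are assumptions, and the published DeepTorrent additionally combines four encoding schemes, an attention mechanism, and transfer learning.

```python
# Simplified sketch: inception-style 1D CNN + BiLSTM classifier for 4mC
# sites on one-hot encoded DNA windows (window length and filter counts
# are assumptions, not the authors' DeepTorrent configuration).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

seq_len = 41
inputs = layers.Input(shape=(seq_len, 4))                 # one-hot A/C/G/T
branches = [layers.Conv1D(32, k, padding="same", activation="relu")(inputs)
            for k in (1, 3, 5)]                           # parallel filter sizes
x = layers.Concatenate()(branches)                        # concatenated feature maps
x = layers.Conv1D(32, 1, activation="relu")(x)            # dimension reduction
x = layers.Bidirectional(layers.LSTM(32))(x)
outputs = layers.Dense(1, activation="sigmoid")(x)        # P(4mC site)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Toy training data: random sequences with random labels.
seqs = np.random.randint(0, 4, size=(64, seq_len))
X = tf.one_hot(seqs, depth=4).numpy()
y = np.random.randint(0, 2, size=(64, 1)).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
```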


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4884
Author(s):  
Danish Javeed ◽  
Tianhan Gao ◽  
Muhammad Taimoor Khan ◽  
Ijaz Ahmad

The Internet of Things (IoT) has emerged as a new technological world connecting billions of devices. Despite providing several benefits, the heterogeneous nature and the extensive connectivity of the devices make them a target of different cyberattacks that result in data breaches and financial loss, so there is a pressing need to secure the IoT environment from such attacks. In this paper, an SDN-enabled deep-learning-driven framework is proposed for threat detection in an IoT environment. The state-of-the-art Cuda deep neural network gated recurrent unit (Cu-DNNGRU) and Cuda bidirectional long short-term memory (Cu-BLSTM) classifiers are adopted for effective threat detection. We performed 10-fold cross-validation to show the unbiasedness of the results. The up-to-date, publicly available CICIDS2018 dataset is used to train our hybrid model. The achieved accuracy of the proposed scheme is 99.87%, with a recall of 99.96%. Furthermore, we compare the proposed hybrid model with the Cuda gated recurrent unit, long short-term memory (Cu-GRULSTM) and the Cuda deep neural network, long short-term memory (Cu-DNNLSTM), as well as with existing benchmark classifiers. Our proposed mechanism achieves impressive results in terms of accuracy, F1-score, precision, speed efficiency, and other evaluation metrics.
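A hedged sketch of such a GRU plus bidirectional LSTM hybrid classifier is given below. In current tf.keras the standard GRU and LSTM layers fall back to the cuDNN kernels on a GPU, which is how the Cu-DNNGRU/Cu-BLSTM layers are approximated here; the 78 flow features and 15 classes are assumptions about the CICIDS2018 preprocessing.

```python
# Hedged sketch of a GRU + bidirectional LSTM hybrid intrusion classifier.
# tf.keras runs these layers on cuDNN kernels automatically when a GPU is
# available, mirroring the Cu-DNNGRU / Cu-BLSTM layers named in the abstract.
# The 78 flow features and 15 classes are assumptions for CICIDS2018.
import numpy as np
import tensorflow as tf

timesteps, n_features, n_classes = 1, 78, 15
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.GRU(64, return_sequences=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(128, timesteps, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=(128,))
model.fit(X, y, epochs=1, verbose=0)
```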


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 924
Author(s):  
Moslem Imani ◽  
Hoda Fakour ◽  
Wen-Hau Lan ◽  
Huan-Chin Kao ◽  
Chi Ming Lee ◽  
...  

Despite the great significance of precise wind speed forecasting for the development of new and clean energy technologies and for stable grid operation, the stochasticity of wind speed makes prediction a complex and challenging task. Accurate short-term wind power forecasting is crucial for improving the security and economic performance of power grids. In this paper, a deep learning model, Long Short-Term Memory (LSTM), is proposed for wind speed prediction. Because wind speed time series are nonlinear and stochastic, the mutual information (MI) approach was used to find the best subset of the data by maximizing the joint MI between the subset and the target output. To enhance accuracy and reduce input characteristics and data uncertainties, rough set and interval type-2 fuzzy set theory are combined in the proposed deep learning model. Wind speed data from an international airport station in Bandar Abbas City on the southern coast of Iran were used as the original input dataset for the optimized deep learning model. Based on the statistical results, the rough set LSTM (RST-LSTM) model showed better prediction accuracy than the fuzzy and original LSTM models, as well as traditional neural networks, with the lowest error for the training and testing datasets over different time horizons. The suggested model can support optimization of the control approach and smooth operation of the power system. The results confirm the superior capabilities of deep learning techniques for wind speed forecasting, which could also inspire new applications in meteorological assessment.
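The mutual-information input selection followed by an LSTM forecaster can be sketched as follows; the candidate lag matrix is synthetic, the number of retained inputs is an assumption, and the rough set and type-2 fuzzy components of the paper are not reproduced.

```python
# Sketch of mutual-information input selection followed by an LSTM wind
# speed forecaster; lag depth and unit counts are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
candidates = rng.random((1000, 12))          # candidate lagged inputs
target = candidates[:, :3].sum(axis=1) + 0.1 * rng.random(1000)

mi = mutual_info_regression(candidates, target)
best = np.argsort(mi)[-4:]                   # keep the most informative lags
X = candidates[:, best].reshape(-1, 1, len(best)).astype("float32")
y = target.astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, len(best))),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)
```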


Author(s):  
Claire Brenner ◽  
Jonathan Frame ◽  
Grey Nearing ◽  
Karsten Schulz

Evaporation is a decisive process in the global water, energy, and carbon cycles. Data on the spatio-temporal dynamics of evaporation are therefore of great importance for climate modeling, for assessing the impacts of the climate crisis, and not least for agriculture. In this work, we apply two machine and deep learning methods to predict evaporation at daily and half-hourly resolution for sites of the FLUXNET dataset. The Long Short-Term Memory network is a recurrent neural network that explicitly accounts for memory effects and analyzes time series of the input variables (analogous to physically based water balance models). It is contrasted with models based on XGBoost, a decision tree method that in this case only receives information for the time step to be predicted (analogous to physically based energy balance models). This comparison of the two modeling approaches is intended to examine to what extent accounting for memory effects benefits the modeling. The analyses show that both modeling approaches achieve good results and exhibit higher model skill than an evaluated reference dataset. Comparing the two models, the LSTM shows, on average over all 153 investigated sites, better agreement with the observations. However, the quality of the evaporation prediction depends on the vegetation class of the site; in particular, warmer, dry sites with short vegetation are better represented by the LSTM, whereas, for example, in wetlands XGBoost yields better agreement with the observations. The relevance of memory effects therefore appears to vary between ecosystems and sites. The presented results underline the potential of artificial intelligence methods for describing evaporation.
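The contrast between the two modeling setups can be sketched as follows: an LSTM that receives a window of past forcing versus XGBoost that only receives the current time step. The window length, feature count, hyperparameters, and synthetic target are assumptions, not the study's configuration.

```python
# Sketch contrasting a memory-aware LSTM (full input window per sample)
# with a memory-free XGBoost model (current time step only).
import numpy as np
import tensorflow as tf
from xgboost import XGBRegressor

window, n_features = 30, 6                            # e.g. radiation, temperature, ...
rng = np.random.default_rng(1)
X_seq = rng.random((500, window, n_features)).astype("float32")
y = X_seq[:, -1, :2].sum(axis=1).astype("float32")    # synthetic evaporation target

# Memory-aware model: sees the whole forcing window.
lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X_seq, y, epochs=2, verbose=0)

# Memory-free model: sees only the current time step.
xgb = XGBRegressor(n_estimators=200, max_depth=4)
xgb.fit(X_seq[:, -1, :], y)
```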


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 495
Author(s):  
Imayanmosha Wahlang ◽  
Arnab Kumar Maji ◽  
Goutam Saha ◽  
Prasun Chakrabarti ◽  
Michal Jasinski ◽  
...  

This article experiments with deep learning methodologies in echocardiography (echo), a promising and vigorously researched imaging technique. This paper involves two different kinds of classification in echo. Firstly, classification into normal (absence of abnormalities) or abnormal (presence of abnormalities) is performed using 2D echo images, 3D Doppler images, and videographic images. Secondly, different types of regurgitation, namely Mitral Regurgitation (MR), Aortic Regurgitation (AR), Tricuspid Regurgitation (TR), and a combination of the three, are classified using videographic echo images. Two deep learning methodologies are used for these purposes: a Recurrent Neural Network (RNN)-based methodology (Long Short-Term Memory (LSTM)) and an autoencoder-based methodology (Variational AutoEncoder (VAE)). The use of videographic images distinguishes this work from existing work using SVM (Support Vector Machine), and the application of deep learning methodologies is among the first in this particular field. It was found that the deep learning methodologies perform better than the SVM methodology in normal/abnormal classification. Overall, the VAE performs better on 2D and 3D Doppler images (static images), while the LSTM performs better in the case of videographic images.
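A frame-sequence classifier of the LSTM type used for the videographic images can be sketched as below; the clip length, frame size, per-frame CNN, and four-class output are assumptions, and the data are synthetic.

```python
# Hedged sketch of a clip classifier for videographic echo: a small
# per-frame CNN wrapped in TimeDistributed feeds an LSTM that labels the
# clip (e.g. MR/AR/TR/combined). Frame size, clip length, and layer sizes
# are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

frames, h, w, c, n_classes = 16, 64, 64, 1, 4
model = tf.keras.Sequential([
    layers.Input(shape=(frames, h, w, c)),
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(64),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(8, frames, h, w, c).astype("float32")
y = np.random.randint(0, n_classes, size=(8,))
model.fit(X, y, epochs=1, verbose=0)
```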

