Diagnostic method based DL approach to detect the lack of elements from the leaves of diseased plants

Author(s):  
Mohamed Elleuch ◽  
Fatma Marzougui ◽  
Monji Kherallah

A central problem in agriculture is the attack of diseases on plant leaves and the spread of agricultural pests. We therefore present how to treat certain plant diseases and which precautionary measures to adopt, using a modern method to diagnose nutrient deficiency from the leaves of diseased plants. Deep learning is well suited to detecting leaf properties: it is essential for monitoring large crop fields and can automatically detect symptoms as soon as they appear on the leaves. In this paper, we apply Transfer Learning (TL) to the VGG-16 architecture, as well as to other architectures such as ResNet, to detect plants whose leaves suffer from a nutrient deficiency, using an augmented dataset of both healthy and unhealthy leaves. The experimental results show that the proposed model achieves a significant improvement in detection accuracy compared to other reported methods.
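The fine-tuning details are not given in the abstract; as a minimal illustrative sketch, the last stage of such transfer learning — training a small classifier head on frozen backbone features — can be written in plain NumPy, with synthetic vectors standing in for pooled VGG-16 activations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_head(features, labels, lr=0.1, epochs=200):
    """Train a logistic-regression head on frozen backbone features.

    In full transfer learning the convolutional layers of VGG-16 stay
    frozen and only this final classifier is fitted to the new task
    (healthy vs. deficient leaves).  The synthetic features below are
    a stand-in for pooled VGG-16 activations.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(features @ w + b)
        grad = p - labels                          # dL/dz for cross-entropy
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    return (features @ w + b > 0).astype(int)

# Toy "embeddings": two separable clusters standing in for leaf features.
rng = np.random.default_rng(1)
healthy = rng.normal(loc=-1.0, size=(50, 8))
deficient = rng.normal(loc=+1.0, size=(50, 8))
X = np.vstack([healthy, deficient])
y = np.array([0] * 50 + [1] * 50)
w, b = train_head(X, y)
acc = (predict(X, w, b) == y).mean()
```

In practice the frozen features would come from a pretrained VGG-16 or ResNet backbone, and the head would be a dense softmax layer over the deficiency classes.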

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4736
Author(s):  
Sk. Tanzir Mehedi ◽  
Adnan Anwar ◽  
Ziaur Rahman ◽  
Kawsar Ahmed

The Controller Area Network (CAN) bus serves as an important protocol in real-time In-Vehicle Network (IVN) systems thanks to its simple and robust architecture. IVN devices nevertheless remain insecure and vulnerable, because complex, data-intensive architectures greatly increase accessibility to unauthorized networks and the possibility of various types of cyberattacks. The detection of cyberattacks in IVN devices has therefore become a topic of growing interest. With the rapid development of IVNs and evolving threat types, traditional machine learning-based IDSs must be updated to cope with the security requirements of the current environment. The progress of deep learning and deep transfer learning, and their impactful outcomes in several areas, point to them as an effective solution for network intrusion detection. This manuscript proposes a deep transfer learning-based IDS model for IVNs with improved performance compared to several existing models. The unique contributions include effective attribute selection, best suited to identifying malicious CAN messages and accurately detecting normal and abnormal activities; the design of a deep transfer learning-based LeNet model; and an evaluation on real-world data. To this end, an extensive experimental performance evaluation has been conducted. The architecture and the empirical analyses show that the proposed IDS greatly improves detection accuracy over mainstream machine learning, deep learning, and benchmark deep transfer learning models, and demonstrates better performance for real-time IVN security.
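The abstract does not state which attribute-selection criterion the model uses; a purely hypothetical sketch, ranking CAN-message attributes by absolute correlation with the attack label and keeping the top ones, might look like this (the attribute names and toy data are assumptions, not the paper's):

```python
import numpy as np

def select_attributes(X, y, k=2):
    """Rank attributes by absolute correlation with the attack label
    and keep the top k.  Illustrative stand-in only: the paper's
    actual selection criterion is not specified in the abstract."""
    scores = []
    for j in range(X.shape[1]):
        col = X[:, j]
        if col.std() == 0:
            scores.append(0.0)       # constant attribute carries no signal
        else:
            scores.append(abs(np.corrcoef(col, y)[0, 1]))
    return sorted(np.argsort(scores)[::-1][:k].tolist())

# Toy CAN frames: columns = [arbitration ID, DLC, payload byte, timing gap].
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                       # 0 = normal, 1 = attack
X = np.column_stack([
    rng.integers(0, 2048, n).astype(float),     # ID: pure noise here
    np.full(n, 8.0),                            # DLC: constant
    y * 3.0 + rng.normal(size=n),               # payload correlates with attack
    y * 2.0 + rng.normal(size=n),               # timing correlates with attack
])
kept = select_attributes(X, y, k=2)
```

The selected columns would then feed the LeNet-style classifier; any real pipeline would of course score attributes on actual CAN traces rather than this synthetic data.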


Author(s):  
S. Arokiaraj ◽  
Dr. N. Viswanathan

With the advent of the Internet of Things (IoT), Human Activity (HA) recognition has contributed many applications in health care, in diagnosis and in the clinical process. These devices must be aware of human movements to provide better aid in clinical applications as well as in the user's daily activity. With machine and deep learning algorithms, HA recognition systems have also improved significantly in recognition accuracy. However, most existing models need improvement in both accuracy and computational overhead. In this research paper, we propose a BAT-optimized Long Short-Term Memory network (BAT-LSTM) for effective recognition of human activities using real-time IoT systems. The data are collected by invasively implanted IoT devices. The proposed BAT-LSTM then extracts temporal features, which are used to classify the human activities. Nearly 100,000 samples were collected and used to evaluate the proposed model. For validation of the proposed framework, accuracy, precision, recall, specificity, and F1-score are chosen as parameters, and a comparison is made with other state-of-the-art deep learning models. The findings show that the proposed model outperforms the other learning models and is well suited to HA recognition.
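The exact BAT-LSTM configuration is not given in the abstract; the temporal-feature-extraction step can be illustrated with a single NumPy LSTM cell stepped over a toy sensor sequence (the weights here are random, not BAT-optimized, and the 3-axis accelerometer input is an assumption):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step.  The stacked pre-activations z = W@x + U@h + b
    are split into the input, forget, cell-candidate, and output gates."""
    hidden = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:hidden])
    f = sigmoid(z[hidden:2 * hidden])
    g = np.tanh(z[2 * hidden:3 * hidden])
    o = sigmoid(z[3 * hidden:4 * hidden])
    c_new = f * c + i * g                 # cell state carries long-term memory
    h_new = o * np.tanh(c_new)            # hidden state is the temporal feature
    return h_new, c_new

# Toy IoT sequence: 20 timesteps of 3-axis accelerometer readings.
rng = np.random.default_rng(0)
seq = rng.normal(size=(20, 3))
hidden = 8
W = rng.normal(scale=0.1, size=(4 * hidden, 3))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h = np.zeros(hidden)
c = np.zeros(hidden)
for x in seq:
    h, c = lstm_step(x, h, c, W, U, b)
# h now summarizes the sequence and would feed the activity classifier.
```

In the paper, the BAT metaheuristic would tune hyperparameters of a full trained LSTM; this cell only shows the recurrence that produces the temporal features.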


2020 ◽  
Author(s):  
Varan Singh Rohila ◽  
Nitin Gupta ◽  
Amit Kaul ◽  
Deepak Sharma

The ongoing pandemic of COVID-19 has shown the limitations of our current medical institutions. There is a need for research in the field of automated diagnosis to speed up the process while maintaining accuracy and reducing computational requirements. In this work, an automatic diagnosis of COVID-19 infection from CT scans of patients using a deep learning technique is proposed. The proposed model, ReCOV-101, uses full chest CT scans to detect varying degrees of COVID-19 infection and requires less computational power. Moreover, to improve detection accuracy, the CT scans were preprocessed by employing segmentation and interpolation. The proposed scheme is based on the residual network, taking advantage of skip connections to allow the model to go deeper. The model was trained on a single enterprise-level GPU so that it can easily be provided at the edge of the network, reducing the communication with the cloud often required for processing the data. The objective of this work is to demonstrate a less hardware-intensive approach to COVID-19 detection with excellent performance that can be combined with medical equipment and help ease the examination procedure. With the proposed model, an accuracy of 94.9% was achieved.
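The ReCOV-101 architecture is not detailed beyond being residual; the skip connection it relies on can be sketched as a simplified fully connected residual block in NumPy (real blocks would use convolutions over CT volumes):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Simplified residual block: the input skips over two transform
    layers and is added back before the final activation, which is
    what lets very deep networks train.  ReCOV-101-style blocks would
    apply convolutions to CT feature maps; this toy version only
    illustrates the skip connection."""
    out = relu(W1 @ x)          # first transform + activation
    out = W2 @ out              # second transform, no activation yet
    return relu(out + x)        # add the skip path, then activate

rng = np.random.default_rng(0)
x = rng.normal(size=16)
W1 = rng.normal(scale=0.1, size=(16, 16))
W2 = rng.normal(scale=0.1, size=(16, 16))
y = residual_block(x, W1, W2)

# With zero weights the block reduces to relu(x): the identity path
# guarantees information still flows even if the transform learns nothing.
identity = residual_block(x, np.zeros((16, 16)), np.zeros((16, 16)))
```

This identity-path property is the reason residual networks "go deeper" without the degradation that plagues plain stacked layers.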


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 445 ◽  
Author(s):  
Laith Alzubaidi ◽  
Omran Al-Shamma ◽  
Mohammed A. Fadhel ◽  
Laith Farhan ◽  
Jinglan Zhang ◽  
...  

Breast cancer is a significant factor in female mortality, and an early diagnosis reduces the breast cancer death rate. With the help of a computer-aided diagnosis system, efficiency is increased and the cost of cancer diagnosis is reduced. Traditional breast cancer classification techniques are based on handcrafted features, and their performance relies on the chosen features; they are also very sensitive to different sizes and complex shapes. Histopathological breast cancer images, however, are very complex in shape. Deep learning models have become an alternative solution for diagnosis and have overcome the drawbacks of classical classification techniques. Although deep learning has performed well in various tasks of computer vision and pattern recognition, it still faces challenges, one of the main ones being the lack of training data. To address this challenge and optimize performance, we utilized transfer learning, in which a deep learning model is trained on one task and then fine-tuned for another. We employed transfer learning in two ways: training our proposed model first on a same-domain dataset and then on the target dataset, and training it on a different-domain dataset and then on the target dataset. We empirically show that same-domain transfer learning optimized the performance. Our hybrid model of parallel convolutional layers and residual links is used to classify hematoxylin–eosin-stained breast biopsy images into four classes: invasive carcinoma, in-situ carcinoma, benign tumor, and normal tissue. To reduce the effect of overfitting, we augmented the images with different image processing techniques.
The proposed model achieved state-of-the-art performance, outperforming the latest methods with a patch-wise classification accuracy of 90.5% and an image-wise classification accuracy of 97.4% on the validation set. Moreover, we achieved an image-wise classification accuracy of 96.1% on the test set of the ICIAR-2018 microscopy dataset.
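The specific augmentations are not listed in the abstract; a minimal sketch of label-preserving geometric augmentation on a toy patch, assuming flips and 90-degree rotations are among the techniques used:

```python
import numpy as np

def augment(patch):
    """Generate simple geometric variants of an image patch.  Flips
    and right-angle rotations preserve the histology label, so each
    variant is a valid extra training example.  These are assumed,
    illustrative augmentations, not the paper's exact list."""
    return [patch,
            np.flipud(patch),       # vertical flip
            np.fliplr(patch),       # horizontal flip
            np.rot90(patch, 1),     # 90-degree rotation
            np.rot90(patch, 2),     # 180 degrees
            np.rot90(patch, 3)]     # 270 degrees

rng = np.random.default_rng(0)
patch = rng.random((64, 64, 3))     # toy stand-in for an H&E-stained patch
augmented = augment(patch)
```

Each original patch thus yields six training samples, which is one way such pipelines reduce overfitting on small histopathology datasets.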


Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1621
Author(s):  
Riaan Zoetmulder ◽  
Praneeta R. Konduri ◽  
Iris V. Obdeijn ◽  
Efstratios Gavves ◽  
Ivana Išgum ◽  
...  

Final lesion volume (FLV) is a surrogate outcome measure in anterior circulation stroke (ACS). In posterior circulation stroke (PCS), this relation is understudied, plausibly due to a lack of methods that automatically quantify FLV. The applicability of deep learning approaches to PCS is limited by its lower incidence compared to ACS. We evaluated strategies to develop a convolutional neural network (CNN) for PCS lesion segmentation using image data from both ACS and PCS patients. We included follow-up non-contrast computed tomography scans of 1018 patients with ACS and 107 patients with PCS. First, to assess whether an ACS lesion segmentation model generalizes to PCS, a CNN was trained on ACS data (ACS-CNN). Second, to evaluate the performance of including only PCS patients, a CNN was trained on PCS data. Third, to evaluate the performance of combining the datasets, a CNN was trained on both. Finally, to evaluate transfer learning, the ACS-CNN was fine-tuned on PCS patients. The transfer learning strategy outperformed the other strategies in volume agreement, with an intra-class correlation of 0.88 (95% CI: 0.83–0.92) vs. 0.55 to 0.83, and in lesion detection rate, with 87% vs. 41–77% for the other strategies. Hence, transfer learning improved the FLV quantification and detection rate of PCS lesions compared to the other strategies.
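The paper quantifies volume agreement with intra-class correlation; as a simpler illustration of comparing a predicted lesion mask against a reference, the sketch below computes lesion volume and Dice overlap (the voxel size is a hypothetical value, and Dice is a stand-in metric, not the paper's):

```python
import numpy as np

def lesion_volume(mask, voxel_ml=0.001):
    """Lesion volume in mL from a binary mask.  voxel_ml is the
    assumed volume of a single voxel (hypothetical value)."""
    return mask.sum() * voxel_ml

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy follow-up scan: a reference lesion vs. a slightly shifted prediction.
ref = np.zeros((32, 32, 32), dtype=bool)
ref[10:20, 10:20, 10:20] = True            # 1000-voxel cubic "lesion"
pred = np.zeros_like(ref)
pred[11:21, 10:20, 10:20] = True           # shifted by one voxel

vol_ref = lesion_volume(ref)               # 1000 voxels -> 1.0 mL
overlap = dice(ref, pred)                  # 900-voxel intersection -> 0.9
```

Per-patient volumes computed this way from the CNN's masks are what would feed an intra-class-correlation analysis against expert annotations.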


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Mundher Mohammed Taresh ◽  
Ningbo Zhu ◽  
Talal Ahmed Ali Ali ◽  
Asaad Shakir Hameed ◽  
Modhi Lafta Mutar

The novel coronavirus disease 2019 (COVID-19) is a contagious disease that has caused thousands of deaths and infected millions worldwide. Technologies that allow fast, highly accurate detection of COVID-19 infections can therefore offer healthcare professionals much-needed help. This study is aimed at evaluating the effectiveness of state-of-the-art pretrained Convolutional Neural Networks (CNNs) for the automatic diagnosis of COVID-19 from chest X-rays (CXRs). The dataset used in the experiments consists of 1200 CXR images from individuals with COVID-19, 1345 CXR images from individuals with viral pneumonia, and 1341 CXR images from healthy individuals. In this paper, the effectiveness of artificial intelligence (AI) in the rapid and precise identification of COVID-19 from CXR images is explored using different pretrained deep learning algorithms, fine-tuned to maximize detection accuracy, in order to identify the best algorithm. The results show that deep learning with X-ray imaging is useful for capturing critical biological markers associated with COVID-19 infections. VGG16 and MobileNet obtained the highest accuracy of 98.28%. However, VGG16 outperformed all other models in COVID-19 detection, with an accuracy, F1 score, precision, specificity, and sensitivity of 98.72%, 97.59%, 96.43%, 98.70%, and 98.78%, respectively. The outstanding performance of these pretrained models can significantly improve the speed and accuracy of COVID-19 diagnosis. However, a larger dataset of COVID-19 X-ray images is required for more accurate and reliable identification of COVID-19 infections when using deep transfer learning. This would be extremely beneficial in a pandemic, when the disease burden and the need for preventive measures conflict with the currently available resources.
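The reported accuracy, F1 score, precision, specificity, and sensitivity all derive from the binary confusion matrix; a minimal sketch of their computation on toy predictions:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, sensitivity (recall), specificity and F1
    for a binary COVID-19 / non-COVID classification, computed from
    the confusion-matrix counts."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, precision=precision,
                sensitivity=sensitivity, specificity=specificity, f1=f1)

# Toy predictions: 8 of 10 positives and 9 of 10 negatives correct.
y_true = np.array([1] * 10 + [0] * 10)
y_pred = np.array([1] * 8 + [0] * 2 + [0] * 9 + [1] * 1)
m = binary_metrics(y_true, y_pred)
```

For the three-class task in the paper (COVID-19, viral pneumonia, healthy), these metrics would be computed per class in a one-vs-rest fashion.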


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Zhenbo Lu ◽  
Wei Zhou ◽  
Shixiang Zhang ◽  
Chen Wang

Quick and accurate crash detection is important for saving lives and for improved traffic incident management. In this paper, a feature fusion-based deep learning framework was developed for video-based urban traffic crash detection, aiming at a balance between detection speed and accuracy with limited computing resources. In this framework, a residual neural network (ResNet) combined with attention modules was proposed to extract crash-related appearance features from urban traffic videos (i.e., a crash appearance feature extractor), which were further fed to a spatiotemporal feature fusion model, Conv-LSTM (Convolutional Long Short-Term Memory), to simultaneously capture appearance (static) and motion (dynamic) crash features. The proposed model was trained on a set of video clips covering 330 crash and 342 noncrash events. Overall, the proposed model achieved an accuracy of 87.78% on the testing dataset and an acceptable detection speed (FPS > 30 with a GTX 1060). Thanks to the attention module, the proposed model captures the localized appearance features of crashes (e.g., vehicle damage and fallen pedestrians) better than conventional convolutional neural networks. The Conv-LSTM module outperformed a conventional LSTM in capturing motion features of crashes, such as roadway congestion and pedestrians gathering after crashes. Compared to traditional motion-based crash detection models, the proposed model achieved higher detection accuracy. Moreover, it detects crashes much faster than other feature fusion-based models (e.g., C3D). The results show that the proposed model is a promising video-based urban traffic crash detection algorithm that could be used in practice in the future.
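The attention module in the paper is learned; a fixed toy version of spatial attention — scoring each location, softmaxing over the grid, and reweighting the feature map — illustrates the mechanism that lets the model focus on localized crash evidence:

```python
import numpy as np

def spatial_attention(features):
    """Toy spatial attention: squash the channel dimension into a
    per-location score, softmax it over the spatial grid, and reweight
    the features so high-evidence regions (e.g. vehicle damage)
    dominate.  The paper's module is learned; this fixed version only
    illustrates the reweighting mechanism."""
    score = features.mean(axis=-1)          # (H, W) saliency map
    w = np.exp(score - score.max())
    w = w / w.sum()                         # softmax over locations
    return features * w[..., None], w

rng = np.random.default_rng(0)
fmap = np.abs(rng.normal(size=(7, 7, 16)))  # toy ResNet feature map
fmap[3, 3] += 5.0                           # strong local crash evidence
weighted, attn = spatial_attention(fmap)
peak = np.unravel_index(attn.argmax(), attn.shape)
```

The reweighted map `weighted` is what would be stacked over time and fed to the Conv-LSTM for spatiotemporal fusion.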


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Faten Hamed Nahhas ◽  
Helmi Z. M. Shafri ◽  
Maher Ibrahim Sameen ◽  
Biswajeet Pradhan ◽  
Shattri Mansor

This paper reports on a building detection approach based on deep learning (DL) using the fusion of Light Detection and Ranging (LiDAR) data and orthophotos. The proposed method utilized object-based analysis to create objects, a feature-level fusion, an autoencoder-based dimensionality reduction to transform low-level features into compressed features, and a convolutional neural network (CNN) to transform compressed features into high-level features, which were used to classify objects into buildings and background. The proposed architecture was optimized using grid search, and its sensitivity to hyperparameters was analyzed and discussed. The proposed model was evaluated on two datasets selected from an urban area with different building types. Results show that dimensionality reduction by the autoencoder from 21 features to 10 features can improve detection accuracy from 86.06% to 86.19% in the working area and from 77.92% to 78.26% in the testing area. The sensitivity analysis also shows that the selection of the model's hyperparameter values significantly affects detection accuracy. The best hyperparameters of the model are 128 filters in the CNN model, the Adamax optimizer, 10 units in the fully connected layer of the CNN model, a batch size of 8, and a dropout of 0.2. These hyperparameters are critical to improving the generalization capacity of the model. Furthermore, comparison experiments with the support vector machine (SVM) show that the proposed model, with or without dimensionality reduction, outperforms the SVM models in the working area. However, the SVM model achieves better accuracy in the testing area than the proposed model without dimensionality reduction. This study generally shows that the use of an autoencoder in DL models can improve the accuracy of building recognition in fused LiDAR–orthophoto data.
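The paper's autoencoder compresses 21 object features to 10; a minimal linear-autoencoder sketch of that reduction (the linear bottleneck, the training details, and the synthetic data are simplifying assumptions):

```python
import numpy as np

def train_autoencoder(X, code_dim=10, lr=0.01, epochs=500):
    """Train a linear autoencoder that compresses feature vectors to
    code_dim values, mirroring the paper's 21 -> 10 reduction.  A
    linear bottleneck is a simplification; the actual encoder may use
    nonlinear activations."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    We = rng.normal(scale=0.1, size=(d, code_dim))   # encoder weights
    Wd = rng.normal(scale=0.1, size=(code_dim, d))   # decoder weights
    for _ in range(epochs):
        code = X @ We
        recon = code @ Wd
        err = recon - X                              # reconstruction error
        Wd -= lr * code.T @ err / n                  # decoder gradient step
        We -= lr * X.T @ (err @ Wd.T) / n            # encoder gradient step
    return We, Wd

rng = np.random.default_rng(1)
# 200 objects with 21 LiDAR/orthophoto features lying near a
# low-dimensional subspace, so compression loses little information.
basis = rng.normal(size=(5, 21))
X = rng.normal(size=(200, 5)) @ basis
We, Wd = train_autoencoder(X)
codes = X @ We
mse = ((codes @ Wd - X) ** 2).mean()
baseline = (X ** 2).mean()
```

The 10-dimensional `codes` would then replace the raw 21 features as the CNN's input, which is the step the paper credits for the small accuracy gain.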


2019 ◽  
Vol 2019 (1) ◽  
Author(s):  
Aleksei Grigorev ◽  
Zhihong Tian ◽  
Seungmin Rho ◽  
Jianxin Xiong ◽  
Shaohui Liu ◽  
...  

Person re-identification is one of the most significant problems in computer vision and surveillance systems. The recent success of deep convolutional neural networks in image classification has inspired researchers to investigate the application of deep learning to person re-identification. However, the bulk of research on this problem considers classical settings, where pedestrians are captured by static surveillance cameras, although there is a growing demand for analyzing images and videos taken by drones. In this paper, we aim to fill this gap and provide insights into person re-identification from drones. To our knowledge, this is the first attempt to tackle the problem under such constraints. We present a person re-identification dataset, named DRone HIT (DRHIT01), collected using a drone. It contains 101 unique pedestrians, annotated with their identities; each pedestrian has about 500 images. We propose a combination of triplet and large-margin Gaussian mixture (L-GM) loss to tackle the drone-based person re-identification problem. The proposed network, equipped with a multi-branch design, channel group learning, and the combined loss functions, is evaluated on the DRHIT01 dataset. In addition, transfer learning from the most popular person re-identification datasets is evaluated. Experimental results demonstrate the importance of transfer learning and show that the proposed model outperforms the classic deep learning approach.
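The L-GM term is involved to sketch briefly, but the triplet component of the combined loss is simple (the margin value here is an assumption):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Triplet loss on embedding vectors: pull the anchor toward the
    positive (same pedestrian) and push it at least `margin` farther
    from the negative (different pedestrian).  The paper combines this
    with a large-margin Gaussian-mixture loss; only the triplet term
    is sketched here, and the margin value is an assumption."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings: same-identity pair close together, different identity far.
anchor = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])
negative = np.array([-1.0, 0.0, 0.5])

easy = triplet_loss(anchor, positive, negative)   # already satisfied -> 0
hard = triplet_loss(anchor, negative, positive)   # violated -> positive loss
```

During training, only triplets with a positive loss contribute gradient, which is why re-identification pipelines typically mine hard triplets within each batch.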

