Detection of the Progression of Anthesis in Field-Grown Maize Tassels: A Case Study

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Seyed Vahid Mirnezami ◽  
Srikant Srinivasan ◽  
Yan Zhou ◽  
Patrick S. Schnable ◽  
Baskar Ganapathysubramanian

The tassel of the maize plant is responsible for the production and dispersal of pollen for subsequent capture by the silk (stigma) and fertilization of the ovules. Both the amount and timing of pollen shed are physiological traits that impact hybrid seed production. This study describes an automated end-to-end pipeline that combines deep learning and image processing approaches to extract tassel flowering patterns from time-lapse camera images of plants grown under field conditions. Inbred lines from the SAM and NAM diversity panels were grown at the Curtiss farm at Iowa State University, Ames, IA, during the summer of 2016. Using a set of around 500 pole-mounted cameras installed in the field, images of plants were captured every 10 minutes during daylight hours over a three-week period. Extracting data from imaging performed under field conditions is challenging due to variability in weather, illumination, and the morphological diversity of tassels. To address these issues, deep learning algorithms were used for tassel detection, classification, and segmentation. Image processing approaches were then used to crop the main spike of the tassel to track reproductive development. The results demonstrated that deep learning with well-labeled data is a powerful tool for detecting, classifying, and segmenting tassels. The sequential workflow achieved the following metrics: a mAP of 0.91 for tassel detection, an F1 score of 0.93 for tassel classification, and an accuracy of 0.95 for the semantic segmentation that converts RGB tassel images into binary masks. This workflow was used to determine spatiotemporal variations in the thickness of the main spike, which serves as a proxy for anthesis progression.
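Once a binary mask of the main spike is available, the thickness proxy can be estimated by measuring the foreground span in each image row. A minimal NumPy sketch of the idea; the function name and toy mask are illustrative, not taken from the paper:

```python
import numpy as np

def spike_thickness_profile(mask: np.ndarray) -> np.ndarray:
    """Per-row width (in pixels) of the foreground region of a binary mask.

    A crude thickness proxy: for each image row, measure the span between
    the leftmost and rightmost foreground pixels.
    """
    thickness = np.zeros(mask.shape[0], dtype=int)
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size:
            thickness[r] = cols[-1] - cols[0] + 1
    return thickness

# Toy 5x6 binary mask standing in for a segmented main spike
mask = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
], dtype=np.uint8)
profile = spike_thickness_profile(mask)
print(profile)  # [2 4 4 2 0]
```

Tracking such a profile across the time-lapse sequence would yield the spatiotemporal thickness signal the abstract describes.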

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4442
Author(s):  
Zijie Niu ◽  
Juntao Deng ◽  
Xu Zhang ◽  
Jun Zhang ◽  
Shijia Pan ◽  
...  

It is important to obtain accurate information about kiwifruit vines in order to monitor their physiological states and undertake precise orchard operations. However, because vines are small, cling to trellises, and have branches lying on the ground, numerous challenges exist in the acquisition of accurate data for kiwifruit vines. In this paper, a kiwifruit canopy distribution prediction model is proposed on the basis of low-altitude unmanned aerial vehicle (UAV) images and deep learning techniques. First, the locations of the kiwifruit plants and the vine distribution are extracted from high-precision images collected by UAV. Canopy gradient distribution maps with different noise reduction and distribution effects are then generated by modifying the threshold and sampling size using the resampling normalization method. The results showed that the accuracies of vine segmentation using PSPNet, support vector machine, and random forest classification were 71.2%, 85.8%, and 75.26%, respectively. However, the segmentation image obtained using deep semantic segmentation had a higher signal-to-noise ratio and was closer to the real situation. The average intersection over union (IoU) of the deep semantic segmentation was at least 80% in the distribution maps, whereas with traditional machine learning the average IoU was between 20% and 60%. This indicates that the proposed model can quickly extract the vine distribution and plant positions, and is thus able to perform dynamic monitoring of orchards to provide real-time operation guidance.
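The mean-IoU comparison above can be reproduced for any pair of label maps. A small NumPy sketch with toy two-class data (background = 0, vine = 1); classes absent from both maps are skipped:

```python
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> float:
    """Mean intersection over union across classes present in either map."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 2x4 label maps: prediction misses one vine pixel
truth = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1]])
pred  = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1]])
print(round(mean_iou(pred, truth, 2), 3))  # 0.775
```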


2021 ◽  
Vol 26 (1) ◽  
pp. 200-215
Author(s):  
Muhammad Alam ◽  
Jian-Feng Wang ◽  
Cong Guangpei ◽  
LV Yunrong ◽  
Yuanfang Chen

In recent years, the success of deep learning in natural scene image processing has boosted its application in the analysis of remote sensing images. In this paper, we apply convolutional neural networks (CNNs) to the semantic segmentation of remote sensing images. We improve the encoder-decoder CNN structures SegNet (with index pooling) and U-Net to make them suitable for multi-target semantic segmentation of remote sensing images. The results show that the two models have their own advantages and disadvantages in the segmentation of different objects. In addition, we propose an integrated algorithm that combines the two models. Experimental results show that the presented integrated algorithm can exploit the advantages of both models for multi-target segmentation and achieves better segmentation than either model alone.
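The abstract does not specify the integration rule; one common way to combine two segmentation networks is to average their per-pixel class probabilities before taking the argmax. A hedged NumPy sketch, with illustrative weights and toy probabilities rather than the paper's actual scheme:

```python
import numpy as np

def ensemble_segmentation(probs_a: np.ndarray, probs_b: np.ndarray,
                          weight_a: float = 0.5) -> np.ndarray:
    """Fuse two models' per-pixel class probabilities (H x W x C) by a
    weighted average, then assign each pixel its argmax class."""
    fused = weight_a * probs_a + (1.0 - weight_a) * probs_b
    return fused.argmax(axis=-1)

# Toy 1x2 image, 3 classes: the models disagree on the second pixel
probs_segnet = np.array([[[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]]])
probs_unet   = np.array([[[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]]])
labels = ensemble_segmentation(probs_segnet, probs_unet)
print(labels)  # [[0 2]]
```

Per-class weights (rather than one global weight) would let each model dominate on the object types it segments best, which is the behavior the abstract suggests.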


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Dominik Jens Elias Waibel ◽  
Sayedali Shetab Boushehri ◽  
Carsten Marr

Background: Deep learning contributes to uncovering molecular and cellular processes with highly performant algorithms. Convolutional neural networks have become the state-of-the-art tool for accurate and fast image data processing. However, published algorithms mostly solve only one specific problem, and they typically require considerable coding effort and a machine learning background for their application. Results: We have thus developed InstantDL, a deep learning pipeline for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression, and classification. InstantDL enables researchers with a basic computational background to apply debugged and benchmarked state-of-the-art deep learning algorithms to their own data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows the uncertainty of predictions to be assessed. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible and well documented. Conclusions: With InstantDL, we hope to empower biomedical researchers to conduct reproducible image processing with a convenient and easy-to-use pipeline.
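Prediction uncertainty in segmentation pipelines is commonly estimated Monte Carlo style: run several stochastic (e.g. dropout-enabled) forward passes and report the per-pixel spread. The sketch below illustrates the idea with a stand-in stochastic model; it is not InstantDL's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(image: np.ndarray) -> np.ndarray:
    """Stand-in for one dropout-enabled forward pass of a segmentation
    network: per-pixel foreground probabilities with injected noise."""
    logits = image + rng.normal(scale=0.1, size=image.shape)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid

def predict_with_uncertainty(image: np.ndarray, n_passes: int = 20):
    """Monte Carlo estimate: mean prediction and per-pixel std over passes."""
    stack = np.stack([stochastic_forward(image) for _ in range(n_passes)])
    return stack.mean(axis=0), stack.std(axis=0)

# Toy 2x2 "logit image": strongly positive, strongly negative, ambiguous
image = np.array([[2.0, -2.0], [0.0, 3.0]])
mean, std = predict_with_uncertainty(image)
```

High per-pixel std flags regions where a researcher should not trust the segmentation without inspection.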


PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0251899
Author(s):  
Samir M. Badawy ◽  
Abd El-Naser A. Mohamed ◽  
Alaa A. Hefnawy ◽  
Hassan E. Zidan ◽  
Mohammed T. GadAllah ◽  
...  

Computer aided diagnosis (CAD) of biomedical images assists physicians in fast, facilitated tissue characterization. A scheme based on combining fuzzy logic (FL) and deep learning (DL) for automatic semantic segmentation (SS) of tumors in breast ultrasound (BUS) images is proposed. The proposed scheme consists of two steps: the first is FL-based preprocessing, and the second is convolutional neural network (CNN) based SS. Eight well-known CNN-based SS models were utilized in the study. The scheme was evaluated on a dataset of 400 cancerous BUS images and their corresponding 400 ground truth images. The SS process was applied in two modes: batch and one-by-one image processing. Three quantitative performance evaluation metrics were utilized: global accuracy (GA), mean Jaccard index (mean intersection over union (IoU)), and mean BF (Boundary F1) score. In the batch processing mode, the metrics averaged over the eight CNN-based SS models and the 400 cancerous BUS images were: 95.45% GA versus 86.08% without the fuzzy preprocessing step, 78.70% mean IoU versus 49.61%, and 68.08% mean BF score versus 42.63%. Moreover, the resulting segmented images showed tumor regions more accurately than with CNN-based SS alone. In the one-by-one image processing mode, however, there was no qualitative or quantitative enhancement. Hence, the proposed scheme may be helpful in enhancing automatic SS of tumors in BUS images only when batch processing is appropriate; applying it in one-by-one image mode degrades segmentation efficiency. The proposed batch processing scheme may be generalized for enhanced CNN-based SS of a targeted region of interest (ROI) in any batch of digital images. A modified small dataset is available: https://www.kaggle.com/mohammedtgadallah/mt-small-dataset (S1 Data).
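The abstract does not detail the fuzzy preprocessing; a classic choice is the fuzzy contrast intensification (INT) operator, sketched below in NumPy as one plausible example rather than the paper's exact operator:

```python
import numpy as np

def fuzzy_intensify(image: np.ndarray) -> np.ndarray:
    """Classic fuzzy contrast intensification (INT operator).

    Intensities are mapped to fuzzy memberships in [0, 1]; memberships
    below 0.5 are pushed toward 0 and those above 0.5 toward 1, which
    sharpens the boundary between dark and bright regions.
    """
    mu = image.astype(float) / 255.0                   # fuzzification
    out = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return (out * 255).astype(np.uint8)                # defuzzification

# Toy 2x2 grayscale patch
pixels = np.array([[64, 128], [192, 255]], dtype=np.uint8)
enhanced = fuzzy_intensify(pixels)
```

Feeding such contrast-sharpened images to the CNNs is consistent with the reported batch-mode gains, where tumor boundaries become easier to segment.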


2021 ◽  
Vol 13 (24) ◽  
pp. 5100
Author(s):  
Teerapong Panboonyuen ◽  
Kulsawasd Jitkajornwanich ◽  
Siam Lawawirojwong ◽  
Panu Srestasathiern ◽  
Peerapon Vateekul

Transformers have demonstrated remarkable accomplishments in several natural language processing (NLP) tasks as well as image processing tasks. Herein, we present a deep-learning (DL) model that improves the semantic segmentation network in two ways. First, utilizing the pretrained Swin Transformer (SwinTF) under Vision Transformer (ViT) as a backbone, the model transfers the pretrained weights to downstream tasks by attaching task layers to the pretrained encoder. Second, three decoder designs, U-Net, the pyramid scene parsing (PSP) network, and the feature pyramid network (FPN), are applied to our DL network to perform pixel-level segmentation. The results are compared with other state-of-the-art (SOTA) image labeling methods, such as the global convolutional network (GCN) and ViT. Extensive experiments show that our Swin Transformer (SwinTF) with decoder designs reached a new state of the art on the Thailand Isan Landsat-8 corpus (89.8% F1 score) and the Thailand North Landsat-8 corpus (63.12% F1 score), with competitive results on ISPRS Vaihingen. Moreover, both of our best-proposed methods (SwinTF-PSP and SwinTF-FPN) even outperformed SwinTF with supervised pre-training of ViT on ImageNet-1K in the Thailand Landsat-8 and ISPRS Vaihingen corpora.
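The FPN decoder mentioned above fuses backbone features top-down: each coarser stage is upsampled and added to the next finer one. A schematic NumPy sketch of one such step; lateral 1x1 convolutions are omitted and the shapes are toy values, not SwinTF's actual dimensions:

```python
import numpy as np

def upsample2x(feat: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x spatial upsampling of an (H, W, C) feature map."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def fpn_fuse(coarse: np.ndarray, fine: np.ndarray) -> np.ndarray:
    """One FPN top-down step: upsample the coarser backbone feature and
    add it element-wise to the finer one (channels assumed matched)."""
    return upsample2x(coarse) + fine

coarse = np.ones((2, 2, 4))        # e.g. the deepest backbone stage
fine = np.full((4, 4, 4), 0.5)     # the next, higher-resolution stage
fused = fpn_fuse(coarse, fine)
```

Repeating this step across all backbone stages yields the multi-scale pyramid from which per-pixel class scores are predicted.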


Author(s):  
Yinglei Song ◽  
Mohammad N.A. Rana ◽  
Junfeng Qu ◽  
Chunmei Liu

Background: Recently, deep learning based methods have become an important approach to the accurate analysis of medical images. Methods: This paper provides a comprehensive survey of the most important deep learning based methods that have been developed for medical image processing. A number of important contributions made in the last five years are summarized and surveyed. Results: Specifically, deep learning based algorithms developed for image segmentation, image classification, registration, object detection, and other important problems are reviewed. In addition, an overview of challenges that currently exist in the field and potential directions for future research is provided at the end of the survey.


Author(s):  
Priti P. Rege ◽  
Shaheera Akhter

Text separation in document image analysis is an important preprocessing step before executing an optical character recognition (OCR) task, and is necessary to improve the accuracy of an OCR system. Traditionally, separating text from a document has relied on feature extraction processes that require handcrafting of the features. Deep learning-based methods, however, are excellent feature extractors that learn features from the training data automatically. Deep learning gives state-of-the-art results on various computer vision tasks, including image classification, segmentation, image captioning, object detection, and recognition. This chapter compares various traditional as well as deep-learning techniques and uses a semantic segmentation method for separating text from Devanagari document images using U-Net and ResU-Net models. These models are further fine-tuned via transfer learning to obtain more precise results. The final results show that deep learning methods give more accurate results than conventional image processing methods for Devanagari text extraction.


2018 ◽  
Author(s):  
Yuta Tokuoka ◽  
Takahiro G Yamada ◽  
Noriko F Hiroi ◽  
Tetsuya J Kobayashi ◽  
Kazuo Yamagata ◽  
...  

In embryology, image processing methods such as segmentation are applied to acquire quantitative criteria from time-series three-dimensional microscopic images. When used to segment cells or intracellular organelles, several current deep learning techniques outperform traditional image processing algorithms. However, segmentation algorithms still have unsolved problems, especially in bioimage processing. The most critical issue is that the existing deep learning-based algorithms for bioimages can perform only semantic segmentation, which distinguishes whether a pixel is within an object (for example, a nucleus) or not. In this study, we implemented a novel segmentation algorithm, based on deep learning, that segments each nucleus and assigns a different label to each detected object; this kind of segmentation is called instance segmentation. Our instance segmentation algorithm, implemented as a neural network we named QCA Net, substantially outperformed 3D U-Net, the best-performing deep-learning semantic segmentation algorithm. Using QCA Net, we quantified the nuclear number, volume, surface area, and center-of-gravity coordinates during the development of mouse embryos. In particular, QCA Net distinguished the nuclei of embryonic cells from those of polar bodies formed in meiosis. We consider that QCA Net can greatly contribute to bioimage segmentation in embryology by generating quantitative criteria from segmented images.
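The semantic-vs-instance distinction can be illustrated without any network: the simplest way to turn a binary semantic mask into per-object instance labels is connected-component labeling. QCA Net itself is a learned model; the sketch below only demonstrates the output format, with a flood fill over 4-connected neighbours:

```python
import numpy as np
from collections import deque

def label_instances(mask: np.ndarray) -> np.ndarray:
    """Turn a binary semantic mask into an instance map: each 4-connected
    foreground component receives its own integer label (1, 2, ...)."""
    labels = np.zeros_like(mask, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already belongs to a labeled component
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:  # breadth-first flood fill of this component
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels

# Two separate "nuclei" inside one semantic mask
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]])
instances = label_instances(mask)
print(instances.max())  # 2
```

With per-instance labels in hand, quantities like nuclear number, volume, and centroid follow directly from counting and averaging pixels per label.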


Author(s):  
Chandra Pal Kushwah

Image segmentation, for applications such as scene understanding, medical image analysis, robotic vision, video tracking, augmented reality, and image compression, is a key subject of image processing and image evaluation. Semantic segmentation is an integral aspect of image comprehension and is essential for image processing tasks. It is a complex process in computer vision applications, and many techniques have been developed to tackle it, spanning self-driving cars, human-computer interaction, robotics, medical science, agriculture, and so on. In a short period, satellite imagery can provide a great deal of large-scale knowledge about the earth's surface, saving time. With the growth and development of satellite image sensors, the resolution of recorded objects has improved alongside advanced image processing techniques. To improve the performance of deep learning models in a broad range of vision applications, important work has recently been carried out to evaluate deep learning approaches to image segmentation. This paper provides a detailed overview of image segmentation and describes its techniques: region-, edge-, feature-, threshold-, and model-based. It also covers semantic segmentation, satellite imagery, and deep learning techniques such as DNNs, CNNs, RNNs, and RBMs. Among these, the CNN is one of the most efficient deep learning techniques and can be used with the U-Net model in further work.


2020 ◽  
Author(s):  
Cefa Karabağ ◽  
Martin L. Jones ◽  
Christopher J. Peddie ◽  
Anne E. Weston ◽  
Lucy M. Collinson ◽  
...  

In this work, images of a HeLa cancer cell were semantically segmented with one traditional image-processing algorithm and three deep learning architectures: VGG16, ResNet18, and Inception-ResNet-v2. Three hundred slices, each 2000 × 2000 pixels, of a HeLa cell were acquired with serial block face scanning electron microscopy. The deep learning architectures were pre-trained with ImageNet and then fine-tuned with transfer learning. The image-processing algorithm followed a pipeline of several traditional steps such as edge detection, dilation, and morphological operators. The algorithms were compared by measuring pixel-based segmentation accuracy and the Jaccard index against a labelled ground truth. The results indicated a superior performance of the traditional algorithm (accuracy = 99%, Jaccard = 93%) over the deep learning architectures: VGG16 (93%, 90%), ResNet18 (94%, 88%), and Inception-ResNet-v2 (94%, 89%).
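Both evaluation metrics used above are a few lines of NumPy; the toy masks below are illustrative, not the paper's data:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred == truth).mean())

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index (intersection over union) of the foreground class."""
    p, t = pred.astype(bool), truth.astype(bool)
    return float(np.logical_and(p, t).sum() / np.logical_or(p, t).sum())

# Toy 2x4 binary masks: prediction has one false-positive pixel
truth = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
pred  = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])
print(pixel_accuracy(pred, truth), jaccard(pred, truth))  # 0.875 0.8
```

Note that accuracy can stay high even when Jaccard drops, since background pixels dominate; reporting both, as the study does, guards against that bias.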

