Analysis of high-throughput plant image data with the information system IAP

2012 ◽  
Vol 9 (2) ◽  
pp. 16-18 ◽  
Author(s):  
Christian Klukas ◽  
Jean-Michel Pape ◽  
Alexander Entzian

Summary This work presents a sophisticated information system, the Integrated Analysis Platform (IAP), an approach supporting large-scale image analysis for different species and imaging systems. In its current form, IAP supports the investigation of maize, barley and Arabidopsis plants based on images obtained in different spectra. The components of the IAP system described in this work cover the complete end-to-end pipeline: image transfer from the imaging infrastructure, (grid-distributed) image analysis, data management for raw data and analysis results, and the automated generation of experiment reports.
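The end-to-end pipeline described above (transfer, analysis, storage, reporting) can be sketched as a simple stage-chaining function. This is an illustrative reconstruction, not IAP code; all stage functions (`analyze`, `store`, `report`) are hypothetical placeholders.

```python
def run_pipeline(image_paths, analyze, store, report):
    """Chain the stages described above: transfer -> analysis -> storage -> report.

    `analyze`, `store` and `report` are hypothetical stage callbacks standing in
    for the (grid-distributed) image analysis, the data management layer, and
    the automated report generator respectively.
    """
    results = []
    for path in image_paths:
        measurements = analyze(path)   # (possibly grid-distributed) image analysis
        store(path, measurements)      # data management for raw data and results
        results.append(measurements)
    return report(results)             # automated experiment report


# Toy usage with dummy stages
store_log = {}
summary = run_pipeline(
    ["a.png", "b.png"],
    analyze=lambda p: {"area": len(p)},
    store=lambda p, m: store_log.setdefault(p, m),
    report=lambda rs: len(rs),
)
```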

2011 ◽  
Vol 22 (No. 4) ◽  
pp. 133-142 ◽  
Author(s):  
I. Švec ◽  
M. Hrušková

Abstract: Baking quality of flour from six wheat cultivars (harvests 2002 and 2003), belonging to quality classes A and B, was evaluated using the fermented dough test. Analytical traits of kernel and flour showed differences between the classes, which were confirmed by the baking test with the full-bread formula according to the Czech method. In addition to standard methods of bread-parameter description (specific bread volume and bread shape measurements), rheological penetrometer measurements and image analysis were used in an effort to differentiate the wheat samples into the quality classes. The results of the baking test proved significant differences in specific bread volumes – the highest volume in class A was obtained with the cultivar Vinjet and in class B with SG-S1098 – approx. 410 and 420 ml/100 g. Although significant correlations between image analysis data and specific bread volume were proved, no image analysis parameter distinguished the quality classes; only the penetrometric measurements made on bread crumb were suitable for that purpose (r = 0.9083; for α = 0.01). Among the image analysis data, the total cell area of the crumb had the strongest correlation with specific bread volume (r = 0.7840; for α = 0.01).
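The coefficients quoted above (r = 0.9083, r = 0.7840) are standard Pearson product-moment correlations. A minimal sketch of the computation, on made-up illustrative data (the numbers below are hypothetical, not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: total crumb cell area (mm^2) vs. specific bread volume (ml/100 g)
cell_area = [310.0, 340.0, 355.0, 380.0, 395.0, 420.0]
volume = [355.0, 370.0, 390.0, 385.0, 405.0, 415.0]
r = pearson_r(cell_area, volume)
```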


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e4088 ◽  
Author(s):  
Malia A. Gehan ◽  
Noah Fahlgren ◽  
Arash Abbasi ◽  
Jeffrey C. Berry ◽  
Steven T. Callen ◽  
...  

Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
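The multi-plant support mentioned above boils down to segmenting foreground pixels and summarizing them per region of interest. As a generic illustration (plain Python on nested lists, not the PlantCV API), the core idea can be sketched as:

```python
def threshold(image, cutoff):
    """Binarize a grayscale image (list of rows): 1 = plant, 0 = background."""
    return [[1 if px >= cutoff else 0 for px in row] for row in image]

def shoot_area_per_roi(mask, rois):
    """Sum foreground pixels inside each rectangular ROI (row0, row1, col0, col1)."""
    areas = []
    for r0, r1, c0, c1 in rois:
        areas.append(sum(mask[r][c] for r in range(r0, r1) for c in range(c0, c1)))
    return areas

# Hypothetical 4x8 grayscale image holding two plants side by side
image = [
    [10, 200, 210, 10, 10, 10, 220, 10],
    [10, 205, 10, 10, 10, 230, 225, 10],
    [10, 10, 10, 10, 10, 10, 215, 10],
    [10, 10, 10, 10, 10, 10, 10, 10],
]
mask = threshold(image, 128)
areas = shoot_area_per_roi(mask, [(0, 4, 0, 4), (0, 4, 4, 8)])  # left/right halves
```

In PlantCV itself these steps are handled by dedicated modules with color-space conversion and noise filtering; the sketch only shows the threshold-then-measure-per-ROI pattern.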



2009 ◽  
Vol 14 (8) ◽  
pp. 944-955 ◽  
Author(s):  
Daniel F. Gilbert ◽  
Till Meinhof ◽  
Rainer Pepperkok ◽  
Heiko Runz

In this article, the authors describe the image analysis software DetecTiff©, which allows fully automated object recognition and quantification from digital images. The core module of the LabView©-based routine is an algorithm for structure recognition that employs intensity thresholding and size-dependent particle filtering of microscopic images in an iterative manner. Detected structures are converted into templates, which are used for quantitative image analysis. DetecTiff© enables processing of multiple detection channels and provides functions for template organization and fast interpretation of acquired data. The authors demonstrate the applicability of DetecTiff© for automated analysis of cellular uptake of fluorescence-labeled low-density lipoproteins as well as diverse other image data sets from a variety of biomedical applications. Moreover, the performance of DetecTiff© is compared with preexisting image analysis tools. The results show that DetecTiff© can be applied with high consistency for automated quantitative analysis of image data (e.g., from large-scale functional RNAi screening projects). (Journal of Biomolecular Screening 2009:944-955)
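The core recipe named above (iterative intensity thresholding plus size-dependent particle filtering) can be sketched generically. This is an illustrative reconstruction in plain Python, not the LabView© routine; the thresholds and size bounds are hypothetical.

```python
def label_particles(mask):
    """4-connected component labeling via flood fill; returns a list of pixel sets."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    particles = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                stack, comp = [(r, c)], set()
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                particles.append(comp)
    return particles

def detect(image, thresholds, min_size, max_size):
    """Iteratively threshold at decreasing cutoffs, keeping size-filtered particles.

    Pixels claimed by a particle at a brighter cutoff are not re-detected later.
    """
    kept, claimed = [], set()
    for t in thresholds:  # e.g. bright structures first, dimmer ones in later passes
        mask = [[1 if px >= t else 0 for px in row] for row in image]
        for comp in label_particles(mask):
            if min_size <= len(comp) <= max_size and not (comp & claimed):
                kept.append(comp)
                claimed |= comp
    return kept

# Hypothetical toy image: one bright structure, one dimmer, larger structure
image = [
    [0, 250, 250, 0, 0],
    [0, 250, 0, 0, 0],
    [0, 0, 0, 100, 100],
    [0, 0, 0, 100, 100],
]
particles = detect(image, thresholds=[200, 80], min_size=2, max_size=5)
```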


2010 ◽  
Vol 15 (7) ◽  
pp. 892-899 ◽  
Author(s):  
Karol Kozak ◽  
Gabor Bakos ◽  
Antje Hoff ◽  
Emily Bennett ◽  
Dara Dunican ◽  
...  

High-content screening (HCS) technologies are increasingly used in both large-scale drug discovery and basic research programs. These automated imaging and analysis technologies enable the researcher to elucidate the complex biology that underlies the functions of genes, proteins, and other biomolecules at the cellular level. HCS combines the power of automated digital microscopy and advanced software-based image analysis algorithms to detect and quantify biological changes in cells and tissues. This technology is a particularly powerful tool when used to interrogate the cellular effects of exogenously applied agents such as RNAi and/or small molecules. HCS allows for the evaluation of cellular perturbations that occur both at the level of the single cell and within cellular populations. In a multivariate approach, multiple cellular parameters are collected, allowing for more complex analysis. However, in these scenarios, data flow and management still represent substantial bottlenecks in HCS projects. HCS data include a diversity of information from multiple sources such as details pertaining to screening libraries (e.g., siRNA and small molecules), image stacks acquired from automated microscopes (of which there may be up to several million), and the image analysis data. From this, postprocessing algorithms are required to generate statistical and quality-control bioinformatic information and ultimately a final hit list. To accomplish these individual tasks, numerous tools can be used to perform each analytical step; however, management of the entire information flow currently requires the use of commercially available proprietary software, the scope of which is often limited, or bespoke custom scripts. In this article, the authors introduce an open-source research tool that allows for the management of the entire data flow of the HCS data chain, by handling and linking information and providing many powerful postprocessing and visualization tools.
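The final postprocessing step mentioned above — turning per-well readouts into a statistically normalized hit list — is commonly done with plate-wise z-scores. A minimal sketch (a generic illustration with made-up well values, not the authors' tool):

```python
import math

def zscores(values):
    """Plate-wise z-score normalization of a per-well readout."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / sd for v in values]

def hit_list(wells, values, cutoff=2.0):
    """Return wells whose |z| exceeds the cutoff, sorted by effect size."""
    hits = [(w, z) for w, z in zip(wells, zscores(values)) if abs(z) >= cutoff]
    return sorted(hits, key=lambda wz: -abs(wz[1]))

# Hypothetical readout for six wells, one strong outlier
wells = ["A1", "A2", "A3", "A4", "A5", "A6"]
signal = [100.0, 102.0, 98.0, 101.0, 99.0, 160.0]
hits = hit_list(wells, signal)
```

In practice, robust variants (median and MAD instead of mean and standard deviation) are often preferred because strong hits inflate the plate statistics.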


2019 ◽  
Author(s):  
Alan Bauer ◽  
Aaron George Bostrom ◽  
Joshua Ball ◽  
Christopher Applegate ◽  
Tao Cheng ◽  
...  

Abstract Aerial imagery is regularly used by farmers and growers to monitor crops during the growing season. To extract meaningful phenotypic information from large-scale aerial images collected regularly from the field, high-throughput analytic solutions are required, which not only produce high-quality measures of key crop traits, but also support agricultural practitioners in making reliable management decisions about their crops. Here, we report AirSurf-Lettuce, an automated and open-source aerial image analysis platform that combines modern computer vision, up-to-date machine learning, and modular software engineering to measure yield-related phenotypes of millions of lettuces across the field. Utilising ultra-large normalized difference vegetation index (NDVI) images acquired by fixed-wing light aircraft together with a deep-learning classifier trained with over 100,000 labelled lettuce signals, the platform is capable of scoring and categorising iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map lettuce size distribution in the field, from which global positioning system (GPS)-tagged harvest regions can be derived to support growers and farmers with precise harvest strategies and marketability estimates before harvest.
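NDVI is a per-pixel ratio of near-infrared and red reflectance, and the size mapping described above reduces to binning detected heads by area. A minimal sketch (illustrative only, not AirSurf-Lettuce code; the bin thresholds are hypothetical):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def size_category(pixel_count, small=50, large=150):
    """Bin a detected lettuce head by its pixel area (thresholds are hypothetical)."""
    if pixel_count < small:
        return "small"
    if pixel_count < large:
        return "medium"
    return "large"
```

Healthy vegetation reflects strongly in the near-infrared and absorbs red light, so NDVI approaches +1 over dense canopy and 0 or below over bare soil.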


2016 ◽  
Vol 64 (7) ◽  
Author(s):  
Johannes Stegmaier ◽  
Benjamin Schott ◽  
Eduard Hübner ◽  
Manuel Traub ◽  
Maryam Shahid ◽  
...  

Abstract New imaging techniques enable visualizing and analyzing a multitude of unknown phenomena in many areas of science at high spatio-temporal resolution. The rapidly growing amount of image data, however, can hardly be analyzed manually and, thus, future research has to focus on automated image analysis methods that allow one to reliably extract the desired information from large-scale multidimensional image data. Starting with infrastructural challenges, we present new software tools, validation benchmarks and processing strategies that help cope with large-scale image data. The presented methods are illustrated on typical problems observed in developmental biology that can be answered, e.g., by using time-resolved 3D microscopy images.


Author(s):  
Robert W. Mackin

This paper presents two advances towards the automated three-dimensional (3-D) analysis of thick and heavily overlapped regions in cytological preparations such as cervical/vaginal smears. First, a high-speed 3-D brightfield microscope has been developed, allowing the acquisition of image data at speeds approaching 30 optical slices per second. Second, algorithms have been developed to detect and segment nuclei in spite of the extremely high image variability and low contrast typical of such regions. The analysis of such regions is inherently a 3-D problem that cannot be solved reliably with conventional 2-D imaging and image analysis methods. High-speed 3-D imaging of the specimen is accomplished by moving the specimen axially relative to the objective lens of a standard microscope (Zeiss) at a speed of 30 steps per second, where the step size is adjustable from 0.2–5 μm. The specimen is mounted on a computer-controlled, piezoelectric microstage (Burleigh PZS-100, 68 μm displacement). At each step, an optical slice is acquired using a CCD camera (SONY XC-11/71 IP, Dalsa CA-D1-0256, and CA-D2-0512 have been used) connected to a 4-node array processor system based on the Intel i860 chip.

