A high-resolution detecting system based on machine vision for defects on large aperture and super-smooth surface

2015 · Author(s): Yongying Yang, Limin Zhao, Shitong Wang, Pin Cao, Dong Liu, ...

2013 · Author(s): Cynthia S. Froning, Steven Osterman, Eric Burgh, Matthew Beasley, Paul Scowen, ...

2021 · Author(s): Haotian Wang, Chaoming Li, Xinrong Chen, Zhe Huang, Jiayao Pan, ...

1993 · Author(s): David J. Litwiller, Mike Miethig, Brian C. Doody, P. Tom Jenkins

1994 · Author(s): David J. Litwiller, Mike Miethig, Brian C. Doody, P. Tom Jenkins

1993 · Author(s): Peter Seitz, Martin Stalder, Jeffrey M. Raynor, Graham K. Lang, Cor L. Claeys, ...

Crystals · 2018 · Vol 8 (11) · pp. 421 · Author(s): Fu-Ming Tzu, Jung-Hua Chou

Among colours, green is the one to which human vision is most sensitive, so green-colour defects on displays are readily perceived by the photopic eye, whose response peaks at a wavelength of 555 nm. As the market moves toward higher resolution, displays can now contain 10 million pixels, so methods for inspecting the appearance of ultra-high-resolution TFT-LCD panels are important. Machine vision combined with a transmission chromaticity spectrometer is explored to quantify defects such as blackening and whitening. The results clearly reveal the non-uniformity of film-related chromatic variation. The quantitative assessment shows that a just noticeable difference (JND) of 0.001 in the CIE xyY chromaticity coordinates is the measurement sensitivity for the chromatic variables (x, y), where the JND is the perceptible threshold of the colour-difference metric. Moreover, an optical setup with a 198Hg discharge lamp is used to calibrate the accuracy of the spectrometer.
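As a rough illustration of how such a JND threshold could be applied in an inspection pipeline, the sketch below flags a panel region as a chromatic defect when either CIE (x, y) coordinate deviates from a reference point by more than 0.001. The 0.001 threshold comes from the abstract; the reference white point and function name are illustrative assumptions, not part of the original work.

```python
# Minimal sketch: flag a chromatic defect when the CIE (x, y) deviation from a
# reference point exceeds the 0.001 JND reported in the abstract.
# The D65 reference white point and the function name are assumptions for this sketch.

JND_XY = 0.001  # just noticeable difference for the chromatic variables (x, y)

def is_chromatic_defect(measured_xy, reference_xy=(0.3127, 0.3290), jnd=JND_XY):
    """Return True if either chromaticity coordinate deviates by more than the JND."""
    dx = abs(measured_xy[0] - reference_xy[0])
    dy = abs(measured_xy[1] - reference_xy[1])
    return dx > jnd or dy > jnd

# A region whose x coordinate drifts by 0.002 is flagged as a defect,
# while a drift of 0.0005 stays below the perceptible threshold.
print(is_chromatic_defect((0.3147, 0.3290)))  # True
print(is_chromatic_defect((0.3132, 0.3292)))  # False
```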


2020 · Vol 21 (6) · pp. 1366-1384 · Author(s): João Valente, Bilal Sari, Lammert Kooistra, Henk Kramer, Sander Mücher

Knowing before harvest how many plants have emerged and how they are growing is key to optimizing labour and using resources efficiently. Unmanned aerial vehicles (UAV) are a useful tool for fast and cost-efficient data acquisition. However, the imagery needs to be converted into operational spatial products that crop producers can use to gain insight into the spatial distribution of the number of plants in the field. In this research, an automated method for counting plants from very high-resolution UAV imagery is addressed. The proposed method uses machine vision (the Excess Green Index and Otsu's method) and transfer learning with convolutional neural networks to identify and count plants. The integrated methods were implemented to count 10-week-old spinach plants in an experimental field with a surface area of 3.2 ha. Validation data of plant counts were available for 1/8 of the surface area. The results showed that the proposed methodology can count plants with an accuracy of 95% at a spatial resolution of 8 mm/pixel in an area of up to 172 m2. Moreover, when the spatial resolution is decreased by 50%, the maximum additional counting error is 0.7%. Finally, a total of 170 000 plants in an area of 3.5 ha was computed with an error of 42.5%. The study shows that it is feasible to count individual plants using UAV-based off-the-shelf products and that machine vision/learning algorithms make it possible to translate image data into practical information for non-experts.
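The vegetation-segmentation stage named in the abstract (Excess Green Index followed by Otsu's method) can be sketched as below. This is a minimal illustration under assumptions, not the authors' implementation: OpenCV is assumed as the imaging library, the minimum blob area and file name are hypothetical, and a connected-component count stands in for the full pipeline; the CNN transfer-learning stage of the paper is omitted.

```python
# Minimal sketch: Excess Green Index (ExG) + Otsu's threshold to segment vegetation
# in a UAV image tile, then a connected-component count as a rough plant count.
import cv2
import numpy as np

def count_plants_exg_otsu(bgr_image, min_area_px=50):
    """Segment vegetation with ExG + Otsu and count connected components."""
    img = bgr_image.astype(np.float32)
    b, g, r = cv2.split(img)

    # Chromatic coordinates, then Excess Green Index: ExG = 2g - r - b.
    total = b + g + r + 1e-6
    rn, gn, bn = r / total, g / total, b / total
    exg = 2.0 * gn - rn - bn

    # Rescale ExG to 8-bit so OpenCV's Otsu thresholding can be applied.
    exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Count connected components, skipping the background label and tiny blobs.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]
    return int(np.sum(areas >= min_area_px))

# Usage (hypothetical file name): an orthomosaic tile from a UAV survey.
tile = cv2.imread("uav_tile.png")
if tile is not None:
    print("estimated plant count:", count_plants_exg_otsu(tile))
```

In practice the minimum blob area would be tuned to the ground sampling distance (8 mm/pixel in the study) and the expected plant size, since Otsu's threshold alone cannot separate touching plants.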

