On-Line Flank Wear Estimation Using an Adaptive Observer and Computer Vision, Part 2: Experiment

1993 ◽  
Vol 115 (1) ◽  
pp. 37-43 ◽  
Author(s):  
Jong-Jin Park ◽  
A. Galip Ulsoy

An on-line flank wear estimation system, using the integrated method presented in Part 1 of the paper, is implemented in a laboratory environment, and its performance is evaluated through turning experiments. A computer vision system is developed using an image processing algorithm, a commercially available computer vision system, and a microscopic lens. The developed algorithm is based on the difference between the intensity of the light reflected from the flank wear surface and that from the background. This difference is significant, and an appropriate selection of the intensity threshold level yields an acceptable binary image of the flank wear. This image is used by the vision computer to calculate the flank wear. The flank wear model parameters that need to be known a priori are determined through several preliminary experiments or from data available in the literature. Cutting conditions are selected to satisfy the assumptions made in the design of the adaptive observer presented in Part 1; the resulting conditions are typical of finishing cutting operations. The integrated method is tested in turning experiments under both constant and time-varying cutting conditions, and yields very accurate on-line estimation of flank wear development.
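
A minimal sketch of the intensity-thresholding idea behind the vision algorithm is given below; it is not the authors' implementation, and the threshold value, the mm-per-pixel calibration factor, and the image file name are illustrative assumptions.

```python
# Sketch only: fixed-threshold segmentation of a bright wear land, then a
# pixel-to-mm conversion. Values below are assumptions, not the paper's.
import cv2

MM_PER_PIXEL = 0.005   # assumed microscope calibration
THRESHOLD = 180        # assumed gray level separating wear land from background

def estimate_flank_wear(image_path: str) -> float:
    """Return a rough estimate of the maximum flank wear width (mm)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)

    # The worn flank reflects much more light than the background, so a fixed
    # threshold produces an acceptable binary image of the wear land.
    _, binary = cv2.threshold(gray, THRESHOLD, 255, cv2.THRESH_BINARY)

    # Keep the largest bright region and take its extent perpendicular to the
    # cutting edge (here: bounding-box height) as the wear-land width in pixels.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    largest = max(contours, key=cv2.contourArea)
    _, _, _, h = cv2.boundingRect(largest)
    return h * MM_PER_PIXEL

# Example: print(estimate_flank_wear("flank_image.png"))
```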

On-Line Flank Wear Estimation Using an Adaptive Observer and Computer Vision, Part 1: Theory

1993 ◽
Vol 115 (1) ◽  
pp. 30-36 ◽  
Author(s):  
Jong-Jin Park ◽  
A. Galip Ulsoy

The problem of developing a reliable on-line flank wear measurement system is treated by integrating an adaptive observer, based on cutting force measurement, with computer vision. In this part of the paper, the theoretical basis and design of the integrated method are presented; implementation issues are discussed in Part 2 of the paper along with experimental results. The flank wear is modeled as the sum of two unmeasurable states in a nonlinear dynamic system realized in state-space form. The inputs to the system are the feed, the cutting speed, and the depth of cut (i.e., the cutting conditions), and the output is the cutting force. Based on a simplified version of this flank wear model, an adaptive observer is designed by combining an observer with a recursive least squares parameter estimation algorithm. The adaptive observer indirectly measures the flank wear and simultaneously estimates one unknown model parameter, using measurements of the cutting force and the cutting conditions. It is integrated with a computer vision system that can directly measure the flank wear with good accuracy; in the integrated system, the adaptive observer is intermittently calibrated using direct flank wear measurements via computer vision. In this part of the paper, the integrated method is presented without reference to any specific computer vision technique; a computer vision technique is developed in Part 2 of the paper for the experimental evaluation of the proposed method. The fundamental idea behind the proposed integrated method is that a less accurate indirect flank wear measuring method (the adaptive observer) is intermittently calibrated by a more accurate direct measurement method (computer vision).
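
As an illustration of the recursive least squares component of such an adaptive scheme, a minimal sketch follows. The scalar regression model used here is illustrative only and is not the authors' flank wear or cutting force model.

```python
# Sketch of a standard RLS parameter update, of the kind combined with a state
# observer in adaptive estimation schemes. Model and numbers are illustrative.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_params: int, forgetting: float = 1.0):
        self.theta = np.zeros(n_params)      # current parameter estimate
        self.P = np.eye(n_params) * 1e3      # estimate covariance
        self.lam = forgetting                # forgetting factor (1.0 = none)

    def update(self, phi: np.ndarray, y: float) -> np.ndarray:
        """Incorporate one measurement y with regressor phi."""
        phi = phi.reshape(-1)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)  # correct estimate
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# Example: estimate a single coefficient of y = theta * phi from noisy data.
rls = RecursiveLeastSquares(n_params=1)
true_theta = 2.5
rng = np.random.default_rng(0)
for _ in range(100):
    phi = rng.uniform(0.5, 1.5, size=1)       # e.g. a cutting-condition term
    y = true_theta * phi[0] + rng.normal(0, 0.05)
    estimate = rls.update(phi, y)
print(estimate)  # approaches [2.5]
```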


2021 ◽  
pp. 101474
Author(s):  
Innocent Nyalala ◽  
Cedric Okinda ◽  
Nelson Makange ◽  
Tchalla Korohou ◽  
Qi Chao ◽  
...  

Author(s):  
Joy Iong-Zong Chen ◽  
Jen-Ting Chang

This paper proposes a 3D-printed robotic arm that combines a computer vision system with a tracking algorithm. The design of an intelligent vehicle system with integrated electromechanics, intended for operations in various fields, is also presented. The main purpose of this work is to avoid the complicated process of traditional manual adjustment or teaching. The robotic arm is expected to grasp the target automatically, classify it, and place it in a specified area, and to achieve accurate classification by training on the target's distinguishing characteristics. The arm's motion is corrected through a real-time image-feedback control system. In the experiments, the computer vision system assists the robotic arm in detecting the color and position of the target. By adding color features for algorithm training, and through human-machine collaboration, the results show that target tracking accuracy depends on two parameters, the object location and the illumination direction of the light source, with accuracy ranging from 75.2% to 89.0%.
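
A minimal sketch of color-based target detection of the kind used to guide such an arm is shown below; it is not the authors' implementation, and the HSV range (roughly red), the use of OpenCV, and the feedback loop are illustrative assumptions.

```python
# Sketch only: segment a colored target in HSV space and return its centroid,
# which a controller could use as the visual feedback signal for the arm.
import cv2
import numpy as np

LOWER_HSV = np.array([0, 120, 70])     # assumed lower bound for a red target
UPPER_HSV = np.array([10, 255, 255])   # assumed upper bound

def find_target(frame: np.ndarray):
    """Return the (x, y) pixel centroid of the largest matching blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# Example loop: pass each camera frame to find_target() and feed the centroid
# back to the arm controller to correct its motion in real time.
```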


2020 ◽  
Vol 286 ◽  
pp. 110102 ◽  
Author(s):  
Shuxiang Fan ◽  
Jiangbo Li ◽  
Yunhe Zhang ◽  
Xi Tian ◽  
Qingyan Wang ◽  
...  

2021 ◽  
pp. 105084
Author(s):  
Bojana Milovanovic ◽  
Ilija Djekic ◽  
Jelena Miocinovic ◽  
Bartosz G. Solowiej ◽  
Jose M. Lorenzo ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
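
For illustration, a minimal sketch of an eight-class convolutional classifier of the general kind described above follows; it does not reproduce the MCC pipeline, and the input size, layer widths, and framework choice are assumptions.

```python
# Sketch only: a small CNN that maps RGB crops of detected insects to one of
# eight species classes. Architecture details are illustrative.
import torch
import torch.nn as nn

class SmallMothCNN(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a batch of 128x128 RGB crops of detected insects.
model = SmallMothCNN()
logits = model(torch.randn(4, 3, 128, 128))
predicted_species = logits.argmax(dim=1)
print(predicted_species.shape)  # torch.Size([4])
```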


Metals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 387
Author(s):  
Martin Choux ◽  
Eduard Marti Bigorra ◽  
Ilya Tyapin

The rapidly growing deployment of Electric Vehicles (EV) places strong demands not only on the development of Lithium-Ion Batteries (LIBs) but also on their dismantling, a necessary step for a circular economy. The aim of this study is therefore to develop an autonomous task planner for the dismantling of an EV Lithium-Ion Battery pack to the module level through the design and implementation of a computer vision system. This research contributes to moving closer towards fully automated robotic dismantling of EV batteries, an inevitable step for a sustainable world transition to an electric economy. The main functions of the proposed task planner are to identify LIB components and their locations, to create a feasible dismantling plan, and to move the robot to the detected dismantling positions. Results show that the proposed method has measurement errors lower than 5 mm. In addition, the system is able to perform all the steps in order, with a total average time of 34 s. Computer vision, robotics, and battery disassembly have been successfully unified, resulting in a designed and tested task planner well suited for products with large variations and uncertainties.
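
A minimal sketch of the detect-plan-move structure of such a task planner is given below; the component names, precedence rules, and robot interface are illustrative assumptions, not the authors' design.

```python
# Sketch only: order vision detections into a feasible removal sequence and
# visit each dismantling position. All names and rules are assumptions.
from dataclasses import dataclass

@dataclass
class DetectedComponent:
    name: str           # e.g. "cover_bolt", "busbar", "module"
    position_mm: tuple  # (x, y, z) in the robot frame, from the vision system

# Assumed precedence: items earlier in this list must be removed first.
REMOVAL_ORDER = ["cover_bolt", "cover", "busbar", "module"]

def plan_dismantling(detections: list[DetectedComponent]) -> list[DetectedComponent]:
    """Sort detected components into a feasible removal sequence."""
    return sorted(detections, key=lambda d: REMOVAL_ORDER.index(d.name))

def execute(plan: list[DetectedComponent], move_robot_to) -> None:
    """Visit each planned dismantling position with the robot."""
    for component in plan:
        move_robot_to(component.position_mm)

# Example with a stubbed robot motion function:
detections = [
    DetectedComponent("module", (250.0, 40.0, 30.0)),
    DetectedComponent("cover_bolt", (120.0, 15.0, 80.0)),
    DetectedComponent("busbar", (200.0, 25.0, 55.0)),
]
execute(plan_dismantling(detections), move_robot_to=print)
```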

