Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

2013 ◽  
Vol 2013 ◽  
pp. 1-15 ◽  
Author(s):  
Meng Lu ◽  
He Qing ◽  
Xie Zhi ◽  
Yang Weimin ◽  
Ci Ying ◽  
...  

The thickness of the tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. TCF thickness has traditionally been measured with the single- and double-wire methods, which suffer from several problems, including risks to personnel safety, sensitivity to the operator, and poor repeatability. To solve these problems, in this paper we designed and built an instrumentation system and present a novel method to measure TCF thickness. The instrumentation is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including image denoising, monocular range measurement, the scale invariant feature transform (SIFT), and image gray-gradient detection. Using this instrumentation and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrumentation and method work well on site at steel plants, can accurately measure TCF thickness, and overcome the disadvantages of the traditional measurement methods, which they may even replace.
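
The gray-gradient detection step mentioned in the abstract can be illustrated with a minimal numpy sketch (not the authors' implementation): along a single image column, a layer boundary shows up as the strongest intensity jump. The column values below are synthetic.

```python
import numpy as np

def boundary_row(gray_column):
    """Return the row index of the strongest gray-gradient transition
    in a single image column (a crude layer-boundary detector)."""
    grad = np.abs(np.diff(gray_column.astype(float)))
    return int(np.argmax(grad))

# synthetic column: bright flux above row 40, darker material below
column = np.concatenate([np.full(40, 200.0), np.full(60, 50.0)])
print(boundary_row(column))  # → 39 (transition between rows 39 and 40)
```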

2012 ◽  
Vol 457-458 ◽  
pp. 804-809
Author(s):  
Jun Liu ◽  
Feng Yang ◽  
Jun Xie ◽  
Zheng Jun Zeng

Accurately measuring the thickness of the slag layer in a continuous casting tundish helps stabilize the casting process and improves enterprise efficiency. This paper puts forward a new method to measure the thickness of the slag layer in a continuous casting tundish based on temperature information. A measuring bar made of refractory material is inserted into the tundish to sense the temperature. The thickness of the slag layer can then be obtained accurately from the temperature interfaces between the air layer and the slag layer, and between the slag layer and the molten steel layer. Applied to slag layer thickness measurement in the steel metallurgy field, the method shows favorable application prospects, with a measurement error of less than about 2.6 mm.
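
The interface-detection idea can be sketched as follows (a simplified illustration, not the paper's algorithm): given a temperature profile sampled along the bar, the air/slag and slag/steel interfaces appear as the two largest temperature jumps, and the slag thickness is the distance between them. The temperatures below are hypothetical.

```python
import numpy as np

# hypothetical profile, one sample per mm along the bar:
# ~100 °C in air, ~900 °C inside the slag layer, ~1550 °C in molten steel
temps = np.concatenate([np.full(50, 100.0), np.full(30, 900.0), np.full(40, 1550.0)])

grad = np.abs(np.diff(temps))
i1, i2 = np.sort(np.argsort(grad)[-2:])  # positions of the two largest jumps
slag_thickness_mm = float(i2 - i1)       # distance between the two interfaces
print(slag_thickness_mm)  # → 30.0
```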


Author(s):  
Shing Hwang Doong

Chip on film (COF) is a special packaging technology that packs integrated circuits into a flexible carrier tape. Chips packed with COF are primarily used in the display industry. Reel editing is a critical step in COF quality control, in which sections of congregating NG (not good) chips are removed from a reel. Today, COF manufacturers hire workers to count consecutive NG chips on a rolling reel with the naked eye; when the count exceeds a preset number, the corresponding section is removed. A novel method using object detection and object tracking is proposed to solve this problem. Object detection techniques including convolutional neural networks (CNN), template matching (TM), and the scale invariant feature transform (SIFT) were used to detect NG marks, and object tracking was used to track them with IDs so that congregating NG chips could be counted reliably. Experiments on simulation videos resembling worksite scenes show that both the CNN and TM detectors could solve the reel editing problem, while the SIFT detectors failed. Furthermore, TM outperformed CNN by yielding a real-time solution.
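
Template matching, the best-performing detector here, can be sketched in pure numpy as a normalized cross-correlation scan (a minimal illustration on a synthetic mark, not the production pipeline):

```python
import numpy as np

def match_template(image, templ):
    """Brute-force normalized cross-correlation; returns the top-left
    corner (row, col) of the best-matching window."""
    H, W = image.shape
    h, w = templ.shape
    t = (templ - templ.mean()) / (templ.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = image[y:y + h, x:x + w]
            s = win.std()
            if s < 1e-9:          # skip flat (featureless) windows
                continue
            score = np.mean((win - win.mean()) / s * t)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

mark = np.array([[1., 0., 1., 0.],
                 [0., 1., 0., 1.],
                 [1., 0., 1., 0.],
                 [0., 1., 0., 1.]])   # synthetic NG-mark pattern
img = np.zeros((20, 20))
img[5:9, 7:11] = mark
print(match_template(img, mark))  # → (5, 7)
```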


2019 ◽  
Vol 22 (16) ◽  
pp. 3461-3472 ◽  
Author(s):  
Chuan-Zhi Dong ◽  
F Necati Catbas

Most existing vision-based displacement measurement methods require manual speckles or targets to improve measurement performance in non-stationary imagery environments. To minimize the use of manual speckles and targets, feature points regarded as virtual markers can be utilized for non-target measurement. In this study, an advanced feature matching strategy is presented that replaces handcrafted descriptors with learned descriptors, namely the Visual Geometry Group (VGG) descriptors from the University of Oxford, to achieve better performance. The feasibility and performance of the proposed method are verified by comparative studies, first in a laboratory experiment on a two-span bridge model and then in a field application on a railway bridge. The proposed integrated use of the Scale Invariant Feature Transform (SIFT) and the VGG descriptors improved measurement accuracy by about 24% compared with the commonly used feature matching-based displacement measurement method using the SIFT feature and descriptor.
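
Once keypoints are matched between a reference frame and the current frame (by whatever descriptor), the displacement reading itself reduces to the average motion of the matched points. A minimal sketch, with hypothetical matched coordinates and pixel-to-millimeter scale:

```python
import numpy as np

def displacement(pts_ref, pts_cur, scale_mm_per_px=1.0):
    """Structural displacement estimated as the mean motion of matched
    feature points; pts_ref/pts_cur are (N, 2) keypoint coordinates."""
    return np.mean(pts_cur - pts_ref, axis=0) * scale_mm_per_px  # (dx, dy) in mm

# hypothetical matches: the tracked region moved 0.8 px right, 2.4 px down
ref = np.array([[10.0, 20.0], [35.0, 40.0], [60.0, 15.0]])
cur = ref + np.array([0.8, 2.4])
print(displacement(ref, cur, scale_mm_per_px=0.5))  # → [0.4 1.2]
```

In practice a robust statistic (e.g. the median) would reduce the influence of outlier matches.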


Information ◽  
2018 ◽  
Vol 9 (12) ◽  
pp. 299
Author(s):  
Ende Wang ◽  
Jinlei Jiao ◽  
Jingchao Yang ◽  
Dongyi Liang ◽  
Jiandong Tian

Keypoint matching is of fundamental importance in computer vision applications. Fish-eye lenses are convenient in applications that require a very wide angle of view, but their use has been limited by the lack of an effective matching algorithm. The Scale Invariant Feature Transform (SIFT) algorithm is an important technique in computer vision for detecting and describing local features in images. We therefore present a Tri-SIFT algorithm, a set of modifications to the SIFT algorithm that improves descriptor accuracy and matching performance for fish-eye images while preserving the original robustness to scale and rotation. After the keypoint detection of the SIFT algorithm is completed, the points in and around the keypoints are back-projected onto a unit sphere following a fish-eye camera model. To simplify the calculation once the image is on the sphere, the descriptor is based on a modification of the Gradient Location and Orientation Histogram (GLOH). In addition, to improve invariance to scale and rotation in fish-eye images, the gradient magnitudes are replaced by the area of the surface, and the orientation is calculated on the sphere. Extensive experiments demonstrate that our modified algorithm outperforms SIFT and other related algorithms on fish-eye images.
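
The back-projection step can be sketched for one common fish-eye model, the equidistant model r = f·θ (the paper's actual camera model may differ); each pixel maps to a direction on the unit sphere:

```python
import numpy as np

def backproject_equidistant(u, v, cx, cy, f):
    """Back-project a fish-eye pixel (u, v) onto the unit sphere,
    assuming the equidistant projection model r = f * theta."""
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    theta = r / f                    # angle from the optical axis
    phi = np.arctan2(y, x)           # azimuth around the axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# the principal point maps to the optical axis
p = backproject_equidistant(320.0, 240.0, cx=320.0, cy=240.0, f=300.0)
print(p)  # → [0. 0. 1.]
```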


2014 ◽  
Vol 7 (3) ◽  
Author(s):  
Kentaro Takemura ◽  
Tomohisa Yamakawa ◽  
Jun Takamatsu ◽  
Tsukasa Ogasawara

Researchers are considering the use of eye tracking in head-mounted camera systems, such as Google's Project Glass. Typical methods require detailed calibration in advance, but long periods of use disrupt the calibration between the eye and the scene camera. In addition, the focused object might not be estimated even if the point-of-regard is estimated using a portable eye-tracker. We therefore propose a novel method for estimating the object a user is focused upon, in which an eye camera captures the reflection on the corneal surface. Eye and environment information can be extracted from the corneal surface image simultaneously. We use inverse ray tracing to rectify the reflected image and the scale-invariant feature transform to identify the object on which the point-of-regard is located. Unwarped images can also be generated continuously from corneal surface images. We consider that our proposed method could be applied to a guidance system, and we confirmed the feasibility of this application in experiments estimating the focused object and the point-of-regard.


Author(s):  
Aswini N ◽  
Uma S V

Unmanned Aerial Vehicles (UAVs), commonly known as drones, are better suited than manned aircraft for "dull, dirty, or dangerous" missions. A drone can be remotely controlled, or it can travel along a predefined path using complex automation algorithms built during its development. In general, a UAV is the combination of a drone in the air and a control system on the ground. Designing a UAV means integrating hardware, software, sensors, actuators, communication systems, and payloads into a single unit for the application involved. To make UAVs completely autonomous, the most challenging problem they face is obstacle avoidance. In this paper, a novel method to detect frontal obstacles using a monocular camera is proposed. Computer vision algorithms, namely the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), are used to detect frontal obstacles, and the distance of the obstacle from the camera is then calculated. To meet the defined objectives, the designed system is tested on self-recorded videos captured with a DJI Phantom 4 Pro.
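
The obstacle-distance step can be illustrated with the pinhole relation Z = f·W/w, where f is the focal length in pixels, W the real obstacle width, and w its width in the image (a simplified sketch; the numbers below are hypothetical):

```python
def monocular_distance(focal_px, real_width_m, pixel_width):
    """Pinhole-camera range estimate: Z = f * W / w."""
    return focal_px * real_width_m / pixel_width

# hypothetical: 1000 px focal length, 2 m wide obstacle spanning 100 px
print(monocular_distance(1000.0, 2.0, 100.0))  # → 20.0 (meters)
```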


1969 ◽  
Vol 4 (1) ◽  
Author(s):  
Luis Miguel Prócel M. ◽  
Vicent Caselles

This paper develops an algorithm for the detection and grouping of logos in images. Logo detection uses the Scale-Invariant Feature Transform (SIFT) descriptor, one of the most widely studied and used descriptors for pattern detection in the fields of image analysis and computer vision. A geometric algorithm is then developed for grouping and counting the detected logos, based on the algorithm known as Geometric Hashing. Finally, tests are performed to analyze the robustness of the algorithm.
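
A grouping stage like the one described can be sketched with a greedy distance-based clustering of detection centers (a deliberately simplified stand-in for the geometric-hashing grouping; the coordinates are made up):

```python
import numpy as np

def group_detections(centers, radius):
    """Greedy grouping: each unlabeled detection seeds a group and absorbs
    all remaining detections within `radius` of the seed."""
    centers = np.asarray(centers, dtype=float)
    labels = -np.ones(len(centers), dtype=int)
    n_groups = 0
    for i in range(len(centers)):
        if labels[i] >= 0:
            continue
        labels[i] = n_groups
        for j in range(i + 1, len(centers)):
            if labels[j] < 0 and np.linalg.norm(centers[i] - centers[j]) <= radius:
                labels[j] = n_groups
        n_groups += 1
    return labels, n_groups

labels, n = group_detections([[0, 0], [3, 4], [100, 100]], radius=10)
print(n)  # → 2 (the first two detections fall into one group)
```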


2012 ◽  
Vol 190-191 ◽  
pp. 1099-1103 ◽  
Author(s):  
Jun Guo ◽  
Chang Ren Zhu

In this paper we propose an automatic ship detection method for high-resolution optical satellite images based on neighborhood context information. First, a pre-detection of targets yields candidates. For each candidate, we choose an extended region, called the candidate with neighborhood, which comprises the candidate and its neighboring area. Second, patches of the candidate with neighborhood are obtained on a regular grid, and their SIFT (Scale Invariant Feature Transform) features are extracted. The SIFT features of the training images are then clustered with the K-means algorithm to form a codebook of patches. We quantize the patches of the candidate with neighborhood against this codebook to obtain a visual-word representation. Finally, applying spatial pyramid matching, the candidates are classified with a support vector machine (SVM). Experimental results on a set of images show that our method achieves superior performance.
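
The codebook-quantization step (after K-means has produced the codewords) can be sketched as follows; the toy descriptors and two-word codebook below are made up:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword (Euclidean
    distance) and return the normalized visual-word histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])  # toy 2-word codebook
desc = np.array([[0.1, 0.2], [9.5, 10.1], [0.0, 0.3], [10.2, 9.9]])
print(bovw_histogram(desc, codebook))  # → [0.5 0.5]
```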

