Real-time Quantitative Visual Inspection using Extended Reality

2021 ◽  
Vol 6 (1) ◽  
pp. 1-3
Author(s):  
Zaid Abbas Al-Sabbag ◽  
Jason Paul Connelly ◽  
Chul Min Yeum ◽  
Sriram Narasimhan

In this study, we propose a technique for quantitative visual inspection that can quantify structural damage using extended reality (XR). The XR headset can display and overlay graphical information on the physical space and process data from its built-in camera and depth sensor. The device also permits accessing and analyzing image and video streams in real time and utilizing 3D meshes of the environment together with camera pose information. By leveraging these features of the XR headset, we build a workflow and graphic interface to capture images, segment damage regions, and evaluate the physical size of the damage. A deep learning-based interactive segmentation algorithm called f-BRS is deployed to precisely segment damage regions through the XR headset. A ray-casting algorithm is implemented to obtain the 3D locations corresponding to the pixel locations of the damage region in the image. The size of the damage region is computed from the 3D locations of its boundary. The performance of the proposed method is demonstrated through a field experiment at an in-service bridge with spalling damage at its abutment. The experiment shows that the proposed method provides sub-centimeter accuracy for size estimation.
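The ray-casting and size-estimation steps described above can be illustrated with a minimal sketch (not the authors' implementation): a pixel is back-projected through the camera intrinsics into a ray, intersected with a plane standing in for the environment mesh, and the damage area is computed from the resulting 3D boundary points. The intrinsic matrix `K`, the wall plane, and the pixel boundary are all hypothetical values chosen for illustration.

```python
import numpy as np

def pixel_to_ray(u, v, K):
    """Back-project a pixel (u, v) into a unit ray direction in camera coordinates."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Intersect a ray with a plane standing in for the environment mesh."""
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction

def planar_polygon_area(points, plane_normal):
    """Area of a planar 3D polygon from its boundary points (cross-product sum)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    total = np.zeros(3)
    for i in range(len(points)):
        total += np.cross(points[i], points[(i + 1) % len(points)])
    return abs(np.dot(total, n)) / 2.0

# Hypothetical intrinsics and a wall 2 m in front of the camera.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
wall_point, wall_normal = np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, 1.0])

# Segmented damage boundary, given here as four pixel corners.
boundary_px = [(270, 190), (370, 190), (370, 290), (270, 290)]
pts_3d = [ray_plane_hit(np.zeros(3), pixel_to_ray(u, v, K), wall_point, wall_normal)
          for u, v in boundary_px]
area = planar_polygon_area(pts_3d, wall_normal)  # 0.4 m x 0.4 m patch -> 0.16 m^2
```

In practice the headset's environment mesh replaces the single plane, so each boundary pixel is intersected with the triangle its ray actually hits.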

Author(s):  
M. Hermann ◽  
B. Ruf ◽  
M. Weinmann

Abstract. Real-time 3D reconstruction enables fast dense mapping of the environment, which benefits numerous applications such as navigation or live assessment of an emergency. In contrast to most real-time capable approaches, our method does not need an explicit depth sensor. Instead, we rely only on a video stream from a camera and its intrinsic calibration. By exploiting the self-motion of an unmanned aerial vehicle (UAV) flying around buildings with an oblique view, we estimate both the camera trajectory and depth for selected images with enough novel content. To create a 3D model of the scene, we rely on a three-stage processing chain. First, we estimate the rough camera trajectory using a simultaneous localization and mapping (SLAM) algorithm. Once a suitable constellation is found, we estimate depth for local bundles of images using a multi-view stereo (MVS) approach and then fuse this depth into a global surfel-based model. For our evaluation, we use 55 video sequences with diverse settings, consisting of both synthetic and real scenes. We evaluate not only the generated reconstruction but also the intermediate products, and achieve competitive results both qualitatively and quantitatively. At the same time, our method can keep up with a 30 fps video stream at a resolution of 768 × 448 pixels.
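The final fusion stage of such a pipeline is often implemented as a confidence-weighted running average per surfel. The sketch below shows that idea in miniature; it is a simplified illustration under assumed parameters, not the authors' exact fusion scheme.

```python
import numpy as np

class Surfel:
    """A surface element holding a fused 3D position and a confidence weight."""

    MAX_WEIGHT = 50.0  # cap the weight so old surfels can still adapt

    def __init__(self, position, weight=1.0):
        self.position = np.asarray(position, dtype=float)
        self.weight = float(weight)

    def fuse(self, observation, obs_weight=1.0):
        """Fold a new depth observation in via a confidence-weighted average."""
        total = self.weight + obs_weight
        self.position = (self.weight * self.position
                         + obs_weight * np.asarray(observation, dtype=float)) / total
        self.weight = min(total, self.MAX_WEIGHT)

s = Surfel([0.0, 0.0, 2.0])
s.fuse([0.0, 0.0, 3.0])  # equal weights -> fused depth 2.5, weight 2.0
```

Each new MVS depth map contributes observations whose weight reflects how reliable the depth estimate is, so noisy single-frame depth is smoothed into a stable global model over time.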


2011 ◽  
Vol 8 (1) ◽  
pp. 409048 ◽  
Author(s):  
Chuliang Wei ◽  
Qin Xin ◽  
W. H. Chung ◽  
Shun-yee Liu ◽  
Hwa-yaw Tam ◽  
...  

Wheel defects on trains, such as flat wheels and out-of-roundness, inevitably jeopardize the safety of railway operations. Regular visual inspection and checking by experienced workers is the commonly adopted practice for identifying wheel defects. However, defects may not be spotted in time. Therefore, an automatic, remote-sensing, reliable, and accurate monitoring system for wheel condition is always desirable. This paper describes a real-time system to monitor wheel defects based on fiber Bragg grating sensors. The track strain response to wheel-rail interaction is measured and processed to generate a condition index that directly reflects the wheel condition. This approach is verified by extensive field tests, and the preliminary results show that this electromagnetically immune system provides an effective alternative for wheel defect detection. The system significantly increases the efficiency of maintenance management, reduces the cost of defect detection, and, more importantly, helps prevent derailments in a timely manner.
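The abstract does not give the paper's actual condition-index formulation, but the idea of a strain-derived index can be sketched as follows: normalize each wheel's peak strain response by a train-wide baseline so that an impacting defect stands out. The baseline choice (median) and threshold here are hypothetical values for illustration only.

```python
from statistics import median

def condition_indices(peak_strains):
    """Normalize each wheel's peak track-strain response by the train's median.

    A healthy wheel yields an index near 1.0; a flat or out-of-round
    wheel strikes the rail harder and stands out. (Hypothetical index
    definition for illustration; the paper's formulation is not given
    in the abstract.)
    """
    baseline = median(peak_strains)
    return [p / baseline for p in peak_strains]

def flag_defective(peak_strains, threshold=1.3):
    """Return positions of wheels whose condition index exceeds the threshold."""
    return [i for i, idx in enumerate(condition_indices(peak_strains))
            if idx > threshold]

# Three healthy wheels and one suspect impact peak (arbitrary strain units):
flag_defective([100.0, 102.0, 98.0, 150.0])  # -> [3]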


1999 ◽  
pp. 71-84 ◽  
Author(s):  
G. Medioni ◽  
G. Guy ◽  
H. Rom ◽  
A. François

2020 ◽  
Vol 21 (3) ◽  
pp. 181-190
Author(s):  
Jaroslav Frnda ◽  
Marek Durica ◽  
Mihail Savrasovs ◽  
Philippe Fournier-Viger ◽  
Jerry Chun-Wei Lin

Abstract. This paper analyses the possibility of using a Kohonen map for real-time evaluation of end-user video quality perception. The Quality of Service (QoS) framework describes how network impairments (network utilization or packet loss) influence picture quality, but it does not precisely reflect the customer's subjectively perceived quality of the received video stream. There are several objective video assessment metrics based on mathematical models that try to simulate the human visual system, but each has its own evaluation scale. This makes it difficult for service providers to identify the critical point at which intervention in network behaviour is needed. On the other hand, subjective tests (the Quality of Experience concept) are time-consuming and costly, and of course cannot be performed in real time. Therefore, we propose a mapping function able to predict subjective end-user quality perception based on the situation in the network, video stream features, and results obtained from an objective video assessment method.
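A Kohonen map (self-organizing map) of the kind the paper builds on can be sketched in a few lines: each training sample pulls its best-matching unit, and that unit's lattice neighbours, toward itself, so regions of the map come to represent regions of the input space. The features, map size, and training schedule below are placeholders, not the paper's actual configuration.

```python
import numpy as np

def train_som(data, n_units=4, epochs=200, lr0=0.5, sigma0=1.0, seed=0):
    """Train a 1-D self-organizing map on row-vector samples."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_units, data.shape[1]))
    for t in range(epochs):
        decay = 1.0 - t / epochs
        lr, sigma = lr0 * decay, max(sigma0 * decay, 0.1)
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            for i in range(n_units):
                h = np.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # lattice neighbourhood
                weights[i] += lr * h * (x - weights[i])
    return weights

def best_matching_unit(weights, x):
    """Index of the map unit closest to sample x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Toy stand-ins for (packet loss, objective quality score) feature pairs:
good = np.array([[0.05, 0.90], [0.10, 0.85], [0.00, 0.95]])
bad = np.array([[0.90, 0.20], [0.85, 0.10], [0.95, 0.15]])
w = train_som(np.vstack([good, bad]))
# After training, good and bad samples map to different units, so each
# unit can be labelled with a subjective quality grade.
```

A mapping function of the paper's kind would then label each trained unit with a subjective score, so that a new network/stream measurement is graded by its best-matching unit.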


2021 ◽  
Author(s):  
William Lamb ◽  
Dallon Asnes ◽  
Jonathan Kupfer ◽  
Emma Lickey ◽  
Jeremy Bakken ◽  
...  

<div>Hot spotting in photovoltaic (PV) panels causes physical damage, power loss, reduced lifetime reliability, and increased manufacturing costs. The problem arises routinely in defect-free standard panels; any string of cells that receives uneven illumination can develop hot spots, and the temperature rise often exceeds 100°C in conventional silicon panels despite on-panel bypass diodes, the standard mitigation technique. Bypass diodes limit the power dissipated in a cell subjected to reverse bias, but they do not prevent hot spots from forming. Kernahan [1] has suggested an alternative control method that senses the dynamic conductance |dI/dV| of a string of cells in real time and adjusts its operating current so that a partially shaded cell is never forced into reverse bias. We start by exploring the behavior of individual illuminated PV cells when externally forced into reverse bias. We observe that cells can suffer significant heating and structural damage, with desoldering of cell tabbing and discoloration of the front cell surface. Then we test PV panels and confirm Kernahan’s proposed panel-level solution, which anticipates and prevents hot spots in real time. Simulations of cells and panels confirm our experimental observations and provide insights into both the operation of Kernahan’s method and panel performance.</div>
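The control idea attributed to Kernahan [1] can be illustrated with a toy feedback loop: when the string's dynamic conductance |dI/dV| spikes (a sign a shaded cell is approaching its reverse-bias knee), the controller steps the operating current down until the conductance returns to a normal level. The conductance model and all numeric values below are illustrative assumptions, not measured panel behavior or Kernahan's actual circuit.

```python
def string_conductance(i_op, i_shaded, g_normal=0.02, g_knee=5.0):
    """Toy model of string dynamic conductance |dI/dV|.

    When the operating current exceeds the shaded cell's photocurrent,
    the cell is pushed toward reverse bias and the string's |dI/dV|
    rises sharply; the step values here are illustrative, not measured.
    """
    return g_knee if i_op > i_shaded else g_normal

def regulate_current(i_start, i_shaded, threshold=1.0, step=0.05):
    """Step the operating current down until |dI/dV| falls below threshold."""
    i = i_start
    while i > 0 and string_conductance(i, i_shaded) > threshold:
        i -= step
    return i

# A cell shaded to 3 A in a string driven at 5 A: the controller backs
# the current off before the shaded cell is forced into reverse bias.
safe = regulate_current(5.0, 3.0)
```

A real implementation measures |dI/dV| from small perturbations of the operating point rather than from a known shading level, which is exactly what lets it act before a hot spot forms.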


2020 ◽  
Author(s):  
Krzysztof Blachut ◽  
Hubert Szolc ◽  
Mateusz Wasala ◽  
Tomasz Kryjak ◽  
Marek Gorgon

In this paper we present a vision-based hardware-software control system enabling autonomous landing of a multirotor unmanned aerial vehicle (UAV). It detects a marked landing pad in real time in a 1280 × 720 @ 60 fps video stream. In addition, a LiDAR sensor is used to measure the altitude above ground. A heterogeneous Zynq SoC device is used as the computing platform. The solution was tested on a number of sequences, and the landing pad was detected with 96% accuracy. This research shows that a reprogrammable heterogeneous computing system is a good solution for UAVs, because it enables real-time data stream processing with relatively low energy consumption.
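One elementary stage of a landing-pad detector, thresholding a frame and locating the marker's centroid, can be sketched in software as below. This is a stand-in for illustration only; the paper's pipeline runs in the programmable logic of the Zynq SoC, and the marker, threshold, and frame contents here are synthetic.

```python
import numpy as np

def detect_marker(gray, thresh=128):
    """Binarize a grayscale frame and return the centroid of marker pixels.

    A software stand-in for one stage of a pad detector; a None return
    means no candidate pad was found in this frame.
    """
    mask = gray > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# Synthetic 720p frame with a bright 10 x 10 marker:
frame = np.zeros((720, 1280), dtype=np.uint8)
frame[400:410, 600:610] = 255
detect_marker(frame)  # -> (604.5, 404.5)
```

The centroid, combined with the LiDAR altitude, gives the horizontal offset the flight controller must null out during descent; the FPGA implementation streams this computation pixel-by-pixel to sustain 60 fps.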

