Simultaneous Large Scale Sheet Metal Geometry and Strain Measurement

Author(s):  
A. D. Spence ◽  
D. W. Capson ◽  
M. P. Sklad ◽  
H.-L. Chan ◽  
J. P. Mitchell

The need to simultaneously measure sheet metal geometry and strain arises during research, die tryout, statistical process control, production part approval, or in response to manufacturing exceptions. Several optical systems have been developed for strain and geometry measurement of small specimens. Large-scale geometric measurement is possible using coordinate measuring machines equipped with touch probes. This Technical Brief summarizes the extension of these methods using three-dimensional gray-level point clouds obtained from either laser digitizers or an in-house developed stereo vision system.
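The brief does not reproduce its strain formulas, but surface strain from measured grid points is conventionally computed from the change in gauge length between neighbouring points before and after forming. A minimal sketch, assuming a circle/square grid with a known pitch (the helper `surface_strain` and the 2.5 mm pitch are illustrative, not the authors' code):

```python
import numpy as np

def surface_strain(ref_pair, deformed_pair):
    """Engineering and true (logarithmic) strain from the distance between
    two grid points measured before and after forming.  Hypothetical helper;
    the brief itself does not publish its strain formulas."""
    l0 = np.linalg.norm(np.subtract(*ref_pair))       # undeformed gauge length
    l1 = np.linalg.norm(np.subtract(*deformed_pair))  # deformed gauge length
    return (l1 - l0) / l0, np.log(l1 / l0)            # engineering, true strain

# an assumed 2.5 mm grid pitch stretched to 3.0 mm
eng, true = surface_strain([(0.0, 0.0, 0.0), (2.5, 0.0, 0.0)],
                           [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)])
```

In practice the point pairs would come from the digitized grey-level point clouds rather than being typed in by hand.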

2019 ◽  
Vol 484 (6) ◽  
pp. 672-677
Author(s):  
A. V. Vokhmintcev ◽  
A. V. Melnikov ◽  
K. V. Mironov ◽  
V. V. Burlutskiy

A closed-form solution is proposed for the problem of minimizing a functional consisting of two terms: mean-square distances for visually associated characteristic points on an image, and mean-square distances for point clouds in terms of a point-to-plane metric. An accurate method for reconstructing a three-dimensional dynamic environment is presented, and the properties of the closed-form solutions are described. The proposed approach improves the accuracy and convergence of reconstruction methods for complex and large-scale scenes.
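As an illustration of the point-to-plane term only: the standard small-angle linearization reduces one alignment step to a 6-parameter linear least-squares problem. This is the textbook formulation, not the paper's closed-form solution, and `point_to_plane_step` is a hypothetical helper:

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized step minimizing sum(((R p + t - q) . n)^2).
    Small-angle approximation R p ~ p + omega x p, so each residual is
    omega . (p x n) + t . n - (q - p) . n."""
    A = np.hstack([np.cross(src, normals), normals])   # N x 6 design matrix
    b = np.einsum('ij,ij->i', dst - src, normals)      # per-point plane offsets
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x                                           # [omega_xyz, t_xyz]

# pure translation case: the step should recover t exactly
rng = np.random.default_rng(0)
src = rng.normal(size=(12, 3))
normals = rng.normal(size=(12, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
t0 = np.array([0.1, -0.2, 0.3])
x = point_to_plane_step(src, src + t0, normals)
```

Iterating such steps with re-association of correspondences gives the familiar point-to-plane ICP loop that the paper's combined functional generalizes.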


2018 ◽  
Vol 8 (2) ◽  
pp. 20170048 ◽  
Author(s):  
M. I. Disney ◽  
M. Boni Vicari ◽  
A. Burt ◽  
K. Calders ◽  
S. L. Lewis ◽  
...  

Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods.
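For context on the ASEs mentioned above: they typically relate AGB to trunk diameter D, tree height H and wood density ρ. One widely cited pan-tropical form (Chave et al., 2014) is AGB = 0.0673·(ρD²H)^0.976; the helper below merely evaluates that published equation for illustration and is not part of the TLS workflow described here:

```python
def agb_chave(d_cm, h_m, rho):
    """Pan-tropical allometric AGB estimate in kg after Chave et al. (2014):
    AGB = 0.0673 * (rho * D^2 * H)**0.976, with D in cm, H in m and wood
    density rho in g/cm^3.  Quoted purely for illustration."""
    return 0.0673 * (rho * d_cm ** 2 * h_m) ** 0.976

v = agb_chave(30.0, 25.0, 0.6)   # a mid-sized tropical tree
```

TLS-derived volume models aim to replace exactly this kind of regression with direct, per-tree structural measurement.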


Author(s):  
Ali Khaloo ◽  
David Lattanzi ◽  
Adam Jachimowicz

Dams are a critical infrastructure system for many communities, but they are also one of the most challenging to inspect. Dams are typically very large and complex structures, and the result is that inspections are often time-intensive and require expensive, specialized equipment and training to provide inspectors with comprehensive access to the structure. The scale and nature of dam inspections also introduce additional safety risks to the inspectors. Unmanned aerial vehicles (UAVs) have the potential to address many of these challenges, particularly when used as a data acquisition platform for photogrammetric three-dimensional (3D) reconstruction and analysis, though the nature of both UAVs and modern photogrammetric methods necessitates careful planning and coordination for integration. This paper presents a case study on one such integration at the Brighton Dam, a large-scale concrete gravity dam in Maryland, USA. A combination of multiple UAV platforms and multi-scale photogrammetry was used to create two comprehensive and high-resolution 3D point clouds of the dam and surrounding environment at intervals. These models were then assessed for their overall quality, as well as their ability to resolve flaws and defects that were artificially applied to the structure between inspection intervals. The results indicate that the integrated process is capable of generating models that accurately render a variety of defect types with sub-millimeter accuracy. Recommendations for mission planning and imaging specifications are provided as well.
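The defect-resolution assessment compares point clouds from successive inspection intervals. The simplest cloud-to-cloud comparison is a nearest-neighbour distance query; a sketch only (the paper's multi-scale photogrammetric assessment is far more involved, and `flag_changes` is a hypothetical helper):

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_changes(baseline, followup, tol):
    """Flag follow-up points farther than `tol` from every baseline point:
    a minimal cloud-to-cloud change detection."""
    dist, _ = cKDTree(baseline).query(followup)   # nearest baseline distance
    return dist > tol

base = np.array([[x, y, 0.0]
                 for x in np.linspace(0.0, 1.0, 11)
                 for y in np.linspace(0.0, 1.0, 11)])   # intact surface patch
follow = np.vstack([base, [[0.5, 0.5, 0.04]]])          # one new surface defect
flags = flag_changes(base, follow, tol=0.01)
```

Real inspections would first co-register the two epochs and account for surface sampling density before thresholding distances.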


Author(s):  
S. Bullinger ◽  
C. Bodensteiner ◽  
M. Arens

Abstract. The reconstruction of accurate three-dimensional environment models is one of the most fundamental goals in the field of photogrammetry. Since satellite images provide suitable properties for obtaining large-scale environment reconstructions, there exist a variety of Stereo Matching based methods to reconstruct point clouds for satellite image pairs. Recently, a Structure from Motion (SfM) based approach has been proposed, which makes it possible to reconstruct point clouds from multiple satellite images. In this work, we propose an extension of this SfM based pipeline that allows us to reconstruct not only point clouds but also watertight meshes including texture information. We provide a detailed description of several steps that are mandatory to exploit state-of-the-art mesh reconstruction algorithms in the context of satellite imagery. This includes a decomposition of finite projective camera calibration matrices, a skew correction of corresponding depth maps and input images, as well as the recovery of real-world depth maps from reparameterized depth values. The paper presents an extensive quantitative evaluation on multi-date satellite images demonstrating that the proposed pipeline combined with current meshing algorithms outperforms state-of-the-art point cloud reconstruction algorithms in terms of completeness and median error. We make the source code of our pipeline publicly available.
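The decomposition of a finite projective camera calibration matrix P ~ K[R | t] mentioned above is classically done with an RQ factorization (cf. Hartley and Zisserman). A sketch under that standard formulation, not the authors' exact code:

```python
import numpy as np
from scipy.linalg import rq

def decompose_camera(P):
    """Split a 3x4 finite projective camera P = K [R | t] into intrinsics K,
    rotation R and translation t via RQ factorization."""
    K, R = rq(P[:, :3])
    S = np.diag(np.sign(np.diag(K)))   # force a positive diagonal on K
    K, R = K @ S, S @ R                # (K S)(S R) == K R, so P is unchanged
    t = np.linalg.solve(K, P[:, 3])    # last column is K t
    return K / K[2, 2], R, t

# round-trip check with a synthetic camera
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.3
R = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
t = np.array([0.5, -1.0, 2.0])
K2, R2, t2 = decompose_camera(K @ np.hstack([R, t[:, None]]))
```

Forcing the positive diagonal resolves the sign ambiguity of the RQ factorization so that K is a valid intrinsics matrix; any remaining skew in K is what the paper's skew-correction step removes.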


2021 ◽  
Author(s):  
Yipeng Yuan

Demand for three-dimensional (3D) urban models keeps growing in various civil and military applications. Topographic LiDAR systems are capable of acquiring elevation data directly over terrain features. However, the task of creating a large-scale virtual environment remains time-consuming, manual work. In this thesis a method for 3D building reconstruction, consisting of building roof detection, roof outline extraction and regularization, and 3D building model generation, directly from LiDAR point clouds is developed. In the proposed approach, a new algorithm based on Gaussian Markov random fields (GMRF) and Markov chain Monte Carlo (MCMC) is used to segment point clouds for building roof detection. The modified convex hull (MCH) algorithm is used for the extraction of roof outlines, followed by regularization of the extracted outlines using a modified hierarchical regularization algorithm. Finally, 3D building models are generated in an ArcGIS environment. The results obtained demonstrate the effectiveness and satisfactory accuracy of the developed method.
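For orientation, the classical convex hull is the baseline that the modified convex hull (MCH) algorithm refines for non-rectangular roofs. A plain-hull sketch of outline extraction (illustrative only; MCH itself additionally recovers concave outline segments):

```python
import numpy as np
from scipy.spatial import ConvexHull

def roof_outline(points_xy):
    """Outline of segmented roof points as their convex hull, returned as
    counter-clockwise vertices.  The baseline that MCH improves upon."""
    hull = ConvexHull(points_xy)
    return points_xy[hull.vertices]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0],
                [0.5, 0.5], [0.2, 0.7]])   # square roof plus interior returns
outline = roof_outline(pts)
```

The extracted polygon would then be regularized, e.g. by snapping edges to the building's dominant orientations, before model generation.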


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yunchao Tang ◽  
Mingyou Chen ◽  
Yunfan Lin ◽  
Xueyu Huang ◽  
Kuangyu Huang ◽  
...  

A four-ocular vision system is proposed for the three-dimensional (3D) reconstruction of large-scale concrete-filled steel tube (CFST) specimens under complex testing conditions. These measurements are vitally important for evaluating the seismic performance and 3D deformation of large-scale specimens. A four-ocular vision system is constructed to sample the large-scale CFST; then point cloud acquisition, point cloud filtering, and point cloud stitching algorithms are applied to obtain a 3D point cloud of the specimen surface. A point cloud correction algorithm based on geometric features and a deep learning algorithm are utilized, respectively, to correct the coordinates of the stitched point cloud. This enhances the vision measurement accuracy in complex environments and therefore yields a higher-accuracy 3D model for the purposes of real-time complex surface monitoring. The performance indicators of the two algorithms are evaluated on actual tasks. The cross-sectional diameters at specific heights in the reconstructed models are calculated and compared against laser rangefinder data to test the performance of the proposed algorithms. A visual tracking test on a CFST under cyclic loading shows that, after correction, the reconstructed output accurately reflects the complex 3D surface and meets the requirements for dynamic monitoring. The proposed methodology is applicable to complex environments featuring dynamic movement, mechanical vibration, and continuously changing features.
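The cross-sectional diameter check described above can be sketched as: slice the reconstructed cloud at the target height and fit a circle by linear least squares. `section_diameter` is an illustrative helper under assumed millimetre units, not the paper's algorithm:

```python
import numpy as np

def section_diameter(cloud, z, band=5.0):
    """Diameter of a roughly circular section at height z: keep points with
    |z_i - z| < band and fit a circle x^2 + y^2 = 2*cx*x + 2*cy*y + c by
    linear least squares; then r^2 = c + cx^2 + cy^2."""
    s = cloud[np.abs(cloud[:, 2] - z) < band][:, :2]
    A = np.column_stack([2.0 * s[:, 0], 2.0 * s[:, 1], np.ones(len(s))])
    (cx, cy, c), *_ = np.linalg.lstsq(A, (s ** 2).sum(axis=1), rcond=None)
    return 2.0 * np.sqrt(c + cx ** 2 + cy ** 2)

# synthetic 300 mm diameter column with its axis offset from the origin
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
T, Z = np.meshgrid(theta, np.linspace(0.0, 100.0, 11))
cloud = np.column_stack([10.0 + 150.0 * np.cos(T).ravel(),
                         -5.0 + 150.0 * np.sin(T).ravel(),
                         Z.ravel()])
d = section_diameter(cloud, z=50.0)
```

Comparing such fitted diameters against laser rangefinder readings is the accuracy test the paper describes.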


2019 ◽  
Vol 8 (9) ◽  
pp. 425
Author(s):  
Weite Li ◽  
Kenya Shigeta ◽  
Kyoko Hasegawa ◽  
Liang Li ◽  
Keiji Yano ◽  
...  

In this paper, we propose a method to visualize large-scale colliding point clouds by highlighting their collision areas, and apply it to the visualization of collision simulations. Our method builds on our recent work that achieved precise three-dimensional see-through imaging, i.e., transparent visualization, of large-scale point clouds acquired via laser scanning of three-dimensional objects. We apply the proposed collision visualization method to two applications: (1) the revival of the festival float procession of the Gion Festival in Kyoto, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether the festival floats would collide with houses, billboards, electric wires, or other objects along the original route. (2) Plant simulations based on laser-scanned datasets of existing and new facilities. The advantageous features of our method are the following: (1) a transparent visualization with a correct depth feel that helps to robustly determine the collision areas; (2) the ability to visualize both high-collision-risk areas and real collision areas; and (3) the ability to highlight target visualized areas by increasing the corresponding point densities.
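The distinction between real collision areas and high-collision-risk areas can be illustrated with two distance thresholds against the static environment cloud. This is a nearest-neighbour sketch under assumed `contact` and `clearance` thresholds, not the paper's transparent-visualization method:

```python
import numpy as np
from scipy.spatial import cKDTree

def collision_areas(moving, static, contact=0.05, clearance=0.5):
    """Distance of every moving-object point (e.g. on a festival float) to
    the static environment cloud: within `contact` counts as a real
    collision area, within `clearance` as a high-risk area.
    Both thresholds are assumptions for this sketch."""
    d, _ = cKDTree(static).query(moving)
    return d <= contact, (d > contact) & (d <= clearance)

ys = np.linspace(0.0, 1.0, 11)
wall = np.array([[0.0, y, z] for y in ys for z in ys])   # flat obstacle
probe = np.array([[0.0, 0.5, 0.5],    # touching the wall
                  [0.3, 0.5, 0.5],    # close to the wall
                  [2.0, 0.5, 0.5]])   # safely away
hit, risk = collision_areas(probe, wall)
```

In the visualization itself, the two masks would drive point density (and hence opacity) so that collision areas stand out through the see-through rendering.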


1996 ◽  
Author(s):  
Lawrence C. West ◽  
Charles W. Roberts ◽  
Emil C. Piscani ◽  
Madan Dubey ◽  
Kenneth A. Jones ◽  
...  

Author(s):  
Bowen Dai ◽  
Chris Bailey-Kellogg

Abstract
Motivation: Protein–protein interactions drive wide-ranging molecular processes, and characterizing at the atomic level how proteins interact (beyond just the fact that they interact) can provide key insights into understanding and controlling this machinery. Unfortunately, experimental determination of three-dimensional protein complex structures remains difficult and does not scale to the increasingly large sets of proteins whose interactions are of interest. Computational methods are thus required to meet the demands of large-scale, high-throughput prediction of how proteins interact, but unfortunately, both physical modeling and machine learning methods suffer from poor precision and/or recall.
Results: In order to improve performance in predicting protein interaction interfaces, we leverage the best properties of both data- and physics-driven methods to develop a unified Geometric Deep Neural Network, ‘PInet’ (Protein Interface Network). PInet consumes pairs of point clouds encoding the structures of two partner proteins in order to predict their structural regions mediating interaction. To make such predictions, PInet learns and utilizes models capturing both geometrical and physicochemical molecular surface complementarity. In application to a set of benchmarks, PInet simultaneously predicts the interface regions on both interacting proteins, achieving performance equivalent to or even much better than the state-of-the-art predictor for each dataset. Furthermore, since PInet is based on joint segmentation of a representation of the protein surfaces, its predictions are meaningful in terms of the underlying physical complementarity driving molecular recognition.
Availability and implementation: PInet scripts and models are available at https://github.com/FTD007/PInet.
Supplementary information: Supplementary data are available at Bioinformatics online.
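For context on the segmentation targets: interface regions are commonly defined by a distance cutoff between partner surfaces (4.5 Å is a typical choice, assumed here). The sketch below labels such ground-truth regions on toy point clouds and is emphatically not PInet itself:

```python
import numpy as np
from scipy.spatial import cKDTree

def interface_labels(surf_a, surf_b, cutoff=4.5):
    """Distance-based interface definition: a point on one surface is
    'interface' if any partner-surface point lies within `cutoff` angstroms.
    Illustrates the per-point segmentation targets a predictor learns."""
    ia = cKDTree(surf_b).query(surf_a)[0] <= cutoff   # labels for protein A
    ib = cKDTree(surf_a).query(surf_b)[0] <= cutoff   # labels for protein B
    return ia, ib

a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])   # two surface points of A
b = np.array([[3.0, 0.0, 0.0]])                     # one surface point of B
ia, ib = interface_labels(a, b)
```

PInet's contribution is predicting labels like these jointly for both partners from geometry plus physicochemical surface features, without seeing the bound complex.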

