Confidence Measure of the Shallow-Water Bathymetry Map Obtained through the Fusion of Lidar and Multiband Image Data

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Zhongping Lee ◽  
Mingjia Shangguan ◽  
Rodrigo A. Garcia ◽  
Wendian Lai ◽  
Xiaomei Lu ◽  
et al.

With the advancement of Lidar technology, the bottom depth (H) of optically shallow waters (OSW) can be measured accurately with an airborne or space-borne Lidar system (H_Lidar hereafter), but this data product takes the form of lines along the sensor's track rather than the desired charts or maps, particularly when the Lidar system is on a satellite. Meanwhile, radiometric measurements from multiband imagers can also be used to infer H (H_imager hereafter) of OSW; the accuracy is variable, but a map of bottom depth can be obtained. It is therefore logical and advantageous to combine the two data sources from collocated measurements to generate a more accurate bathymetry map of OSW, for which image-specific empirical algorithms are usually developed and applied. Here, after an overview of both the empirical and semianalytical algorithms for estimating H from multiband imagers, we emphasize that the uncertainty of H_imager varies spatially, even though it is straightforward to regress H_Lidar against radiometric data to generate H_imager. Further, we present a prototype system to map the confidence of H_imager pixel by pixel, which has been lacking to date in the practice of passive remote sensing of bathymetry. We advocate generating a confidence measure in parallel with H_imager, which is important and urgent for broad user communities.
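The regression-and-confidence idea described above can be sketched in a few lines. This is an illustrative, assumption-laden example, not the authors' algorithm: the data are synthetic, the predictor is a linear Stumpf-style band-ratio model, and the per-pixel confidence is a simple leverage-inflated residual spread.

```python
import random
import statistics as st

# Synthetic "collocated" samples standing in for lidar depths and a
# log-band-ratio derived from imager radiometry (all values invented).
rng = random.Random(0)
h_lidar = [rng.uniform(1.0, 15.0) for _ in range(200)]          # metres
ratio = [1.0 + 0.05 * h + rng.gauss(0, 0.02) for h in h_lidar]  # ratio proxy

# Ordinary least squares for H ~ m1 * ratio + m0 (the empirical regression).
xb, yb = st.mean(ratio), st.mean(h_lidar)
sxx = sum((x - xb) ** 2 for x in ratio)
m1 = sum((x - xb) * (y - yb) for x, y in zip(ratio, h_lidar)) / sxx
m0 = yb - m1 * xb

def h_imager(x):
    """Depth estimate for a pixel's band-ratio value."""
    return m1 * x + m0

def confidence(x):
    """Crude confidence proxy: residual spread inflated by leverage, so
    pixels far from the calibration data get a wider (less confident) bound."""
    resid = [y - h_imager(xi) for xi, y in zip(ratio, h_lidar)]
    s = (sum(r * r for r in resid) / (len(ratio) - 2)) ** 0.5
    leverage = 1.0 / len(ratio) + (x - xb) ** 2 / sxx
    return s * (1.0 + leverage) ** 0.5
```

The key point the abstract makes is visible here: `h_imager` produces a value for every pixel, but `confidence` grows for band-ratio values far from the calibration samples, so the map's trustworthiness is spatially variable.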


2015 ◽  
Vol 64 (1) ◽  
pp. 113-124 ◽  
Author(s):  
Stewart Walker ◽  
Arleta Pietrzak

Abstract

Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.


2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

Abstract

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
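The final step, pairing each laser point with the time-nearest sequence image, can be sketched as follows. This is a minimal stand-in, not the authors' implementation: here each point carries its own time stamp and each image is reduced to a single color, whereas the real method projects points through the solved exterior orientation to sample per-pixel color.

```python
import bisect

def fuse_by_gnss_time(points, images):
    """points: [(time, x, y, z)]; images: [(time, (r, g, b))] sorted by time.
    Returns [(x, y, z, r, g, b)] -- a crude 'true color' point cloud."""
    times = [t for t, _ in images]
    colored = []
    for t, x, y, z in points:
        i = bisect.bisect_left(times, t)
        # Pick the neighbouring image whose GNSS time stamp is closest.
        if i == 0:
            j = 0
        elif i == len(times):
            j = len(times) - 1
        else:
            j = i if times[i] - t < t - times[i - 1] else i - 1
        colored.append((x, y, z, *images[j][1]))
    return colored
```

Indexing by GNSS time rather than searching all images keeps the pairing step linear in the number of points, which is consistent with the fusion-speed claim above.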


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2909 ◽  
Author(s):  
Hanjun Jiang ◽  
Shaolin Xiang ◽  
Yanshu Guo ◽  
Zhihua Wang

The surgical quality of total knee arthroplasty (TKA) depends on how accurately the knee prosthesis is implanted. The knee prosthesis is composed of the femoral component, the plastic spacer, and the tibial component. The instant and kinetic relative pose of the knee prosthesis is one key aspect of surgery quality evaluation. In this work, a wireless visualized sensing system with instant and kinetic prosthesis pose reconstruction has been proposed and implemented. The system consists of a multimodal sensing device, a wireless data receiver, and a data-processing workstation. The sensing device has the same shape and size as the spacer. During surgery, the sensing device temporarily replaces the spacer and captures images and the contact force distribution inside the knee joint prosthesis. It is connected to the external data receiver wirelessly through a 432 MHz data link, and the data are then sent to the workstation for processing. The signal-processing method for analyzing the instant and kinetic prosthesis pose from the image data has been investigated. Experiments on the prototype system show that the absolute reconstruction errors of the flexion-extension rotation angle (the pitch rotation of the femoral component around the horizontal long axis of the spacer), the internal-external rotation (the yaw rotation of the femoral component around the spacer's vertical axis), and the mediolateral translation displacement between the centers of the femoral component and the spacer, all derived from the image data, are less than 1.73°, 1.08°, and 1.55 mm, respectively. The system also provides a force-balance measurement with an error of less than ±5 N. The experiments further show that kinetic pose reconstruction can detect surgical defects that cannot be detected by force measurement or instant pose reconstruction alone.
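The two angles defined in parentheses above correspond to standard Euler-angle extraction from a relative rotation matrix. A minimal sketch, assuming a ZYX (yaw-pitch-roll) convention with the spacer's vertical axis as z; the convention and function name are illustrative assumptions, not taken from the paper:

```python
import math

def pitch_yaw_degrees(R):
    """Given a 3x3 rotation matrix R of the femoral component relative to the
    spacer, return (flexion-extension pitch, internal-external yaw) in degrees,
    under an assumed ZYX Euler convention."""
    # Pitch about the spacer's long horizontal axis (y in this convention).
    pitch = math.degrees(math.atan2(-R[2][0], math.hypot(R[0][0], R[1][0])))
    # Yaw about the spacer's vertical axis (z in this convention).
    yaw = math.degrees(math.atan2(R[1][0], R[0][0]))
    return pitch, yaw
```

For example, a pure 30° rotation about the vertical axis yields a yaw of 30° and a pitch of 0°, matching the internal-external rotation described in the abstract.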


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1324 ◽  
Author(s):  
Katharina Willburger ◽  
Kurt Schwenk ◽  
Jörg Brauchle

The monitoring of worldwide ship traffic is a highly topical field. Activities like piracy, ocean dumping, and refugee transportation are in the news every day. The detection of ships in remotely sensed data from airplanes, drones, or spacecraft contributes to maritime situational awareness. The crucial factor, however, is the up-to-dateness of the extracted information. With ground-based processing, the time between image acquisition and delivery of the extracted product data is in the range of several hours, mainly because of the time consumed by storing and transmitting the large image data. By processing and analyzing the images on board and transmitting only the product data, such as ship position, heading, and velocity, the delay can be shortened to a few minutes. Real-time connections via satellite telecommunication services allow small packets of information to be sent directly to the user without significant delay. The AMARO (Autonomous Real-Time Detection of Moving Maritime Objects) project at DLR is a feasibility study of an on-board ship detection system involving on-board processing and real-time communication. The operation of a prototype system was successfully demonstrated on an airborne platform in spring 2018. The on-ground user could be informed about detected vessels within minutes of sighting, without a direct communication link. In this article, the scope, aim, and design of the AMARO system are described, and the results of the flight experiment are presented in detail.


i-Perception ◽  
2017 ◽  
Vol 8 (5) ◽  
pp. 204166951773348 ◽  
Author(s):  
Jan Koenderink ◽  
Andrea van Doorn

Generic red, green, and blue images can be regarded as data sources of coarse (three-bin) local spectra; typical data volumes are 10^4 to 10^7 spectra. Image databases often yield hundreds or thousands of images, yielding data sources of 10^9 to 10^10 spectra. There is usually no calibration, and various nonlinear image transformations are often involved. However, we argue that sheer numbers make up for such ambiguity. We propose a model of spectral data mining that applies to the sublunar realm: spectra due to the scattering of daylight by objects from the generic terrestrial environment. The model involves colorimetry and ecological physics. Whereas the colorimetry is readily dealt with, the ecological physics must be handled with heuristic methods. The results suggest evolutionary causes of the human visual system. We also suggest effective methods to generate red, green, and blue color gamuts for various terrains.
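The premise of the first sentence, an RGB pixel read as a coarse three-bin spectrum, can be made concrete with a small sketch; the normalization and function name are our own choices for illustration, not from the paper.

```python
def image_as_spectra(pixels):
    """pixels: iterable of (r, g, b) values in 0..255. Returns each pixel as a
    normalized three-bin 'spectrum' (relative energy per band), skipping black
    pixels, which carry no spectral information."""
    spectra = []
    for r, g, b in pixels:
        total = r + g + b
        if total:
            spectra.append((r / total, g / total, b / total))
    return spectra
```

At roughly 10^4 to 10^7 pixels per image, a database of thousands of uncalibrated images turns this trivial conversion into the 10^9-spectrum data source the abstract describes.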


1996 ◽  
Vol 5 (1) ◽  
pp. 61-71 ◽  
Author(s):  
Michitaka Hirose ◽  
Kazuhisa Takahashi ◽  
Tomoki Koshizuka ◽  
Taku Morinobu ◽  
Yoichi Watanabe

During recent years, the use of virtual reality technology has become widespread and popular. However, to further broaden the application of virtual reality, more sophisticated and realistic virtual worlds need to be developed. Traditionally, most virtual worlds are generated using three-dimensional (3D) computer graphics incorporating 3D geometric models and various rendering software. However, if the 3D models become very complex, the delay caused by rendering calculations makes it difficult for the user to interact with the virtual world. Also, the production of realistic 3D computer graphics is very cost- and labor-intensive. From a practical point of view, it is clear that alternative approaches are needed to realize a truly realistic virtual world. In this paper, the authors introduce a method of generating virtual worlds other than by 3D computer graphics: virtual worlds are generated by processing 2D real images taken by video cameras. For this purpose, a special video camera system that records image data indexed by position data was developed. Using recorded image data indexed by position data, we are able to experience the virtual image world interactively. This method has become feasible owing to advances in multimedia computers capable of handling large volumes of image data. A prototype of this kind of system is discussed in some depth, along with its capabilities and limitations.


Author(s):  
Vincent Tao ◽  
Ted Q. K. Wang

A pipeline project normally not only covers a large geographic range but also deals with a variety of data sources, such as geological, geographical, environmental, engineering, and socioeconomic data. GIS has proven to be an effective approach to integrating, managing, and analyzing these heterogeneous data sources. Owing to the nature of pipeline applications, the third dimension of geospatial data is of considerable importance for pipeline planning, construction, and maintenance, and there is an increasing demand for the development of a 3D GIS for pipeline applications. With the advent of the Internet, distributed computing, and computer graphics technologies, the development of web-based 3D GIS has become technologically possible. The combination of 3D GIS and web-based computing technologies opens a whole new avenue for the pipeline industry. In this paper, we address the development of a web-based 3D GIS in terms of its benefits and technical challenges. The detailed system architecture as well as the algorithms developed are also discussed. Finally, potential applications for the pipeline industry are introduced, and a prototype system, GeoEye 3D, developed by the Department of Geomatics Engineering at the University of Calgary, is described.

