Analysis of Temporal Latency Variation on Network Coordinate System for Host Selection in Large-Scale Distributed Network

Author(s): Hiroshi Yamamoto, Katsuyuki Yamazaki

Author(s): Jiabo Zhang, Xibin Wang, Ke Wen, Yinghao Zhou, Yi Yue, ...

Purpose: The purpose of this study is to present and investigate a simple and rapid calibration methodology for industrial robots. Extensive research effort was devoted to meeting the requirements of online compensation, closed-loop feedback control and high-precision machining during flexible robotic machining of large-scale cabins.

Design/methodology/approach: A simple and rapid method to construct the transformation between the robot base coordinate system and the measurement coordinate system was proposed based on geometric constraints. By establishing a Denavit–Hartenberg model for robot calibration, a two-step error method for kinematic parameter calibration was put forward, enabling step-by-step calibration of angle and distance errors. A KUKA robot was taken as the research object, and the related experiments were performed with a laser tracker.

Findings: The experimental results demonstrated that the accuracy of the coordinate transformation reached 0.128 mm, which meets the transformation requirements. Compared with the other methods used in this study, the two-step error calibration method significantly improved the positioning accuracy of the robot, to 0.271 mm.

Originality/value: The methodology based on geometric constraints and two-step error is simple, rapidly calibrates the kinematic parameters of the robot, and improves its positioning accuracy.
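The calibration described above rests on a Denavit–Hartenberg kinematic model, in which each link contributes one homogeneous transform and the tool position follows from their product. As a rough illustration of that model (the link parameters below are placeholders, not the KUKA robot's actual values, which the abstract does not give), a minimal forward-kinematics sketch:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous 4x4 transform for one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Tool position from joint angles and per-link (d, a, alpha) parameters."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]
```

Calibration then amounts to adjusting the (theta, d, a, alpha) parameters so that predicted positions match laser-tracker measurements; the paper's two-step scheme splits this into angle errors first, distance errors second.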


PLoS ONE, 2018, Vol 13 (10), pp. e0203670
Author(s): Jungrim Kim, Mincheol Shin, Jeongwoo Kim, Chihyun Park, Sujin Lee, ...

2018, Vol 24 (6), pp. 582-608
Author(s): Fernando M. Ramírez

Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons, including neurons bimodally tuned to mirror-symmetric face-views, followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the review stresses the importance of explicit models relating neural properties to large-scale signals.
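The caveat about angular versus Euclidean distances can be made concrete: two response patterns pointing in the same direction but differing in overall gain are identical under an angular metric yet far apart under a Euclidean one, so the two metrics can support opposite inferences about pattern similarity. A minimal sketch with made-up pattern vectors:

```python
import numpy as np

def euclidean(u, v):
    """Euclidean distance: sensitive to both direction and magnitude."""
    return np.linalg.norm(u - v)

def angular(u, v):
    """Angle between pattern vectors: ignores overall response magnitude."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical voxel response patterns:
a = np.array([1.0, 2.0, 3.0])
b = 3.0 * a                      # same direction as a, larger gain
c = np.array([3.0, 2.0, 1.0])    # different direction

# Angular distance treats a and b as identical; Euclidean distance
# rates a as closer to c than to its own scaled copy b.
```

This is why, as the review argues, the choice of distance metric must be made explicit before concluding that a region carries mirror-symmetric coding.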


A theory already developed is applied to the case of two-dimensional motion parallel, at each point of space, to some member, Σ, of a one-parameter family of surfaces, the coordinate system being a network of orthogonal curves drawn on Σ. The geodesic curvatures of the orthogonal curves and their relationship to the Gaussian curvature of Σ are worked out. The equations of motion and of continuity are expressed in terms of the geodesic curvatures. Meyer's aerodynamical equations are derived as particular cases when the network is fixed in space and the surfaces are all planes. A formula for a large-scale gradient wind is also obtained as an example of the use of a moving network drawn on a sphere.
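The paper's moving-network derivation is not reproduced in the abstract; as a point of reference only, the classical gradient-wind balance (the textbook special case, not the formula obtained in the paper) is a quadratic in the wind speed and can be solved directly:

```python
import numpy as np

def gradient_wind_speed(f, R, G):
    """
    Textbook gradient-wind balance for cyclonic flow around a low:
        v**2 / R + f * v = G,
    where G = (1/rho) * |dp/dn| is the inward pressure-gradient force per
    unit mass, f the Coriolis parameter (1/s), and R the radius of
    curvature (m). Returns the physical (positive) root of the quadratic.
    """
    disc = (f * R / 2.0) ** 2 + R * G
    return -f * R / 2.0 + np.sqrt(disc)
```

For a low-pressure system the resulting speed is subgeostrophic, i.e. below the geostrophic value G / f, because the centripetal term takes up part of the pressure-gradient force.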


2016, Vol 10 (2)
Author(s): Xianwen Yu, Huiqing Wang, Jinling Wang

When producing large-scale (larger than 1:2000) maps of cities or towns, obstruction from buildings makes measuring mapping control points difficult and laborious. To avoid measuring mapping control points and shorten fieldwork time, this paper proposes a quick mapping method. The method adjusts many free survey blocks together and transforms the points from all free blocks into the same coordinate system. The entire survey area is divided into many free blocks, and connection points are set on the boundaries between them. An independent coordinate system for every free block is established via completely free station technology, and the coordinates of the connection points, detail points and control points of every free block are obtained in the corresponding independent coordinate system based on poly-directional open traverses. Error equations are established from the connection points and solved together to obtain the transformation parameters. All points are then transformed from the independent coordinate systems to a transitional coordinate system via these parameters. Several control points are measured by GPS in a geodetic coordinate system, so that all points can finally be transformed from the transitional coordinate system to the geodetic one. The implementation process and mathematical formulas of the new method are presented in detail, and a formula for estimating the precision of the surveys is given. An example demonstrates that the precision of the new method can meet large-scale mapping needs.
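The adjustment via connection points amounts to estimating transformation parameters by least squares. One common formulation in plane surveying is the four-parameter (Helmert) similarity transform; the sketch below illustrates that generic formulation, not necessarily the paper's exact error equations:

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """
    Least-squares four-parameter (Helmert) transform dst ~ s*R(phi)@src + t,
    linearised as  x' = a*x - b*y + tx,  y' = b*x + a*y + ty.
    src, dst: (n, 2) arrays of matching connection points (n >= 2).
    """
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    L = dst.reshape(-1)                      # [x1', y1', x2', y2', ...]
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] =  src[:, 0]; A[1::2, 3] = 1.0
    params, *_ = np.linalg.lstsq(A, L, rcond=None)
    return tuple(params)                     # (a, b, tx, ty)

def apply_similarity_2d(params, pts):
    """Apply a fitted (a, b, tx, ty) transform to (n, 2) points."""
    a, b, tx, ty = params
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])
```

With parameters fitted per free block from its shared connection points, every block's detail points can be pushed into the common transitional coordinate system, as the abstract describes.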


2020
Author(s): Yuan Gao

This thesis discusses approaches and techniques to convert Sparsely-Sampled Light Fields (SSLFs) into Densely-Sampled Light Fields (DSLFs), which can be used for visualization on 3DTV and Virtual Reality (VR) devices. Exemplarily, a movable 1D large-scale light field acquisition system for capturing SSLFs in real-world environments is evaluated. This system consists of 24 sparsely placed RGB cameras and two Kinect V2 sensors. The real-world SSLF data captured with this setup can be leveraged to reconstruct real-world DSLFs. To this end, three challenging problems need to be solved for this system: (i) how to estimate the rigid transformation from the coordinate system of a Kinect V2 to the coordinate system of an RGB camera; (ii) how to register the two Kinect V2 sensors with a large displacement; (iii) how to reconstruct a DSLF from an SSLF with moderate and large disparity ranges. To overcome these three challenges, we propose: (i) a novel self-calibration method, which takes advantage of the geometric constraints from the scene and the cameras, for estimating the rigid transformations from the camera coordinate frame of one Kinect V2 to the camera coordinate frames of the 12 nearest RGB cameras; (ii) a novel coarse-to-fine approach for recovering the rigid transformation from the coordinate system of one Kinect to the coordinate system of the other by means of local color and geometry information; (iii) several novel algorithms, falling into two groups, for reconstructing a DSLF from an input SSLF: novel view synthesis methods, inspired by state-of-the-art video frame interpolation algorithms, and Epipolar-Plane Image (EPI) inpainting methods, inspired by Shearlet Transform (ST)-based DSLF reconstruction approaches.
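For context on problem (i), the classical way to estimate a rigid transformation between two camera coordinate frames, given matched 3D points, is the SVD-based (Kabsch) least-squares solution. The thesis proposes its own geometric self-calibration method; the sketch below shows only this standard point-based alternative:

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """
    Least-squares rigid transform (R, t) with Q ~ P @ R.T + t, i.e.
    q_i = R @ p_i + t, via the SVD (Kabsch) solution on matched 3D points.
    P, Q: (n, 3) arrays of corresponding points, n >= 3, not collinear.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal orthogonal matrix:
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice the matched points would come from features visible to both the Kinect V2 depth sensor and the RGB camera; the thesis's contribution lies precisely in obtaining such constraints without a dedicated calibration target.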


Author(s): Maurizio Galetto, Luca Mastrogiacomo, Barbara Pralio

The aim of this paper is to describe the architecture and working principles of a novel InfraRed (IR) optical-based distributed system designed to perform low-cost indoor coordinate measurements of large-size objects. The hardware/software architecture and system functionalities are discussed, focusing on the integration of methods for distributed network configuration, sensor self-calibration, 3D point localization, and data processing. A preliminary performance evaluation of the sensor devices, as well as of the overall measuring system, is carried out by discussing experimental results obtained with a system prototype.
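3D point localization from a distributed network of optical sensors is commonly posed as finding the point closest, in a least-squares sense, to the sight rays reported by each sensor. The following is a generic sketch of that formulation, not the paper's specific algorithm:

```python
import numpy as np

def triangulate_rays(origins, directions):
    """
    Least-squares 3D point closest to a set of rays, one per sensor,
    each given by an origin o_i and direction d_i. Stacking the
    per-ray projectors M_i = I - d_i d_i^T yields the normal equations
        (sum_i M_i) x = sum_i M_i o_i,
    solved directly below. Needs at least two non-parallel rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projects onto plane normal to d
        A += M
        b += M @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

With more than two sensors the same normal equations average out per-sensor pointing noise, which is one motivation for distributing many low-cost IR devices around the measurement volume.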

