A study of transformation of object location from an image plane to a 3D space based on vanishing point and reference points on the 3D axes

Author(s):  
Teerasak Chotikawanid ◽  
Watcharapan Suwansantisuk ◽  
Pinit Kumhom
Semiotica ◽  
2018 ◽  
Vol 2018 (225) ◽  
pp. 19-38 ◽  
Author(s):  
Donna E. West

Abstract
Directing attention to features in the here and now (via individual or joint ventures) is the single most basic purpose of Dicisigns in human ontogeny. To effectively individuate in the stream of relational awareness, attentional devices must maximize notice of the dynamicity of primary graphical displays. This is a complex process, in that it requires codification of several interconnected but individualized spatial systems and event correlates: associating objects with locations, utilizing other objects as reference points, using intrinsic sidedness and absolute points of reference to orient, and anticipating potential alterations of participants within the spatial array. Early awareness of shifting object location relies upon a double sign (index, icon) to identify and implement landmarks for precise object location. Afterward, establishing other persons/objects as reference points becomes critical. Determining orientation and motility ultimately requires individuating-shape representamens which can leverage spatial inferencing, defining participant action schemes via event profiles. In other words, the expectations of action paths which attentional signs afford drive well-formed abductions of participants' likely momentary orientational shifts. Nonetheless, to successfully predict these shifts, Dicisigns must supersede affiliation with single energetic interpretants; they need to incorporate logical interpretants realized in agent-receiver role reciprocation.


Author(s):  
WIRAT KESRARAT ◽  
THOTSAPON SORTRAKUL

This research proposes a methodology for specifying the location of an object with image processing. The objectives of this methodology are to capture the target area and to specify the location of the object from images. In order to locate a dropped object on the image plane efficiently, consecutive images are analyzed and a threshold operation is proposed, because the difference of consecutive images on the image plane is usually influenced by noise. Moreover, a transformation unit is adopted to map the XY coordinates on the image plane into world coordinates for an accurate position of the dropped object. After obtaining the actual XY coordinate of the dropped object, we can also find its distance from the target point (center) and its clock direction relative to the center. In addition, a single digital video camera, mounted on a tower and panned to capture images of the target area, is used to detect an object dropped from the air to the ground. This makes the proposed methodology easily portable for detecting dropped objects in any area.
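The frame-differencing and thresholding step described above can be sketched as follows; the frame data, threshold value, and centroid step are illustrative assumptions, not the authors' actual implementation:

```python
def locate_object(prev_frame, curr_frame, threshold=30):
    """Return the (row, col) centroid of pixels that changed by more than
    `threshold` between two equally sized grayscale frames, or None."""
    rows, cols, count = 0.0, 0.0, 0
    for r, (p_row, c_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(p_row, c_row)):
            if abs(q - p) > threshold:   # threshold suppresses sensor noise
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None                      # no moving object detected
    return rows / count, cols / count

# Usage: a 5x5 frame where a bright "object" appears at row 2, column 3.
prev = [[10] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
curr[2][3] = 200
print(locate_object(prev, curr))  # -> (2.0, 3.0)
```

The centroid of the changed region would then be passed to the transformation unit that maps it into world coordinates.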


Author(s):  
N. Zeller ◽  
C. A. Noury ◽  
F. Quint ◽  
C. Teulière ◽  
U. Stilla ◽  
...  

In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach, we evaluate the accuracy of virtual image points projected back to 3D space.
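As a rough sketch of the three-dimensional residual described above (the paper's exact formulation may differ), the two lateral image-plane residuals can be paired with a depth residual defined on scaled inverse virtual depth, so that all three components are commensurate with the measured data:

```python
def residual_3d(obs_xy, pred_xy, v_obs, v_pred, scale):
    """Return (dx, dy, dz): lateral image-plane residuals plus the
    scaled inverse-virtual-depth residual. `scale` is the factor that
    brings the depth term to the same order as the pixel terms."""
    dx = obs_xy[0] - pred_xy[0]
    dy = obs_xy[1] - pred_xy[1]
    dz = scale * (1.0 / v_obs - 1.0 / v_pred)   # inverse virtual depth
    return dx, dy, dz

# dx, dy in pixels and the scaled inverse-depth residual:
print(residual_3d((100.5, 200.0), (100.0, 200.5), 2.0, 2.5, 10.0))
```

In a bundle adjustment, such residuals would be stacked over all observations and minimized jointly over the intrinsic and distortion parameters.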


Author(s):  
E. Nocerino ◽  
F. Menna ◽  
B. Chemisky ◽  
P. Drap

Abstract. Although fully autonomous mapping methods are becoming more common and reliable, human operators are still regularly employed in many 3D surveying missions. In a number of underwater applications, divers or pilots of remotely operated vehicles (ROVs) are still considered irreplaceable, and tools for real-time visualization of the mapped scene are essential to support and maximize the navigation and surveying efforts. For underwater exploration, image mosaicing has proved to be a valid and effective approach to visualizing large mapped areas, often employed in conjunction with autonomous underwater vehicles (AUVs) and ROVs. In this work, we propose a modified image mosaicing algorithm that, coupled with image-based real-time navigation and mapping algorithms, provides two visual navigation aids. The first is a classic image mosaic, where the recorded and processed images are incrementally added, named 2D sequential image mosaicing (2DSIM). The second geometrically transforms the images so that they are projected as planar point clouds in 3D space, providing an incremental point cloud mosaic, named 3D sequential image plane projection (3DSIP). In the paper, the implemented procedure is detailed, and experiments in different underwater scenarios are presented and discussed. Technical considerations about computational effort, frame-rate capabilities, and scalability to different and more compact architectures (i.e., embedded systems) are also provided.
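The 3DSIP idea of projecting images as planar point clouds can be illustrated with a minimal ray-plane intersection; the intrinsics, pose, and the assumption of a flat scene at z = 0 below are illustrative stand-ins, not the authors' pipeline:

```python
import numpy as np

def pixel_to_plane(u, v, K, R, t):
    """Intersect the viewing ray of pixel (u, v) with the plane z = 0.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    C = -R.T @ t                        # camera centre in world coordinates
    d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction
    s = -C[2] / d[2]                    # parameter where the ray hits z = 0
    return C + s * d

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])          # camera looking straight down
t = np.array([0.0, 0.0, 5.0])           # camera centre at (0, 0, 5)

# One image row projected as a strip of the planar point cloud:
cloud = [pixel_to_plane(u, 240, K, R, t) for u in (320, 360, 400)]
print(cloud[0])                         # principal ray hits the world origin
```

Repeating this for every frame, with its current pose from the real-time navigation algorithm, incrementally grows the 3D mosaic.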


2020 ◽  
Vol 958 (4) ◽  
pp. 41-50
Author(s):  
S.M. Mokrova ◽  
R.P. Petrov ◽  
V.N. Milich

The article deals with an algorithm for determining the exterior and interior orientation elements of an infrared image obtained from an unmanned aerial vehicle using four reference points. The idea of the proposed algorithm is to determine the true position of the image from the known three-dimensional spatial coordinates of the reference points in the image at the time of shooting. The image plane is fitted through the defined points. The coordinates of the principal point of the image are calculated by dropping a perpendicular from the perspective center to the plane of the image. The focal length is equal to the length of this perpendicular. Euler angles characterizing the position of the camera at the time of shooting are calculated after determining the axes' directions of the inclined image coordinate system. The proposed algorithm is effective even when all the elements of the image orientation are unknown. Calculations of the orientation elements on model examples with different initial data show high accuracy. The possibility of obtaining the accuracy necessary for the orthotransformation procedure was confirmed on real images.
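The perpendicular construction described above can be sketched directly; the perspective centre and image-point coordinates below are made-up model data, not the article's:

```python
import numpy as np

def principal_point_and_focal(S, p1, p2, p3):
    """Given the perspective centre S and three points spanning the image
    plane, return the foot of the perpendicular from S to the plane (the
    principal point) and its length (the focal length)."""
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)           # unit normal of the image plane
    dist = np.dot(S - p1, n)            # signed distance from S to the plane
    foot = S - dist * n                 # foot of the perpendicular
    return foot, abs(dist)

S = np.array([0.0, 0.0, 10.0])          # perspective centre (model data)
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
p3 = np.array([-1.0, -1.0, 0.0])
foot, f = principal_point_and_focal(S, p1, p2, p3)
print(f)  # -> 10.0
```

With the principal point and focal length fixed, the Euler angles follow from the axis directions of the inclined image coordinate system.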


2021 ◽  
Vol 87 (3) ◽  
pp. 76-84
Author(s):  
O. V. Vladimirova ◽  
Yu. D. Grigoriev

A problem of optimizing the configuration of a navigation measuring system is considered in terms of experimental design, using a distance navigation problem for determining the object's location. It is shown that the stated problem is equivalent to the problem of A-optimal experimental design for a regression function (nonlinear in its parameters) and can be reduced to a trigonometric model. The response function, Fisher information, and the sensitivity factor of the navigation system in the case of two and three beacons and correlated measurements are presented in explicit form. Using the equivalence theorem for the A-criterion in the case of the two-dimensional (plane) distance problem, we confirm again Barabanov's result that the matrices of A-optimal designs are the Kolmogorov-Maltsev matrices. A similar result holds for the D-optimality criterion in the considered case. The effect of measurement correlation in a distance navigation problem with two and three reference points is considered. Formulas for the sensitivity factors, expressed in terms of bearings on the reference points and the intersection angle at the object, are derived. In addition to the problem of optimizing the network configuration, the data processing problem in the two-dimensional distance navigation problem with two reference points is also considered. The location of the object is determined in two ways, i.e., using the geometrical method and the method of resultants. In the first method, the solution of the distance navigation problem comes down to two independent quadratic equations for determining the first and second coordinates of the object. The equations are obtained in explicit form. The second method also leads to two quadratic equations for determining the object's location. This is a variant of the exclusion method which provides an explicit form of the conditions ensuring the solvability of the considered problem. Examples are considered that confirm the stated conclusions.
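The two-reference-point distance problem discussed above reduces to intersecting two circles, each quadratic in the unknown coordinates; a plane-geometry sketch with hypothetical beacon positions and ranges (not the paper's examples):

```python
import math

def locate(b1, b2, r1, r2):
    """Return the two candidate positions with ranges r1, r2 to beacons
    b1, b2 in the plane, or None when the circles do not intersect."""
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None                        # coincident beacons or no fix
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance along the baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # offset perpendicular to it
    mx, my = b1[0] + a * dx / d, b1[1] + a * dy / d
    return ((mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d))

# Beacons 10 units apart, ranges 6 and 8: two mirror-image candidate fixes.
print(locate((0, 0), (10, 0), 6, 8))
```

The residual two-fold ambiguity is the geometric reason a third beacon, or prior information, is needed for a unique fix.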


2020 ◽  
Vol 1 (1) ◽  
pp. 08-13
Author(s):  
Yaseen Mustafa

Resection in 3D space is a common problem in surveying engineering and photogrammetry, based on observed distances, angles, and coordinates. The resection problem is nonlinear and comprises redundant observations, and is normally solved with the least-squares method in an iterative approach. In this paper, we introduce a robust angle-based resection method that converges to the global minimum even with very challenging starting values of the unknowns. The method is based on deriving oblique angles from the measured horizontal and vertical angles by solving spherical triangles. The derived oblique angles tightly connect the rays enclosed between the resection point and the reference points. Both nonlinear least-squares adjustment techniques, Gauss-Newton and Levenberg-Marquardt, are applied in two 3D resection experiments. In both numerical methods, the results converged steadily to the global minimum using the proposed angular resection, even with improper starting values. Moreover, the Levenberg-Marquardt method reached the global minimum solution in all the challenging situations and outperformed the Gauss-Newton method.
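The core step of deriving an oblique angle between two rays from their horizontal and vertical angles follows from the spherical law of cosines; a minimal sketch (the full least-squares adjustment is not reproduced here):

```python
import math

def oblique_angle(h1, v1, h2, v2):
    """Oblique angle between two rays given their horizontal angles h and
    vertical (elevation) angles v, all in radians, via the spherical law
    of cosines on the unit sphere around the resection point."""
    cos_g = (math.sin(v1) * math.sin(v2)
             + math.cos(v1) * math.cos(v2) * math.cos(h2 - h1))
    return math.acos(max(-1.0, min(1.0, cos_g)))  # clamp rounding noise

# Two horizontal rays separated by 90 degrees of azimuth:
print(math.degrees(oblique_angle(0.0, 0.0, math.pi / 2, 0.0)))
```

Such oblique angles, one per pair of reference points, form the observations that the Gauss-Newton or Levenberg-Marquardt adjustment then fits.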


2012 ◽  
Vol 246-247 ◽  
pp. 22-27
Author(s):  
Zheng Zhang ◽  
Xiao Wei Liu ◽  
Guang You Yang

A calculation model of 3D space transformation applicable to the monocular vision of a robot manipulator is introduced, which solves the mapping problem from the image plane to the actual horizontal plane in monocular vision. The model transforms the imaging coordinate system of the target into the world coordinate system of the manipulator, so as to calculate the position of the target relative to the manipulator. The accuracy and reliability of the algorithm are demonstrated by comparing the transformed object coordinates against the actual positions of the sampling points on an embedded platform.
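Once calibrated, the image-plane-to-horizontal-plane mapping such a model provides can be summarized as a projective (homography) transformation; the matrix below is an illustrative stand-in, not the paper's calibrated model:

```python
import numpy as np

def to_world(H, u, v):
    """Map pixel (u, v) to world XY on the horizontal plane via the
    3x3 homography H, then dehomogenise."""
    x, y, w = H @ np.array([u, v, 1.0])
    return float(x / w), float(y / w)

# Stand-in homography: 0.01 m per pixel with a 1 m offset in each axis.
H = np.array([[0.01, 0.0, -1.0],
              [0.0, 0.01, -1.0],
              [0.0, 0.0, 1.0]])
print(to_world(H, 100, 100))  # -> (0.0, 0.0)
```

The manipulator would use the resulting world XY, together with the known table height, as the target position for grasping.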


Author(s):  
Paweł Rotter ◽  
Witold Byrski ◽  
Michał Dajda ◽  
Grzegorz Łojek

Abstract
In the double-plane method for stereo vision system calibration, the correspondence between screen coordinates and location in 3D space is calculated from four plane-to-plane transformations: there are two planes of the calibration pattern and two cameras. The method is intuitive and easy to implement, but its main disadvantage is ill-conditioning for some spatial locations. In this paper we propose a method which exploits a third plane that does not physically belong to the calibration pattern but can be calculated from the set of reference points. Our algorithm uses a combination of three calibration planes, with weights that depend on the screen coordinates of the point of interest; a pair of planes which could cause numerical errors receives small weights and has practically no influence on the final result. We analyse the errors and their distribution in 3D space for the basic and the improved algorithms. Experiments demonstrate the high accuracy and reliability of our method compared to the basic version; the root mean square error and the maximum error are reduced by factors of 4 and 20, respectively.
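The weighting idea can be sketched schematically; the inverse-conditioning weight function below is a guess for illustration, not the authors' formula:

```python
def blend(estimates, conditions, eps=1e-6):
    """Weighted average of per-plane-pair 3D estimates. `conditions` are
    conditioning scores for the current screen coordinates (larger =
    worse); a near-ill-conditioned pair gets a vanishing weight."""
    weights = [1.0 / (c + eps) for c in conditions]
    total = sum(weights)
    return tuple(sum(w * e[i] for w, e in zip(weights, estimates)) / total
                 for i in range(3))

# Two well-conditioned plane pairs agree; the third is ill-conditioned
# and wildly wrong, but its weight suppresses it.
ests = [(1.0, 2.0, 3.0), (1.2, 2.2, 3.2), (50.0, 50.0, 50.0)]
conds = [1.0, 1.0, 1e9]
print(tuple(round(v, 3) for v in blend(ests, conds)))  # -> (1.1, 2.1, 3.1)
```

This captures the qualitative behaviour described above: the blended result is dominated by the numerically stable plane pairs.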

