A Region-Based Stereo

1996 ◽  
Vol 8 (2) ◽  
pp. 171-176
Author(s):  
Hiroshi Katsulai ◽  
Hirotaka Niwa

Stereo vision, a method of obtaining depth information about a scene from images taken from at least two different directions, plays a very important role in applications to robots and similar equipment. The most difficult task in the stereo method is matching individual parts of one two-dimensional projected image to those of another.1,2) Many studies of matching methods have been conducted, and various techniques have been proposed. Feature-based stereo has attracted attention in recent years. However, matching often fails when points are used as features3) because points are difficult to localize in images. On the other hand, line segments, which are easier to extract than points, have been proposed as matching features instead.6) Furthermore, methods have been developed that use regional features as an extension of the line-segment approach.7) The region-based method is considered to have a higher probability of matching success than point- or line-segment-based methods because regional features carry a relatively rich description. However, region-based stereo has not yet been studied sufficiently, and this makes basic studies of it necessary. This paper employs regions as features for matching, describes a stereo algorithm that directly employs region segmentation, and investigates the appropriateness of the algorithm by means of computer simulations. It is assumed that the three-dimensional object is a polyhedron and that each face of the object is projected onto a two-dimensional projection plane with uniform brightness using central projection. Region segmentation is delicate and does not necessarily yield stable results. 
However, a pair of two-dimensional projected images is considered not to differ greatly when the same scene is observed from slightly different directions. This paper uses the centroid of a region (which represents its position), the region shape, and the gray level of the region as features for matching. To increase matching accuracy, the matching technique performs an operation that is almost equivalent to enumerating all region pairs, using the sum of the similarity values of the regional features as the evaluation function. A three-dimensional plane can then be calculated from two matched regions by matching the boundary points at the same height in the two projected images.
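The evaluation function described above, the sum of per-feature similarity values over candidate region pairs, can be sketched as follows. The feature set (centroid, area as a shape proxy, mean gray level), the exponential similarity form, and all scale constants are illustrative assumptions, not the paper's exact choices:

```python
import math

def similarity(a, b, scale):
    """Map a feature difference to a similarity value in (0, 1]."""
    return math.exp(-abs(a - b) / scale)

def match_score(r1, r2):
    """Sum of similarities over position, shape, and gray-level features."""
    s_pos = similarity(r1["cx"], r2["cx"], 10.0) * similarity(r1["cy"], r2["cy"], 10.0)
    s_shape = similarity(r1["area"], r2["area"], 50.0)   # area as a crude shape proxy
    s_gray = similarity(r1["gray"], r2["gray"], 16.0)
    return s_pos + s_shape + s_gray

def best_matches(left, right):
    """Near-exhaustive enumeration: highest-scoring right region per left region."""
    pairs = []
    for i, rl in enumerate(left):
        j = max(range(len(right)), key=lambda j: match_score(rl, right[j]))
        pairs.append((i, j, match_score(rl, right[j])))
    return pairs

left = [{"cx": 40, "cy": 60, "area": 300, "gray": 120}]
right = [{"cx": 35, "cy": 60, "area": 290, "gray": 118},
         {"cx": 90, "cy": 20, "area": 900, "gray": 40}]
print(best_matches(left, right))  # the nearby, similar-gray region wins
```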

Author(s):  
C.-Y. Kuo ◽  
J.D. Frost ◽  
J.S. Lai ◽  
L.B. Wang

Digital image analysis provides the capability for rapid measurement of particle characteristics. When an image is captured and digitized, numerous measurements can be made in near real time for each particle. Usually, image analysis techniques treat particles as two-dimensional objects, since only the two-dimensional projection of the particles is captured. In this study, a three-dimensional analysis of aggregate particles, performed by attaching the aggregates to sample trays with two perpendicular faces, is described. After the initial projected image of the aggregates is captured and measured, the sample trays are rotated 90 degrees so that the aggregates are perpendicular to their original orientation, and the dimensions of the aggregates in the new projected image are captured and measured. The long, intermediate, and short particle dimensions (dL, dI, and dS, respectively) provide direct measures of the flatness and elongation of the particles. Some other shape indexes can also be derived from the measurements of area and perimeter length. The proposed image analysis method was verified by comparing the results obtained with manual measurements of particle dimensions for uniform size [passing 12.7 mm (1/2 in.) sieve and retained on 9.5 mm (3/8 in.) sieve] aggregates. Three-dimensional image analysis was also performed on five aggregates of standard size No. 89 from different sources, and the results are summarized herein. The proposed method is expected to improve field quality control of aggregates used in hot mix asphalt.
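The flatness and elongation measures derivable from dL, dI, and dS can be sketched as below. The specific ratio definitions follow the common Zingg convention, which is an assumption here; the paper may define its indexes differently:

```python
# Flatness and elongation from the three measured particle dimensions.
# Convention assumed (Zingg): flatness = dS/dI, elongation = dI/dL,
# each in (0, 1], with values near 1 meaning a more equidimensional particle.

def shape_ratios(dL, dI, dS):
    assert dL >= dI >= dS > 0, "dimensions must satisfy dL >= dI >= dS > 0"
    flatness = dS / dI       # closer to 1 => less flat (less platy)
    elongation = dI / dL     # closer to 1 => less elongated
    return flatness, elongation

# Example particle: 12.0 mm long, 8.0 mm intermediate, 4.0 mm short axis.
f, e = shape_ratios(12.0, 8.0, 4.0)
print(f, e)  # 0.5 and ~0.667
```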


1996 ◽  
Vol 8 (6) ◽  
pp. 1321-1340 ◽  
Author(s):  
Joseph J. Atick ◽  
Paul A. Griffin ◽  
A. Norman Redlich

The human visual system is proficient in perceiving three-dimensional shape from the shading patterns in a two-dimensional image. How it does this is not well understood and continues to be a question of fundamental and practical interest. In this paper we present a new quantitative approach to shape-from-shading that may provide some answers. We suggest that the brain, through evolution or prior experience, has discovered that objects can be classified into lower-dimensional object classes according to their shape. Extraction of shape from shading is then equivalent to the much simpler problem of parameter estimation in a low-dimensional space. We carry out this proposal for an important class of three-dimensional (3D) objects: human heads. From an ensemble of several hundred laser-scanned 3D heads, we use principal component analysis to derive a low-dimensional parameterization of head shape space. An algorithm for solving shape-from-shading using this representation is presented. It works well even on real images, where it is able to recover the 3D surface for a given person, maintaining facial detail and identity, from a single 2D image of that person's face. This algorithm has applications in face recognition and animation.
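The core idea, a low-dimensional PCA parameterization of a scanned-head ensemble, can be sketched as follows. This is an illustrative toy (random data, small sizes), not the authors' code; real scans are far denser and the paper couples this basis to a shading model:

```python
import numpy as np

# Each head is a flattened vector of surface samples; PCA on the ensemble
# yields a basis in which any head is described by a few coefficients.
rng = np.random.default_rng(0)
n_heads, n_points = 200, 500          # toy sizes, purely illustrative
ensemble = rng.normal(size=(n_heads, n_points))

mean_head = ensemble.mean(axis=0)
X = ensemble - mean_head
# SVD gives the principal components (rows of Vt) without forming X^T X.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 20                                # dimension of the shape space
basis = Vt[:k]                        # shape (k, n_points)

def encode(head):
    """Project a head surface onto the k-dimensional shape space."""
    return basis @ (head - mean_head)

def decode(coeffs):
    """Reconstruct an approximate surface from shape coefficients."""
    return mean_head + basis.T @ coeffs

head = ensemble[0]
approx = decode(encode(head))
# The k-dimensional reconstruction is closer to the head than the mean is.
print(np.linalg.norm(head - approx) < np.linalg.norm(head - mean_head))
```

Shape-from-shading then reduces to searching over the k coefficients for the surface whose rendered shading best matches the input image.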


Author(s):  
James A. Lake ◽  
Henry S. Slayter

Cysts of Entamoeba invadens contain large ordered arrays of closely packed helices which absorb strongly in the ultraviolet. The helices consist of small, approximately spherical particles about 250Å in diameter. Several lines of evidence have indicated that they may be ribosomes, and we shall refer to these particles as ribosomes in this paper. DeRosier and Klug (1) have demonstrated that it is possible to reconstruct a three-dimensional object from two-dimensional projected images, i.e., micrographs, provided that sufficient views of individual molecules are available. A single view (micrograph) of one ribosomal helix provides many views of individual ribosomes.


2011 ◽  
Vol 21 (05) ◽  
pp. 495-506 ◽  
Author(s):  
KHALED ELBASSIONI ◽  
AMR ELMASRY ◽  
KAZUHISA MAKINO

We show that finding the simplices containing a fixed given point among those defined on a set of n points can be done in O(n + k) time for the two-dimensional case, and in O(n^2 + k) time for the three-dimensional case, where k is the number of these simplices. As a byproduct, we give an alternative (to the algorithm in Ref. 4) O(n log r) algorithm that finds the red-blue boundary for n bichromatic points on the line, where r is the size of this boundary. Another byproduct is an O(n^2 + t) algorithm that finds the intersections of line segments having two red endpoints with those having two blue endpoints defined on a set of n bichromatic points in the plane, where t is the number of these intersections.
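For the two-dimensional problem, the naive baseline that the paper's O(n + k) algorithm improves on can be sketched as follows: enumerate all O(n^3) triangles on the point set and test each for containment of the query point via orientation signs. This brute force is for illustration only and is not the authors' output-sensitive method:

```python
from itertools import combinations

def cross(o, a, b):
    """Signed area *2 of triangle (o, a, b); sign gives orientation."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def contains(tri, p):
    """True if point p lies inside or on the boundary of triangle tri."""
    a, b, c = tri
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)   # all signs agree (or zero) => inside

def simplices_containing(points, q):
    """All triangles on `points` that contain the query point q: O(n^3)."""
    return [t for t in combinations(points, 3) if contains(t, q)]

pts = [(0, 0), (4, 0), (0, 4), (5, 5), (-3, 1)]
print(simplices_containing(pts, (1, 1)))
```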


Author(s):  
Sree Shankar ◽  
Rahul Rai

Primary among all the activities involved in conceptual design is freehand sketching. There have been significant efforts in recent years to enable digital design methods that leverage humans’ sketching skills. Conventional sketch-based digital interfaces are built on two-dimensional touch-based devices like sketchers and drawing pads. The transition from two-dimensional to three-dimensional (3-D) digital sketch interfaces represents the latest trend in developing new interfaces that embody intuitiveness and human–human interaction characteristics. In this paper, we outline a novel screenless 3-D sketching system. The system uses a noncontact depth-sensing RGB-D camera for user input. Only depth information (no RGB information) is used in the framework. The system tracks the user's palm during the sketching process and converts the data into a 3-D sketch. As the generated data is noisy, making sense of what is sketched is facilitated through a beautification process that is suited to 3-D sketches. To evaluate the performance of the system and the beautification scheme, user studies were performed on multiple participants for both single-stroke and multistroke sketching scenarios.
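A first step of any such beautification is smoothing the noisy palm trajectory before higher-level interpretation. As an illustrative stand-in (the paper's actual scheme is more involved than this), a centered moving average over the raw 3-D samples:

```python
def smooth_stroke(points, window=5):
    """Centered moving average over a list of (x, y, z) palm samples."""
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        n = hi - lo  # window shrinks near the stroke endpoints
        out.append(tuple(sum(p[k] for p in points[lo:hi]) / n for k in range(3)))
    return out

# A jittery, roughly straight stroke along x:
raw = [(0, 0, 0), (1.2, 0.1, 0), (1.9, -0.2, 0.1), (3.1, 0.2, -0.1), (4, 0, 0)]
print(smooth_stroke(raw))
```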


Author(s):  
Amit S. Jariwala ◽  
Robert E. Schwerzel ◽  
Michael Werve ◽  
David W. Rosen

Stereolithography is an additive manufacturing process in which liquid photopolymer resin is cross-linked and converted to solid polymer with an ultraviolet light source. Exposure Controlled Projection Lithography (ECPL) is a stereolithographic process in which incident radiation, patterned by a dynamic mask, passes through a transparent substrate to cure a photopolymer layer that grows progressively from the substrate surface. In contrast to existing stereolithography techniques, this technique uses a gray-scale projected image, or alternatively a series of binary bit-map images, to produce a three-dimensional polymer object with the desired shape, and it can be used on either flat or curved substrates. Like most stereolithographic technologies, ECPL works in a unidirectional fashion. Calibration constants derived experimentally are fed to the software used to control the system. This unidirectional fabrication method does not, by itself, allow the system to compensate for minor variations, thereby limiting the overall accuracy of the process. We present here a simple, real-time monitoring system based on interferometry, which can be used to provide feedback control to the ECPL process, thus making it more robust and increasing system accuracy. The results obtained from this monitoring system provide a means to better visualize and understand the various phenomena occurring during the photopolymerization of transparent photopolymers.
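The "calibration constants" fed to the control software can be illustrated with the standard stereolithography working-curve model, which is an assumption here (ECPL's exact exposure model may differ): the resin's penetration depth Dp and critical exposure Ec relate cured height to exposure by Cd = Dp · ln(E/Ec):

```python
import math

def cured_height(intensity, time_s, Dp=0.15, Ec=10.0):
    """Cured height (mm) from the working-curve model Cd = Dp * ln(E/Ec).

    intensity: irradiance in mW/cm^2; time_s: exposure time in s.
    Dp (mm) and Ec (mJ/cm^2) are hypothetical calibration constants.
    """
    E = intensity * time_s               # exposure in mJ/cm^2
    return Dp * math.log(E / Ec) if E > Ec else 0.0

# Doubling the exposure time adds a fixed increment Dp*ln(2) to the height.
h1 = cured_height(5.0, 10.0)   # E = 50 mJ/cm^2
h2 = cured_height(5.0, 20.0)   # E = 100 mJ/cm^2
print(round(h2 - h1, 4))
```

This logarithmic sensitivity to exposure is one reason open-loop operation limits accuracy: small intensity drifts shift the cured height, which the interferometric monitoring described above can detect and correct.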

