Study of Shape Representation Using Internal Radiated-light Projection

2002 ◽  
Vol 14 (4) ◽  
pp. 357-365
Author(s):  
Takahiro Doi ◽  
Shigeo Hirose

Recent developments in 3D sensors have raised the possibility of using them in an increasing number of engineering applications. However, since most 3D sensors, such as the laser range finder, are based on light, which travels in straight lines, the measurement area is limited to the front of an object, leaving the back as an "invisible" surface. To estimate such unmeasurable areas, a system is required that memorizes shapes often encountered in objects and superimposes them on the scene. To realize this type of system, an appropriate 3D shape representation is needed. The representation should 1) be able to handle and compare partial and complete sets of object shape data, and 2) operate quickly enough to be applicable to real-time tasks. We developed a novel shape representation framework, "Internal Radiated-light Projection (IRP)", to represent and compare 3D objects. This representation projects local shape information of an object onto a sphere via imaginary rays emitted from the "kernel" of the object. To describe local shape information and arrange shapes properly, we propose Harmonic Contour Analysis (HCA) and the Shape Matrix. These concepts are characterized by 1) simplicity; 2) the use of local shapes and their adjacency information; and, through the Shape Matrix, 3) consideration of the effect of gravity and the stable poses of objects. With the IRP representation, we can categorize objects into known classes and calculate their positions and attitudes. This paper explains the basic concept behind IRP, a way of representing local 3D shapes by HCA and categorizing them using the Shape Matrix. We then present object recognition experiments on both virtual and real objects to demonstrate its efficiency and feasibility.
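The core of IRP is projecting local shape information onto a sphere along rays radiated from an internal "kernel". A minimal sketch of that idea, assuming the kernel is approximated by the centroid and the sphere is discretized into angular bins (the paper's HCA and Shape Matrix machinery is not reproduced here):

```python
import math

def irp_projection(points, n_theta=8, n_phi=16):
    """Project surface points onto a sphere of direction bins centred at the
    object's 'kernel' (approximated here by the centroid), keeping the largest
    radial distance seen per bin."""
    n = len(points)
    kernel = [sum(p[i] for p in points) / n for i in range(3)]
    sphere = [[0.0] * n_phi for _ in range(n_theta)]
    for p in points:
        dx, dy, dz = (p[i] - kernel[i] for i in range(3))
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        if r == 0.0:
            continue
        theta = math.acos(max(-1.0, min(1.0, dz / r)))   # polar angle in [0, pi]
        phi = math.atan2(dy, dx) % (2 * math.pi)         # azimuth in [0, 2*pi)
        ti = min(int(theta / math.pi * n_theta), n_theta - 1)
        pj = min(int(phi / (2 * math.pi) * n_phi), n_phi - 1)
        sphere[ti][pj] = max(sphere[ti][pj], r)
    return sphere

# Vertices of a unit octahedron: every ray from the kernel hits at distance 1.
octahedron = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
              (0.0, -1.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
sphere = irp_projection(octahedron)
```

The resulting spherical function can then be compared between a partial scan and stored complete models, which is what makes the representation suitable for matching front-only range data.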

2012 ◽  
Vol 239-240 ◽  
pp. 694-699
Author(s):  
Li Feng Yao ◽  
Jian Fei Ouyang ◽  
Xiang Ma

In bio-medicine and other fields, shape analysis is very important for the diagnosis of diseases and the prediction of shape variation. This paper focuses on the surface parameterization of tube-like 3D objects to obtain and analyze shape information from a sample, including its size and the shape variation between different samples. The parameterization represents global and local shape information well, both for statistical analysis and for the construction of a Medial Shape Model. Firstly, we extract the axis curve of the object using a heat conduction model. Secondly, we obtain latitude circles by intersecting the surface with planes normal to the axis. Then we obtain the final parameterized surface with quad-dominant meshes by registering the points within a single latitude circle and between different circles through coordinate transformation and alignment. Finally, we apply the approach to the parameterization of a rib bone.
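The latitude-circle step can be sketched as follows, under simplifying assumptions: the extracted axis is taken as straight and z-aligned, and the surface is stood in for by a radius function, so each normal-plane cut is a circle sampled at registered angular positions (the paper's heat-conduction axis extraction and mesh intersection are not reproduced):

```python
import math

def tube_latitude_circles(axis_pts, radius_fn, n_around=12):
    """Sample registered latitude circles along a (here straight, z-aligned)
    axis curve: each circle lies in the plane normal to the axis, and the j-th
    point of every circle sits at the same angular parameter, so points align
    between neighbouring circles, yielding a quad-dominant grid."""
    circles = []
    for (cx, cy, cz) in axis_pts:
        r = radius_fn(cz)
        circle = [(cx + r * math.cos(2.0 * math.pi * j / n_around),
                   cy + r * math.sin(2.0 * math.pi * j / n_around),
                   cz)
                  for j in range(n_around)]
        circles.append(circle)
    return circles

# A tube that widens along z, as a stand-in for a real extracted axis/surface.
axis = [(0.0, 0.0, 0.5 * k) for k in range(5)]
circles = tube_latitude_circles(axis, lambda z: 1.0 + 0.1 * z)
```

Because point j of circle k corresponds to point j of circle k+1, connecting consecutive circles directly yields the quad-dominant mesh used for statistical comparison across samples.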


Author(s):  
Yutong Feng ◽  
Yifan Feng ◽  
Haoxuan You ◽  
Xibin Zhao ◽  
Yue Gao

The mesh is an important and powerful type of data for 3D shapes and is widely studied in computer vision and computer graphics. For the task of 3D shape representation, extensive research efforts have concentrated on representing 3D shapes well using volumetric grids, multi-view images and point clouds. However, there has been little effort in recent years to use mesh data, due to its complexity and irregularity. In this paper, we propose a mesh neural network, named MeshNet, to learn 3D shape representations from mesh data. In this method, face-unit and feature splitting are introduced, and a general architecture with available and effective blocks is proposed. In this way, MeshNet is able to handle the complexity and irregularity of mesh data and represent 3D shapes well. We have applied the proposed MeshNet method to 3D shape classification and retrieval. Experimental results and comparisons with state-of-the-art methods demonstrate that MeshNet achieves satisfying classification and retrieval performance, which indicates the effectiveness of the proposed method for 3D shape representation.
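The "face-unit" idea treats each triangle face as the basic element, from which per-face input features are computed. A minimal sketch of plausible face-unit inputs (face centre, corner vectors, unit normal); the exact feature set and the network blocks of MeshNet are not reproduced here:

```python
def face_features(v0, v1, v2):
    """Per-face inputs in the spirit of a face-unit network: the face centre,
    the three 'corner' vectors (vertex minus centre), and the unit normal."""
    center = tuple((a + b + c) / 3.0 for a, b, c in zip(v0, v1, v2))
    corners = [tuple(x - c for x, c in zip(v, center)) for v in (v0, v1, v2)]
    e1 = [b - a for a, b in zip(v0, v1)]
    e2 = [b - a for a, b in zip(v0, v2)]
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    length = sum(x * x for x in n) ** 0.5 or 1.0  # guard degenerate faces
    normal = tuple(x / length for x in n)
    return center, corners, normal

# One triangle in the xy-plane; its normal should point along +z.
center, corners, normal = face_features((0.0, 0.0, 0.0),
                                        (1.0, 0.0, 0.0),
                                        (0.0, 1.0, 0.0))
```

Working per face sidesteps the irregular vertex connectivity of meshes: every face yields a fixed-size feature vector, which a network can then aggregate using face adjacency.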


Author(s):  
Mahyar Najibi ◽  
Guangda Lai ◽  
Abhijit Kundu ◽  
Zhichao Lu ◽  
Vivek Rathod ◽  
...  

2010 ◽  
Vol 159 ◽  
pp. 128-131
Author(s):  
Jiang Zhou ◽  
Xin Yu Ma

In traditional 3D shape retrieval systems, objects are retrieved mainly by computing low-level features used to detect so-called regions of interest. This paper focuses on retrieving objects in a machine-understandable and intelligent manner. We explore different kinds of semantic descriptions for the retrieval of 3D shapes. Based on ontology technology, we decompose a 3D object into meaningful parts semi-automatically. Each part can be regarded as a 3D object in its own right and can be further annotated semantically according to an ontology vocabulary for Chinese cultural relics. Three kinds of semantic models, namely description semantics of domain knowledge, spatial semantics and scenario semantics, are presented for describing semantic annotations from different viewpoints. These annotations capture complete semantic descriptions of 3D shapes accurately.
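The three semantic layers can be pictured as structured records attached to each decomposed part. A minimal sketch with entirely hypothetical vocabulary terms and part names (not taken from an actual relics ontology):

```python
# Hypothetical annotation record for one semi-automatically decomposed part;
# the keys mirror the paper's three layers: description, spatial, scenario.
annotations = [{
    "part_id": "vessel_handle_01",
    "description_semantics": {"class": "handle", "material": "bronze"},
    "spatial_semantics": {"attached_to": "vessel_body", "relation": "left-of"},
    "scenario_semantics": {"usage": "lifting"},
}]

def query_parts(annotations, layer, key, value):
    """Return ids of parts whose given semantic layer has key == value."""
    return [a["part_id"] for a in annotations
            if a.get(layer, {}).get(key) == value]

hits = query_parts(annotations, "description_semantics", "class", "handle")
```

Retrieval then becomes a query over these annotations rather than a comparison of low-level geometric features, which is what makes the results machine-understandable.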


2005 ◽  
Vol 56 (5) ◽  
pp. 791 ◽  
Author(s):  
M. Palmer ◽  
A. Álvarez ◽  
J. Tomás ◽  
B. Morales-Nin

Individual and population age structures constitute essential knowledge for the proper management of commercial fisheries. Despite the important advances made in age determination using otolith growth structures, there is still a need to improve both precision and accuracy. The problem of increasing precision in age estimation has been addressed via increasing automation in the identification of growth marks. However, approaches based on otolith size, weight, perimeter, and related measurements (including contour analysis) have had only moderate success in age prediction. Likewise, early attempts at image analysis reported poor results, for both 1D (grey-intensity profile) and 2D images. Recent developments in image analysis have broken this trend, and fully automatic techniques could become an alternative for routine ageing in the near future. Here, we propose a new method for 2D feature extraction that provides robust numerical descriptors of the growth structures of otoliths.


Author(s):  
J. A. Romero ◽  
L. A. Diago ◽  
C. Nara ◽  
J. Shinoda ◽  
I. Hagiwara

Creating complex 3D objects from a flat sheet of material using origami folding techniques has attracted attention in science and engineering. Here, we introduce the concept of "Norigami", a blend of three Japanese words: "Nori", meaning glue; "Ori", meaning folding; and "Kami"/"Gami", meaning paper. Using traditional origami, spherical and other spatial objects are very difficult for a robot to achieve due to the complexity of the movements involved. In Norigami, complex 3D shapes can be achieved by a machine or robot by mixing simple origami folding with pasting patterns. In the current work, a Norigami robot is designed and developed using Lego NXT technology to create a spherical object that can be mass-produced.


2021 ◽  
pp. 095679762110107
Author(s):  
Uri Korisky ◽  
Liad Mudrik

Most of our interactions with our environment involve manipulating real 3D objects. Accordingly, 3D objects seem to enjoy preferential processing compared with 2D images, for example, in capturing attention or being better remembered. But are they also more readily perceived? Thus far, the possibility of preferred detection for real 3D objects could not be empirically tested because suppression from awareness has been applied only to on-screen stimuli. Here, using a variant of continuous flash suppression (CFS) with augmented-reality goggles (“real-life” CFS), we managed to suppress both real 3D objects and their 2D representations. In 20 healthy young adults, real objects broke suppression faster than their photographs. Using 3D printing, we also showed in 50 healthy young adults that this finding held only for meaningful objects, whereas no difference was found for meaningless, novel ones (a similar trend was observed in another experiment with 20 subjects, yet it did not reach significance). This suggests that the effect might be mediated by affordances facilitating detection of 3D objects under interocular suppression.


Author(s):  
Jingsheng Zhang ◽  
Shana Smith

Shape matching is one of the fundamental problems in content-based 3D shape retrieval. Since there are typically a large number of possible matches in a shape database, there is a crucial need to perform shape matching efficiently. As a result, shapes must be reduced to a simpler shape representation, and computational complexity is one of the most important criteria for evaluating 3D shape representations. To meet this need, the investigators have implemented a new, effective and efficient approach for 3D shape matching, which uses a simplified octree representation of 3D mesh models. The simplified octree representation was developed to improve time and space efficiency over prior representations. In addition, octree representations are rapidly becoming the standard file format for delivering 3D content across the Internet. The proposed approach stores octree information in XML files, rather than using a new data file type, to facilitate comparing models over the Internet. New methods for normalizing models, generating octrees, and comparing models were developed. The proposed approach allows users to efficiently exchange shape information and compare models over the Internet, in standardized data and data file formats, without transferring exact model files. The proposed approach is the first step in a project which will build a complete 3D model database and retrieval system that can be integrated with other data mining techniques.
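A minimal sketch of the two steps the abstract describes, octree generation and model comparison, assuming an occupancy octree over normalized point data and a simple Jaccard overlap as the matching score (the paper's actual simplification, XML serialization, and comparison metric are not reproduced):

```python
def build_octree(points, depth, lo=(-1.0, -1.0, -1.0), hi=(1.0, 1.0, 1.0)):
    """Occupancy octree: recursively record which octants contain points,
    down to a fixed depth. A leaf is True (occupied); internal nodes map
    octant index 0..7 to children. Empty octants are simply absent."""
    if depth == 0:
        return True
    mid = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    buckets = {}
    for p in points:
        idx = sum(1 << i for i in range(3) if p[i] >= mid[i])
        buckets.setdefault(idx, []).append(p)
    node = {}
    for idx, pts in buckets.items():
        c_lo = tuple(mid[i] if idx >> i & 1 else lo[i] for i in range(3))
        c_hi = tuple(hi[i] if idx >> i & 1 else mid[i] for i in range(3))
        node[idx] = build_octree(pts, depth - 1, c_lo, c_hi)
    return node

def octree_similarity(a, b):
    """Jaccard overlap of occupied leaf cells: a crude matching score."""
    def leaves(n, path=()):
        if n is True:
            return {path}
        out = set()
        for k, child in n.items():
            out |= leaves(child, path + (k,))
        return out
    la, lb = leaves(a), leaves(b)
    return len(la & lb) / max(len(la | lb), 1)

cube = [(x * 0.5, y * 0.5, z * 0.5)
        for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
score_same = octree_similarity(build_octree(cube, 3), build_octree(cube, 3))
score_diff = octree_similarity(build_octree(cube, 3),
                               build_octree([(0.9, 0.9, 0.9)], 3))
```

Because the comparison operates on the coarse occupancy structure rather than the exact mesh, two parties can exchange only the (small, XML-serializable) octrees rather than full model files.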


Author(s):  
Zhizhong Han ◽  
Mingyang Shang ◽  
Yu-Shen Liu ◽  
Matthias Zwicker

In this paper, we present a novel unsupervised representation learning approach for 3D shapes, an important research challenge because it avoids the manual effort required for collecting supervised data. Our method trains an RNN-based neural network architecture to solve multiple view inter-prediction tasks for each shape. Given several nearby views of a shape, we define view inter-prediction as the task of predicting the center view between the input views and reconstructing the input views in a low-level feature space. The key idea of our approach is to implement the shape representation as a shape-specific global memory that is shared between all local view inter-predictions for each shape. Intuitively, this memory enables the system to aggregate information that is useful for solving the view inter-prediction tasks for each shape, and to leverage the memory as a view-independent shape representation. Our approach, termed VIP-GAN, obtains its best results using a combination of L2 and adversarial losses for the view inter-prediction task. We show that VIP-GAN outperforms state-of-the-art methods in unsupervised 3D feature learning on three large-scale 3D shape benchmarks.
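The self-supervised task itself is simple to state: given a ring of views rendered around a shape, each training example pairs a centre view (target) with its neighbouring views (inputs). A minimal sketch of that task construction only; the RNN, global memory, and adversarial loss are not reproduced here:

```python
def view_inter_prediction_pairs(views, span=2):
    """Build (inputs, target) pairs for view inter-prediction: the target is
    the centre view; the inputs are the `span` views on either side."""
    pairs = []
    for c in range(span, len(views) - span):
        inputs = views[c - span:c] + views[c + 1:c + span + 1]
        pairs.append((inputs, views[c]))
    return pairs

# Eight views rendered around a shape, indexed 0..7 as stand-ins for images.
pairs = view_inter_prediction_pairs(list(range(8)), span=2)
```

Every shape yields many such pairs, and sharing one memory across all of a shape's pairs is what forces that memory to become a view-independent representation.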

