Three-dimensional object recognition from two-dimensional images using wavelet transforms and neural networks

1998 ◽  
Vol 37 (3) ◽  
pp. 763 ◽  
Author(s):  
Sylvain Deschênes
1992 ◽  
Vol 14 (2) ◽  
pp. 159-185 ◽  
Author(s):  
James S. Prater ◽  
William D. Richard

This paper describes a method for segmenting transrectal ultrasound images of the prostate using feedforward neural networks. Segmenting two-dimensional images of the prostate into prostate and nonprostate regions is required when forming a three-dimensional image of the prostate from a set of parallel two-dimensional images. Three neural network architectures are presented as examples and discussed. Each of these networks was trained using a small portion of a training image segmented by an expert sonographer. The results of applying the trained networks to the entire training image and to adjacent images in the two-dimensional image set are presented and discussed. The final network architecture was also trained with additional data from two other images in the set. The results of applying this retrained network to each of the images in the set are presented and discussed.
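The per-pixel classification idea can be sketched as follows. This is a minimal illustration, not the paper's actual data or any of its three architectures: the synthetic image, the 3×3-neighbourhood features, and the one-hidden-layer network trained by gradient descent are all assumptions chosen to make the example runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 32x32 "slice": a bright disk (target region) on a darker, noisy
# background -- a hypothetical stand-in for an expert-segmented training image.
size = 32
yy, xx = np.mgrid[0:size, 0:size]
mask = (xx - 16) ** 2 + (yy - 16) ** 2 < 10 ** 2          # ground-truth labels
img = np.where(mask, 0.8, 0.2) + 0.05 * rng.standard_normal((size, size))

def patch_features(img, r=1):
    """Flatten each pixel's (2r+1)x(2r+1) neighbourhood into a feature row."""
    pad = np.pad(img, r, mode="edge")
    feats = []
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            feats.append(pad[dy:dy + size, dx:dx + size].ravel())
    return np.stack(feats, axis=1)                        # (size*size, 9)

X = patch_features(img)
y = mask.ravel().astype(float)

# Tiny one-hidden-layer feedforward network, trained by full-batch gradient
# descent on binary cross-entropy (an assumed training setup).
W1 = 0.1 * rng.standard_normal((X.shape[1], 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal(8);               b2 = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    g = (p - y) / len(y)                                  # dL/dlogit
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5
accuracy = (pred == mask.ravel()).mean()
```

Classifying each pixel from its local neighbourhood, as here, is one way a feedforward network can produce the prostate/nonprostate labelling the abstract describes; applying the trained network to adjacent slices amounts to re-running the same forward pass on their feature rows.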


Algorithms ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 99 ◽  
Author(s):  
Yang Zheng ◽  
Jieyu Zhao ◽  
Yu Chen ◽  
Chen Tang ◽  
Shushi Yu

With the widespread success of deep learning on two-dimensional data, extending deep learning methods to three dimensions has become an active research area. Among three-dimensional representations, the polygon mesh is a complex data structure that provides an effective approximate representation of an object's shape. Traditional graphics-based methods can extract features from three-dimensional objects but do not extend to more complex ones, and the complexity and irregularity of mesh data make it difficult to apply convolutional neural networks to 3D mesh data directly. To address this problem, we propose a deep learning method based on a capsule network to effectively classify mesh data. We first design a polynomial convolution template: through a sliding operation analogous to a convolution window over a two-dimensional image, we sample directly on the mesh surface and treat each sampled surface patch as the minimum unit of computation. Because a high-order polynomial can effectively represent a surface, we fit the approximate shape of each patch with a polynomial and use the polynomial parameters as the patch's shape feature, adding the patch's center coordinates and normal vector as its pose feature; together these form the patch's feature vector. To avoid the large number of pooling layers introduced by traditional convolutional neural networks, we adopt a capsule network. To handle input meshes of nonuniform size, we improve the capsule network's pose-parameter learning by sharing the weights of the pose matrix, which reduces the number of model parameters and further improves training efficiency on 3D mesh models.
We compare the method with a traditional method and two recent methods on the SHREC15 dataset. Relative to MeshNet and MeshCNN, the average recognition accuracy on the original test set improves by 3.4% and 2.1%, respectively, and after feature fusion the average accuracy reaches 93.8%. Experiments also verify that the method achieves strong recognition results despite a short training time. The proposed three-dimensional mesh classification method combines the advantages of graphics and deep learning methods and effectively improves the classification of 3D mesh models.
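The polynomial patch-fitting step can be illustrated in isolation. This is a hedged sketch, not the authors' implementation: the patch samples, the second-order polynomial, and the feature layout are assumptions; the sliding-window sampling on an actual mesh is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Points sampled from one local surface patch (a hypothetical "window" on a
# mesh). Here a paraboloid stands in for the sampled mesh region.
pts = rng.uniform(-0.5, 0.5, size=(60, 2))
x, y = pts[:, 0], pts[:, 1]
z = 0.3 * x**2 + 0.1 * x * y - 0.2 * y**2          # true surface heights

# Fit a second-order polynomial z ~ c0 + c1 x + c2 y + c3 x^2 + c4 xy + c5 y^2
# by least squares; the coefficients serve as the patch's shape feature.
A = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], axis=1)
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

# Pose feature: the patch centroid plus the unit normal of the fitted
# surface evaluated at the centroid.
centroid = np.array([x.mean(), y.mean(), z.mean()])
cx, cy = centroid[:2]
dzdx = coeffs[1] + 2 * coeffs[3] * cx + coeffs[4] * cy
dzdy = coeffs[2] + coeffs[4] * cx + 2 * coeffs[5] * cy
normal = np.array([-dzdx, -dzdy, 1.0])
normal /= np.linalg.norm(normal)

# Concatenated feature vector: shape (6) + centroid (3) + normal (3).
feature = np.concatenate([coeffs, centroid, normal])
```

The resulting fixed-length vector per patch is what makes downstream capsule-network processing straightforward: every sliding-window sample yields the same feature dimensionality regardless of mesh resolution.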


2011 ◽  
Vol 279 (1730) ◽  
pp. 841-846 ◽  
Author(s):  
Elena Mascalzoni ◽  
Daniel Osorio ◽  
Lucia Regolin ◽  
Giorgio Vallortigara

Bilateral symmetry is visually salient to diverse animals, including birds, but whereas experimental studies typically use bilaterally symmetrical two-dimensional patterns viewed approximately fronto-parallel, in nature animals observe three-dimensional objects from all angles. Many animal and plant structures have a plane of bilateral symmetry. Here, we first (experiment I) give evidence that young poultry chicks readily generalize bilateral symmetry as a feature of two-dimensional patterns in fronto-parallel view. We then test the ability of chicks to recognize symmetry in images corresponding to the view produced by a 40° horizontal rotation combined with a 20° vertical rotation of a pattern on a spherical surface. Experiment II gives evidence that chicks trained to distinguish symmetrical from asymmetrical patterns treat rotated views of symmetrical ‘objects’ as symmetrical. Experiment III gives evidence that chicks trained to discriminate rotated views of symmetrical ‘objects’ from asymmetrical patterns generalize to novel symmetrical objects in either fronto-parallel or rotated view. These findings emphasize the importance of bilateral symmetry for three-dimensional object recognition and raise questions about the underlying mechanisms of symmetry perception.
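The geometric point underlying the rotated-view stimuli — that an object's plane of bilateral symmetry rotates with the object, so a rotated view is still a view of a symmetric object even though its 2D projection looks asymmetric — can be checked numerically. A small sketch under assumed conventions (y as the vertical axis, so "horizontal" rotation is about y and "vertical" rotation about x, applied to a synthetic symmetric point set):

```python
import numpy as np

def rot_y(a):  # "horizontal" rotation, about the vertical (y) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):  # "vertical" rotation, about the horizontal (x) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# A bilaterally symmetric 3D point set: each point has a mirror partner
# across the x = 0 plane (mirror-plane normal e1 = [1, 0, 0]).
rng = np.random.default_rng(2)
half = rng.uniform(-1, 1, size=(20, 3))
half[:, 0] = np.abs(half[:, 0])
pts = np.vstack([half, half * [-1, 1, 1]])

# The 40 deg horizontal + 20 deg vertical view change used in the stimuli.
R = rot_x(np.deg2rad(20)) @ rot_y(np.deg2rad(40))
rot_pts = pts @ R.T
n = R @ np.array([1.0, 0.0, 0.0])       # the mirror plane rotates too

# Reflect the rotated points across the rotated plane; as a set they map
# onto themselves, so per-coordinate sorted columns of both sets agree.
mirrored = rot_pts - 2 * np.outer(rot_pts @ n, n)
sym_err = np.abs(np.sort(mirrored, axis=0) - np.sort(rot_pts, axis=0)).max()
```

So the 3D symmetry is exactly preserved under rotation; only the fronto-parallel mirror relation in the 2D projection is lost, which is what makes the chicks' generalization to rotated views informative.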

