Matching of 3D Model and Semantic Description via Multi-Modal Auxiliary Classifier Generative Adversarial Network With Autoencoder

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 177585-177594
Author(s):  
Long Zhang ◽  
Li Liu ◽  
Huaxiang Zhang ◽  
Xiuxiu Chen ◽  
Tianshi Wang ◽  
...  
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 170355-170363
Author(s):  
Xinying Wang ◽  
Dikai Xu ◽  
Fangming Gu

2021 ◽  
Vol 11 (16) ◽  
pp. 7536
Author(s):  
Kyungho Yu ◽  
Juhyeon Noh ◽  
Hee-Deok Yang

Three-dimensional (3D) content used in various fields has recently attracted attention owing to the development of virtual reality and augmented reality technologies. Producing 3D content requires modeling objects as vertices, but high-quality modeling is time-consuming and costly. Drawing-based modeling shortens the modeling time: a 3D model is created from a user’s line drawing, which represents 3D features with two-dimensional (2D) lines. The extracted line drawing thus provides information about a 3D model in 2D space, and it is sometimes necessary to generate a line drawing from a 2D cartoon image to represent that image’s 3D information. Extracting consistent line drawings from 2D cartoons is difficult because styles and techniques differ with the designer who produced them, so line drawings that capture the geometric characteristics of 2D cartoon shapes in various styles are needed. This paper proposes a method for automatically extracting line drawings. Pairs of 2D cartoon shading images and line drawings are learned using a conditional generative adversarial network model, which then outputs the line drawing of the cartoon artwork. The experimental results show that the proposed method can obtain line drawings representing the 3D geometric characteristics with 2D lines when a 2D cartoon painting is used as the input.
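The conditional-GAN setup described above is commonly trained with a pix2pix-style objective: an adversarial term on (input, drawing) pairs plus an L1 term pulling the generated drawing toward the target. A minimal numpy sketch of those losses follows; the toy arrays and function names are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def cgan_losses(d_real, d_fake, real_lines, fake_lines, lam=100.0):
    """pix2pix-style conditional GAN losses.

    d_real / d_fake: discriminator probabilities for (shading, real drawing)
    and (shading, generated drawing) pairs; real_lines / fake_lines: the
    target and generated line-drawing images.
    """
    eps = 1e-8
    # Discriminator: classify real pairs as 1, generated pairs as 0.
    d_loss = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
    # Generator: fool the discriminator and stay close to the target drawing.
    g_adv = -np.mean(np.log(d_fake + eps))
    g_l1 = np.mean(np.abs(real_lines - fake_lines))
    g_loss = g_adv + lam * g_l1
    return d_loss, g_loss

# Toy 8x8 grayscale "line drawings" conditioned on a cartoon shading image.
rng = np.random.default_rng(0)
real = rng.random((8, 8))
fake = rng.random((8, 8))
d_loss, g_loss = cgan_losses(d_real=np.array(0.9), d_fake=np.array(0.2),
                             real_lines=real, fake_lines=fake)
```

The large L1 weight (`lam=100.0`, the pix2pix default) reflects that for line-drawing extraction the output must stay faithful to the input geometry, with the adversarial term mainly sharpening the strokes.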


Technologies ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 82
Author(s):  
Hang Zhang

Machine learning, especially the GAN (Generative Adversarial Network) model, has developed tremendously in recent years. Since the NVIDIA Machine Learning group presented StyleGAN in December 2018, it has become a new way for designers to make machines learn different or similar types of architectural photos, drawings, and renderings, and then generate (a) similar fake images, (b) style-mixing images, and (c) truncation-trick images. The author both collected and created input image data, purpose-made architectural plan and section drawings with a clear design intent, then applied StyleGAN to train specific networks on these datasets. Through the training process, the deep relationships between these input architectural plans or sections can be examined, and serialized transformation images (truncation-trick images) generated to form a 3D (three-dimensional) model at a decent resolution (up to 1024 × 1024 × 1024 voxels). Though the resulting 3D models are difficult to use directly in 3D spatial modeling, these unexpected 3D forms can still inspire new design methods and greater possibilities in architectural plan and section design.
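The two latent-space operations the abstract relies on, the truncation trick and latent interpolation to produce a serialized image sequence, are simple vector manipulations in StyleGAN's intermediate latent space. A minimal numpy sketch, with random vectors standing in for mapped latents and `serialize` as a hypothetical helper name:

```python
import numpy as np

def truncate(w, w_avg, psi=0.7):
    """StyleGAN truncation trick: pull a latent toward the average latent.

    psi < 1 trades diversity for fidelity; psi = 0 collapses to w_avg.
    """
    return w_avg + psi * (w - w_avg)

def serialize(w_start, w_end, steps):
    """Linearly interpolate between two latents to get a serialized sequence
    of images, e.g. successive plan/section 'slices' stacked into a 3D form."""
    ts = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - ts) * w_start + ts * w_end

rng = np.random.default_rng(1)
w_avg = np.zeros(512)                      # average of many mapped latents
w1, w2 = rng.normal(size=512), rng.normal(size=512)
slices = serialize(truncate(w1, w_avg), truncate(w2, w_avg), steps=64)
```

Each of the 64 interpolated latents would be fed to the synthesis network to render one image; stacking the rendered slices gives the volumetric form described above.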


Author(s):  
V. Gorbatsevich ◽  
B. Kulgildin ◽  
M. Melnichenko ◽  
O. Vygolov ◽  
Y. Vizilter

Abstract. The paper addresses the problem of restoring a city heightmap using a satellite-view image and a small manually created area of 3D data. We propose an approach based on generative adversarial networks. Our algorithm comprises three steps: low-quality 3D restoration, building segmentation using the restored model, and high-quality 3D restoration. A CNN architecture based on ResNet with original ResDilation blocks is used for steps one and three. Training and test datasets were derived from the National Lidar Dataset (United States), on which the algorithm achieved an MSE of approximately 3.84 m. In addition, we tested our model on the completely different ISPRS Potsdam dataset and obtained an MSE of 5.1 m.
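The three-step pipeline and the reported metric (MSE in metres against lidar heightmaps) can be sketched as plain function composition. The `*_net` callables below are placeholder stand-ins for the paper's CNNs, not its actual architecture:

```python
import numpy as np

def restore_heightmap(satellite_img, coarse_net, seg_net, refine_net):
    """Three-step restoration: coarse 3D, building segmentation on the
    coarse model, then high-quality 3D refinement guided by the mask."""
    coarse = coarse_net(satellite_img)                 # step 1: low-quality heightmap
    mask = seg_net(satellite_img, coarse)              # step 2: building mask
    refined = refine_net(satellite_img, coarse, mask)  # step 3: refined heights
    return refined

def heightmap_mse(pred, lidar):
    """Mean squared error in metres against a lidar ground-truth heightmap."""
    return float(np.mean((pred - lidar) ** 2))

# Toy stand-ins on a random 16x16 "city tile" with heights in [0, 10] m.
rng = np.random.default_rng(2)
img = rng.random((16, 16))
pred = restore_heightmap(
    img,
    coarse_net=lambda x: 10.0 * x,
    seg_net=lambda x, c: c > 5.0,
    refine_net=lambda x, c, m: np.where(m, c, 0.0))
```

Splitting the problem this way lets the segmentation mask from step two tell the refinement network where sharp building edges are expected, which a single end-to-end pass would have to discover implicitly.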


2017 ◽  
Author(s):  
Benjamin Sanchez-Lengeling ◽  
Carlos Outeiral ◽  
Gabriel L. Guimaraes ◽  
Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches a given set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive, sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from the optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design.
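The core of the ORGAN objective is a per-sample reward that blends the discriminator's realism score with the domain objective being reinforced. A minimal numpy sketch of that blend, with toy scores standing in for real discriminator outputs and property evaluations (the function name and `lam` weighting are illustrative, not the paper's exact formulation):

```python
import numpy as np

def organ_reward(d_scores, objective_scores, lam=0.5):
    """ORGAN-style reward: blend discriminator score (realism) with a
    domain objective (e.g. a physicochemical property), per molecule.

    lam = 1 recovers a plain objective-maximizing RL reward; lam = 0 a
    plain GAN reward.
    """
    return lam * objective_scores + (1.0 - lam) * d_scores

# Toy batch of 4 generated molecules.
d = np.array([0.9, 0.2, 0.6, 0.5])    # discriminator "looks real" scores
obj = np.array([0.1, 0.8, 0.4, 0.4])  # desirability metric in [0, 1]
rewards = organ_reward(d, obj, lam=0.5)
```

The blend is what keeps the RL bias from collapsing the generator onto degenerate but high-scoring strings: a molecule with a high objective score but a low realism score (second entry above) earns no more reward than a realistic but unremarkable one (first entry).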

