Visualizing 3D imagery by mouth using candy-like models

2021 ◽  
Vol 7 (22) ◽  
pp. eabh0691
Author(s):  
Katelyn M. Baumer ◽  
Juan J. Lopez ◽  
Surabi V. Naidu ◽  
Sanjana Rajendran ◽  
Miguel A. Iglesias ◽  
...  

Handheld models help students visualize three-dimensional (3D) objects, especially students with blindness, who use large 3D models to visualize imagery by hand. The mouth has finer tactile sensors than the hand, which could improve visualization using microscopic models that are portable, inexpensive, and disposable; yet the mouth remains unused in tactile learning. Here, we created bite-size 3D models of protein molecules from “gummy bear” gelatin or nontoxic resin. Models were made as small as a rice grain and could be coded with flavor and packaged like candy. The mouth, hands, and eyesight were tested at identifying specific structures. Students recognized structures by mouth at 85.59% accuracy, similar to recognition by eyesight using computer animation. Recall accuracy of structures was higher by mouth than by hand for 40.91% of students, equal for 31.82%, and lower for 27.27%. The convenient use of entire packs of tiny, cheap, portable models can make 3D imagery more accessible to students.

2004 ◽  
Vol 13 (6) ◽  
pp. 692-707 ◽  
Author(s):  
Sara Keren ◽  
Ilan Shimshoni ◽  
Ayellet Tal

This paper discusses the problem of inserting 3D models into a single image. The main focus of the paper is on the accurate recovery of the camera's parameters, so that 3D models can be inserted in the “correct” position and orientation. The paper addresses two issues. The first is an automatic extraction of the principal vanishing points from an image. The second is a theoretical and an experimental analysis of the errors. To test the concept, a system that “plants” virtual 3D objects in the image was implemented. It was tested on many indoor augmented-reality scenes. Our analysis and experiments have shown that errors in the placement of the objects are unnoticeable.
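The first issue, extracting principal vanishing points, rests on a standard projective fact: a vanishing point is where the images of parallel scene edges intersect. A minimal sketch in homogeneous coordinates (a generic illustration, not the paper's implementation) looks like this:

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points (x, y).
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(seg1, seg2):
    # A vanishing point is the intersection of the images of two
    # parallel scene edges: the cross product of their homogeneous lines.
    v = np.cross(line_through(*seg1), line_through(*seg2))
    return v[:2] / v[2]  # back to pixel coordinates

# Two edges that are parallel in the scene but converge in the image
vp = vanishing_point(((0, 0), (4, 1)), ((0, 2), (4, 1.5)))
```

A robust system would intersect many edge segments (e.g. by least squares or RANSAC) rather than just two, which is where the paper's error analysis becomes relevant.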


Author(s):  
Yoshinori Teshima ◽  
Yohsuke Hosoya ◽  
Kazuma Sakai ◽  
Tsukasa Nakano ◽  
Akiko Tanaka ◽  
...  

To understand geographical positions, globes adapted for tactile learning are needed by people with visual impairments. Therefore, we created three-dimensional (3D) tactile models of the Earth for the visually impaired, utilizing the exact topography data obtained by planetary explorations. Additively manufactured 3D models of the Earth can impart an exact shape of relief to their spherical surfaces. In this study, we improved existing models to satisfy the requirements of tactile learning by adding the equator, the prime meridian, and the two poles to a basis model. Eight types of models were proposed. On four models (B1, B2, B3, and B4), the equator and the prime meridian were expressed by a belt whose height was provided in four stages. On the other four models (C1, C2, C3, and C4), they were expressed by a gutter whose width was provided in four stages. The north pole was expressed by a cone and the south pole by a cylinder; the two poles have a common shape in all eight models. Evaluation experiments revealed that the Earth models developed in this study were useful for tactile learning by the visually impaired.


2020 ◽  
Vol 69 (1) ◽  
pp. 440-444
Author(s):  
A.R. Turganbayeva ◽  
◽  
F.K. Bolysbekova ◽  

This article describes in detail the capabilities of the Autodesk 3D Studio Max editor, which allows secondary school students to master three-dimensional computer modeling. To do this, we selected and studied modeling methods that allow models of varying complexity to be created. The article describes modules and operators that can create part models, produce real-world effects, establish relationships between parts, and combine parts with each other and with other objects. We also studied the well-known visualization tools of Autodesk 3D Studio Max for working with three-dimensional graphics. The experiment showed that this platform is popular due to a wide range of features that facilitate the creation of complex 3D objects and scenes. The cross-platform Autodesk FBX format was designed for creating and sharing 3D data, and provides access to 3D models created in most third-party systems. It was concluded that high school students can master this editor.


Author(s):  
Deuk-Hee Lee ◽  
Sehyung Park ◽  
Sungdo Ha ◽  
Yunyeong Lee

This paper presents a framework-based procedure to generate three-dimensional electronic catalogs (3d e-catalogs), which link three-dimensional viewing windows (3d viewing windows) to e-catalogs. The 3d viewing windows contain the three-dimensional interactive and event-driven objects (3d objects) of products for e-catalogs, and are used to view and manipulate the 3d objects. The framework provides users with template models of 3d viewing windows and 3d objects; the template model of a 3d viewing window is defined in HTML and the template model of a 3d object is defined in VRML. Users specify the components of the template models to complete the 3d viewing windows, including the 3d objects of the new products to be displayed in 3d e-catalogs. In addition, the framework provides a way to derive hierarchical 3d models from CAD models of products.
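The template-filling step can be pictured with ordinary string templating. The field names below ($product_name, $vrml_file) are hypothetical; the paper does not enumerate its actual template components:

```python
from string import Template

# Hypothetical HTML viewing-window template; the real framework's
# fields and markup are not reproduced here.
VIEWING_WINDOW = Template(
    "<html><body>\n"
    "  <h1>$product_name</h1>\n"
    '  <embed src="$vrml_file" width="400" height="300">\n'
    "</body></html>\n"
)

# Filling the placeholders yields a 3d viewing window page that
# embeds the product's VRML object.
page = VIEWING_WINDOW.substitute(product_name="Office Chair",
                                 vrml_file="chair.wrl")
```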


2018 ◽  
Vol 7 (1) ◽  
pp. 11
Author(s):  
Syed Muhammad Ali ◽  
Zeeshan Mahmood ◽  
Dr. Tahir Qadri

This paper presents an intuitive and interactive computer-simulated augmented reality interface that gives the illusion of a 3D immersive environment. A projector displays a rendered virtual scene on a flat 2D surface (floor or table) based on the user’s viewpoint to create a head-coupled perspective. The projected image is view-dependent: it changes and deforms relative to the user’s position in space. The perspective projection is distorted and anamorphic, so the deformations in the image give the illusion of a virtual three-dimensional holographic scene in which objects pop out of or float above the projection plane like real 3D objects. The user can also manipulate and interact with 3D objects in the virtual environment by controlling the position and orientation of 3D models, interacting with a GUI incorporated in the virtual scene, and naturally viewing, moving, manipulating, and observing the details of objects from any angle using their hands. Head and hand tracking is achieved with a low-cost 3D depth sensor, the Kinect. We describe the implementation of the system in OpenGL and the Unity3D game engine. Stereoscopic 3D and other enhancements are also introduced, which further improve the 3D perception. The approach does not require head-mounted displays or expensive 3D hologram projectors, as it is based on a perspective projection technique. Our experiments show the potential of the system to provide users with a powerful, realistic illusion of 3D.
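Head-coupled perspective of this kind is usually realized with an asymmetric (off-axis) viewing frustum recomputed from the tracked head position each frame. A minimal sketch, assuming the projection surface lies in the z = 0 plane (an illustration of the general technique, not the authors' code):

```python
def off_axis_frustum(eye, screen, near=0.1):
    # eye: (ex, ey, ez) tracked viewer position, with ez the distance
    # to the projection surface; screen: (left, right, bottom, top)
    # extents of that surface in the same units.
    ex, ey, ez = eye
    sl, sr, sb, st = screen
    s = near / ez  # project the screen corners onto the near plane
    return ((sl - ex) * s, (sr - ex) * s, (sb - ey) * s, (st - ey) * s)

# With the head centered, the frustum is symmetric; as the head moves
# right, the frustum skews, which deforms the projected image and
# produces the anamorphic, view-dependent effect.
centered = off_axis_frustum((0.0, 0.0, 1.0), (-1.0, 1.0, -1.0, 1.0))
shifted  = off_axis_frustum((0.5, 0.0, 1.0), (-1.0, 1.0, -1.0, 1.0))
```

The four returned values are the kind of parameters fed to glFrustum in OpenGL or to a custom projection matrix in Unity3D.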


2018 ◽  
Vol 14 (4) ◽  
pp. 379-384 ◽  
Author(s):  
Celso Dal Ré Carneiro ◽  
Kauan Martins dos Santos ◽  
Thiago Rivaben Lopes ◽  
Filipe Constantino dos Santos ◽  
Jorge Vicente Lopes da Silva ◽  
...  

Three-dimensional modeling connects several fields of knowledge, both basic and applied. 3D models are relevant in educational research because the manipulation of 3D objects favors students' acquisition of spatial vision, but in the Geosciences there are few didactic publications in Portuguese on the subject. The authors are developing an educational research project to produce three-dimensional models of didactic examples of sedimentary basins: the Paraná Basin (Silurian-Upper Cretaceous) and the Taubaté and São Paulo basins (Neogene). 3D-compatible files will be produced to compose didactic and display material from maps and geological-structural profiles of certain regional stratigraphic levels of each basin. The research challenges are: (a) to obtain an overview of the available resources for 3D modeling; (b) to evaluate their potential, characteristics, advantages, and limitations for applications in Geology and the Geosciences; (c) to create computational models of the basins; and (d) to produce at least one physical model based on one of the computational models of each basin. The resources will subsidize training workshops for in-service teachers, technical-scientific articles, and Internet pages.


Author(s):  
Peter Demian ◽  
Kirti Ruikar ◽  
Anne Morris

The 3DIR project investigated the use of 3D visualization to formulate queries, compute the relevance of information items, and visualize search results. Workshops identified user needs; based on these, a graph-theoretic formulation was created to inform the emerging system architecture. A prototype was developed that enabled relationships between 3D objects to be used to widen a search. An evaluation of the prototype demonstrated that a tight coupling between text-based retrieval and 3D models could enhance information retrieval, but added an extra layer of complexity.
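One plausible reading of "using relationships between 3D objects to widen a search" is breadth-first expansion of the text-retrieval hits over an object-relationship graph; the project's actual graph-theoretic formulation is not reproduced here, so the sketch below is illustrative only:

```python
from collections import deque

def widen_search(hits, relations, depth=1):
    # hits: object ids matched by the text query;
    # relations: adjacency dict mapping a 3D object to related objects
    # (e.g. spatial containment or connection in the building model).
    # Breadth-first expansion up to `depth` hops widens the result set.
    seen = set(hits)
    frontier = deque((h, 0) for h in hits)
    while frontier:
        obj, d = frontier.popleft()
        if d == depth:
            continue
        for nb in relations.get(obj, ()):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return seen

# A door is related to its wall, the wall to its room.
rel = {"door": ["wall"], "wall": ["door", "room"]}
widened = widen_search({"door"}, rel, depth=1)
```

The `depth` parameter is one way to expose the retrieval/complexity trade-off the evaluation observed: wider expansion finds more related items but dilutes precision.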


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Jerzy Montusiewicz ◽  
Marek Miłosz ◽  
Jacek Kęsik ◽  
Kamil Żyła

Historical costumes are part of cultural heritage. Unlike architectural monuments, they are very fragile, which exacerbates the problems of their protection and popularisation. Digitising their appearance, preferably using modern techniques of three-dimensional (3D) representation, can be a big help. The article presents the results of a search for examples and methodologies of 3D scanning of exhibited historical clothes, as well as the attendant problems. A review of the scientific literature shows that practically no methodical attempts have so far been made at scanning historical clothes using structured-light 3D scanners (SLS) or at developing an appropriate methodology; the vast majority of methods for creating 3D models of clothes have used photogrammetry and 3D modelling software. Therefore, an innovative approach was proposed to creating 3D models of exhibited historical clothes by digitising them with a 3D scanner using structured-light technology. A proposed methodology for this process and concrete examples of its implementation and results are presented. The problems related to 3D scanning of historical clothes are also described, along with proposals for solving them or minimising their impact. The implementation of the methodology is illustrated by scanning elements of the Emir of Bukhara's costume (Uzbekistan) from the end of the nineteenth century, consisting of the gown, turban, and shoes. Moreover, the use of 3D models and information technologies to popularise cultural heritage in the space of digital resources is also discussed.


2021 ◽  
Vol 11 (12) ◽  
pp. 5321
Author(s):  
Marcin Barszcz ◽  
Jerzy Montusiewicz ◽  
Magdalena Paśnikowska-Łukaszuk ◽  
Anna Sałamacha

In the era of the global pandemic caused by the COVID-19 virus, 3D digitisation of selected museum artefacts is becoming an increasingly frequent practice, but the vast majority is performed by specialised teams. The paper presents the results of comparative studies of 3D digital models of the same museum artefacts from the Silk Road area generated by two completely different technologies: Structure from Motion (SfM), a method belonging to the so-called low-cost technologies, and Structured-light 3D Scanning (3D SLS). Procedural differences in data acquisition and in processing the data to generate three-dimensional models are also presented. Models built from a point cloud were created from data collected in the Afrasiyab museum in Samarkand (Uzbekistan) during “The 1st Scientific Expedition of the Lublin University of Technology to Central Asia” in 2017. Photos for creating 3D models with SfM technology were taken during a virtual expedition carried out under the “3D Digital Silk Road” program in 2021. The results show that the quality of the 3D models generated with SfM differs from that of the models from 3D SLS, but they may still be placed in the galleries of the virtual museum. The SfM models carry no information about their real-world size, which means that, unlike the SLS models, they are not fully suitable for archiving cultural heritage.
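The missing-size problem is inherent to SfM: the reconstruction is only defined up to an arbitrary global scale. A common remedy (a general technique, not necessarily the one used by the authors) is to rescale the model using one distance that is known in the real world:

```python
import numpy as np

def rescale_to_metric(points, idx_a, idx_b, true_dist):
    # points: (N, 3) SfM point cloud in arbitrary units;
    # idx_a, idx_b index two points whose real separation is known,
    # e.g. from a ruler or marker photographed next to the artefact.
    model_dist = np.linalg.norm(points[idx_a] - points[idx_b])
    return points * (true_dist / model_dist)

cloud = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
# Suppose the first two points are known to be 1 unit apart in reality.
metric = rescale_to_metric(cloud, 0, 1, 1.0)
```

For artefacts photographed remotely, as in the virtual expedition described here, no such reference may exist, which is exactly why the SfM models lack archival-grade dimensions.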

