Handy-Potter: Rapid 3D Shape Exploration Through Natural Hand Motions

Author(s):  
Vinayak ◽  
Sundar Murugappan ◽  
Cecil Piya ◽  
Karthik Ramani

We present the paradigm of natural and exploratory shape modeling by introducing novel 3D interactions for creating, modifying and manipulating 3D shapes using arms and hands. Though current design tools provide complex modeling functionalities, they remain non-intuitive and require significant training, since they segregate 3D shapes into hierarchical 2D inputs, thus binding the user to stringent procedural steps and making modifications cumbersome. In addition, designers typically come to CAD systems already knowing what to design, so the creative exploration in design is lost. We present a shape creation paradigm as an exploration of creative imagination and externalization of shapes, particularly in the early phases of design. We integrate the capability of humans to express 3D shapes via hand-arm motions with the traditional sweep surface representation to demonstrate rapid exploration of a rich variety of fairly complex 3D shapes. We track the user's skeleton using the depth data provided by a low-cost depth-sensing camera (Kinect™). Our modeling tool is configurable to provide a variety of implicit constraints for shape symmetry and resolution based on the position, orientation and speed of the arms. Intuitive strategies for coarse and fine shape modifications are also proposed. We conclusively demonstrate the creation of a wide variety of product concepts and show an average modeling time of only a few seconds while retaining the intuitiveness of communicating the design intent.
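The sweep-surface idea at the core of such a system can be illustrated with a minimal sketch: sampled hand positions define the sweep path, and a fixed cross-section is instanced along it. The path, radius and vertex counts below are illustrative choices, not the paper's actual parameters.

```python
import math

def sweep_surface(path, radius=0.5, sides=8):
    """Instance a circular cross-section at every sampled point of a
    3D sweep path (e.g. tracked hand positions). Returns one vertex
    ring per path point. Cross-sections stay axis-aligned here; a
    full implementation would transport a frame along the path."""
    rings = []
    for (x, y, z) in path:
        ring = []
        for k in range(sides):
            a = 2.0 * math.pi * k / sides
            ring.append((x + radius * math.cos(a),
                         y + radius * math.sin(a), z))
        rings.append(ring)
    return rings

# A straight vertical path yields a cylinder-like stack of rings.
rings = sweep_surface([(0, 0, 0), (0, 0, 1), (0, 0, 2)], radius=1.0, sides=4)
```

Adjacent rings can then be stitched into quads to obtain a renderable mesh.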

Author(s):  
Evangelos Alevizos ◽  
Athanasios V Argyriou ◽  
Dimitris Oikonomou ◽  
Dimitrios D Alexakis

Shallow bathymetry inversion algorithms have long been applied to various types of remote sensing imagery with relative success. However, this approach requires imagery with increased radiometric resolution in the visible spectrum. Recent developments in drones and camera sensors allow current inversion techniques to be tested on new types of datasets. This study explores the bathymetric mapping capabilities of fused RGB and multispectral imagery as an alternative to costly hyperspectral sensors. Combining drone-based RGB and multispectral imagery into a single cube dataset provides the necessary radiometric detail for shallow bathymetry inversion applications. The technique is based on commercial and open-source software and, in contrast to other approaches, does not require reference depth measurements as input. The robustness of the method was tested on three coastal sites with contrasting seafloor types. Suitable end-member spectra representative of the seafloor types of the study area, together with the sun zenith angle, are important parameters in model tuning. The results show good correlation (R² > 0.7) and less than half a metre of error when compared with sonar depth data. Consequently, the integration of various types of drone-based imagery may be applied to produce centimetre-resolution bathymetry maps at low cost for small-scale shallow areas.
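The abstract does not give the exact inversion model, but a common physics-based formulation that needs only an end-member (seafloor) reflectance, a deep-water reflectance and the sun zenith angle, and no reference depths, is single-band exponential attenuation. The sketch below inverts that forward model; all coefficient values are illustrative, not taken from the study.

```python
import math

def invert_depth(r_obs, r_bottom, r_deep, k_d, sun_zenith_deg=0.0):
    """Invert water depth z (m) from observed reflectance using the
    illustrative attenuation model
        R_obs = R_deep + (R_bottom - R_deep) * exp(-2 * Kd * z / cos(theta))
    where r_bottom is the end-member (seafloor) reflectance, r_deep the
    optically-deep-water reflectance, k_d the diffuse attenuation
    coefficient (1/m) and theta the sun zenith angle."""
    cos_t = math.cos(math.radians(sun_zenith_deg))
    ratio = (r_obs - r_deep) / (r_bottom - r_deep)
    return -math.log(ratio) * cos_t / (2.0 * k_d)

# Round trip: recover a 2 m depth from its own forward model
# (R_bottom = 0.25, R_deep = 0.02, Kd = 0.1 1/m, illustrative values).
z_true = 2.0
r_sim = 0.02 + (0.25 - 0.02) * math.exp(-2 * 0.1 * z_true)
z_hat = invert_depth(r_sim, r_bottom=0.25, r_deep=0.02, k_d=0.1)
```

Applying this per pixel over the fused image cube yields a depth raster.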


Author(s):  
X. Fischer ◽  
C. Merlo ◽  
J. Legardeur ◽  
L. Zimmer ◽  
A. Anglada

Most of the time, starting new design projects based on innovative product concepts is a strategic but complicated process. Individual initiatives and the development of new ideas take place within conflicting contexts combining technical, economic and social aspects. During these phases, actors have to formalize new ideas, exchange them and collaborate to promote them. Traditional tools do not support such activities. In this paper we propose a new approach dedicated to the product development process, from the early phases to the embodiment design phases. Metamodeling techniques and new tools (ID2, for Innovation Development and Diffusion, and CE, for Constraint Explorer) are proposed to support those phases, ensuring the collaboration and interaction between design actors, knowledge and information management, the development of innovative ideas, and the improvement of embodiment design solutions. Moreover, we propose to link our tools to a PLM environment to improve the sharing and management of information, documents and design solutions in order to foster collaboration. The main objective of our implementation is to foster innovation during the design process by improving the sharing and reuse of innovative ideas and allowing the organization to rapidly identify the best consensus for design solutions.


Author(s):  
Jinmiao Huang ◽  
Rahul Rai

We introduce an intuitive gesture-based interaction technique for creating and manipulating simple three-dimensional (3D) shapes. Specifically, the developed interface utilizes a low-cost depth camera to capture the user's hand gestures as input, maps different gestures to system commands, and generates 3D models from midair 3D sketches (as opposed to traditional two-dimensional (2D) sketches). Our primary contribution is the development of an intuitive gesture-based interface that enables novice users to rapidly construct conceptual 3D models. Our development extends current work by proposing both design and technical solutions to the challenges of a gestural modeling interface for conceptual 3D shapes. Preliminary user study results suggest that the developed framework is intuitive to use and able to create a variety of 3D conceptual models.
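The gesture-to-command mapping can be pictured as a simple dispatch table; the gesture labels and command names below are hypothetical, since the abstract does not list the actual gesture set.

```python
# Hypothetical gesture vocabulary -> modeling commands.
COMMANDS = {
    "pinch": "select",
    "open_palm": "release",
    "point": "sketch_stroke",
    "fist": "rotate_view",
}

def dispatch(gesture):
    """Map a recognized gesture label to a system command,
    defaulting to a no-op for unrecognized gestures."""
    return COMMANDS.get(gesture, "noop")
```

In a real system the recognizer would emit these labels per frame, and the no-op default keeps spurious detections from triggering commands.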


Author(s):  
Vanel Lazcano ◽  
Felipe Calderero ◽  
Coloma Ballester

This paper discusses an anisotropic interpolation model that fills in depth data in largely empty regions of a depth map. We consider an image with an anisotropic metric g_ij that incorporates spatial and photometric data. We propose a numerical implementation of our model based on the "eikonal" operator, which computes the solution of a degenerate partial differential equation (the biased Infinity Laplacian, or biased Absolutely Minimizing Lipschitz Extension, bAMLE). The solution of this equation creates exponential cones based on the available data, extending the available depth data and completing the depth map image. Because of this, the operator is well suited to interpolating smooth surfaces. To perform this task, we assume we have at our disposal a reference color image and a depth map. We carried out an experimental comparison of the AMLE and bAMLE using various metrics with square-root, absolute-value, and quadratic terms. In these experiments, the color spaces considered were sRGB, XYZ, CIE-L*a*b*, and CMY. We also present a proposal to extend the AMLE and bAMLE to the time domain. Finally, for the parameter estimation of the model, we compared EHO and PSO. The combination of sRGB and the square-root metric produces the best results, demonstrating that our bAMLE model outperforms the AMLE model and other contemporary models on the KITTI depth completion suite dataset. Models such as AMLE and bAMLE are simple to implement and represent a low-cost implementation option for similar applications.
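The AMLE solution can be approximated with the classic iterated midpoint scheme, in which each unknown sample is repeatedly replaced by the average of the largest and smallest of its neighbours. The 1-D sketch below covers the unbiased AMLE only (the paper's bAMLE adds a bias term and an anisotropic, colour-guided metric) and shows the fill converging to the Lipschitz-minimal, here linear, interpolant between known depths.

```python
def amle_fill_1d(values, known, iters=500):
    """Fill unknown entries of a 1-D depth profile via the iterated
    midpoint scheme for the Absolutely Minimizing Lipschitz
    Extension: u[i] <- (max + min of the two neighbours) / 2.
    Known samples (known[i] == True) are held fixed."""
    u = list(values)
    for _ in range(iters):
        for i in range(1, len(u) - 1):
            if not known[i]:
                lo, hi = sorted((u[i - 1], u[i + 1]))
                u[i] = 0.5 * (lo + hi)
    return u

# Known depths at the ends only; the interior converges to the
# linear ramp 0, 1, 2, 3, 4.
u = amle_fill_1d([0.0, 0.0, 0.0, 0.0, 4.0],
                 [True, False, False, False, True])
```

In 2-D the same update runs over each pixel's neighbourhood, with the metric g_ij reweighting which neighbours count as near.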


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 21
Author(s):  
Tiago Custódio ◽  
Cristiano Alves ◽  
Pedro Silva ◽  
Jorge Silva ◽  
Carlos Rodrigues ◽  
...  

The current design paradigm of car cabin components assumes seats aligned with the driving direction. All passengers are aligned with the driver that, until recently, was the only element in charge of controlling the vehicle. The new paradigm of self-driving cars eliminates several of those requirements, releasing the driver from control duties and creating new opportunities for entertaining the passengers during the trip. This creates the need for controlling functionalities that must be closer to each user, namely on the seat. This work proposes the use of low-cost capacitive touch sensors for controlling car functions, multimedia controls, seat orientation, door windows, and others. In the current work, we have reached a proof of concept that is functional, as shown for several cabin functionalities. The proposed concept can be adopted by current car manufacturers without changing the automobile construction pipeline. It is flexible and can adopt a variety of new functionalities, mostly software-based, added by the manufacturer, or customized by the end-user. Moreover, the newly proposed technology uses a smaller number of plastic parts for producing the component, which implies savings in terms of production cost and energy, while increasing the life cycle of the component.


2020 ◽  
Vol 71 (06) ◽  
pp. 530-537
Author(s):  
HAKAN YÜKSEL ◽  
MELIHA OKTAV BULUT

Sensors can capture and scan many objects in real time for military, security, health and industrial applications. Sensors can be made smaller, cheaper and more energy-efficient due to rapid changes in technology. In recent years, low-cost sensors have become attractive alternatives to high-cost laser scanners. The Kinect sensor can measure depth data at low cost and high resolution by scanning the environment. In this study, this sensor collected data on users in front of a scanner, and the depth data results were tested. The process was repeated with four different body positions, and the results were analysed. The sensor data was reliable versus real measurements. When the depth data taken by the sensor were compared with the real measures, the reliability rate was found to be significant. The difference between the depth image data of different users, different positions and different body measures and the real data is 0.35 to 1.15 cm. This shows that the sensor's results are close to real data. When the accuracy of the sensor against real measurements is examined, it is seen that these values are between 98.46% and 99.6%. Thus, this depth image sensor is reliable and can be used as an alternative and cheaper way to take body measurements.
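Accuracy figures of this kind follow from a relative-error calculation; the 75 cm reference measurement below is hypothetical, chosen only to show how a 1.15 cm error maps to roughly 98.5% accuracy.

```python
def measurement_accuracy(sensor_cm, real_cm):
    """Per-measurement accuracy (%) of a sensor reading against a
    real (tape-measure) reference value."""
    return 100.0 * (1.0 - abs(sensor_cm - real_cm) / real_cm)

# Hypothetical example: a worst-case 1.15 cm error on a 75 cm body
# measurement lands near the reported 98.46% lower bound.
acc = measurement_accuracy(76.15, 75.0)
```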




Author(s):  
Mingshao Zhang ◽  
Zhou Zhang ◽  
El-Sayed Aziz ◽  
Sven K. Esche ◽  
Constantin Chassapis

The Microsoft Kinect is part of a wave of new sensing technologies. Its RGB-D camera is capable of providing high quality synchronized video of both color and depth data. Compared to traditional 3-D tracking techniques that use two separate RGB cameras’ images to calculate depth data, the Kinect is able to produce more robust and reliable results in object recognition and motion tracking. Also, due to its low cost, the Kinect provides more opportunities for use in many areas compared to traditional more expensive 3-D scanners. In order to use the Kinect as a range sensor, algorithms must be designed to first recognize objects of interest and then track their motions. Although a large number of algorithms for both 2-D and 3-D object detection have been published, reliable and efficient algorithms for 3-D object motion tracking are rare, especially using Kinect as a range sensor. In this paper, algorithms for object recognition and tracking that can make use of both RGB and depth data in different scenarios are introduced. Subsequently, efficient methods for scene segmentation including background and noise filtering are discussed. Taking advantage of those two kinds of methods, a prototype system that is capable of working efficiently and stably in various applications related to educational laboratories is presented.
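A minimal sketch of depth-based background filtering of the kind discussed: keep only pixels inside a working depth range and discard invalid zero readings. The millimetre thresholds are illustrative; the paper's actual segmentation pipeline is more involved.

```python
def segment_foreground(depth_mm, near=500, far=1500):
    """Keep pixels whose depth (mm) lies inside the working volume;
    background, too-near and invalid zero readings become 0."""
    return [[d if near <= d <= far else 0 for d in row]
            for row in depth_mm]

# A tiny 2x2 depth frame: invalid, foreground, background, foreground.
seg = segment_foreground([[0, 800], [2000, 1200]])
```

Object recognition and tracking then run only on the surviving foreground pixels, which cuts both noise and computation.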


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1360 ◽  
Author(s):  
Martin Schätz ◽  
Aleš Procházka ◽  
Jiří Kuchyňka ◽  
Oldřich Vyšata

This paper pursues two goals: to show that various depth sensors can be used to record breathing rate with the same accuracy as the contact sensors used in polysomnography (PSG), and to prove that breathing signals from depth sensors have the same sensitivity to breathing changes as PSG records. The breathing signal from depth sensors can be used for the classification of sleep apnea events with the same success rate as with PSG data. The recent development of computational technologies has led to a big leap in the usability of range imaging sensors. New depth sensors are smaller, have a higher sampling rate and better resolution, and offer greater precision. They are widely used for computer vision in robotics, but they can also serve as non-contact and non-invasive systems for monitoring breathing and its features. The breathing rate can be easily represented as the dominant frequency of a recorded signal. All tested depth sensors (MS Kinect v2, RealSense SR300, R200, D415 and D435) are capable of recording depth data with enough precision in depth sensing and sampling frequency in time (20–35 frames per second (FPS)) to capture the breathing rate. The spectral analysis shows a breathing rate between 0.2 Hz and 0.33 Hz, which corresponds to the breathing rate of an adult person during sleep. To test the quality of the breathing signal processed by the proposed workflow, a neural network classifier (a simple competitive NN) was trained on a set of 57 whole-night polysomnographic records with sleep apneas classified by a sleep specialist. The resulting classifier can mark all apnea events with 100% accuracy when compared to the classification of a sleep specialist, which is useful for estimating the number of events per hour. When compared to the sleep specialist's classification of polysomnographic breathing signal segments, which is used for calculating the length of each event, the classifier has an F1 score of 92.2% and an accuracy of 96.8% (sensitivity 89.1%, specificity 98.8%). The classifier also proves successful when tested on breathing signals from the MS Kinect v2 and RealSense R200 with simulated sleep apnea events. The whole process can be fully automatic after the implementation of automatic chest-area segmentation of the depth data.
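The breathing-rate extraction step can be sketched as picking the dominant spectral peak of the chest depth signal. A naive DFT scan is used here to stay self-contained (a real pipeline would use an FFT with windowing), and the input signal is synthetic.

```python
import math

def dominant_frequency(signal, fs):
    """Return the dominant frequency (Hz) of a signal by scanning
    DFT bin magnitudes (illustrative; use an FFT in practice).
    fs is the sampling rate in Hz."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]          # remove DC offset
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im               # bin power
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n

# Synthetic chest signal: 0.25 Hz breathing sampled at 20 FPS for 20 s,
# matching the 0.2-0.33 Hz sleep breathing band reported above.
fs, f0 = 20.0, 0.25
sig = [math.sin(2 * math.pi * f0 * t / fs) for t in range(400)]
rate = dominant_frequency(sig, fs)
```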

