CNN Training Using 3D Virtual Models for Assisted Assembly with Mixed Reality and Collaborative Robots

2021, Vol. 11 (9), pp. 4269
Author(s): Kamil Židek, Ján Piteľ, Michal Balog, Alexander Hošovský, Vratislav Hladký, ...

The assisted assembly of customized products supported by collaborative robots combined with mixed reality devices is a current trend in the Industry 4.0 concept. This article introduces an experimental work cell implementing an assisted assembly process for customized cam switches as a case study. The research aims to design a methodology for this complex task, with full digitalization and transformation of data from all vision systems into digital twin models. The position and orientation of assembled parts during manual assembly are marked and checked by a convolutional neural network (CNN) model. Training of the CNN was based on a new approach using virtual training samples with single-shot detection and instance segmentation. The trained CNN model was transferred to an embedded artificial processing unit with a high-resolution camera sensor. The embedded device redistributes the detected part positions and orientations to the mixed reality devices and the collaborative robot. This approach to assisted assembly using mixed reality, a collaborative robot, vision systems, and CNN models can significantly decrease assembly and training time in real production.
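The article's trained model is not reproduced here; as a rough illustration of the inference stage described above, the following Python sketch runs a pretrained instance-segmentation CNN (a stock torchvision Mask R-CNN, used only as a stand-in for the authors' virtually trained model) on a camera frame and estimates each detected part's position and orientation from its mask.

```python
# Illustrative sketch: detect parts in a camera frame with a pretrained
# instance-segmentation CNN and estimate each part's position/orientation.
# The article's model was trained on virtual samples; a stock torchvision
# Mask R-CNN is used here purely as a stand-in.
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_parts(frame_rgb: np.ndarray, score_thr: float = 0.7):
    """frame_rgb: HxWx3 uint8 image. Returns a list of (centroid_xy, angle_deg)."""
    img = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([img])[0]
    results = []
    for score, mask in zip(out["scores"], out["masks"]):
        if score < score_thr:
            continue
        ys, xs = np.nonzero(mask[0].numpy() > 0.5)
        if len(xs) < 10:
            continue
        pts = np.stack([xs, ys], axis=1).astype(float)
        centroid = pts.mean(axis=0)
        # Orientation from the principal axis of the mask pixels (PCA via SVD).
        _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
        angle_deg = float(np.degrees(np.arctan2(vt[0, 1], vt[0, 0])))
        results.append((centroid, angle_deg))
    return results
```

The returned centroids and angles could then be forwarded to the mixed reality device and the robot controller, as the embedded unit does in the work cell described above.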

2021, Vol. 7 (2), pp. 61-66
Author(s): Jozef Husár, Lucia Knapčíková

The presented article addresses the combination of mixed reality with advanced robotics and manipulators. This is a current trend and synonymous with Industry 5.0, in which human-machine interaction is a key element, realized here by collaborative robots cooperating with intelligent smart glasses. In the article, we gradually define the basic elements of the investigated system and show how to control a collaborative robot online and offline using mixed reality. We describe the software and hardware sides of a specific design. In the practical part, we provide illustrative examples of a robotic workplace displayed through the Microsoft HoloLens 2 smart glasses. In conclusion, current Industry 4.0 trends significantly affect and accelerate activities in manufacturing companies, and it is therefore necessary to prepare for the arrival of Industry 5.0, which will focus primarily on collaborative robotics.


Materials, 2020, Vol. 14 (1), pp. 67
Author(s): Rodrigo Pérez Ubeda, Santiago C. Gutiérrez Rubert, Ranko Zotovic Stanisic, Ángel Perles Ivars

The rise of collaborative robots urges their consideration for different industrial tasks such as sanding. In this context, the purpose of this article is to demonstrate the feasibility of using collaborative robots in processing operations such as orbital sanding. For the demonstration, the tools and working conditions have been adjusted to the capacity of the robot. Materials with different characteristics have been selected: aluminium, steel, brass, wood, and plastic. An inner/outer control loop strategy has been used, complementing the robot's motion control with an outer force-control loop. After carrying out an exploratory design of experiments, it was observed that it is possible to perform the operation on all materials, without destabilising the control, with a mean force error of 0.32%. Compared with industrial robots, collaborative ones can perform the same sanding task with similar results. An important outcome is that, contrary to what might be thought, an increase in the applied force does not guarantee a better finish. In fact, an increase in the feed rate does not produce a significant variation in the finish (less than 0.02 µm); therefore, the process is in a "saturation state" and the feed rate can be increased to raise productivity.
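The abstract does not detail the control law; the sketch below assumes a simple proportional-integral outer force loop that trims the commanded tool depth along the sanding normal, with the robot's own position controller acting as the inner loop. read_force and send_position_setpoint are hypothetical placeholders for the sensor and robot interfaces, and the gains are illustrative only.

```python
# Minimal sketch of an outer force-control loop wrapped around the robot's
# position controller (the inner loop). read_force() and
# send_position_setpoint() are hypothetical placeholders for the actual
# force-sensor and robot interfaces; gains and setpoints are assumptions.
import time

KP, KI = 0.0005, 0.0002   # assumed gains: m/N and m/(N*s)
TARGET_FORCE = 20.0       # N, assumed sanding contact force
DT = 0.01                 # s, outer-loop sample time

def outer_force_loop(read_force, send_position_setpoint, z0, duration=10.0):
    """Trim the tool depth z (assuming +z points into the workpiece) so the
    measured normal force tracks TARGET_FORCE."""
    integral = 0.0
    t_end = time.time() + duration
    while time.time() < t_end:
        error = TARGET_FORCE - read_force()      # positive -> press deeper
        integral += error * DT
        z = z0 + KP * error + KI * integral      # depth correction along tool normal
        send_position_setpoint(z)                # inner position loop executes it
        time.sleep(DT)
```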


2021, Vol. 20 (3), pp. 1-22
Author(s): David Langerman, Alan George

High-resolution, low-latency applications in computer vision are ubiquitous in today's world of mixed-reality devices. These innovations provide a platform that can leverage improving depth-sensor and embedded-accelerator technology to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality applications using low-power hardware accelerators. The authors parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. They demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality applications.
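The specific filter is not named above; joint bilateral upsampling is a common filter-based depth-upsampling method, and the NumPy reference sketch below illustrates the idea on the CPU (it is not the authors' accelerated FPGA/GPU implementation).

```python
# Reference (CPU) sketch of joint bilateral upsampling: each high-resolution
# depth value is a weighted average of nearby low-resolution depth samples,
# with spatial weights on the low-res grid and range weights taken from a
# high-resolution guide image. Slow pure-Python loops, for illustration only.
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, sigma_s=1.0, sigma_r=0.1, radius=2):
    """depth_lo: (h, w) low-res depth; guide_hi: (H, W) high-res intensity in [0, 1]."""
    h, w = depth_lo.shape
    H, W = guide_hi.shape
    sy, sx = h / H, w / W                         # high-res -> low-res scale
    out = np.zeros((H, W), dtype=float)
    for Y in range(H):
        for X in range(W):
            cy, cx = Y * sy, X * sx               # center in low-res coordinates
            y0 = min(int(round(cy)), h - 1)
            x0 = min(int(round(cx)), w - 1)
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = y0 + dy, x0 + dx
                    if not (0 <= y < h and 0 <= x < w):
                        continue
                    # spatial weight on the low-res grid
                    ws = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma_s ** 2))
                    # range weight from the high-res guide image
                    gy = min(int(round(y / sy)), H - 1)
                    gx = min(int(round(x / sx)), W - 1)
                    wr = np.exp(-((guide_hi[Y, X] - guide_hi[gy, gx]) ** 2)
                                / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lo[y, x]
                    den += ws * wr
            out[Y, X] = num / den if den > 0 else depth_lo[y0, x0]
    return out
```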


Author(s): Robert Bogue

Purpose – This paper aims to provide a European perspective on the collaborative robot business and to consider the factors governing future market development. Design/methodology/approach – Following an introduction, the paper first describes the collaborative robots launched recently by European manufacturers and their applications. It then discusses major European research activities and finally considers the factors stimulating the market. Findings – The paper shows that collaborative robots are being commercialised by the major European robot manufacturers as well as by several smaller specialists. Although most have low payload capacities, they are inexpensive and offer a number of operational benefits, making them well suited to a range of existing and emerging applications. Europe has a strong research base, and several EU-funded programmes aim to stimulate collaborative robot development and use. Rapid market development is anticipated, driven mainly by applications in electronic product manufacture and assembly; new applications in the automotive industry; uptake by small to medium-sized manufacturers; and companies seeking robots to support agile production methods. Originality/value – This paper provides a timely review of the rapidly developing European collaborative robot industry.


2021
Author(s): Tsukasa Koike, Taichi Kin, Shota Tanaka, Katsuya Sato, Tatsuya Uchida, ...

BACKGROUND: Image-guided systems improve the safety, functional outcome, and overall survival of neurosurgery but require extensive equipment.
OBJECTIVE: To develop an image-guided surgery system that combines the brain surface photographic texture (BSP-T) captured during surgery with 3-dimensional computer graphics (3DCG) using projection mapping.
METHODS: Patients with brain tumors undergoing initial surgery were prospectively enrolled. The texture of the 3DCG (3DCG-T) was obtained from the 3DCG under conditions similar to those used when capturing the brain surface photographs. The position and orientation at the time of 3DCG-T acquisition were used as the reference. The correct position and orientation of the BSP-T were obtained by aligning the BSP-T with the 3DCG-T using normalized mutual information. The BSP-T was then combined with and displayed on the 3DCG using projection mapping. This mixed-reality projection mapping (MRPM) was used prospectively in 15 patients (mean age 46.6 yr, 6 males). The difference between the centerlines of surface blood vessels on the BSP-T and the 3DCG constituted the target registration error (TRE) and was measured in 16 fields of the craniotomy area. We also measured the time required for image processing.
RESULTS: The TRE was measured at 158 locations in the 15 patients, with an average of 1.19 ± 0.14 mm (mean ± standard error). The average image processing time was 16.58 min.
CONCLUSION: Our MRPM method does not require extensive equipment while presenting the patient's anatomy together with medical images in the same coordinate system. It has the potential to improve patient safety.
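The alignment step maximizes normalized mutual information between the BSP-T and the 3DCG-T; a compact NumPy sketch of that similarity measure for two grayscale images is shown below (the search over position and orientation is omitted).

```python
# Sketch: normalized mutual information (NMI) between two grayscale images,
# the similarity measure used to align BSP-T with 3DCG-T. The optimizer that
# searches over position/orientation is omitted here.
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """img_a, img_b: arrays of equal shape. Returns NMI = (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability distribution
    px = pxy.sum(axis=1)               # marginal of A
    py = pxy.sum(axis=0)               # marginal of B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy)
```

A registration routine would evaluate this score for candidate poses of the BSP-T and keep the pose with the highest NMI.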


Author(s): Fahad Iqbal Khawaja, Akira Kanazawa, Jun Kinugawa, Kazuhiro Kosuge

Human-Robot Interaction (HRI) for collaborative robots has recently become an active research topic. Collaborative robots assist human workers in their tasks and improve their efficiency, but the worker should also feel safe and comfortable while interacting with the robot. In this paper, we propose a human-following motion planning and control scheme for a collaborative robot that supplies the necessary parts and tools to a worker during an assembly process in a factory. In the proposed scheme, a 3D sensing system is employed to measure the skeletal data of the worker. At each sampling time of the sensing system, an optimal delivery position is estimated using the real-time worker data, and the future positions of the worker are predicted as probabilistic distributions. A Model Predictive Control (MPC) based trajectory planner is used to calculate a robot trajectory that supplies the required parts and tools to the worker while following the worker's predicted future positions. We have implemented the proposed scheme on a collaborative robot system with a 2-DOF planar manipulator. Experimental results show that the scheme enables the robot to provide assistance at any time to a worker moving around the workspace while ensuring the worker's safety and comfort.
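The exact MPC formulation is not given in the abstract; the following is a minimal receding-horizon sketch, assuming simple integrator dynamics for a planar delivery point, a quadratic tracking cost on the worker's predicted positions, a control-effort penalty, and a velocity bound, solved with SciPy. All constants are illustrative.

```python
# Minimal receding-horizon (MPC-style) sketch: plan planar velocities so the
# delivery point follows the worker's predicted positions while keeping
# control effort small. A stand-in for the article's MPC trajectory planner.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON, V_MAX, R_EFFORT = 0.1, 10, 0.5, 0.05   # assumed parameters

def plan_step(x0, predicted_positions):
    """x0: current (2,) position; predicted_positions: (HORIZON, 2) worker predictions.
    Returns the first velocity command of the optimized sequence."""
    def cost(u_flat):
        u = u_flat.reshape(HORIZON, 2)
        x, c = np.array(x0, dtype=float), 0.0
        for k in range(HORIZON):
            x = x + u[k] * DT                                 # integrator dynamics
            c += np.sum((x - predicted_positions[k]) ** 2)    # tracking cost
            c += R_EFFORT * np.sum(u[k] ** 2)                 # effort penalty
        return c

    u0 = np.zeros(HORIZON * 2)
    bounds = [(-V_MAX, V_MAX)] * (HORIZON * 2)                # velocity limits
    res = minimize(cost, u0, bounds=bounds, method="L-BFGS-B")
    return res.x.reshape(HORIZON, 2)[0]                       # apply only the first command
```

At each sampling instant the planner would be re-run with updated worker predictions, which is the receding-horizon behaviour the abstract describes.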


2020, Vol. 10 (20), pp. 7301
Author(s): Daniel Octavian Melinte, Ana-Maria Travediu, Dan N. Dumitriu

This paper presents extensive research carried out to enhance the performance of convolutional neural network (CNN) object detectors applied to municipal waste identification. In order to obtain an accurate and fast CNN architecture, several types of Single Shot Detectors (SSD) and Region Proposal Networks (RPN) have been fine-tuned on the TrashNet database. The network with the best performance is executed on an autonomous robot system, which collects the detected waste from the ground based on the CNN feedback. For this type of application, precise identification of municipal waste objects is very important. In order to develop a straightforward pipeline for waste detection, the paper focuses on boosting the performance of pre-trained CNN object detectors, in terms of precision, generalization, and detection speed, using different loss-optimization methods, database augmentation, and asynchronous threading at inference time. The pipeline consists of data augmentation at training time, followed by CNN feature extraction and box-predictor modules for localization and classification at different feature-map sizes; the trained model is then exported for inference. The experiments revealed better performance than all other object detectors trained on TrashNet or other garbage datasets, with an accuracy of 97.63% for SSD and 95.76% for Faster R-CNN, respectively. In order to find the optimal lower and upper bounds of the learning rate where the network is actually learning, we trained the model for several epochs, updating the learning rate after each epoch, starting from 1 × 10−10 and increasing it until reaching 1 × 10−1.
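The learning-rate sweep described in the final sentence is essentially a learning-rate range test; a hedged PyTorch sketch is shown below, with model, loss_fn, and train_loader as placeholders rather than the authors' detector and dataset.

```python
# Sketch of a learning-rate range test like the sweep described above: train
# briefly at an exponentially increasing learning rate (here 1e-10 up to 1e-1)
# and record the loss to locate useful lower/upper LR bounds. `model`,
# `loss_fn`, and `train_loader` are placeholders, not the authors' detector.
from itertools import cycle
import torch

def lr_range_test(model, loss_fn, train_loader, lr_min=1e-10, lr_max=1e-1, steps=100):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr_min)
    gamma = (lr_max / lr_min) ** (1.0 / steps)   # multiplicative LR growth per step
    history = []
    data_iter = cycle(train_loader)              # reuse batches if the loader is short
    for step in range(steps):
        inputs, targets = next(data_iter)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        lr = lr_min * gamma ** (step + 1)
        for group in optimizer.param_groups:     # raise the LR for the next step
            group["lr"] = lr
        history.append((lr, loss.item()))
    return history                               # plot loss vs. LR to pick the bounds
```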


2019, Vol. 299, pp. 02008
Author(s): Miriam Matúšová, Marcela Bučányová, Erika Hrušková

Rapidly changing user requirements, improving quality of life, and increased safety at work are all arguments for introducing flexible automation that replaces strenuous or dangerous work. Industrial robots with adaptive guidance are now deployed in most industries due to their wide range of uses. Their main contribution to manufacturing is to eliminate downtime across the complete operating and handling production process and to simplify individual operations in accordance with ergonomics. The paper describes a comparison between a conventional industrial robot and a collaborative robot.


2014, Vol. 658, pp. 678-683
Author(s): Cristian Pop, Sanda Margareta Grigorescu, Erwin Christian Lovasz

This paper presents a robot vision application, implemented in the MATLAB environment, developed for feature-based object recognition, sorting, and manipulation, based on shape classification and pose calculation for proper positioning. The application described in this article, designed to detect, identify, classify, and manipulate objects, is based on previous robot vision applications presented in more detail in [1]. The idea underlying those applications is to determine the type, position, and orientation of the workpieces (in those cases, different types of bearings). Taking this further, the presented application uses objects whose shapes have a gradually increasing level of complexity. For this reason, the patterns are discriminated by training a two-layer neural network. The network, together with its input and output vectors, is presented.
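The original application is implemented in MATLAB; purely as an illustration of a two-layer (single hidden layer) shape classifier of the kind described, the Python sketch below trains a small MLP on placeholder shape-descriptor features. The feature set and dataset are assumptions, not the authors' data.

```python
# Illustrative stand-in for the two-layer shape classifier described above
# (the original is implemented in MATLAB). Shape descriptors such as area,
# perimeter, and Hu moments are assumed as input features; the dataset here
# is a random placeholder, not the authors' bearing/part images.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: one row of shape descriptors per object; y: object class labels
X = np.random.rand(200, 7)             # placeholder feature vectors (e.g. Hu moments)
y = np.random.randint(0, 3, size=200)  # placeholder labels for three part types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)                    # one hidden layer + output layer = two layers
print("held-out accuracy:", clf.score(X_te, y_te))
```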

