SambotII: A New Self-Assembly Modular Robot Platform Based on Sambot

2018 ◽  
Vol 8 (10) ◽  
pp. 1719 ◽  
Author(s):  
Wenshuai Tan ◽  
Hongxing Wei ◽  
Bo Yang

A new self-assembly modular robot (SMR), SambotII, is developed based on SambotI, a previously built hybrid-type SMR capable of autonomous movement and self-assembly. SambotI has only limited environmental-perception and target-recognition abilities because its STM32 processor cannot handle heavy workloads such as image processing and path planning. To improve computing capability, an x86 dual-core CPU is adopted and a hierarchical software architecture with five layers is designed. In addition, to enhance perception, a laser-camera unit and an LED-camera unit are employed to obtain distance and angle information, respectively, and color-changeable LED lights are used to identify the different passive docking surfaces during the docking process. Finally, the performance of SambotII is verified by docking experiments.
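The laser-camera ranging and LED-camera bearing measurements described in the abstract can be sketched as simple pinhole triangulation. The focal length, baseline, and image-center values below are illustrative assumptions, not parameters reported for SambotII:

```python
import math

# Triangulation sketch: a laser emitter offset from the camera by a fixed
# baseline projects a dot whose pixel displacement from the image center
# shrinks as the target gets farther away.
def laser_distance(pixel_offset_px: float,
                   focal_length_px: float = 800.0,   # assumed focal length (pixels)
                   baseline_m: float = 0.05) -> float:  # assumed laser-camera baseline (m)
    """Estimate target distance in meters from the laser dot's pixel offset."""
    if pixel_offset_px <= 0:
        raise ValueError("laser dot must be offset from the optical center")
    return focal_length_px * baseline_m / pixel_offset_px

# The LED-camera unit yields a bearing angle from the LED's pixel position:
def led_bearing(pixel_x: float, center_x: float = 320.0,
                focal_length_px: float = 800.0) -> float:
    """Bearing angle (radians) of an LED relative to the optical axis."""
    return math.atan2(pixel_x - center_x, focal_length_px)
```

With the assumed parameters, a 40-pixel laser offset corresponds to a target roughly 1 m away, and an LED left or right of the image center yields a signed bearing.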

2002 ◽  
Vol 114 (4) ◽  
pp. 674-676 ◽  
Author(s):  
Rustem F. Ismagilov ◽  
Alexander Schwartz ◽  
Ned Bowden ◽  
George M. Whitesides

2020 ◽  
Vol 17 (6) ◽  
pp. 172988142096907
Author(s):  
Changxin Li

To reduce fruit damage and to improve the accuracy and efficiency of picking easily bruised fruit such as strawberries, this work proposes a vision control method for a night-time picking robot that adapts the edge-feature detection and automated capture pipeline of a badminton motion-capture system. That motion-capture system analyzes match video in real time and extracts the stroke accuracy and technical characteristics of elite badminton players. The purpose of this article is to apply this high-precision motion-capture vision technology to the design of the vision control system of the robot during night picking, so as to improve the robot's observation and recognition accuracy and thereby the degree of automation of the operation. This paper tests the reliability of the picking robot's vision system, taking night picking as the example environment: image processing is applied to the edge features of the fruit to be picked. The results show that smoothing and enhancement allow the edge features of fruit images to be extracted successfully. The accuracy of the target-recognition rate and the positioning ability of the vision system were then evaluated with an edge-feature test; both were well above 91%, satisfying the high-precision automation requirements of the picking operation.
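The smoothing-then-edge-extraction step that the abstract credits with successful feature extraction can be sketched with plain NumPy: box-filter smoothing followed by a gradient-magnitude threshold. The kernel size and threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def smooth(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Box-filter smoothing (the 'smooth and enhance' pre-processing step)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):          # sum the k*k shifted copies of the image
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def edge_map(img: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """Binary edge-feature map: gradient magnitude of the smoothed image."""
    gy, gx = np.gradient(smooth(img))
    return np.hypot(gx, gy) > thresh
```

On a synthetic image with a sharp brightness step, the map fires along the step and stays quiet in the flat regions, which is the behavior the fruit-edge test relies on.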


2002 ◽  
Vol 28 (7-8) ◽  
pp. 967-993 ◽  
Author(s):  
F.J. Seinstra ◽  
D. Koelma ◽  
J.M. Geusebroek

Author(s):  
Ankush Rai ◽  
R. Jagadeesh Kannan

Learning visual models of object classes conventionally requires hundreds of training samples. Conventional gradient-based approaches to target recognition need large amounts of training data and exhaustive training at high computational expense. Hence, when a new condition or untrained data is encountered, such systems misconfigure the newly learned feature sets in the trained model; this corrupted feature structure is then carried into subsequent recognition stages. A method with low training time would therefore avoid this disadvantage. This study presents a new automatic target recognition framework that enhances target-recognition performance when several imaging sensors are connected with one another, in contrast with traditional automatic target recognition frameworks, which use a one-to-one computational model over synthetic-aperture radar image-processing systems. The work comprises a learning-based classification strategy in which learned parameters are shared over the network to discern critical changes in target-recognition performance, using a novel one-shot, reconfigurable learning network for the image-processing platform. This upgrades object identification and recognition for networked CCTV and multi-view synthetic-aperture radar imagery.
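One minimal way to read the one-shot, parameter-sharing idea is a nearest-prototype classifier that stores a single embedding per class and merges its prototype table with those of other networked sensor nodes. The class and method names below are hypothetical illustrations, not the paper's API:

```python
import numpy as np

class OneShotRecognizer:
    """Nearest-prototype classifier: one enrolled embedding per class."""

    def __init__(self):
        self.prototypes = {}  # label -> embedding vector

    def enroll(self, label: str, embedding) -> None:
        """One-shot learning: a single example defines the class."""
        self.prototypes[label] = np.asarray(embedding, dtype=float)

    def merge(self, other: "OneShotRecognizer") -> None:
        """'Share learned parameters over the network': take the union
        of prototype tables from another sensor node."""
        self.prototypes.update(other.prototypes)

    def classify(self, embedding) -> str:
        """Return the label of the nearest prototype (Euclidean distance)."""
        e = np.asarray(embedding, dtype=float)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(self.prototypes[lbl] - e))
```

A node that has never seen a class can still recognize it after merging with a node that enrolled it, which is the networked-sensor benefit the abstract describes.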

