Presentation of Realistic Motion to the Operator in Operating a Tele-operated Construction Robot

2002 ◽  
Vol 14 (2) ◽  
pp. 98-104 ◽  
Author(s):  
Dingxuan Zhao ◽  
Yupeng Xia ◽  
Hironao Yamada ◽  
Takayoshi Muto

In this study, a tele-operated construction robotic system was developed. An important problem in such a system is how to give the operator a high-quality sense of presence in the working field. To solve this problem, this paper investigates a new control method that employs a 3-DOF motion base to generate realistic motion comparable to that produced by a 6-DOF motion base. To confirm the validity of the control method, an evaluation experiment was performed. According to the experimental results, given both visual information and the motion of the robot, an operator on the 3-DOF motion base could perceive not only roll, pitch, and heave, but also surge, sway, and yaw.

2003 ◽  
Vol 15 (4) ◽  
pp. 361-368 ◽  
Author(s):  
Dingxuan Zhao ◽  
Yupeng Xia ◽  
Hironao Yamada ◽  
Takayoshi Muto

In this study, we developed a construction tele-robotic system that can be widely used, for example, for restoration work in damaged areas. The system consists of a servo-controlled construction robot, two joysticks for operating the robot from a remote location, and a 3-degree-of-freedom (DOF) parallel mechanism. An important problem in such a system is how to convey a high-quality sense of presence in the working area to the operator. In this paper, we propose a control method for the 3-DOF parallel link mechanism that simulates the motion of the construction robot using three acceleration sensors. The validity of this method was confirmed experimentally. According to the experimental results, the roll, pitch, and heave motions of the construction robot can each be simulated accurately by the 3-DOF parallel mechanism.
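The core of such a simulation, recovering roll, pitch, and heave from three vertical readings, can be sketched as follows. This is a minimal illustration under an assumed small-angle plane model and a hypothetical sensor layout, not the authors' actual geometry or algorithm:

```python
import numpy as np

# Hypothetical layout: three sensors at known (x, y) positions on the robot
# body (metres), each yielding a vertical displacement z_i. Under a small-angle
# plane model, z_i = z0 + y_i*roll - x_i*pitch, so three readings determine
# heave z0, roll, and pitch exactly.
SENSORS = np.array([[0.5, 0.0],
                    [-0.5, 0.4],
                    [-0.5, -0.4]])

def recover_pose(z):
    """Solve the 3x3 plane model for (heave, roll, pitch)."""
    A = np.column_stack([np.ones(3), SENSORS[:, 1], -SENSORS[:, 0]])
    z0, roll, pitch = np.linalg.solve(A, np.asarray(z, dtype=float))
    return z0, roll, pitch
```

With more than three sensors the same model could be fitted by least squares instead of an exact solve.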


Author(s):  
Yue Ai ◽  
Bo Pan ◽  
Yili Fu ◽  
Shuguo Wang

Purpose: Robot-assisted systems for minimally invasive surgery (MIS) have been attracting more and more attention. Compared with traditional MIS, a robot-assisted system is able to overcome or reduce drawbacks such as poor hand-eye coordination, heavy labour intensity, and limited motion of the instrument. The purpose of this paper is to design a novel robotic system for MIS applications. Design/methodology/approach: A robotic system with three separate slave arms for MIS has been designed. In the proposed robot, a new mechanism was designed as the remote centre of motion (RCM) mechanism to constrain the movement of the instrument or laparoscope about the incision. Moreover, an improved instrument without coupled motion between the wrist and grippers was developed to enhance its manipulability. A control system architecture was also developed, and an intuitive control method was applied to realize hand-eye coordination for the operator. Findings: For the RCM mechanism, the workspace was analyzed and the positioning accuracy of the remote centre point was tested. The results show that the RCM mechanism can be applied to MIS. Furthermore, master-slave trajectory tracking experiments reveal that the slave robots are able to follow the movement of the master manipulators well. Finally, the feasibility of the robot-assisted system for MIS was demonstrated by successfully performing animal experiments. Originality/value: This paper offers a novel robotic system for MIS that achieves the anticipated results.


Author(s):  
Michaela Regneri ◽  
Marcus Rohrbach ◽  
Dominikus Wetzel ◽  
Stefan Thater ◽  
Bernt Schiele ◽  
...  

Recent work has shown that the integration of visual information into text-based models can substantially improve model predictions, but so far only visual information extracted from static images has been used. In this paper, we consider the problem of grounding sentences describing actions in visual information extracted from videos. We present a general-purpose corpus that aligns high-quality videos with multiple natural language descriptions of the actions portrayed in the videos, together with an annotation of how similar the action descriptions are to each other. Experimental results demonstrate that a text-based model of similarity between actions improves substantially when combined with visual information from videos depicting the described actions.


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 905 ◽  
Author(s):  
Joga Dharma Setiawan ◽  
Mochammad Ariyanto ◽  
M. Munadi ◽  
Muhammad Mutoha ◽  
Adam Glowacz ◽  
...  

This study proposes a data-driven control method for extra robotic fingers that assist a user in bimanual object manipulation tasks requiring two hands. The robotic system comprises two main parts, i.e., a robotic thumb (RT) and robotic fingers (RF). The RT is attached next to the user's thumb, while the RF is located next to the user's little finger. The grasp postures of the RT and RF are driven by the bending-angle inputs of flex sensors attached to the thumb and other fingers of the user. A modified glove sensor was developed by attaching three flex sensors to the thumb, index, and middle fingers of the wearer. Various hand gestures are then mapped using a neural network. The inputs of the robotic system are the bending angles of the thumb and index finger, read by the flex sensors, and the outputs are the commanded servo angles for the RF and RT. The third flex sensor, attached to the middle finger, holds the extra robotic fingers' posture. Two force-sensitive resistors (FSRs) are attached to the RF and RT to provide haptic feedback when the robot is worn to take and grasp a fragile object, such as an egg. The trained neural network is embedded into the wearable extra robotic fingers to control the robotic motion and assist the human fingers in bimanual object manipulation tasks. The developed extra fingers were tested for their capacity to assist the human fingers in 10 different bimanual tasks, such as holding a large object, lifting and operating an eight-inch tablet, and lifting a bottle while opening its cap.
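The gesture-to-servo mapping described above can be sketched with a small feed-forward network. The data here are synthetic and the two-input/two-output shape is a simplification of the three-sensor glove; this illustrates the data-driven mapping idea, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for glove data: inputs are normalised thumb/index bend
# angles, targets are two servo angles. The real gesture set is richer; this
# toy linear mapping only illustrates the approach.
X = rng.uniform(0.0, 1.0, size=(200, 2))
Y = np.column_stack([0.8 * X[:, 0] + 0.1 * X[:, 1],
                     0.3 * X[:, 0] + 0.6 * X[:, 1]])

W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)          # hidden layer
    return H, H @ W2 + b2             # linear output layer

def mse(X, Y):
    return float(np.mean((forward(X)[1] - Y) ** 2))

loss_before = mse(X, Y)
lr = 0.1
for _ in range(2000):                  # full-batch gradient descent
    H, P = forward(X)
    G = 2.0 * (P - Y) / len(X)         # dLoss/dPrediction
    GH = (G @ W2.T) * (1.0 - H**2)     # backprop through tanh
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(0)
    W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(0)
loss_after = mse(X, Y)
```

On a real glove, `X` would come from calibrated flex-sensor readings and the trained weights would be exported to the wearable controller.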


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 59-59
Author(s):  
J M Zanker ◽  
M P Davey

Visual information processing in primate cortex is based on a highly ordered representation of the surrounding world. In addition to the retinotopic mapping of the visual field, systematic variations in the orientation tuning of neurons have been described electrophysiologically for the first stages of the visual stream. To understand the relation between position and orientation representation, and to give an adequate account of cortical architecture, an essential step is to define the minimum spatial requirements for the detection of orientation. We addressed this basic question by comparing computer simulations of simple orientation filters with psychophysical experiments in which the orientation of small lines had to be detected at various positions in the visual field. At sufficiently high contrast levels, the minimum physical length of a line whose orientation can just be resolved is not constant across eccentricities, but covaries inversely with the cortical magnification factor. A line needs to span less than 0.2 mm on the cortical surface in order to be recognised as oriented, independently of the eccentricity at which the stimulus is presented. This seems to indicate that human performance in this task approaches the physical limits, requiring hardly more than approximately three input elements to be activated in order to detect the orientation of a highly visible line segment. Combined with estimates of the receptive field sizes of orientation-selective filters derived from computer simulations, this experimental result may nourish speculation about how the rather local elementary process underlying orientation detection in the human visual system can be assembled to form the much larger receptive fields of the orientation-sensitive neurons known to exist in the primate visual system.
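The constant-cortical-extent finding can be made concrete with a standard inverse-linear approximation of human cortical magnification (Horton and Hoyt style, M(E) = 17.3 / (E + 0.75) mm/deg). These constants are textbook estimates, not parameters reported in the study:

```python
# Minimum visual line length implied by a fixed cortical extent, under the
# assumed inverse-linear magnification model M(E) = 17.3 / (E + 0.75) mm/deg.
def min_line_length(ecc_deg, cortical_mm=0.2):
    """Visual length (deg) of a line spanning `cortical_mm` of cortex
    at eccentricity `ecc_deg`."""
    magnification = 17.3 / (ecc_deg + 0.75)   # mm of cortex per degree
    return cortical_mm / magnification
```

Under this model the minimum length grows linearly with (E + 0.75): roughly half an arcminute at the fovea versus about 0.12 deg at 10 deg eccentricity.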


Author(s):  
Hui Li ◽  
Ruiqin Li ◽  
Jianwei Zhang

Controlling an underactuated robot is always an important research and engineering issue, especially when the robot suffers from multiple sources of uncertainty, such as unmodeled dynamics, external disturbances, and parameter uncertainties. To cope with these uncertainties in such uncertain nonlinear systems that are not fully actuated, this paper proposes a control method that actively estimates them via an extended state observer (ESO). Under an output-feedback control scheme, the lumped uncertainties can be estimated online and actively compensated. Every joint of the underactuated robotic system can robustly reach a pre-given state in finite time, even though fewer joints can be controlled directly than the total number of joints. The experimental results demonstrate the control process and validate that the proposed method is feasible for the studied underactuated robotic system.
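A linear extended state observer of the kind referred to above can be illustrated on a double integrator with an unknown constant disturbance. The bandwidth-parameterised gains and the plant are illustrative assumptions, not the paper's underactuated robot dynamics:

```python
import numpy as np

def simulate_eso(w=20.0, d=1.5, dt=1e-3, T=2.0):
    """Linear ESO on a double integrator x'' = u + d with an unknown
    constant disturbance d; the extended (third) observer state should
    converge to d. Gains follow the common bandwidth parameterisation
    (3w, 3w^2, w^3) with all observer poles at -w."""
    x = np.zeros(2)        # true plant state [position, velocity]
    xh = np.zeros(3)       # observer state [pos, vel, lumped disturbance]
    u = 0.0                # open-loop input, for illustration only
    for _ in range(int(T / dt)):
        x = x + dt * np.array([x[1], u + d])      # Euler plant step
        e = xh[0] - x[0]                          # output estimation error
        xh = xh + dt * np.array([xh[1] - 3 * w * e,
                                 xh[2] + u - 3 * w**2 * e,
                                 -w**3 * e])
    return xh
```

In a full output-feedback design, the estimate `xh[2]` would be fed back to cancel the lumped uncertainty in the control law.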


2012 ◽  
Vol 20 (2) ◽  
Author(s):  
C. Weng ◽  
H. Tso ◽  
S. Wang

In this paper, we propose a steganography scheme based on predictive differencing to embed data in a grey-level image. To increase the embedding capacity of pixel-value differencing (PVD), we use the difference between a predictive value and an input pixel as the predictive difference in which to embed the message, where the predictive value is calculated using various predictors. If the predictive difference is large, the input pixel is located in an edge area and thus has a larger embedding capacity than a pixel in a smooth area. The experimental results show that our proposed scheme provides greater embedding capacity and higher stego-image quality than previous works. Furthermore, we applied various predictors to evaluate the proposed scheme.
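The predictive-differencing idea can be sketched on a 1-D pixel sequence. The left-neighbour predictor and the classic PVD range table used here are illustrative choices; the paper evaluates several predictors, and overflow clamping at the 0/255 boundaries is omitted for brevity:

```python
# Classic PVD range table: wider ranges (edge areas) carry more bits.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def _range_of(d):
    """Return (lo, hi, capacity_in_bits) for the range containing |d|."""
    for lo, hi in RANGES:
        if lo <= abs(d) <= hi:
            return lo, hi, (hi - lo + 1).bit_length() - 1
    raise ValueError("difference out of 8-bit range")

def embed(pixels, bits):
    """Embed a bit string; the predictor is the previous (already
    processed) pixel, so extraction sees the same predictor values."""
    out = list(pixels)
    i, pos = 1, 0
    while i < len(out) and pos < len(bits):
        pred = out[i - 1]
        d = out[i] - pred
        lo, hi, t = _range_of(d)
        b = int(bits[pos:pos + t].ljust(t, "0"), 2)
        # Re-encode the difference inside the same range, keeping its sign.
        out[i] = pred + (lo + b if d >= 0 else -(lo + b))
        pos += t
        i += 1
    return out

def extract(stego, nbits):
    chunks, i = [], 1
    while i < len(stego) and sum(map(len, chunks)) < nbits:
        d = stego[i] - stego[i - 1]
        lo, hi, t = _range_of(d)
        chunks.append(format(abs(d) - lo, "0{}b".format(t)))
        i += 1
    return "".join(chunks)[:nbits]
```

Because the re-encoded difference stays inside the original range, the extractor recovers the same per-pixel capacity without side information.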

