Motion Analysis of Human Lifting Works with Heavy Objects

2005 ◽  
Vol 17 (6) ◽  
pp. 628-635 ◽  
Author(s):  
Nobutomo Matsunaga ◽  
Shigeyasu Kawaji

Advances in robot development involve autonomous work in the real world, where robots may lift or carry heavy objects. Motion control of autonomous robots is an important issue, since configurations and motions differ depending on the robot and the object. Isaka et al. showed that lifting configuration is important for efficient lifting that minimizes the burden on the lower back, but their analysis was limited to weight lifting of a fixed object. Biped robot control requires analyzing different lifting motions in diverse situations, so motion analysis is important for clarifying the control strategy. We analyzed the dynamics of human barbell lifting in different situations and found that lifting can be divided into four motions.

Author(s):  
Philip Kurrek ◽  
Mark Jocas ◽  
Firas Zoghlami ◽  
Martin Stoelen ◽  
Vahid Salehi

Current robotic solutions can manage specialized tasks, but they cannot perform intelligent actions based on experience. Autonomous robots that are to succeed in complex environments such as production plants need the ability to customize their capabilities. Using artificial intelligence (AI), it is possible to train robot control policies without explicitly programming how to achieve the desired goals. We introduce AI Motion Control (AIMC), a generic approach to developing control policies for diverse robots, environments, and manipulation tasks. For safety reasons, but also to save investment and development time, motion control policies can first be trained in simulation and then transferred to real applications. This work follows Descriptive Study I according to Blessing and Chakrabarti and identifies this research gap. We combine the latest motion control and reinforcement learning results and show the potential of AIMC for robotic technologies with industrial use cases.
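The train-in-simulation-then-transfer idea can be sketched with a toy example. This is not the paper's AIMC system: it is a tabular Q-learning policy trained on a hypothetical simulated 1-D reach task, after which the frozen greedy policy is run as the "transferred" controller. All task details (state space, rewards, hyperparameters) are assumptions for illustration.

```python
import random

# Toy simulated task: a point agent on positions 0..9 must reach the goal
# at position 9. A small step penalty encourages short paths.
N_STATES = 10
ACTIONS = [-1, +1]     # move left / move right

def step(state, action):
    """Deterministic 'simulator': bounded 1-D motion with a goal reward."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Train a tabular Q-learning policy entirely in the simulator."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES - 1)        # exploring starts
        for _ in range(50):
            a = rng.randrange(2) if rng.random() < eps \
                else max(range(2), key=lambda i: q[s][i])
            s2, r, done = step(s, ACTIONS[a])
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if done:
                break
    return q

def run_policy(q, start=0):
    """Greedy rollout of the frozen policy -- the 'transfer' step."""
    s, steps = start, 0
    while s != N_STATES - 1 and steps < 50:
        a = max(range(2), key=lambda i: q[s][i])
        s, _, _ = step(s, ACTIONS[a])
        steps += 1
    return steps

print(run_policy(train()))  # the learned policy walks straight to the goal
```

In a real deployment the simulator would be a physics engine and the table a neural network, but the split into a training phase against the simulator and a frozen inference phase on the target system is the same.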


2001 ◽  
Vol 4 (1) ◽  
pp. 87-116 ◽  
Author(s):  
Paul Vogt

In this paper an experiment is presented in which two mobile robots develop a shared lexicon whose meanings are grounded in the real world. The robots start with neither a lexicon nor shared meanings and play language games in which they generate new meanings and negotiate words for these meanings. The experiment tries to find the minimal conditions under which verbal communication may begin to evolve. The robots are autonomous in terms of computing and cognition, but they are otherwise far simpler than most, if not all, animals. It is demonstrated that a lexicon can nevertheless be made to emerge, even though there are strong limits on its size and stability.
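The negotiation dynamic behind such language games can be illustrated with a minimal naming-game simulation. This is a drastically simplified sketch, not Vogt's robot experiment: meanings are given symbols rather than grounded percepts, and the adoption rule (the hearer copies the speaker's word after a failed game) is one common convention from the naming-game literature.

```python
import random

MEANINGS = ["m0", "m1", "m2", "m3"]   # stand-ins for grounded categories

def play(games=2000, seed=1):
    """Two agents negotiate words for fixed meanings until they agree."""
    rng = random.Random(seed)
    lex = [dict(), dict()]                        # meaning -> word, per agent
    fresh = iter(f"w{i}" for i in range(10_000))  # supply of invented words
    for _ in range(games):
        speaker, hearer = rng.sample([0, 1], 2)   # random role assignment
        m = rng.choice(MEANINGS)                  # topic of this game
        word = lex[speaker].setdefault(m, next(fresh))  # invent if unnamed
        if lex[hearer].get(m) != word:
            lex[hearer][m] = word                 # failed game: hearer adopts
    return lex

a, b = play()
print(a == b)  # True once the two lexicons have converged
```

Even this stripped-down version shows the paper's qualitative point: a shared lexicon emerges from purely local interactions, with no global coordination.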


Author(s):  
Kazi Tanvir Ahmed Siddiqui ◽  
David Feil-Seifer ◽  
Tianyi Jiang ◽  
Sonu Jose ◽  
Siming Liu ◽  
...  

Simulation environments for Unmanned Aerial Vehicles (UAVs) can be very useful for prototyping user interfaces and training personnel who will operate UAVs in the real world. Realistic operation only enhances the value of such training. In this paper, we present the integration of a model-based waypoint navigation controller into the Reno Rescue Simulator to provide a more realistic user interface in simulated environments. We also present potential uses for such simulations, even for real-world operation of UAVs.
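A minimal sketch of waypoint navigation, assuming a point-mass vehicle and a proportional heading rule (this is not the actual Reno Rescue Simulator controller, whose model is not detailed here): the vehicle repeatedly steps along the bearing to the current waypoint and advances to the next one once within a tolerance radius.

```python
import math

def navigate(waypoints, start=(0.0, 0.0), speed=1.0, dt=0.1,
             tol=0.2, max_steps=10_000):
    """Return the trajectory of a point vehicle visiting each waypoint."""
    x, y = start
    path = [(x, y)]
    for wx, wy in waypoints:
        for _ in range(max_steps):
            dx, dy = wx - x, wy - y
            dist = math.hypot(dx, dy)
            if dist < tol:              # waypoint reached, take the next one
                break
            step = min(speed * dt, dist)
            x += step * dx / dist       # move along the bearing to the waypoint
            y += step * dy / dist
            path.append((x, y))
    return path

path = navigate([(5, 0), (5, 5)])
print(len(path), path[-1])
```

A real UAV model would add altitude, vehicle dynamics, and wind disturbances, but the waypoint-queue structure of the controller is the same.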


2020 ◽  
Vol 10 (5) ◽  
pp. 1555
Author(s):  
Naijun Liu ◽  
Yinghao Cai ◽  
Tao Lu ◽  
Rui Wang ◽  
Shuo Wang

Compared to traditional data-driven learning methods, recently developed deep reinforcement learning (DRL) approaches can be employed to train robot agents to obtain control policies with appealing performance. However, learning control policies for real-world robots through DRL is costly and cumbersome. A promising alternative is to train policies in simulated environments and transfer the learned policies to real-world scenarios. Unfortunately, due to the reality gap between simulated and real-world environments, the policies learned in simulated environments often cannot be generalized well to the real world. Bridging the reality gap is still a challenging problem. In this paper, we propose a novel real–sim–real (RSR) transfer method that includes a real-to-sim training phase and a sim-to-real inference phase. In the real-to-sim training phase, a task-relevant simulated environment is constructed based on semantic information of the real-world scenario and coordinate transformation, and then a policy is trained with the DRL method in the built simulated environment. In the sim-to-real inference phase, the learned policy is directly applied to control the robot in real-world scenarios without any real-world data. Experimental results in two different robot control tasks show that the proposed RSR method can train skill policies with high generalization performance and significantly lower training costs.
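The coordinate-transformation part of the real-to-sim phase can be sketched as follows. The details here are assumptions, not the paper's exact pipeline: object poses detected in a real-world camera frame are mapped into the simulator frame with a rigid 2-D transform, so that a task-relevant scene can be rebuilt in simulation. The object names and calibration values are hypothetical.

```python
import math

def real_to_sim(point, theta, translation):
    """Rigid 2-D transform: rotate by theta, then translate."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    tx, ty = translation
    return (c * x - s * y + tx, s * x + c * y + ty)

# Hypothetical detected object positions in the real camera frame (metres).
real_scene = {"cube": (0.4, 0.1), "bowl": (0.2, -0.3)}

# Assumed calibration between frames: 90-degree rotation plus a shift.
sim_scene = {name: real_to_sim(p, math.pi / 2, (1.0, 0.0))
             for name, p in real_scene.items()}
print(sim_scene)
```

Once the scene is reconstructed in the simulator frame, training proceeds entirely in simulation, and the inverse transform is not needed at inference time because the policy acts directly in the real scene.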


1998 ◽  
Vol 13 (2) ◽  
pp. 143-146 ◽  
Author(s):  
GEORGE A. BEKEY

Autonomous robots are the intelligent agents par excellence. We frequently define a robot as a machine that senses, thinks and acts, i.e., an agent. They are distinguished from software agents in that robots are embodied agents, situated in the real world. As such, they are subject to both the joys and the sorrows of the world. They can be touched and seen and heard (sometimes even smelled!), they have physical dimensions, and they can exert force on other objects. These objects can be a ball in the RoboCup or Mirosot robot soccer games, parts to be assembled, airplanes to be washed, carpets to be vacuumed, terrain to be traversed or cameras to be aimed. On the other hand, since robots are agents in the world, they are also subject to its physical laws: they have mass and inertia, their moving parts encounter friction and hence heat, no two parts are precisely alike, measurements are corrupted by noise, and, alas, parts break. Of course, robots also contain computers, and hence they are also subject to the slings and arrows of computer misfortunes, both in hardware and software. Finally, the world into which we place these robots keeps changing; it is non-stationary and unstructured, so that we cannot predict its features accurately in advance.


2010 ◽  
Vol 20 (3) ◽  
pp. 100-105 ◽  
Author(s):  
Anne K. Bothe

This article presents some streamlined and intentionally oversimplified ideas about educating future communication disorders professionals to use some of the most basic principles of evidence-based practice. Working from a popular five-step approach, modifications are suggested that may make the ideas more accessible, and therefore more useful, for university faculty, other supervisors, and future professionals in speech-language pathology, audiology, and related fields.


2006 ◽  
Vol 40 (7) ◽  
pp. 47
Author(s):  
LEE SAVIO BEERS