Design of an armband type contact-free space input device for human-machine interface

Author(s):  
Dongwan Ryoo ◽  
Junseok Park
Author(s):  
Tobias Rehrl ◽  
Alexander Bannat ◽  
Jürgen Gast ◽  
Gerhard Rigoll ◽  
Frank Wallhoff

2008 ◽  
Vol 20 (2) ◽  
pp. 260-272 ◽  
Author(s):  
Kazuhiro Taniguchi ◽  
Atsushi Nishikawa ◽  
Seiichiro Kawanishi ◽  
Fumio Miyazaki

A wearable computing system plays a leading role in the ubiquitous computing era, in which computers are used at any place and at any time. Now that mobile multimedia communication devices such as mobile phones and handheld PCs have come to be used in a broad range of areas, the features of a wearable hands-free computing system, which people can use constantly in daily life or at the workplace while doing other work, are valued more highly than ever. However, wearable computing systems have not yet spread widely, owing to various factors, among them the delayed development of a human-machine interface suited to wearable computing. Conventional technologies, which require either manual manipulation of a keyboard, mouse, or touch panel, or bulky equipment to exploit electroencephalograms or eyeball movements for a hands-free interface, are not suitable for the wearable computing system. We therefore developed a human-machine interface for the wearable computing system that makes it easy to operate a machine with intentional movements of the temple. The user can operate the machine constantly, hands-free and without interference. The interface is compact and lightweight, and can be manufactured easily at low cost. It does not react to everyday actions such as conversation or eating, but only to movements intended to control the machine. It consists of one optical distance sensor mounted near each of the left and right temples and one single-chip microcomputer.
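The detector described above must accept deliberate temple movements while rejecting the brief sensor excursions produced by conversation or eating. A minimal sketch of one way to make that distinction, assuming a sustained-deviation rule; the function name, threshold, and sample count are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: treat a reading that stays past a threshold for
# several consecutive samples as an intentional temple movement, and
# shorter excursions (talking, chewing) as everyday motion to be ignored.
# Threshold and duration values below are illustrative only.

def detect_temple_command(distances, threshold=5, min_samples=3):
    """Return True when the optical distance readings exceed `threshold`
    for at least `min_samples` consecutive samples."""
    run = 0
    for d in distances:
        if d >= threshold:
            run += 1
            if run >= min_samples:
                return True
        else:
            run = 0
    return False

# A brief spike does not trigger a command; a sustained movement does.
print(detect_temple_command([0, 6, 0, 6, 0]))  # brief spikes only
print(detect_temple_command([0, 6, 7, 6, 0]))  # sustained deviation
```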


2011 ◽  
Vol 16 (4) ◽  
pp. 83-86
Author(s):  
V.G. Abakumov ◽  
O.YU. Lomakіna ◽  
O.B. YArovenko

This article deals with the possibility of using hand gestures as an input device for a human-machine interface. An investigation and a brief analysis of the proposed gesture control system are presented in detail.


2012 ◽  
Vol 220-223 ◽  
pp. 1217-1220
Author(s):  
Xin Xin ◽  
Guo Dong Wang ◽  
Ju Liang Xiao ◽  
Gang Wang

Traditional intelligent terminal systems could control only one robot, and most were based on the WinCE platform, which gave them poor versatility and expansibility. The robot hand intelligent terminal system proposed in this paper is an embedded device based on an AT91-series chip. It offers a good human-machine interface, with Linux and Qt4 as software support, and adopts touch-screen technology and an I2C keyboard as input devices. It supports two programming methods: manual teaching-programming and off-line programming. Tests show that the terminal is small in volume and light in weight, making the robot easy to operate.
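The two programming methods above can be pictured as two modes of the same terminal: in teaching mode each keypad action is recorded as a program step, while in off-line mode a prepared program is loaded instead. The sketch below is a hypothetical illustration of that mode split; the class and method names are assumptions, not part of the paper's software:

```python
# Illustrative mode dispatcher for the two programming methods described
# above. All names here are hypothetical; the actual terminal runs Qt4 on
# Linux, which is not modeled.

class RobotTerminal:
    def __init__(self):
        self.mode = "teach"   # "teach" = manual teaching-programming
        self.program = []     # recorded or loaded motion steps

    def set_mode(self, mode):
        if mode not in ("teach", "offline"):
            raise ValueError("unknown mode: " + mode)
        self.mode = mode

    def handle_key(self, key):
        """In teaching mode, each jog key becomes a recorded program step."""
        if self.mode == "teach":
            self.program.append(key)

    def load_offline_program(self, steps):
        """In off-line mode, a program written elsewhere replaces the
        recorded one."""
        if self.mode != "offline":
            raise RuntimeError("off-line loading requires offline mode")
        self.program = list(steps)
```

A usage pass might record a few jog keys in teaching mode, then switch to off-line mode and load a complete program in one step.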


1999 ◽  
Vol 8 (1) ◽  
pp. 65-85
Author(s):  
R. L. Andersson ◽  
D. C. Gibbon ◽  
R. P. Lyons

BULLSEYE is a flexible computer input device that operates by generating and analyzing the image of known “props” at 60 Hz to determine position, orientation, and user-controlled internal state variables. The compact device, which contains both the camera and the processing engine, has been built and fielded. This paper provides an overview of the concept and hardware, then examines the problems and design strategies associated with the interrelated areas of the environment, prop design, and color detection.
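The color-detection step at the heart of such a system can be sketched as: classify each pixel against the prop's known color, then report the prop's image position as the centroid of the matching pixels. The per-channel tolerance test below is a deliberately simple stand-in for BULLSEYE's actual color classifier, which the abstract does not specify:

```python
# Minimal color-detection sketch: locate a prop of a known color in an
# image (a 2-D list of (r, g, b) tuples) by taking the centroid of pixels
# within a per-channel tolerance of the target color. The tolerance test
# is an illustrative assumption, not BULLSEYE's real classifier.

def find_prop(image, target, tol=30):
    """Return the (row, col) centroid of pixels matching `target`,
    or None when no pixel matches."""
    row_sum, col_sum, count = 0, 0, 0
    for r, row in enumerate(image):
        for c, px in enumerate(row):
            if all(abs(px[i] - target[i]) <= tol for i in range(3)):
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return (row_sum / count, col_sum / count)
```

Running the full pipeline at 60 Hz then amounts to repeating this classification and centroid step on every camera frame, with orientation recovered from the geometry of several such colored marks on the prop.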


1990 ◽  
Author(s):  
B. Bly ◽  
P. J. Price ◽  
S. Park ◽  
S. Tepper ◽  
E. Jackson ◽  
...  

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 687
Author(s):  
Jinzhen Dou ◽  
Shanguang Chen ◽  
Zhi Tang ◽  
Chang Xu ◽  
Chengqi Xue

With the development and promotion of driverless technology, researchers are focusing on designing varied types of external interfaces to induce trust in road users towards this new technology. In this paper, we investigated the effectiveness of a multimodal external human–machine interface (eHMI) for driverless vehicles in a virtual environment, focusing on a two-way road scenario. Three phases of identifying, decelerating, and parking were taken into account in the driverless-vehicle-to-pedestrian interaction process. Twelve eHMIs are proposed, combining three visual features (smile, arrow, and none), three audible features (human voice, warning sound, and none), and two physical features (yielding and not yielding). We conducted a study to identify a more efficient and safer eHMI for driverless vehicles when they interact with pedestrians. Based on the study outcomes, in the yielding case the interaction efficiency and pedestrian safety of the multimodal eHMI designs were satisfactory compared with the single-modal system. The visual modality in the eHMI of driverless vehicles has the greatest impact on pedestrian safety. In addition, the “arrow” was more intuitive to identify than the “smile” in terms of visual modality.


Author(s):  
Saverio Trotta ◽  
Dave Weber ◽  
Reinhard W. Jungmaier ◽  
Ashutosh Baheti ◽  
Jaime Lien ◽  
...  
