Particle Filter Tracking without Dynamics

2007 ◽  
Vol 4 (4) ◽  
pp. 169-177 ◽  
Author(s):  
Jaime Ortegon-Aguilar ◽  
Eduardo Bayro-Corrochano

People tracking is an interesting topic in computer vision, with applications in industrial areas such as surveillance and human-machine interaction. The particle filter is a common algorithm for people tracking; challenging situations occur when the target's motion is poorly modelled or unexpected. In this paper, an alternative approach to people tracking is presented. The proposed algorithm is based on particle filters, but instead of using a dynamical model, it uses background subtraction to predict the future locations of particles. The algorithm is able to track people in omnidirectional sequences at a low frame rate (one or two frames per second), and can handle unexpected discontinuities and changes in the direction of motion. The main goal of the paper is to track people in laboratories, but the method also has applications in surveillance, mainly in controlled environments.
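A minimal sketch of the prediction step the abstract describes: particles are propagated toward nearby foreground pixels obtained by background subtraction, rather than through a dynamical model. All function names, the difference threshold, and the neighbourhood size are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def background_mask(frame, background, thresh=30):
    """Foreground mask via simple frame differencing (assumed subtraction method)."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def propagate_particles(particles, mask, spread=5, rng=None):
    """Move each particle onto a random foreground pixel within `spread`
    (Manhattan distance), replacing the usual motion-model prediction.
    Particles with no foreground nearby stay where they are."""
    rng = rng if rng is not None else np.random.default_rng(0)
    fg = np.argwhere(mask)                     # (row, col) of foreground pixels
    moved = []
    for p in particles:
        d = np.abs(fg - p).sum(axis=1)         # distance to every foreground pixel
        near = fg[d <= spread]
        moved.append(near[rng.integers(len(near))] if len(near) else p)
    return np.array(moved)
```

Because prediction is driven entirely by the current foreground, this kind of update tolerates abrupt direction changes that would violate a constant-velocity model, which matches the low-frame-rate setting the paper targets.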

2011 ◽  
Vol 2 (2) ◽  
pp. 1
Author(s):  
Carlos Hitoshi Morimoto ◽  
Flávio Coutinho ◽  
Jefferson Silva ◽  
Silvia Ghirotti ◽  
Thiago Santos

This paper introduces the Laboratory of Technologies for Interaction (LaTIn) and briefly describes its current main projects. The main focus of LaTIn has been developing new ways of human-machine interaction using computer vision techniques. The projects are categorized according to the distance between the human user and the machine being operated. For close distances, appropriate for interaction with desktop computers for example, we have developed eye-gaze-based interfaces. We have also built hand and body gesture interfaces appropriate for kiosks and virtual reality settings, and, for large distances, we have developed novel multiple-people tracking techniques that have been used for surveillance and monitoring applications.


Author(s):  
Nagaraja N Poojary ◽  
Dr. Shivakumar G S ◽  
Akshath Kumar B.H

Language is humans' most important means of communication, and speech is its basic medium. Emotion plays a crucial role in social interaction, so recognizing the emotion in speech is both important and challenging in human-machine interaction. Emotional expression varies from person to person: the same person may experience different emotions and express each differently, with distinct energy, pitch and tone variations that must be grouped per subject. Speech emotion recognition is therefore an ongoing goal for the field. The aim of our project is to develop smart speech emotion recognition based on a convolutional neural network, which uses different modules for emotion recognition; the classifier differentiates emotions such as happy, sad, angry and surprised. The machine converts the human speech signal into a waveform, processes it, and finally displays the emotion. The data are speech samples, and their characteristics are extracted using the librosa package. We use the RAVDESS dataset as our experimental dataset. This study shows that for our dataset all classifiers achieve an accuracy of 68%.
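The pipeline the abstract outlines (extract per-frame acoustic features from the waveform, then classify the utterance) can be sketched as follows. This is a toy stand-in, not the paper's system: it uses hand-computed energy and zero-crossing features in place of librosa's MFCCs, and a nearest-centroid rule in place of the CNN; all names and parameters are assumptions for illustration:

```python
import numpy as np

def utterance_features(y, frame_len=512, hop=256):
    """Per-frame log-energy and zero-crossing rate, averaged over the
    utterance (crude stand-ins for MFCC/pitch features)."""
    feats = []
    for start in range(0, len(y) - frame_len, hop):
        f = y[start:start + frame_len]
        energy = np.log(np.sum(f ** 2) + 1e-10)          # loudness proxy
        zcr = np.mean(np.abs(np.diff(np.sign(f)))) / 2   # pitch/noisiness proxy
        feats.append([energy, zcr])
    return np.array(feats).mean(axis=0)

def nearest_centroid(x, centroids, labels):
    """Toy classifier standing in for the paper's CNN: pick the label whose
    feature centroid is closest to the utterance's feature vector."""
    d = np.linalg.norm(centroids - x, axis=1)
    return labels[int(np.argmin(d))]
```

In the actual system, `utterance_features` would be replaced by librosa feature extraction on RAVDESS recordings and `nearest_centroid` by the trained convolutional network.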


Leonardo ◽  
2020 ◽  
Vol 53 (4) ◽  
pp. 429-433
Author(s):  
Yi-Chin Lee ◽  
Daniel Cardoso Llach

This paper presents Hybrid Embroidery, a framework for interactive fabrication that leverages computational methods to broaden the possibilities of the craft of embroidery. Combining embroidery techniques, generative design methods, computer vision and a computerized embroidery machine, we show how this framework elicits a variety of innovative fabrication experiences that emphasize open-ended exploration, improvisation and play. The paper documents this framework, a series of sample results, challenges and next steps. It further outlines some of its implications for supporting creative exploration through real-time and direct manipulation of materials and close human-machine interaction.


Author(s):  
Qiuhong Ke ◽  
Jun Liu ◽  
Mohammed Bennamoun ◽  
Senjian An ◽  
Ferdous Sohel ◽  
...  
