Computer vision for active and assisted living

2020 ◽  
Vol 10 (1) ◽  
pp. 374 ◽  
Author(s):  
Marco Buzzelli ◽  
Alessio Albé ◽  
Gianluigi Ciocca

Assisted living technologies can be of great importance for taking care of elderly people and helping them live independently. In this work, we propose a monitoring system designed to be as unobtrusive as possible, by exploiting computer vision techniques and visual sensors such as RGB cameras. We perform a thorough analysis of existing video datasets for action recognition, and show that no single dataset can be considered adequate in terms of classes or cardinality. We subsequently curate a taxonomy of human actions, derived from different sources in the literature, and provide the scientific community with considerations about the mutual exclusivity and commonalities of said actions. This leads us to collect and publish an aggregated dataset, called ALMOND (Assisted Living MONitoring Dataset), which we use as the training set for a vision-based monitoring approach. We rigorously evaluate our solution in terms of recognition accuracy using different state-of-the-art architectures, eventually reaching 97% on inference of basic poses, 83% on alerting situations, and 71% on daily life actions. We also provide a general methodology to estimate the maximum allowed distance between camera and monitored subject. Finally, we integrate the defined actions and the trained model into a computer-vision-based application, specifically designed for the objective of monitoring elderly people in their homes.
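The abstract does not spell out its distance-estimation methodology; under the standard pinhole camera model, a minimal sketch of the idea is that the subject must occupy at least some minimum pixel height for reliable recognition, which bounds the camera-subject distance. The function and parameter names below are hypothetical, not taken from the paper.

```python
def max_camera_distance(focal_length_px: float,
                        subject_height_m: float,
                        min_pixel_height: float) -> float:
    """Pinhole-model estimate of the farthest camera-subject distance
    at which the subject still spans at least `min_pixel_height` pixels.

    From the pinhole projection h = f * H / d, solving for d at the
    minimum acceptable projected height h_min gives d_max = f * H / h_min.
    """
    return focal_length_px * subject_height_m / min_pixel_height

# Example: focal length 1000 px, subject 1.7 m tall,
# recognizer assumed to need at least 100 px of subject height.
print(max_camera_distance(1000, 1.7, 100))  # 17.0 (meters)
```

In practice the minimum pixel height would be calibrated empirically, e.g. by measuring how recognition accuracy degrades as the subject shrinks in the frame.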


2021 ◽  
Vol 7 ◽  
pp. e442
Author(s):  
Audrius Kulikajevas ◽  
Rytis Maskeliunas ◽  
Robertas Damaševičius

Human posture detection allows the capture of the kinematic parameters of the human body, which is important for many applications, such as assisted living, healthcare, physical exercise and rehabilitation. This task can greatly benefit from recent developments in deep learning and computer vision. In this paper, we propose a novel deep recurrent hierarchical network (DRHN) model based on MobileNetV2 that allows for greater flexibility by reducing or eliminating posture detection problems related to limited visibility of the human torso in the frame, i.e., the occlusion problem. The DRHN network accepts RGB-Depth frame sequences and produces a representation of semantically related posture states. We achieved 91.47% accuracy at a 10 fps rate for sitting posture recognition.
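The abstract only describes DRHN at a high level: per-frame features from a MobileNetV2 backbone, aggregated recurrently over an RGB-D sequence. The sketch below is a hypothetical skeleton of that pipeline, not the authors' code; the feature extractor is a stub standing in for MobileNetV2, and exponential smoothing stands in for the hierarchical recurrent layers.

```python
from typing import List, Sequence

def extract_features(rgbd_frame: Sequence[float]) -> List[float]:
    # Stub in place of a MobileNetV2 backbone: in the real model this
    # would map an RGB-D frame to a learned feature vector.
    return [sum(rgbd_frame) / len(rgbd_frame)]

def recurrent_states(frames: Sequence[Sequence[float]],
                     alpha: float = 0.7) -> List[List[float]]:
    """Carry a recurrent state across per-frame features, so the
    representation of the current posture depends on earlier frames
    (a stand-in for DRHN's recurrent aggregation stage)."""
    state: List[float] = []
    states: List[List[float]] = []
    for frame in frames:
        feat = extract_features(frame)
        if not state:
            state = feat
        else:
            # Blend the previous state with the new frame's features.
            state = [alpha * s + (1 - alpha) * f
                     for s, f in zip(state, feat)]
        states.append(state)
    return states
```

The recurrent state is what lets such a model tolerate frames where the torso is occluded: the representation degrades gracefully rather than resetting on every frame.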


Author(s):  
Sara Colantonio ◽  
Giuseppe Coppini ◽  
Daniela Giorgi ◽  
Maria-Aurora Morales ◽  
Maria A. Pascali

1985 ◽  
Vol 30 (1) ◽  
pp. 47-47
Author(s):  
Herman Bouma