Combining Pervasive Computing with Activity Recognition and Learning

Author(s): Patrice C., Bruno Bouchard, Abdenour Bouzouane, Sylvain Giroux

Author(s): Francisco J. Ballestero, Enrique Soriano, Gorka Guardiola

Building effective smart spaces imposes several important requirements, such as addressing human aspects, sensing, activity recognition, and context awareness. All of them, however, require adequate system support to produce systems that work in practice. In this chapter, we discuss the system-level support services necessary to build working smart spaces. We also include a full discussion of system abstractions for pervasive computing, taking into account naming, protection, modularity, communication, and programmability issues.
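The naming abstraction mentioned above can be illustrated with a minimal sketch. This is a hypothetical design, not the chapter's actual system: devices in a smart space register under hierarchical names, and clients resolve a path to the responsible handler via longest-prefix matching.

```python
# Minimal sketch of a naming abstraction for a smart space (hypothetical
# design for illustration only): devices register under hierarchical
# names, and clients resolve names to handlers.

class NameRegistry:
    def __init__(self):
        self._entries = {}

    def register(self, path, handler):
        """Bind a hierarchical name like '/room1/lamp' to a handler."""
        self._entries[path] = handler

    def resolve(self, path):
        """Longest-prefix match: '/room1/lamp/brightness' finds '/room1/lamp'."""
        parts = path.split("/")
        while parts:
            candidate = "/".join(parts)
            if candidate in self._entries:
                return self._entries[candidate]
            parts.pop()
        raise KeyError(path)

registry = NameRegistry()
registry.register("/room1/lamp", lambda op: f"lamp: {op}")
print(registry.resolve("/room1/lamp/brightness")("read"))  # lamp: read
```

Prefix-based resolution lets a single handler serve a whole subtree of resources, which keeps the namespace small as devices expose more attributes.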


2017, Vol 13 (2), pp. 58-78
Author(s): Samaneh Zolfaghari, Mohammad Reza Keyvanpour, Raziyeh Zall

New advancements in pervasive computing technology have turned smart homes into daily living monitoring tools, increasingly used for the elderly. Recently, knowledge-driven approaches such as ontologies for building semantic smart homes have received attention owing to their flexibility, reasoning capabilities, and knowledge representation. Given the large number of ontological human activity recognition methods, the framework proposed here can help analyze and evaluate different methods across applications and challenges. Because ontology-based human activity recognition in smart homes involves numerous challenges across many aspects, this paper also offers a classification of those challenges. The proposed ontological human activity recognition framework is then evaluated against this classification, and ontology-based techniques thought to solve some of the challenges are examined and analyzed.
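The knowledge-driven idea behind ontological activity recognition can be sketched in a few lines. This is an illustrative simplification (real systems use OWL ontologies and description-logic reasoners, not Python sets; the activity names here are invented): each activity is defined by the set of concepts it requires, and an observed event set is classified by subsumption-style matching.

```python
# Hedged sketch of knowledge-driven activity recognition (illustrative
# only; real systems use OWL ontologies and DL reasoners): each activity
# is defined by the object/sensor concepts it requires, and an observed
# event set is classified by checking which definitions it satisfies.

ACTIVITY_DEFINITIONS = {
    "MakeTea": {"kettle", "cup", "tea_bag"},
    "TakeMedicine": {"pill_box", "water_glass"},
    "WatchTV": {"tv_remote", "sofa"},
}

def recognize(observed_events):
    """Return activities whose required concepts are all observed."""
    return sorted(
        name for name, required in ACTIVITY_DEFINITIONS.items()
        if required <= observed_events
    )

print(recognize({"kettle", "cup", "tea_bag", "spoon"}))  # ['MakeTea']
```

The subset test (`required <= observed_events`) stands in for ontological subsumption: an observation matches an activity class when it entails all of that class's defining properties.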


Sensors, 2019, Vol 19 (19), pp. 4119
Author(s): Alexander Diete, Heiner Stuckenschmidt

In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are especially popular data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and simply touching an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, such as medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal egocentric activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. In this way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities across different types of scenarios, achieving an F1-measure of up to 79.6%.
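The fusion idea described in this abstract can be sketched roughly as follows. All feature values and class labels here are invented for the demo, and the paper's actual pipeline uses object detection and learned classifiers rather than a nearest-centroid rule: a vision feature vector (object-presence scores) is concatenated with an inertial feature vector (arm-movement statistics), and the fused vector is classified.

```python
# Rough sketch of multimodal fusion for activity recognition (hypothetical
# feature values and prototypes; not the paper's actual classifier):
# concatenate vision features (object-presence scores) with inertial
# features (arm-movement statistics), then apply a nearest-centroid rule.

import math

CENTROIDS = {  # fused (vision + inertial) prototypes, invented for the demo
    "medicine_intake": [0.9, 0.1, 0.8, 0.3],
    "drinking":        [0.1, 0.9, 0.7, 0.4],
}

def fuse(vision, inertial):
    """Feature-level fusion: simple concatenation of the two modalities."""
    return vision + inertial

def classify(sample):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda label: dist(sample, CENTROIDS[label]))

fused = fuse([0.85, 0.15], [0.75, 0.35])
print(classify(fused))  # medicine_intake
```

The point of the concatenation is that a weak signal in one modality (e.g., a poor camera view) can be compensated by the other, which is exactly the motivation the abstract gives for combining the two sensors.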


2014, Vol 134 (3), pp. 332-337
Author(s): Jun Goto, Takuya Kidokoro, Tomohiro Ogura, Satoshi Suzuki

Author(s): Arijit Chowdhury, Taniya Das, Smriti Rani, Anwesha Khasnobish, Tapas Chakravarty
