The potential of fusing computer vision and depth sensing for accurate distance estimation

Author(s):  
Jakub Dostal ◽  
Per Ola Kristensson ◽  
Aaron Quigley


10.2196/15294 ◽
2019 ◽
Author(s):  
David Herzig ◽  
Christos T Nakas ◽  
Janine Stalder ◽  
Christophe Kosinski ◽  
Céline Laesser ◽  
...  

BACKGROUND Quantification of dietary intake is key to the prevention and management of numerous metabolic disorders. Conventional approaches are challenging and laborious, and they suffer from a lack of accuracy. The recent advent of depth-sensing smartphones, in conjunction with computer vision, has the potential to facilitate reliable quantification of food intake. OBJECTIVE To evaluate the accuracy of a novel smartphone application combining depth-sensing hardware with computer vision to quantify meal macronutrient content. METHODS The application ran on a smartphone with a built-in structured-light depth sensor (iPhone X) and estimated the weight, macronutrient content (carbohydrate, protein, fat), and energy content of 48 randomly chosen meals (breakfasts, cooked meals, and snacks) encompassing 128 food items. Reference weights were generated by weighing individual food items on a precision scale. The study endpoints were fourfold: i) error of estimated meal weight; ii) error of estimated meal macronutrient and energy content; iii) segmentation performance; and iv) processing time. RESULTS The mean±SD absolute error of the application's estimate was 35.1±42.8 g (14.0±12.2%) for weight, 5.5±5.1 g (14.8±10.9%) for carbohydrate content, 2.4±5.6 g (13.0±13.8%) for protein content, 1.3±1.7 g (12.3±12.8%) for fat content, and 41.2±42.5 kcal (12.7±10.8%) for energy content. While estimation accuracy was not affected by the viewing angle, the type of meal mattered, with slightly worse performance for cooked meals than for breakfasts and snacks. Segmentation required adjustment for 7 of the 128 items. Mean±SD processing time across all meals was 22.9±8.6 s. CONCLUSIONS The present study evaluated the accuracy of a novel smartphone application with an integrated depth-sensing camera and found high accuracy in food content estimation across all macronutrients, paralleled by high segmentation performance and low processing time, supporting the usability of this system.
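
The core computation the abstract describes, turning a depth map and a segmentation mask into a weight and then into macronutrients, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the pinhole back-projection, the assumed food density, and the per-100 g nutrient table are all placeholders for the sketch.

import numpy as np

# Hypothetical per-100 g nutrient table: (carbohydrate, protein, fat in g; energy in kcal).
NUTRIENTS_PER_100G = {"rice": (28.0, 2.7, 0.3, 130.0)}
DENSITY_G_PER_CM3 = {"rice": 0.80}  # assumed food density

def food_weight_grams(depth_m, table_depth_m, mask, fx, fy, density):
    # depth_m:       per-pixel depth from the structured-light sensor (meters)
    # table_depth_m: depth of the empty table plane at each pixel (meters)
    # mask:          boolean segmentation mask for one food item
    # fx, fy:        camera focal lengths in pixels (pinhole model)
    height = np.clip(table_depth_m - depth_m, 0.0, None)  # food height above the table (m)
    pixel_area = (depth_m / fx) * (depth_m / fy)          # footprint of one pixel at depth z (m^2)
    volume_cm3 = np.sum(height[mask] * pixel_area[mask]) * 1e6
    return volume_cm3 * density                           # grams

def macronutrients(weight_g, label):
    # Scale the per-100 g entries of the (assumed) nutrient database.
    carb, protein, fat, kcal = NUTRIENTS_PER_100G[label]
    scale = weight_g / 100.0
    return carb * scale, protein * scale, fat * scale, kcal * scale

Integrating height-above-table over the segmented pixels gives a volume; density converts it to weight, and the nutrient lookup scales linearly from there.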


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Junyang Chen ◽  
James F. Cremer ◽  
Kasra Zarei ◽  
Alberto M. Segre ◽  
Philip M. Polgreen

Abstract Background. We determined the feasibility of using computer vision and depth sensing to detect healthcare worker (HCW)-patient contacts in order to estimate both hand hygiene (HH) opportunities and personal protective equipment (PPE) adherence. Methods. We used multiple Microsoft Kinects to track the 3-dimensional movement of HCWs and their hands within hospital rooms. We applied computer vision techniques to recognize and localize fiducial markers attached to the patient's bed, and thereby determined the location of each HCW's hands with respect to the bed. To measure our system's ability to detect HCW-patient contacts, we counted each time an HCW's hands entered a virtual rectangular box aligned with the patient bed. To measure PPE adherence, we identified the hands, torso, and face of each HCW on room entry, determined the color of each body area, and compared it with the color of gloves, gowns, and face masks. We independently examined a ground-truth video recording and compared it with our system's results. Results. Overall, for touch detection, the sensitivity was 99.7%, with a positive predictive value of 98.7%. For gowned entrances, sensitivity was 100.0% and specificity was 98.15%. For masked entrances, sensitivity was 100.0% and specificity was 98.75%; for gloved entrances, sensitivity was 86.21% and specificity was 98.28%. Conclusions. Using computer vision and depth sensing, we can estimate potential HH opportunities at the bedside and also estimate adherence to PPE. Our fine-grained estimates of how, and how often, HCWs interact directly with patients can inform a wide range of patient-safety research.
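
A minimal sketch of the touch-detection step described above: count an event each time a tracked hand transitions into an axis-aligned virtual box around the bed. The class and function names are assumptions; the Kinect skeleton tracking and the fiducial-marker registration that defines the room coordinate frame are abstracted behind a stream of 3-D points.

from dataclasses import dataclass

@dataclass
class BedBox:
    # Axis-aligned virtual box around the patient bed, in the room
    # coordinate frame established by the fiducial markers (meters).
    xmin: float
    xmax: float
    ymin: float
    ymax: float
    zmin: float
    zmax: float

    def contains(self, point):
        x, y, z = point
        return (self.xmin <= x <= self.xmax
                and self.ymin <= y <= self.ymax
                and self.zmin <= z <= self.zmax)

def count_touches(hand_positions, bed_box):
    # One touch per outside-to-inside transition, given one tracked
    # 3-D hand position per Kinect frame.
    touches, inside = 0, False
    for point in hand_positions:
        now_inside = bed_box.contains(point)
        if now_inside and not inside:
            touches += 1
        inside = now_inside
    return touches

Counting transitions rather than raw in-box frames is what turns a noisy per-frame signal into discrete contact events that can be compared against ground-truth video.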


Author(s):  
Shubhada Mone ◽  
Nihar Salunke ◽  
Omkar Jadhav ◽  
Arjun Barge ◽  
Nikhil Magar

Given the easy availability of technology, smartphones play an important role in nearly every person's life. At the same time, computer vision research in areas such as autonomous driving, object recognition, depth map prediction, and object distance estimation has reached commendable levels of intelligence and accuracy. Combining these advances, a computer vision based mobile application can help guide visually impaired people through their day-to-day tasks using hardware they already own. With such a system, visually impaired users can navigate indoors and outdoors without encountering obstacles and can avoid accidental collisions with objects in their surroundings. Currently, very few applications offer this kind of assistance; using physical tools such as canes remains the most common way of avoiding obstacles in a visually impaired person's path. Our study focuses on object detection and depth estimation, two of the most popular and advanced fields in intelligent computer vision, and explores the traditional challenges of, and future prospects for, running these techniques on embedded devices.
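
As an illustration of how the two techniques could be combined on a device, the sketch below pairs a generic object detector with a monocular depth estimator to flag nearby obstacles. Both models are stood in by placeholder callables (detect, estimate_depth), and the warning threshold and median-depth heuristic are assumptions for the sketch, not the paper's method.

import numpy as np

def warn_obstacles(frame, detect, estimate_depth, warn_at_m=2.0):
    # detect(frame) is assumed to return (label, x0, y0, x1, y1) boxes;
    # estimate_depth(frame) a per-pixel depth map in meters. Both stand
    # in for whatever detector / monocular-depth model the embedded
    # device actually runs.
    depth = estimate_depth(frame)
    warnings = []
    for label, x0, y0, x1, y1 in detect(frame):
        # Median depth inside the box is robust to background pixels
        # leaking into the detection rectangle.
        d = float(np.median(depth[y0:y1, x0:x1]))
        if d < warn_at_m:
            warnings.append((label, d))  # e.g., hand over to text-to-speech
    return warnings

On an embedded device, the returned (label, distance) pairs would typically be spoken aloud or mapped to haptic feedback, which is where the assistive value lies.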


Author(s):  
Mulia Pratama ◽  
Giambattista Gruosso ◽  
Widodo Budi Santoso ◽  
Achmad Praptijanto

This research implements vehicle networking over a Wi-Fi connection and uses computer vision to measure the distance to the vehicle in front of a driver. In particular, this work aims to improve driving safety, supporting the inter-vehicular networking concept currently under development, VANET, especially in safety applications such as overtaking assistance systems. Moreover, the system can wirelessly share useful visual information, such as the hazard area of a road accident. In accordance with the Vehicle-to-Vehicle (V2V) concept, a vehicle is required to be able to communicate over a wireless connection: the distance from one vehicle to the vehicles ahead of it is measured and sent via Wi-Fi together with a video stream of the scene observed by the front vehicle. A Haar cascade classifier is chosen to perform vehicle detection. For distance estimation, at least three methods were compared in this research. When measuring a true distance of 5 meters, the iterative method reported 5.80 meters, and it performed well up to 15 meters; at 20 meters, the P3P method gave the better result, with an error of only 0.71 meters relative to ground truth. To provide a physical implementation of both the detection and distance estimation mechanisms, the methods were deployed on a compact, vehicle-friendly computing device, the Raspberry Pi. The performance of the resulting system was then analyzed in terms of streaming latency and distance estimation accuracy, and shows good results in measuring distances up to 20 meters.
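
The two pose-based distance estimators compared above can be reproduced with OpenCV's solvePnP, given the known physical size of a planar feature on the lead vehicle and its detected image corners. A minimal sketch, assuming a license-plate-sized feature and placeholder camera intrinsics; the corner coordinates shown are illustrative, not the paper's data.

import cv2
import numpy as np

# Assumed rear feature size (e.g., a license plate, meters) and intrinsics.
W, H = 0.52, 0.11
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
object_pts = np.array([[-W/2, -H/2, 0.0], [ W/2, -H/2, 0.0],
                       [ W/2,  H/2, 0.0], [-W/2,  H/2, 0.0]])

def distance_m(image_corners, flags):
    # Solve the pose of the known planar feature from its 4 detected
    # image corners; the Z component of the translation vector is the
    # distance to the vehicle in front.
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_corners, K, None, flags=flags)
    return float(tvec[2]) if ok else None

# 4x2 corner array derived from the Haar cascade detection box.
corners = np.array([[300.0, 250.0], [360.0, 250.0],
                    [360.0, 263.0], [300.0, 263.0]])
d_iter = distance_m(corners, cv2.SOLVEPNP_ITERATIVE)
d_p3p = distance_m(corners, cv2.SOLVEPNP_P3P)  # P3P needs exactly 4 points

Both solvers fit the same pinhole model, so their divergence at long range, as the abstract reports, comes down to how each handles the shrinking pixel footprint of the feature.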


2015 ◽  
Vol 2 (1) ◽  
pp. e11 ◽  
Author(s):  
Cecily Morrison ◽  
Marcus D'Souza ◽  
Kit Huckvale ◽  
Jonas F Dorn ◽  
Jessica Burggraaff ◽  
...  

1985 ◽  
Vol 30 (1) ◽  
pp. 47-47 ◽
Author(s):  
Herman Bouma
