Automated Quantification of Macronutrients using Computer Vision on a Depth-Sensing Smartphone (Preprint)

10.2196/15294 ◽  
2019 ◽  
Author(s):  
David Herzig ◽  
Christos T Nakas ◽  
Janine Stalder ◽  
Christophe Kosinski ◽  
Céline Laesser ◽  
...  
Plant Methods ◽  
2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Raul Masteling ◽  
Lodewijk Voorhoeve ◽  
Joris IJsselmuiden ◽  
Francisco Dini-Andreote ◽  
Wietse de Boer ◽  
...  

BACKGROUND Quantification of dietary intake is key to the prevention and management of numerous metabolic disorders. Conventional approaches are challenging, laborious, and suffer from a lack of accuracy. The recent advent of depth-sensing smartphones in conjunction with computer vision has the potential to facilitate reliable quantification of food intake. OBJECTIVE To evaluate the accuracy of a novel smartphone application combining depth-sensing hardware with computer vision to quantify meal macronutrient content. METHODS The application ran on a smartphone with a built-in structured-light depth sensor (iPhone X) and estimated the weight, macronutrient (carbohydrate, protein, fat), and energy content of 48 randomly chosen meals (meal types: breakfast, cooked meals, snacks) encompassing 128 food items. Reference weight was generated by weighing individual food items on a precision scale. The study endpoints were fourfold: i) error of estimated meal weight; ii) error of estimated meal macronutrient and energy content; iii) segmentation performance; and iv) processing time. RESULTS The mean±SD absolute error of the application's estimate was 35.1±42.8g (14.0±12.2%) for weight, 5.5±5.1g (14.8±10.9%) for carbohydrate content, 2.4±5.6g (13.0±13.8%) for protein content, 1.3±1.7g (12.3±12.8%) for fat content, and 41.2±42.5kcal (12.7±10.8%) for energy content. While estimation accuracy was not affected by the viewing angle, the type of meal mattered, with slightly worse performance for cooked meals than for breakfasts and snacks. Segmentation required adjustment for 7 of 128 items. Mean±SD processing time across all meals was 22.9±8.6s. CONCLUSIONS The present study evaluated the accuracy of a novel smartphone application with an integrated depth-sensing camera and found high accuracy in food estimation across all macronutrients. This was paralleled by high segmentation performance and low processing time, supporting the usability of the system.


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Junyang Chen ◽  
James F. Cremer ◽  
Kasra Zarei ◽  
Alberto M. Segre ◽  
Philip M. Polgreen

Abstract. Background. We determined the feasibility of using computer vision and depth sensing to detect healthcare worker (HCW)–patient contacts to estimate both hand hygiene (HH) opportunities and personal protective equipment (PPE) adherence. Methods. We used multiple Microsoft Kinects to track the 3-dimensional movement of HCWs and their hands within hospital rooms. We applied computer vision techniques to recognize and determine the position of fiducial markers attached to the patient's bed to determine the location of the HCW's hands with respect to the bed. To measure our system's ability to detect HCW–patient contacts, we counted each time a HCW's hands entered a virtual rectangular box aligned with a patient bed. To measure PPE adherence, we identified the hands, torso, and face of each HCW on room entry, determined the color of each body area, and compared it with the color of gloves, gowns, and face masks. We independently examined a ground-truth video recording and compared it with our system's results. Results. Overall, for touch detection, the sensitivity was 99.7%, with a positive predictive value of 98.7%. For gowned entrances, sensitivity was 100.0% and specificity was 98.15%. For masked entrances, sensitivity was 100.0% and specificity was 98.75%; for gloved entrances, sensitivity was 86.21% and specificity was 98.28%. Conclusions. Using computer vision and depth sensing, we can estimate potential HH opportunities at the bedside and also estimate adherence to PPE. Our fine-grained estimates of how and how often HCWs interact directly with patients can inform a wide range of patient-safety research.


2015 ◽  
Vol 2 (1) ◽  
pp. e11 ◽  
Author(s):  
Cecily Morrison ◽  
Marcus D'Souza ◽  
Kit Huckvale ◽  
Jonas F Dorn ◽  
Jessica Burggraaff ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7598
Author(s):  
Kazimieras Buškus ◽  
Evaldas Vaičiukynas ◽  
Antanas Verikas ◽  
Saulė Medelytė ◽  
Andrius Šiaulys ◽  
...  

Underwater video surveys play a significant role in marine benthic research. Surveys are usually filmed in transects, which are stitched into 2D mosaic maps for further analysis. Due to the massive amount of video data and the time-consuming analysis, the need arises for automatic image segmentation and quantitative evaluation. This paper investigates such techniques on annotated mosaic maps containing hundreds of instances of brittle stars. By harnessing a deep convolutional neural network with pre-trained weights and post-processing the results with a common blob-detection technique, we assess the effectiveness and potential of such a segment-and-count approach in terms of segmentation and counting success. Among the marker variants tested, disc markers could be recommended over full-shape masks for brittle stars owing to faster annotation. Underwater image-enhancement techniques did not noticeably improve segmentation results, but some might be useful for augmentation purposes.
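The segment-and-count post-processing described above can be illustrated with a connected-components count over a binary segmentation mask. This stand-in uses a 4-connected flood fill rather than the paper's specific blob-detection technique, and the mask is an illustrative toy, not survey data:

```python
from collections import deque

def count_blobs(mask):
    """Count 4-connected foreground blobs (value 1) in a binary mask,
    e.g. turning a CNN segmentation of brittle star discs into an
    instance count via breadth-first flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                blobs += 1  # new, unvisited foreground region
                seen[i][j] = True
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs
```

In practice a library routine (e.g. OpenCV's connected-components or blob-detector functions) with size filtering would replace this loop, but the counting logic is the same: one instance per connected foreground region.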


1985 ◽  
Vol 30 (1) ◽  
pp. 47-47
Author(s):  
Herman Bouma
1983 ◽  
Vol 2 (5) ◽  
pp. 130
Author(s):  
J.A. Losty ◽  
P.R. Watkins
