Accuracy of Consumer Wearable Heart Rate Measurement During an Ecologically Valid 24-Hour Period: Intraindividual Validation Study (Preprint)

Author(s):  
Benjamin W Nelson,
Nicholas B Allen

BACKGROUND Wrist-worn smart watches and fitness monitors (ie, wearables) have become widely adopted by consumers and are gaining increased attention from researchers for their potential contribution to naturalistic digital measurement of health in a scalable, mobile, and unobtrusive way. Various studies have examined the accuracy of these devices in controlled laboratory settings (eg, treadmill and stationary bike); however, no studies have investigated the heart rate accuracy of wearables during a continuous and ecologically valid 24-hour period under actual consumer device use conditions. OBJECTIVE The aim of this study was to determine the heart rate accuracy of 2 popular wearable devices, the Apple Watch 3 and the Fitbit Charge 2, as compared with the gold standard reference method, an ambulatory electrocardiogram (ECG), under consumer device use conditions in an individual. Data were collected across 5 daily conditions: sitting, walking, running, activities of daily living (ADL; eg, chores, brushing teeth), and sleeping. METHODS One participant (the first author, a 29-year-old Caucasian male) completed a 24-hour ecologically valid protocol while wearing 2 popular wrist wearable devices (Apple Watch 3 and Fitbit Charge 2). In addition, an ambulatory ECG (Vrije Universiteit Ambulatory Monitoring System) was used as the gold standard reference method, which resulted in the collection of 102,740 individual heartbeats. A single-subject design was used to hold all variables constant except the wearable devices, while providing a rapid-response design for an initial assessment of wearable accuracy that allows the research cycle to keep pace with technological advancements. Accuracy of these devices compared with the gold standard ECG was assessed using mean error, mean absolute error, and mean absolute percent error. These data were supplemented with Bland-Altman analyses and concordance correlation coefficients to assess agreement between devices.
RESULTS The Apple Watch 3 and Fitbit Charge 2 were generally highly accurate across the 24-hour period. Specifically, the Apple Watch 3 had a mean difference of −1.80 beats per minute (bpm), a mean absolute percent error of 5.86%, and a mean agreement of 95% when compared with the ECG across 24 hours. The Fitbit Charge 2 had a mean difference of −3.47 bpm, a mean absolute percent error of 5.96%, and a mean agreement of 91% when compared with the ECG across 24 hours. These findings varied by condition. CONCLUSIONS The Apple Watch 3 and the Fitbit Charge 2 provided acceptable heart rate accuracy (<±10%) across the 24-hour period and during each activity, except for the Apple Watch 3 during the ADL condition. Overall, these findings provide preliminary support that these devices appear to be useful for implementing ambulatory measurement of cardiac activity in research studies, especially those where the specific advantages of these methods (eg, scalability, low participant burden) are particularly suited to the population or research question.
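The accuracy metrics used in this validation (mean error, mean absolute error, mean absolute percent error, and Bland-Altman limits of agreement) are straightforward to compute from time-aligned heart rate pairs. A minimal sketch, using made-up values rather than the study's data:

```python
import numpy as np

# Hypothetical time-aligned heart rate pairs (bpm): one series from the
# reference ECG, one from a wearable. Values are invented for illustration.
ecg_hr = np.array([62.0, 75.0, 110.0, 58.0, 95.0])
watch_hr = np.array([60.0, 74.0, 105.0, 57.0, 96.0])

diff = watch_hr - ecg_hr
mean_error = diff.mean()                        # signed bias (bpm)
mae = np.abs(diff).mean()                       # mean absolute error (bpm)
mape = (np.abs(diff) / ecg_hr).mean() * 100.0   # mean absolute percent error

# Bland-Altman limits of agreement: bias ± 1.96 × SD of the differences
sd = diff.std(ddof=1)
loa = (mean_error - 1.96 * sd, mean_error + 1.96 * sd)
```

On such aligned data, a device meeting the <±10% criterion used in the study would show a MAPE below 10.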

Sensors
2021
Vol 21 (11)
pp. 3719
Author(s):  
Aoxin Ni ◽  
Arian Azarang ◽  
Nasser Kehtarnavaz

The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods involve the use of a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, this paper compares the deep learning methods whose code is publicly available. The public-domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results show that the deep learning method PhysNet achieves the best heart rate measurement performance among them, with a mean absolute error of 2.57 beats per minute and a mean square error of 7.56 beats per minute.
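A core step shared by such contactless pipelines is reading the heart rate off the dominant spectral peak of the extracted pulse signal. The sketch below illustrates this on a synthetic trace; the frame rate, noise level, and band limits are illustrative assumptions, not taken from any of the compared methods:

```python
import numpy as np

np.random.seed(0)
fs = 30.0                          # assumed camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)       # a 30-s analysis window
pulse_hz = 1.2                     # ground-truth pulse: 72 bpm
# Synthetic pulse trace; an rPPG pipeline would obtain this from skin pixels.
signal = np.sin(2 * np.pi * pulse_hz * t) + 0.3 * np.random.randn(t.size)

# Locate the dominant peak inside a plausible heart rate band (42-240 bpm).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 0.7) & (freqs <= 4.0)
est_hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
```

The window length sets the spectral resolution: with 30 s of data, adjacent frequency bins are 2 bpm apart.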


2019
Vol 2 (1)
Author(s):
Mauricio Villarroel,
Sitthichok Chaichulee,
João Jorge,
Sara Davis,
Gabrielle Green,
...  

Abstract The implementation of video-based non-contact technologies to monitor the vital signs of preterm infants in the hospital presents several challenges, such as the detection of the presence or the absence of a patient in the video frame, robustness to changes in lighting conditions, automated identification of suitable time periods and regions of interest from which vital signs can be estimated. We carried out a clinical study to evaluate the accuracy and the proportion of time that heart rate and respiratory rate can be estimated from preterm infants using only a video camera in a clinical environment, without interfering with regular patient care. A total of 426.6 h of video and reference vital signs were recorded for 90 sessions from 30 preterm infants in the Neonatal Intensive Care Unit (NICU) of the John Radcliffe Hospital in Oxford. Each preterm infant was recorded under regular ambient light during daytime for up to four consecutive days. We developed multi-task deep learning algorithms to automatically segment skin areas and to estimate vital signs only when the infant was present in the field of view of the video camera and no clinical interventions were undertaken. We propose signal quality assessment algorithms for both heart rate and respiratory rate to discriminate between clinically acceptable and noisy signals. The mean absolute error between the reference and camera-derived heart rates was 2.3 beats/min for over 76% of the time for which the reference and camera data were valid. The mean absolute error between the reference and camera-derived respiratory rate was 3.5 breaths/min for over 82% of the time. Accurate estimates of heart rate and respiratory rate could be derived for at least 90% of the time, if gaps of up to 30 seconds with no estimates were allowed.
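The coverage figure quoted above (at least 90% of the time, if gaps of up to 30 seconds are allowed) can be computed from a per-second validity mask. A minimal sketch, in which the gap-bridging rule and the data are both hypothetical simplifications of the paper's procedure:

```python
# `valid` marks, per second, whether a clinically acceptable estimate exists.
# Interior dropouts no longer than `max_gap` seconds are bridged, mirroring
# the "gaps of up to 30 seconds allowed" accounting (simplified sketch).
def coverage_with_gaps(valid, max_gap):
    valid = [bool(v) for v in valid]
    filled = valid[:]
    i, n = 0, len(valid)
    while i < n:
        if not valid[i]:
            j = i
            while j < n and not valid[j]:
                j += 1
            # bridge only gaps bounded by valid estimates on both sides
            if 0 < i and j < n and (j - i) <= max_gap:
                filled[i:j] = [True] * (j - i)
            i = j
        else:
            i += 1
    return sum(filled) / n
```

With one-sample-per-second masks, a 30-s tolerance corresponds to `max_gap=30`.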


2020
Vol 10 (1)
Author(s):
Mauricio Villarroel,
João Jorge,
David Meredith,
Sheera Sutherland,
Chris Pugh,
...  

Abstract A clinical study was designed to record a wide range of physiological values from patients undergoing haemodialysis treatment in the Renal Unit of the Churchill Hospital in Oxford. Video was recorded for a total of 84 dialysis sessions from 40 patients during the course of 1 year, comprising an overall video recording time of approximately 304.1 h. Reference values were provided by two devices in regular clinical use. The mean absolute error between the heart rate estimates from the camera and the average from two reference pulse oximeters (positioned at the finger and earlobe) was 2.8 beats/min for over 65% of the time the patient was stable. The mean absolute error between the respiratory rate estimates from the camera and the reference values (computed from the electrocardiogram and a thoracic expansion sensor—chest belt) was 2.1 breaths/min for over 69% of the time for which the reference signals were valid. To increase the robustness of the algorithms, novel methods were devised for cancelling out aliased frequency components caused by the artificial light sources in the hospital, using auto-regressive modelling and pole cancellation. Maps of the spatial distribution of heart rate and respiratory rate information were developed from the coefficients of the auto-regressive models. Most of the periods for which the camera could not produce a reliable heart rate estimate lasted under 3 min, thus opening the possibility to monitor heart rate continuously in a clinical environment.
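The alias-cancellation idea (fit an auto-regressive model, locate its poles, and delete those at the known interfering frequency) can be illustrated on synthetic data. The model order, frequencies, and 0.5 Hz rejection window below are assumptions for demonstration, not the paper's values:

```python
import numpy as np

np.random.seed(1)
fs = 25.0                              # assumed camera frame rate (Hz)
t = np.arange(0, 20, 1 / fs)
hr_hz, alias_hz = 1.3, 5.0             # 78 bpm pulse + aliased flicker tone
x = (np.sin(2 * np.pi * hr_hz * t)
     + 0.8 * np.sin(2 * np.pi * alias_hz * t)
     + 0.01 * np.random.randn(t.size))

# Least-squares AR fit: x[n] ~ a1*x[n-1] + ... + ap*x[n-p]
order = 8
X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)

# Poles are the roots of z^p - a1*z^(p-1) - ... - ap
poles = np.roots(np.concatenate(([1.0], -a)))
pole_hz = np.abs(np.angle(poles)) * fs / (2 * np.pi)

# Cancel poles within 0.5 Hz of the known interferer, keep the rest
keep = np.abs(pole_hz - alias_hz) > 0.5
kept, kept_hz = poles[keep], pole_hz[keep]

# Strongest remaining pole in a physiological band gives the pulse rate
band = (kept_hz > 0.7) & (kept_hz < 3.0)
est_hr_hz = kept_hz[band][np.argmax(np.abs(kept[band]))]
```

Sinusoidal components appear as poles close to the unit circle, so deleting the interferer's poles leaves the physiological component as the strongest remaining one.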


Proceedings
2020
Vol 49 (1)
pp. 52
Author(s):
Shogo Asanuma,
Yuta Kamibayashi,
Masahito Nagamori,
Hisashi Uchiyama,
Akira Shionoya

Recently, in the field of sports, studies have been actively conducted to collect and analyze human behavior data from various sensors for assisting exercise. However, there are very few studies targeting disabled subjects. The purpose of this study was to suggest a model for heart rate estimation during wheelchair driving using a wearable device and to assist the exercise of wheelchair users. The suggested model estimated the heart rate from the data of 6-axis sensors (accelerations and angular velocities) using machine learning. The sensors were attached to the undercarriage of the wheelchair. Inputs to the suggested model were the acceleration in the driving direction, the slope angle, and the oxygen intake. The suggested model estimated the heart rate every 12 s. When the suggested model was applied to heart rate estimation during normal driving of the wheelchair, it was confirmed that estimation was possible with a mean absolute error of 9.34 bpm.
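The estimation step amounts to a supervised regression from window-level features to heart rate. The sketch below uses ordinary least squares on synthetic data as a stand-in for the paper's machine learning model; the feature ranges and coefficients are invented:

```python
import numpy as np

np.random.seed(0)
# Hypothetical 12-s windows: [forward acceleration (m/s^2), slope (deg),
# oxygen intake (L/min)] -> heart rate (bpm). Relation and noise invented.
X = np.random.rand(200, 3) * np.array([2.0, 10.0, 2.5])
true_w = np.array([8.0, 1.5, 20.0])
hr = 70.0 + X @ true_w + 3.0 * np.random.randn(200)

# Ordinary least squares with an intercept, as a stand-in for the ML model
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, hr, rcond=None)

mae = np.abs(A @ w - hr).mean()   # in-sample mean absolute error (bpm)
```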


Sensors
2019
Vol 19 (13)
pp. 2955
Author(s):
Saif Saad Fakhrulddin,
Sadik Kamel Gharghan,
Ali Al-Naji,
Javaan Chahl

For elderly persons, a fall can cause serious injuries such as a hip fracture or head injury. Here, an advanced first aid system is proposed for monitoring elderly patients with heart conditions that put them at risk of falling and for providing first aid supplies using an unmanned aerial vehicle (UAV). A hybridized fall detection algorithm (FDB-HRT) is proposed based on a combination of acceleration and heart rate thresholds. Five volunteers were invited to evaluate the performance of the heartbeat sensor relative to a benchmark device, and the extracted data were validated using statistical analysis. In addition, the accuracy of fall detection and the recorded locations of fall incidents were validated. The proposed FDB-HRT algorithm was 99.16% and 99.2% accurate with regard to heart rate measurement and fall detection, respectively. In addition, the geolocation error for patient fall incidents based on a GPS module was evaluated by mean absolute error analysis for 17 different locations in three cities in Iraq. The mean absolute error was 1.08 × 10⁻⁵° for latitude and 2.01 × 10⁻⁵° for longitude relative to data from the GPS benchmark system. The results also revealed that in urban areas, the UAV succeeded in all missions and arrived at the patients' locations before the ambulance, with an average time saving of 105 s. Moreover, a time saving of 31.81% was achieved when using the UAV to transport a first aid kit to the patient compared with an ambulance. As a result, we can conclude that, compared with delivering first aid via ambulance, our design greatly reduces delivery time. The proposed advanced first aid system outperformed previous systems presented in the literature in terms of heart rate measurement accuracy, fall detection accuracy, information messages, and UAV arrival time.
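The hybrid detection rule (an acceleration threshold combined with a heart rate threshold) can be sketched as follows; the threshold values are illustrative placeholders, not the calibrated FDB-HRT parameters:

```python
# Illustrative thresholds only; FDB-HRT's calibrated values are not given here.
ACCEL_THRESHOLD_G = 2.5   # impact acceleration magnitude (g)
HR_LOW_BPM = 50           # bradycardia bound
HR_HIGH_BPM = 120         # tachycardia bound

def is_fall(accel_g, heart_rate_bpm):
    """Flag a fall only when an impact coincides with an abnormal heart rate."""
    impact = accel_g > ACCEL_THRESHOLD_G
    abnormal_hr = heart_rate_bpm < HR_LOW_BPM or heart_rate_bpm > HR_HIGH_BPM
    return impact and abnormal_hr
```

Requiring both conditions suppresses false alarms from everyday impacts (sitting down hard, dropping the sensor) that do not perturb the heart rate.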


2021
Author(s):
Carlos Morgado Areia,
Mauro Santos,
Sarah Vollam,
Marco Pimentel,
Louise Young,
...  

BACKGROUND Early warning scores in general wards are commonly limited by intermittent manual measurements; these are recognised as time consuming and as limiting monitoring frequency. Wearable devices may support healthcare staff, improve patient safety, and promote early detection of deterioration. However, available ambulatory monitoring devices need to be tested and validated before clinical implementation. OBJECTIVE The objective of this study was to determine the agreement between a chest-worn patch (VitalPatch®) and a gold standard reference device for heart rate (HR) and respiratory rate (RR) measurements during movement and during gradual desaturation in a controlled environment. METHODS After both the VitalPatch and the gold standard device (Philips MX450) were placed, participants performed 7 different movements in a controlled environment: At Rest, Sit-to-Stand, Tapping, Rubbing, Drinking, Turning Pages, and Using a Tablet. Participants were then gradually made hypoxic, down to 80% peripheral oxygen saturation. The primary outcome measure was accuracy, defined as the mean absolute error (MAE) of the VitalPatch estimates when compared with their gold standard; we defined this as clinically acceptable if within 5 beats per minute (bpm) for HR and 3 respirations per minute (rpm) for RR. RESULTS We acquired complete datasets from 29 participants. In the movement phase, HR estimates were within the pre-specified limits for all movements. For RR, estimates were also inside the acceptable range, with the exception of the Sit-to-Stand and Turning Pages movements, which showed MAEs (95% CI) of 3.05 (2.48, 3.58) rpm and 3.45 (2.71, 4.11) rpm, respectively. In the hypoxia phase, estimates were always within the limits, with overall MAEs for HR and RR of 0.72 (0.66, 0.78) bpm and 1.89 (1.75, 2.03) rpm, respectively. There were no significant differences in VitalPatch performance across the range of oxygen desaturations.
CONCLUSIONS The VitalPatch was highly accurate throughout the movement tests, except for its RR estimation during two movements. The device was reliable throughout the hypoxia stages, with no significant differences in accuracy between normoxia (≥90%), mild hypoxia (85–89.9%), and severe hypoxia (<85%). CLINICALTRIAL ISRCTN61535692
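MAE values with 95% confidence intervals, as reported above, can be obtained for example via a percentile bootstrap over per-window errors. The study does not state its CI method, so this is only one plausible sketch, on synthetic data:

```python
import numpy as np

np.random.seed(0)
# Synthetic per-window HR errors (bpm), standing in for patch-minus-reference
errors = 0.9 * np.random.randn(500)

mae = np.abs(errors).mean()

# Percentile bootstrap: resample the errors with replacement, recompute the
# MAE on each resample, and take the 2.5th and 97.5th percentiles.
boot = np.array([
    np.abs(np.random.choice(errors, errors.size, replace=True)).mean()
    for _ in range(2000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```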


Author(s):  
Antti Vehkaoja,
Mikko Peltokangas,
Jarmo Verho,
Jukka Lekkala

An unobtrusive bed-integrated system for monitoring physiological parameters during sleep is presented and evaluated. The system uses textile electrodes attached to a bed sheet to measure multiple channels of electrocardiogram (ECG). The channels are also combined to form several additional ECG leads, one of which is selected at a time for beat-to-beat (BB) interval detection. The system also includes force sensors located under a bed post for detecting respiration and movements. The movement information is also used to assist heart rate detection, and the combination of ECG-derived respiration information with that derived from the force sensors is investigated. The authors tested the system with ten subjects in one-hour recordings and achieved an average detection coverage of 95.9% and a 99th-percentile absolute error of 3.47 ms for the BB-interval signal. The relative mean absolute error of the detected respiration cycle lengths was 2.1%.
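The two headline metrics (detection coverage and the 99th-percentile absolute error of the beat-to-beat intervals) can be computed as below; the interval data are synthetic stand-ins for the bed-sensor and reference ECG measurements:

```python
import numpy as np

np.random.seed(0)
# Synthetic beat-to-beat (BB) intervals in ms, already matched beat for beat
ref_bb = 800.0 + 50.0 * np.random.randn(1000)    # reference ECG
bed_bb = ref_bb + 1.2 * np.random.randn(1000)    # bed-sensor estimate

abs_err = np.abs(bed_bb - ref_bb)
p99_ms = np.percentile(abs_err, 99)              # 99th-percentile abs. error

# Coverage: share of reference beats for which the system produced an
# interval at all (here, every beat was matched).
coverage_pct = 100.0 * bed_bb.size / ref_bb.size
```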


2021
Author(s):
Sunil K Yadav,
Rahele Kafieh,
Hanna G Zimmermann,
Josef Kauer-Bonin,
Kouros Nouri-Mahdavi,
...  

Intraretinal layer segmentation on macular optical coherence tomography (OCT) images generates non-invasive biomarkers querying neuronal structures with near-cellular resolution. While early deep learning methods have delivered promising results, they have high computing power demands, and a reliable, power-efficient, and reproducible intraretinal layer segmentation is still an unmet need. We propose a cascaded two-stage network for intraretinal layer segmentation, with both networks being compressed versions of U-Net (CCU-INSEG). The first network is responsible for retinal tissue segmentation from OCT B-scans. The second network segments 8 intraretinal layers with high fidelity. By compressing U-Net, we achieve 392-fold and 26-fold reductions in model size and parameters in the first and second networks, respectively. Still, our method delivers almost the same accuracy as U-Net without additional constraints on computation and memory resources. At the post-processing stage, we introduce Laplacian-based outlier detection with layer-surface hole filling by adaptive non-linear interpolation. We trained our method using 17,458 B-scans from patients with autoimmune optic neuropathies, i.e. multiple sclerosis, and healthy controls. Voxel-wise comparison against manual segmentation produces a mean absolute error of 2.3 μm, which is 2.5 times better than the device's own segmentation. Voxel-wise comparison against external multicenter data leads to a mean absolute error of 2.6 μm for glaucoma data using the same gold standard segmentation approach, and a mean absolute error of 3.7 μm compared against an externally segmented reference data set. In 20 macular volume scans from patients with severe disease, 3.5% of B-scan segmentation results were rejected by an experienced grader, whereas this was the case for 41.4% of B-scans segmented with a graph-based reference method.
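The post-processing idea (flag surface points with an abnormally large discrete Laplacian, then fill the resulting holes by interpolation) can be sketched in one dimension. The synthetic surface, threshold rule, and linear interpolation below are simplified assumptions in place of the paper's adaptive non-linear scheme:

```python
import numpy as np

# Synthetic 1-D layer surface (height in px) with one segmentation spike
surface = 10.0 * np.sin(np.linspace(0, np.pi, 50))
surface[20] += 8.0

# Discrete Laplacian highlights points deviating sharply from neighbours
lap = np.abs(np.convolve(surface, [1.0, -2.0, 1.0], mode="same"))
outliers = lap > 4.0 * np.median(lap[1:-1])      # heuristic threshold

# Fill the holes from the surrounding good points (linear stand-in for the
# paper's adaptive non-linear interpolation)
x = np.arange(surface.size)
good = ~outliers
filled = surface.copy()
filled[outliers] = np.interp(x[outliers], x[good], surface[good])
```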


2009
Vol 79 (1)
pp. 150-157
Author(s):
April A. Brown,
William C. Scarfe,
James P. Scheetz,
Anibal M. Silveira,
Allan G. Farman

Abstract Objective: To compare the in vitro reliability and accuracy of linear measurements between cephalometric landmarks on cone beam computed tomography (CBCT) 3D volumetric images, acquired with varying numbers of basis projection images, with direct measurements on human skulls. Materials and Methods: Sixteen linear dimensions between 24 anatomic sites marked on 19 human skulls were directly measured. The skulls were imaged with CBCT (i-CAT, Imaging Sciences International, Hatfield, Pa) at three settings: (a) 153 projections, (b) 306 projections, and (c) 612 projections. The mean absolute error and modality mean (± SD) of linear measurements between landmarks on volumetric renderings were compared to the anatomic truth using a repeated measures general linear model (P ≤ .05). Results: No difference in mean absolute error between the scan settings was found for almost all measurements. The average skull absolute error between marked reference points was less than that between unmarked reference sites. CBCT resulted in lower measurements for nine dimensions (mean difference range: 0.56 mm ± 0.07 mm to 3.1 mm ± 0.12 mm) and a greater measurement for one dimension (mean difference 3.3 mm ± 0.12 mm). No differences were detected between CBCT scan sequences. Conclusions: CBCT measurements were consistent between scan sequences and with direct measurements between marked reference points. Reducing the number of projections for 3D reconstruction did not lead to reduced dimensional accuracy and potentially provides reduced patient radiation exposure. Because the fiducial landmarks on the skulls were not radio-opaque, the inaccuracies found in measurement could be due to the methods applied rather than to innate inaccuracies in the CBCT scan reconstructions or 3D software employed.


Sensors
2021
Vol 21 (21)
pp. 7233
Author(s):
Jayroop Ramesh,
Zahra Solatidehkordi,
Raafat Aburukba,
Assim Sagahyroon

Atrial fibrillation (AF) is a type of cardiac arrhythmia affecting millions of people every year. This disease increases the likelihood of strokes, heart failure, and even death. While dedicated medical-grade electrocardiogram (ECG) devices can enable gold-standard analysis, these devices are expensive and require clinical settings. Recent advances in the capabilities of general-purpose smartphones and wearable technology equipped with photoplethysmography (PPG) sensors increase diagnostic accessibility for most populations. This work aims to develop a single model that can generalize AF classification across the modalities of ECG and PPG with a unified knowledge representation. This is enabled by approximating the transformation of signals obtained from low-cost wearable PPG sensors in terms of Pulse Rate Variability (PRV) to temporal Heart Rate Variability (HRV) features extracted from medical-grade ECG. This paper proposes a one-dimensional deep convolutional neural network that uses HRV-derived features for classifying 30-s heart rhythms as normal sinus rhythm or atrial fibrillation from both ECG- and PPG-based sensors. The model is trained on three MIT-BIH ECG databases and is assessed on a dataset of unseen PPG signals acquired from wrist-worn wearable devices through transfer learning. The model achieved aggregate binary classification performance measures of 95.50% accuracy, 94.50% sensitivity, and 96.00% specificity across a five-fold cross-validation strategy on the ECG datasets. It also achieved 95.10% accuracy, 94.60% sensitivity, and 95.20% specificity on an unseen PPG dataset. The results show considerable promise towards seamless adaptation of gold-standard ECG-trained models for non-ambulatory AF detection with consumer wearable devices through HRV-based knowledge transfer.
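The HRV-derived features feeding such a classifier are standard time-domain measures computable from RR (or PPG peak-to-peak) intervals; the feature set below is illustrative and not necessarily the paper's exact choice:

```python
import numpy as np

def hrv_features(rr_ms):
    """Standard time-domain HRV measures from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),
        "sdnn": rr.std(ddof=1),                    # overall variability (ms)
        "rmssd": np.sqrt(np.mean(d ** 2)),         # short-term variability (ms)
        "pnn50": 100.0 * np.mean(np.abs(d) > 50),  # % successive diffs > 50 ms
    }
```

Irregular rhythms such as AF typically show markedly elevated RMSSD and pNN50 relative to normal sinus rhythm, which is what makes these features discriminative.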

