Measuring Caloric Intake at the Population Level (NOTION): Protocol for an Experimental Study (Preprint)

2018 ◽  
Author(s):  
Elisa Fuscà ◽  
Anna Bolzon ◽  
Alessia Buratin ◽  
Mariangela Ruffolo ◽  
Paola Berchialla ◽  
...  

BACKGROUND The monitoring of caloric intake is an important challenge for the maintenance of individual and public health. The instruments used so far for dietary monitoring (eg, food frequency questionnaires, food diaries, and telephone interviews) are inexpensive and easy to implement but show important inaccuracies. Alternative methods based on wearable devices and wrist accelerometers have been proposed, yet they have limited accuracy in predicting caloric intake because the analytics are usually not well suited to managing the massive data sets these devices generate.

OBJECTIVE This study aims to develop an algorithm, using recent advances in machine learning methodology, that provides a precise and stable estimate of caloric intake.

METHODS The study will capture four individual eating activities outside the home over 2 months. Twenty healthy Italian adults will be recruited from the University of Padova in Padova, Italy, via email, flyers, and website announcements. Eligibility requirements include age 18 to 66 years and no history of eating disorders. Each participant will be randomized to one of two menus to be eaten on weekdays in a predefined cafeteria in Padova (northeastern Italy). Flows of raw data will be accessed and downloaded from the wearable devices given to study participants and associated with the anthropometric and demographic characteristics of the user (with their written permission). These massive data flows will provide a detailed picture of real-life conditions and will be analyzed with an up-to-date machine learning approach, with the aim of accurately predicting the caloric contribution of individual eating activities. Gold-standard evaluation of the energy content of the eaten foods will be obtained using calorimetric assessments made at the Laboratory of Dietetics and Nutraceutical Research of the University of Padova.

RESULTS The study will last 14 months from July 2017, with a final report by November 2018. Data collection will occur from October to December 2017. From this study, we expect to obtain a series of relevant data that, suitably filtered, could allow the construction of a prototype algorithm able to estimate caloric intake through recognition of the food type and the number of bites. The algorithm should work in real time, be embeddable in a wearable device, and be able to match bite-related movements to the corresponding caloric intake with high accuracy.

CONCLUSIONS Building an automatic method for calculating caloric intake, independent of the black-box processing of the wearable devices marketed so far, has great potential both for clinical nutrition (eg, for assessing cardiovascular compliance or for the prevention of coronary heart disease through proper dietary control) and for public health nutrition, as a low-cost tool for monitoring the eating habits of different segments of the population.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/12116
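The protocol's core idea of estimating calories from a recognized food type plus a bite count can be sketched minimally as follows. This is an illustrative sketch only: the food categories, the per-bite calorie values, and the `estimate_intake` function are hypothetical placeholders, not figures or code from the study.

```python
# Hypothetical average energy per bite (kcal) by food category.
# These values are invented for illustration, not measured in the study.
KCAL_PER_BITE = {
    "pasta": 25.0,
    "salad": 8.0,
    "bread": 20.0,
}

def estimate_intake(bites_by_food: dict) -> float:
    """Estimate total caloric intake (kcal) from per-food bite counts."""
    unknown = set(bites_by_food) - set(KCAL_PER_BITE)
    if unknown:
        raise ValueError(f"no calorie model for: {sorted(unknown)}")
    return sum(KCAL_PER_BITE[food] * n for food, n in bites_by_food.items())

# Example: a meal logged as 12 bites of pasta and 6 bites of salad.
meal = {"pasta": 12, "salad": 6}
print(estimate_intake(meal))  # 12*25 + 6*8 = 348.0 kcal
```

In the study itself, both the food-type recognition and the per-bite energy mapping would come from the machine-learning model trained against the calorimetric gold standard, rather than from a fixed lookup table.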

Nutrients ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 1170
Author(s):  
Giulia Lorenzoni ◽  
Daniele Bottigliengo ◽  
Danila Azzolina ◽  
Dario Gregori

The present study aimed to assess the feasibility and reliability of an automatic food intake measurement device in estimating energy intake from energy-dense foods. Eighteen volunteers aged 20–36 years were recruited from the University of Padova. The device used in the present study was the Bite Counter (Bite Technologies, Pendleton, USA). The rationale of the device is that the wrist movements made in bringing food to the mouth present unique patterns that are recognized and recorded by the Bite Counter. Subjects were asked to wear the Bite Counter on the wrist of the dominant hand, to turn the device on before the first bite, and to turn it off once they had finished the meal. The accuracy of caloric intake estimation differed significantly among the methods used. In addition, the device's accuracy in estimating energy intake varied according to the type and amount of macronutrients present, and the difference was independent of the number of bites recorded. Further research is needed to overcome the current limitations of wearable devices in estimating caloric intake, which is not independent of the food being eaten.
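The wrist-roll pattern idea behind bite counting can be sketched roughly as follows: a bite is flagged when the wrist-roll velocity rises above a positive threshold (wrist rolling toward the mouth) and later falls below a negative threshold (rolling back), with a minimum spacing between events. The Bite Counter's actual firmware is proprietary; the thresholds, minimum gap, and sample signal below are illustrative assumptions, not the device's real parameters.

```python
def count_bites(roll_velocity, pos_thresh=10.0, neg_thresh=-10.0, min_gap=5):
    """Count bite-like events in a sequence of wrist-roll velocity samples."""
    bites = 0
    armed = False          # True after an upward roll has been seen
    last_event = -min_gap  # index of the last detected bite
    for i, v in enumerate(roll_velocity):
        if not armed and v > pos_thresh and i - last_event >= min_gap:
            armed = True   # wrist rolled toward the mouth
        elif armed and v < neg_thresh:
            bites += 1     # wrist rolled back: one bite completed
            armed = False
            last_event = i
    return bites

# Two synthetic bite motions separated by a pause.
signal = [0, 12, 15, 3, -12, -14, 0, 0, 0, 0, 11, 14, 2, -13, 0]
print(count_bites(signal))  # 2
```

The study's finding that accuracy varies with macronutrient content follows naturally from such a design: a bite count alone carries no information about what, or how much, is on the utensil.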


2021 ◽  
Author(s):  
Anna Goldenberg ◽  
Bret Nestor ◽  
Jaryd Hunter ◽  
Raghu Kainkaryam ◽  
Erik Drysdale ◽  
...  

Commercial wearable devices are surfacing as an appealing mechanism to detect COVID-19 and potentially other public health threats, due to their widespread use. To assess the validity of wearable devices as population health screening tools, it is essential to evaluate predictive methodologies based on wearable devices by mimicking their real-world deployment. Several points must be addressed to move from statistically significant differences between infected and uninfected cohorts to COVID-19 inferences on individuals. We demonstrate the strengths and shortcomings of existing approaches on a cohort of 32,198 individuals who experienced influenza-like illness (ILI), 204 of whom reported testing positive for COVID-19. We show that, despite commonly made design mistakes resulting in overestimation of performance, properly designed wearables can be used effectively as part of a detection pipeline. For example, knowing the week of year, combined with naive randomised test-set generation, leads to substantial overestimation of COVID-19 classification performance, at 0.73 AUROC. However, an average AUROC of only 0.55 +/- 0.02 would be attainable in a simulation of real-world deployment, owing to the shifting prevalence of COVID-19 and non-COVID-19 ILI available to trigger further testing. In this work we show how to train a machine learning model to differentiate ILI days from healthy days, followed by a survey to differentiate COVID-19 from influenza and unspecified ILI based on symptoms. In a forthcoming week, models can expect a sensitivity of 0.50 (0-0.74, 95% CI) while using the wearable device to reduce the burden of surveys by 35%. The corresponding false positive rate is 0.22 (0.02-0.47, 95% CI). In the future, serious consideration must be given to the design, evaluation, and reporting of wearable device interventions if they are to be relied upon as part of frequent testing infrastructures for COVID-19 or other public health threats.
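The evaluation pitfall described above can be illustrated with a minimal split function: randomly shuffling days into train and test sets lets the model exploit future information such as the week of year, inflating performance, whereas a deployment-faithful evaluation trains only on data strictly before the test week. The record format here is a hypothetical stand-in for the study's wearable data.

```python
from datetime import date, timedelta

# Hypothetical per-user, per-day records (label 1 = ILI day, 0 = healthy day).
records = [
    {"user": "u1", "day": date(2020, 3, 2), "label": 0},
    {"user": "u1", "day": date(2020, 3, 9), "label": 1},
    {"user": "u2", "day": date(2020, 3, 16), "label": 0},
    {"user": "u2", "day": date(2020, 3, 23), "label": 1},
]

def prospective_split(records, test_week_start):
    """Train on data strictly before the test week; test on that week only."""
    week_end = test_week_start + timedelta(days=7)
    train = [r for r in records if r["day"] < test_week_start]
    test = [r for r in records if test_week_start <= r["day"] < week_end]
    return train, test

train, test = prospective_split(records, date(2020, 3, 16))
print(len(train), len(test))  # 2 1
```

Repeating this split for each successive week simulates the shifting COVID-19/ILI prevalence the authors identify as the reason the deployed AUROC falls to around 0.55.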


Author(s):  
Kunal Parikh ◽  
Tanvi Makadia ◽  
Harshil Patel

Dengue is unquestionably one of the biggest health concerns in India and many other developing countries, and it has cost many lives. Approximately 390 million dengue infections occur worldwide each year, of which roughly 500,000 are severe and about 25,000 result in death. Many factors influence dengue transmission, such as temperature, humidity, precipitation, and inadequate public health infrastructure. In this paper, we propose a method for performing predictive analytics on a dengue dataset using the k-nearest neighbours (KNN) machine-learning algorithm. Such analysis would help predict future cases and could thereby save many lives.
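The KNN approach the paper proposes can be sketched in a few lines: a query point is classified by majority vote among its k nearest training points. The climate features (temperature °C, humidity %, rainfall mm) and outbreak labels below are invented for illustration; they are not the paper's dataset.

```python
from collections import Counter
import math

# (temperature, humidity, rainfall) -> 1 = outbreak week, 0 = normal week
train = [
    ((31.0, 85.0, 120.0), 1),
    ((30.0, 80.0, 100.0), 1),
    ((24.0, 55.0, 10.0), 0),
    ((22.0, 50.0, 5.0), 0),
]

def knn_predict(x, train, k=3):
    """Majority vote among the k training points nearest to x (Euclidean)."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((29.5, 82.0, 110.0), train))  # 1 (outbreak-like conditions)
```

In practice the features would be standardized before computing distances, since rainfall in millimetres otherwise dominates the Euclidean metric.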


Mousaion ◽  
2019 ◽  
Vol 37 (1) ◽  
Author(s):  
Olefhile Mosweu

Most curriculum components of archival graduate programmes consist of contextual knowledge, archival knowledge, complementary knowledge, a practicum, and scholarly research. The practicum, now commonly known as experiential learning, is widely accepted in library and information studies (LIS) education as necessary and important. It is through experiential learning that, over and above the theoretical aspects of a profession, students are given the opportunity to learn by doing in a workplace environment. The University of Botswana's Master's in Archives and Records Management (MARM) programme includes a six-week experiential learning component whose purpose is to expose prospective archivists and/or records managers to the real archival world in terms of practice as informed by archival theory. The main objective of the study was to determine the extent to which the University of Botswana's experiential learning component exposes students to real-life archival work so that they can put into practice the theoretical aspects learnt in the classroom, as intended by the university guidelines. The study adopted a qualitative research design and collected data through interviews with participants selected through purposive and snowball sampling strategies; documentary review supplemented the interviews. The data were analysed thematically in line with the research objectives. The study determined that experiential learning does indeed expose students to the real world of work and thus helps to bridge the gap between archival theory and practice for students without archives and records management work experience. For those with prior archival experience, however, experiential learning adds little value. The study recommends that students with prior archives and records management experience should instead undertake supervised research and write a research essay in a chosen thematic area of archives and records management.


2021 ◽  
Vol 14 (3) ◽  
pp. 1-21
Author(s):  
Roy Abitbol ◽  
Ilan Shimshoni ◽  
Jonathan Ben-Dov

The task of assembling fragments in a puzzle-like manner into a composite picture plays a significant role in archaeology, as it supports researchers in their attempts to reconstruct historic artifacts. In this article, we propose a method for matching and assembling pairs of ancient papyrus fragments containing mostly unknown scriptures. Papyrus paper is manufactured from papyrus plants and therefore displays typical thread patterns resulting from the plant's stems. The proposed algorithm is founded on the hypothesis that these thread patterns contain unique local attributes, such that nearby fragments show similar patterns reflecting the continuations of the threads. We posit that these patterns can be exploited using image processing and machine learning techniques to identify matching fragments. The algorithm and system we present support the quick and automated classification of matching pairs of papyrus fragments, as well as the geometric alignment of the pairs against each other. The algorithm consists of a series of steps based on deep-learning and machine learning methods. The first step is to decompose the problem of matching fragments into the smaller problem of finding thread-continuation matches in local edge areas (squares) between pairs of fragments. This phase is solved using a convolutional neural network that ingests raw images of the edge areas and produces local matching scores. This stage yields very high recall but low precision. We therefore use these scores to decide whether entire fragment pairs match by establishing an elaborate voting mechanism, which we enhance with geometric alignment techniques that provide additional spatial information. Finally, we feed all the data collected from these steps into a Random Forest classifier to produce a higher-order classifier capable of predicting whether a pair of fragments is a match.
Our algorithm was trained on a batch of fragments excavated from the Dead Sea caves and dated to circa the first century BCE. The algorithm shows excellent results on a validation set of similar origin and condition. We then ran the algorithm against a real-life set of fragments for which we have no prior knowledge or labeling of matches. This test batch is considered extremely challenging due to its poor condition and the small size of its fragments; numerous researchers have sought matches within it with very little success. Our algorithm's performance on this batch was sub-optimal, returning a relatively large ratio of false positives. However, it proved quite useful by eliminating 98% of the possible matches, thus greatly reducing the amount of manual inspection needed. Indeed, experts who reviewed the results identified some matches as potentially true and referred them for further investigation.
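The voting step that turns high-recall, low-precision patch scores into a fragment-pair decision can be sketched simply: per-patch CNN match scores along two fragment edges are thresholded into votes, and a pair is proposed as a candidate match only if enough patches agree. The scores, thresholds, and `pair_vote` function below are illustrative stand-ins, not the paper's actual parameters.

```python
def pair_vote(patch_scores, score_thresh=0.8, min_votes=3):
    """Return (votes, is_candidate_match) from per-patch match scores."""
    votes = sum(1 for s in patch_scores if s >= score_thresh)
    return votes, votes >= min_votes

# A high-recall patch scorer fires on many patches of a true match,
# but only sporadically on a non-match.
true_pair = [0.95, 0.91, 0.88, 0.40, 0.85, 0.30]
false_pair = [0.82, 0.10, 0.22, 0.35, 0.15, 0.05]

print(pair_vote(true_pair))   # (4, True)
print(pair_vote(false_pair))  # (1, False)
```

In the full pipeline, these vote counts, together with spatial features from the geometric alignment, would then feed the Random Forest that makes the final match prediction.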


Author(s):  
Dhruvil Shah ◽  
Devarsh Patel ◽  
Jainish Adesara ◽  
Pruthvi Hingu ◽  
Manan Shah

Although the education sector is improving more quickly than ever with the help of advancing technologies, there are still many areas yet to be explored, and there will always be room for further enhancement. Two of the most disruptive technologies, machine learning (ML) and blockchain, have helped replace conventional approaches used in the education sector with highly technical and effective methods. In this study, a system is proposed that combines these two technologies and helps resolve problems such as forged educational records and fake degrees. The idea is that if these technologies can be merged into a system that uses blockchain to store student data and ML to accurately predict future job roles for students after graduation, further counterfeiting and insecurity in student achievements can be avoided. Further, ML models will be used to train on and predict valid data. The system provides the university with an official decentralized database of records for students who have graduated from it, and it provides employers with a platform where the educational records of employees can be verified. Students can share their educational information in their e-portfolios on platforms such as LinkedIn, a platform for managing professional profiles. This allows students, companies, and other industries to verify student data more easily.
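The tamper-evidence property that makes blockchain attractive for storing student records can be sketched minimally: each block carries the hash of the previous block, so any edit to an earlier record invalidates every later hash. The record fields below are hypothetical, and a real system would add consensus, digital signatures, and access control on top of this linkage.

```python
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    """Wrap a record in a block whose hash covers the record and the link."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def chain_is_valid(chain) -> bool:
    """Recompute every hash and check each block links to its predecessor."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": block["prev"]},
                             sort_keys=True)
        if block["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for rec in [{"student": "A", "degree": "BSc"}, {"student": "B", "degree": "MSc"}]:
    block = make_block(rec, prev)
    chain.append(block)
    prev = block["hash"]

print(chain_is_valid(chain))          # True
chain[0]["record"]["degree"] = "PhD"  # forge the first record...
print(chain_is_valid(chain))          # False: the tampering is detectable
```

An employer verifying a degree would recompute the hashes exactly as `chain_is_valid` does, which is why a forged record cannot pass unnoticed once the chain is shared.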


Author(s):  
Anil Babu Payedimarri ◽  
Diego Concina ◽  
Luigi Portinale ◽  
Massimo Canonico ◽  
Deborah Seys ◽  
...  

Artificial Intelligence (AI) and Machine Learning (ML) have expanded their utilization in different fields of medicine. During the SARS-CoV-2 outbreak, AI and ML were also applied to the evaluation and/or implementation of public health interventions aimed at flattening the epidemiological curve. This systematic review aims to evaluate the effectiveness of AI and ML when applied to public health interventions to contain the spread of SARS-CoV-2. Our findings showed that quarantine appeared to be the most effective strategy for containing COVID-19. Nationwide lockdown also showed a positive impact, whereas social distancing should be considered effective only in combination with other interventions, including the closure of schools and commercial activities and the limitation of public transportation. Our findings also showed that all interventions should be initiated early in the pandemic and continued for a sustained period. Despite the study's limitations, we conclude that AI and ML could help policy makers define strategies for containing the COVID-19 pandemic.

