Trading-Off Machine Learning Algorithms towards Data-Driven Administrative-Socio-Economic Population Health Management

Computers ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 4
Author(s):  
Silvia Panicacci ◽  
Massimiliano Donati ◽  
Francesco Profili ◽  
Paolo Francesconi ◽  
Luca Fanucci

Together with population ageing, the number of people suffering from multimorbidity is increasing and is expected to reach more than half of the population by 2035. This group comprises the highest-risk patients, who are also the heaviest users of healthcare systems. Early identification of this sub-population can markedly improve quality of life and reduce healthcare costs. In this paper, we describe a population health management tool, based on state-of-the-art intelligent algorithms and starting from administrative and socio-economic data, for the early identification of high-risk patients. The study covers the population of the Local Health Unit of Central Tuscany in 2015, amounting to 1,670,129 residents. After a trade-off analysis over machine learning models and input data, Random Forest applied to one year of historical data achieves the best results, outperforming state-of-the-art models. The most important variables for this model, in terms of mean minimal depth, accuracy decrease and Gini decrease, turn out to be age and certain groups of drugs, such as high-ceiling diuretics. Thanks to its low inference time and reduced memory usage, the resulting model supports real-time risk prediction updates whenever new data become available, giving general practitioners the opportunity to adopt personalised medicine early.
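The variable-importance ranking described above can be sketched as follows. This is an illustrative assumption, not the authors' actual pipeline: the synthetic data, feature names, and use of scikit-learn are stand-ins. Gini decrease corresponds to scikit-learn's impurity-based `feature_importances_`, and accuracy decrease to permutation importance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for one year of administrative data:
# age plus drug-exposure flags (e.g. high-ceiling diuretics).
rng = np.random.default_rng(0)
n = 2000
age = rng.integers(18, 95, n)
diuretics = rng.integers(0, 2, n)
other_drug = rng.integers(0, 2, n)
X = np.column_stack([age, diuretics, other_drug])

# Toy high-risk label: risk rises with age and diuretic exposure.
p = 1 / (1 + np.exp(-(0.06 * (age - 60) + 1.5 * diuretics - 1.0)))
y = (rng.random(n) < p).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Gini decrease (mean decrease in impurity) per feature.
gini = clf.feature_importances_
# Accuracy decrease, estimated via permutation importance.
perm = permutation_importance(clf, X, y, n_repeats=5, random_state=0)

for name, g, a in zip(["age", "diuretics", "other_drug"],
                      gini, perm.importances_mean):
    print(f"{name}: gini_decrease={g:.3f}, accuracy_decrease={a:.3f}")
```

Mean minimal depth, the third measure the abstract cites, has no built-in scikit-learn equivalent; it would require walking each fitted tree to find the shallowest split on each feature.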

2017 ◽  
Author(s):  
Lincoln Sheets

Risk analysis and population health management can improve health outcomes, but improved risk stratification is needed to manage healthcare costs. Analysis of 157 publications on translational implementations of "risk stratification in population health management of chronic disease" showed a consensus that population health management and risk stratification can improve outcomes, but found uncertainty over best methods for risk prediction and controversy over the cost savings. The consensus of another 85 publications on the methodologies of "data mining for predictive healthcare analytics" was that clinically interpretable machine learning techniques are more appropriate than "black box" techniques for structured big data sources in healthcare, and the "area under the curve" of a prediction model's sensitivity versus one-minus-specificity is a standard and reliable way to measure the model's discrimination. This study used clinically interpretable machine-learning algorithms, combined with simple but powerful data analytic techniques such as cost analysis and data visualization, to evaluate and improve risk stratification for a managed patient population. This study retrospectively observed 10,000 mid-Missouri Medicare and Medicaid patients between 2012 and 2014. Cost and utilization analyses, statistical clustering, contrast mining, and logistic regression were used to identify patients within a managed population at risk for higher healthcare costs, demonstrate longitudinal changes in risk stratification, and characterize detailed differences between high-risk and low-risk patients. The two highest risk stratification tiers comprised only 21% of patients but accounted for 43% of prospective charges. Patients in the most expensive sub-cluster of the most expensive risk tier were nearly twice as costly as high-risk patients on average. 
Combining contrast mining with logistic regression predicted the most expensive 5% of patients with an area under the curve of 84%. All the strategies used in this study, from the simplest to the most sophisticated, produced useful insights. By predicting, in clinically interpretable terms, the small number of patients who will incur the majority of healthcare expenses, these methods can help population health managers focus preventive and longitudinal care more effectively. These models, and similar models developed by integrating diverse informatics strategies, could improve health outcomes, delivery, and costs.
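The AUC-based evaluation described above can be sketched in a few lines. This is a hedged illustration under stated assumptions: the synthetic charges, the two features, and the scikit-learn calls stand in for the study's actual cohort and contrast-mining-derived predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Toy stand-in: flag the most expensive 5% of patients and score a
# classifier by area under the ROC curve (sensitivity vs
# one-minus-specificity), the discrimination measure the study uses.
rng = np.random.default_rng(1)
n = 5000
prior_charges = rng.lognormal(8, 1, n)        # last year's charges ($)
chronic_count = rng.poisson(2, n)             # chronic conditions
X = np.column_stack([np.log(prior_charges), chronic_count])

# Future charges correlate with prior charges and chronic burden.
future_charges = (prior_charges * rng.lognormal(0.1, 0.5, n)
                  + 500 * chronic_count)
y = (future_charges >= np.quantile(future_charges, 0.95)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```

An AUC of 0.5 means no discrimination and 1.0 perfect discrimination, so the study's 84% indicates a strong but imperfect ranking of high-cost patients.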


Algorithms ◽  
2020 ◽  
Vol 13 (4) ◽  
pp. 102 ◽  
Author(s):  
Fernando López-Martínez ◽  
Edward Rolando Núñez-Valdez ◽  
Vicente García-Díaz ◽  
Zoran Bursac

Big data and artificial intelligence are currently two of the most important and fastest-growing drivers of innovation and predictive analytics in healthcare, leading the digital healthcare transformation. The Keralty organization is already working on developing an intelligent big data analytic platform based on machine learning and data integration principles. We discuss how this platform is the new pillar for the organization to improve population health management, value-based care, and upcoming challenges in healthcare. The benefits of using this new data platform for community and population health include better healthcare outcomes, improved clinical operations, reduced costs of care, and generation of accurate medical information. Several machine learning algorithms implemented by the authors can use the large standardized datasets integrated into the platform to improve the effectiveness of public health interventions, diagnosis, and clinical decision support. The data integrated into the platform come from Electronic Health Records (EHR), Hospital Information Systems (HIS), Radiology Information Systems (RIS), and Laboratory Information Systems (LIS), as well as data generated by public health platforms, mobile data, social media, and clinical web portals. This massive volume of data is integrated using big data techniques for storage, retrieval, processing, and transformation. This paper presents the design of a digital health platform in a healthcare organization in Colombia to integrate operational, clinical, and business data repositories with advanced analytics to improve the decision-making process for population health management.


2019 ◽  
Vol 156 (6) ◽  
pp. S-1275-S-1276
Author(s):  
David A. Jacob ◽  
Vera Yakovchenko ◽  
Linda Chia ◽  
Andrew Himsel ◽  
Diana Ruiz ◽  
...  

2020 ◽  
Vol 26 (3) ◽  
pp. 212
Author(s):  
Deborah Davies

Primary Sense is a new data extraction, analysis and reporting tool that the Gold Coast Primary Health Network (GCPHN) has developed to enable practical and effective population health management both in general practice and at a regional level. Once installed, the tool de-identifies data within the practice before running it through various clinical risk algorithms to create practical information that can easily be actioned within the general practice business model in at least two ways. The first is to generate up-to-date reports of patients who are most likely to benefit from specific interventions or occasions of service. The second is to identify potentially serious medication safety issues, alerting clinicians in real time at the point of prescribing. Formal live testing of the system was completed in nine practices, where 22 managers and nurses and 42 GPs used the tool over a 5-month period in 2019. The live test monitored the use of reports and alerts, and regular feedback from users enabled small but important improvements to the tool. Practice teams successfully used the reports to target specific groups of patients with outstanding care needs or who were at greatest risk of adverse health outcomes. The results of the live test showed that users found Primary Sense easy to use and beneficial to general practice. The next phase of this project is now underway to further trial the scalability and change management requirements for full implementation of Primary Sense. As more practices adopt the tool, the aggregated data will increasingly help to support population health planning, commissioning of local services, active health surveillance and other related activities.


2014 ◽  
Author(s):  
Sarah Klein ◽  
Douglas McCarthy ◽  
Alexander Cohen

Iproceedings ◽  
2016 ◽  
Vol 2 (1) ◽  
pp. e17
Author(s):  
Sashi Padarthy ◽  
Cristina Crespo ◽  
Keri Rich ◽  
Nagaraja Srivatsan

2020 ◽  
Author(s):  
Joseph Prinable ◽  
Peter Jones ◽  
David Boland ◽  
Alistair McEwan ◽  
Cindy Thamrin

BACKGROUND The ability to continuously monitor breathing metrics may have implications for general health as well as respiratory conditions such as asthma. However, few studies have focused on breathing, owing to a lack of available wearable technologies. OBJECTIVE To examine the performance of two machine learning algorithms in extracting breathing metrics from a finger-based pulse oximeter, which is amenable to long-term monitoring. METHODS Pulse oximetry data were collected from 11 healthy and 11 asthma subjects who breathed at a range of controlled respiratory rates. UNET and Long Short-Term Memory (LSTM) algorithms were applied to the data, and the results were compared against breathing metrics derived from respiratory inductance plethysmography measured simultaneously as a reference. RESULTS Both the UNET and LSTM models provided breathing metrics that were strongly correlated with those from the reference signal (all p<0.001, except for the inspiratory:expiratory ratio). The following relative mean biases (95% confidence intervals) were observed, UNET vs LSTM: inspiration time 1.89 (-52.95, 56.74)% vs 1.30 (-52.15, 54.74)%, expiration time -3.70 (-55.21, 47.80)% vs -4.97 (-56.84, 46.89)%, inspiratory:expiratory ratio -4.65 (-87.18, 77.88)% vs -5.30 (-87.07, 76.47)%, inter-breath interval -2.39 (-32.76, 27.97)% vs -3.16 (-33.69, 27.36)%, and respiratory rate 2.99 (-27.04, 33.02)% vs 3.69 (-27.17, 34.56)%. CONCLUSIONS Both machine learning models show strong correlation and good comparability with the reference, with low bias though wide variability, for deriving breathing metrics in asthma and healthy cohorts. Future efforts should focus on improving the performance of these models, e.g. by increasing the size of the training dataset at the lower breathing rates. CLINICALTRIAL Sydney Local Health District Human Research Ethics Committee (#LNR\16\HAWKE99 ethics approval).
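The relative-mean-bias figures above can be sketched as follows. This is a minimal sketch on synthetic data, assuming the intervals follow the Bland-Altman convention of bias ± 1.96 SD of the percentage errors; the exact computation used in the study is not specified here, and the inspiration-time values below are invented stand-ins.

```python
import numpy as np

def relative_bias_limits(estimate, reference):
    """Relative mean bias (%) and 95% limits for model vs reference."""
    rel = 100.0 * (estimate - reference) / reference  # percent error
    bias = rel.mean()
    sd = rel.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical reference inspiration times (s) and model estimates
# with a small systematic offset and wide random scatter, mimicking
# the low-bias / wide-variability pattern reported above.
rng = np.random.default_rng(2)
ref_ti = rng.uniform(1.0, 3.0, 200)
est_ti = ref_ti * (1 + rng.normal(0.02, 0.25, 200))

bias, lo, hi = relative_bias_limits(est_ti, ref_ti)
print(f"inspiration time: bias {bias:.2f}% (95% interval {lo:.2f}, {hi:.2f})")
```

A small bias with wide limits, as here, matches the abstract's conclusion: the models are accurate on average but individual-breath estimates scatter widely.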


Author(s):  
Xabier Rodríguez-Martínez ◽  
Enrique Pascual-San-José ◽  
Mariano Campoy-Quiles

This review article presents the state-of-the-art in high-throughput computational and experimental screening routines with application in organic solar cells, including materials discovery, device optimization and machine-learning algorithms.

