Predicting battery life with early cyclic data by machine learning

2019 ◽  
Vol 1 (6) ◽  
Author(s):  
Shan Zhu ◽  
Naiqin Zhao ◽  
Junwei Sha
Author(s):  
Ahmed Imteaj ◽  
M. Hadi Amini

Federated Learning (FL) is a recently developed distributed machine learning technique that allows network clients to perform model training at the edge rather than sharing raw data with a centralized server. Unlike conventional distributed machine learning approaches, the hallmark of FL is local computation and model generation on the client side, ultimately protecting sensitive information. Most existing FL approaches assume that each FL client has sufficient computational resources and can accomplish a given task without facing any resource-related issues. However, in a heterogeneous Internet of Things (IoT) environment, a major portion of the FL clients may have low resource availability (e.g., limited computational power, bandwidth, and battery life). Consequently, resource-constrained FL clients may respond very slowly, or may be unable to execute the expected number of local iterations. Further, any FL client can inject an inappropriate model during a training phase, prolonging convergence time and wasting the resources of all network clients. In this paper, we propose a novel tri-layer FL scheme, Federated Proximal, Activity and Resource-Aware Lightweight model (FedPARL), that reduces model size by performing sample-based pruning, avoids misbehaving clients by examining their trust scores, and allows a partial amount of work based on resource availability. The pruning mechanism is particularly useful when dealing with resource-constrained FL-based IoT (FL-IoT) clients, where the lightweight training model consumes fewer resources to reach a target convergence. We evaluate each interested client's resource availability before assigning a task, monitor their activities, and update their trust scores based on their previous performance.
To tackle system and statistical heterogeneity, we adapt a re-parameterization and generalization of the state-of-the-art Federated Averaging (FedAvg) algorithm. This modification allows clients to perform variable or partial amounts of work according to their resource constraints. We demonstrate that simultaneously coupling pruning, resource and activity awareness, and the re-parameterization of FedAvg leads to more robust convergence of FL in an IoT environment.
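The partial-work idea can be sketched with a FedProx-style proximal local update. This is a minimal illustration in plain Python, assuming a scalar model; the `budget` parameter and the work-weighted aggregation are illustrative assumptions, not the paper's exact algorithm:

```python
def local_update(w_global, grad_fn, mu=0.1, lr=0.1, max_epochs=10, budget=1.0):
    """One client's proximal local update (FedProx-style sketch).

    `budget` in (0, 1] scales how many of the requested epochs a
    resource-constrained client actually runs (partial work).
    The proximal term mu * (w - w_global) keeps the local model
    close to the global one despite heterogeneous local data.
    """
    w = w_global
    epochs = max(1, int(max_epochs * budget))  # partial amount of work
    for _ in range(epochs):
        g = grad_fn(w) + mu * (w - w_global)
        w -= lr * g
    return w, epochs


def aggregate(updates):
    """Server-side average, weighted by the work each client completed."""
    total = sum(e for _, e in updates)
    return sum(w * e / total for w, e in updates)
```

A client with `budget=0.25` runs only a quarter of the requested epochs, yet its partial update still contributes to the aggregate instead of being dropped.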


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Prakash Venugopal ◽  
S Siva Shankar ◽  
C Phillip Jebakumar ◽  
Rishab Agarwal ◽  
Hassan Haes Alhelou ◽  
...  

Author(s):  
Andrea K McIntosh ◽  
Abram Hindle

Machine learning is a popular method of learning functions from data to represent and to classify sensor inputs, multimedia, emails, and calendar events. Smartphone applications have been integrating more and more intelligence in the form of machine learning. Machine learning functionality now appears on most smartphones as voice recognition, spell checking, word disambiguation, face recognition, translation, spatial reasoning, and even natural language summarization. Excited app developers who want to use machine learning on mobile devices face one serious constraint that they did not face on desktop computers or cloud virtual machines: the end-user's mobile device has limited battery life, so computationally intensive tasks can harm the end-user's phone availability by draining the battery of its stored energy. How can developers use machine learning and respect the limited battery life of mobile devices? Currently there are few guidelines for developers who want to employ machine learning on mobile devices yet are concerned about the software energy consumption of their applications. In this paper we combine empirical measurements of many different machine learning algorithms with complexity theory to provide concrete and theoretically grounded recommendations to developers who want to employ machine learning on smartphones.
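One way such recommendations can be grounded is by timing a workload and converting runtime into an energy estimate. A rough sketch follows; the ~2 W power figure and the toy training loops are illustrative assumptions, not measurements from the paper (real studies use hardware power monitors):

```python
import time


def estimate_energy(fn, avg_power_watts, *args):
    """Rough energy estimate: wall-clock time x assumed average power.

    avg_power_watts is an assumed device draw (e.g. ~2 W for a busy
    smartphone SoC); a real measurement needs a power monitor.
    Returns joules.
    """
    t0 = time.perf_counter()
    fn(*args)
    seconds = time.perf_counter() - t0
    return seconds * avg_power_watts


# Toy stand-ins for training costs of two algorithm classes:
def train_linear(n):
    # ~O(n): a single pass over the data, like naive Bayes counting
    return sum(i * 0.5 for i in range(n))


def train_quadratic(n):
    # ~O(n^2 / 100): pairwise work, like a kernel method on a subsample
    return sum(i * j for i in range(n) for j in range(n // 100))
```

Comparing the two estimates makes the complexity-theory point concrete: the asymptotically heavier learner costs measurably more energy for the same data size.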


2016 ◽  
Author(s):  
Andrea K McIntosh ◽  
Abram Hindle



Author(s):  
Scott Small ◽  
Sara Khalid ◽  
Paula Dhiman ◽  
Shing Chan ◽  
Dan Jackson ◽  
...  

Purpose: Lowering the sampling rate of accelerometers in physical activity research can dramatically increase study monitoring periods through longer battery life; however, the effect of reduced sampling rate on activity metric validity is poorly documented. We therefore aimed to assess the effect of reduced sampling rate on measuring physical activity both overall and by specific behavior types. Methods: Healthy adults wore sets of two Axivity AX3 accelerometers on the dominant wrist and hip for 24 hr. At each location one accelerometer recorded at 25 Hz and the other at 100 Hz. Overall acceleration magnitude, time in moderate to vigorous activity, and behavioral activities were calculated and processed using both linear and nearest neighbor resampling. Correlation between acceleration magnitude and activity classifications at both sampling rates was calculated and linear regression was performed. Results: Of the 54 total participants, 45 contributed >20 hr of hip wear time and 51 contributed >20 hr of wrist wear time. Strong correlation was observed between 25- and 100-Hz sampling rates in overall activity measurement (r = .97–.99), yet consistently lower activity was observed in data collected at 25 Hz (3.1%–13.9%). Reduced sleep and light activity and increased sedentary time were classified in 25-Hz data by machine learning models. Discrepancies were greater when linear interpolation resampling was used in postprocessing. Conclusions: The 25- and 100-Hz accelerometer data are highly correlated with predictable differences, which can be accounted for in interstudy comparisons. Sampling rate and resampling methods should be consistently reported in physical activity studies, carefully considered in study design, and tailored to the outcome of interest.
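The two resampling strategies compared above can be illustrated with a short sketch. The exact interpolation scheme and the ENMO-style magnitude (Euclidean norm minus one gravity, floored at zero) are assumptions about a typical pipeline, not the study's actual code:

```python
def resample_nearest(x, ratio):
    """Pick the nearest original sample for each output sample
    (ratio = 4 maps 100 Hz down to 25 Hz)."""
    n_out = int(len(x) / ratio)
    return [x[min(len(x) - 1, round(i * ratio))] for i in range(n_out)]


def resample_linear(x, ratio):
    """Linearly interpolate between neighbouring samples."""
    n_out = int(len(x) / ratio)
    out = []
    for i in range(n_out):
        t = i * ratio
        j = int(t)
        frac = t - j
        j2 = min(j + 1, len(x) - 1)
        out.append(x[j] * (1 - frac) + x[j2] * frac)
    return out


def vector_magnitude(ax, ay, az):
    """ENMO-style magnitude: Euclidean norm minus 1 g, floored at zero."""
    return [max(0.0, (x * x + y * y + z * z) ** 0.5 - 1.0)
            for x, y, z in zip(ax, ay, az)]
```

Linear interpolation acts as a mild low-pass filter, which is one plausible reason the study saw larger discrepancies with that resampling method.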


2020 ◽  
Author(s):  
Scott R Small ◽  
Sara Khalid ◽  
Paula Dhiman ◽  
Shing Chan ◽  
Dan Jackson ◽  
...  

Abstract Purpose: Lowering the sampling rate of accelerometer devices can dramatically increase study monitoring periods through longer battery life; however, the validity of its output is poorly documented. We therefore aimed to assess the effect of reduced sampling rate on measuring physical activity both overall and by specific behaviour types. Methods: Healthy adults wore two Axivity AX3 accelerometers on the dominant wrist and two on the hip for 24 hours. At each location one accelerometer recorded at 25 Hz and the other at 100 Hz. Overall acceleration magnitude, time in moderate-to-vigorous activity, and behavioural activities were calculated using standard methods. Correlation between acceleration magnitude and activity classifications at both sampling rates was calculated and linear regression was performed. Results: 54 participants wore both hip and wrist monitors, with 45 of the participants contributing >20 hours of wear time at the hip and 51 contributing >20 hours of wear time at the wrist. Strong correlation was observed between 25 Hz and 100 Hz sampling rates in overall activity measurement (r = 0.962 to 0.991), yet consistently lower overall acceleration was observed in data collected at 25 Hz (12.3% to 12.8%). Excellent agreement between sampling rates was observed in all machine learning classified activities (r = 0.850 to 0.952). Wrist-worn vector magnitude measured at 25 Hz (Acc25) can be compared to 100 Hz (Acc100) data using the transformation Acc100 = 1.038*Acc25 + 3.310. Conclusions: 25 Hz and 100 Hz accelerometer data are highly correlated with predictable differences which can be accounted for in inter-study comparisons. Sampling rate should be consistently reported in physical activity studies, carefully considered in study design, and tailored to the outcome of interest.
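The reported wrist transformation can be applied directly when harmonizing datasets collected at the two rates; a one-line sketch (units assumed to be milli-g, matching typical Axivity summary output):

```python
def acc25_to_acc100(acc25):
    """Map a 25 Hz vector-magnitude value to its 100 Hz equivalent
    using the fitted linear transform Acc100 = 1.038 * Acc25 + 3.310."""
    return 1.038 * acc25 + 3.310
```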


2021 ◽  
Author(s):  
Surawich Kasempong ◽  
Niyom Kanokwareerat ◽  
Boonyalit Tangjitkongpittaya ◽  
Sarunyoo Setakornnukul ◽  
Apinan Laipanich ◽  
...  

Abstract In PTTEP's offshore fields, more than 100 hybrid Solar-TEG power units with individual VRLA battery banks are installed on wellhead platforms to power the process. With a large amount of varied equipment in remote locations, the traditional, resource-intensive maintenance approach is not cost-effective, especially during an oil-price crisis. In 2019, the "Hybrid Power Solar-TEG Predictive Maintenance" project was established to develop a predictive model and transform the maintenance process to total predictive maintenance. The project began with three platforms as a pilot. The operating model was built by machine learning using historical data recorded in the PI system, maintenance records, and other relevant information such as manufacturer manuals, international standards, and related white papers. The modelled algorithm was embedded in an application, developed in Python, to predict the ageing and performance of the battery banks on the pilot wellhead platforms. In 2020, the project continued by building a model of the Thermo-Electric Generator (TEG) and extending coverage to an additional thirty-seven (37) platforms. Lower Depth of Discharge (DoD), higher ambient temperature, and lower charging performance are signs of battery deterioration, while lower supplied current from the power source is a sign of underperformance. All parameters were ingested for pattern recognition so that the algorithm can predict the remaining life of the key equipment. The Eyeball method was used by the developers to train the algorithm on the various charging patterns, with the aim of evaluating the DoD of each battery bank. Apart from battery life prediction, DoD is used to determine the energy left in the battery after night operation, indicating the remaining runtime. By leveraging machine learning, all failure patterns are recognized.
The application operates in real-time and provides early alarms to the persons in charge when a failure potential is detected. The results are visualized in PowerBI to show the latest status of the power units on each platform. The maintenance approach has thus been completely converted from run-to-failure to predictive maintenance. Long-lead spare parts, e.g. battery cells, can be procured in advance, and spare inventory can be optimized to actual demand. In addition, offshore supervisors can accurately identify defective battery banks and proactively recover them in time to minimize unplanned shutdowns. The modelled algorithm was developed in-house based on technical information and maintenance records. Although the system is live, preventive maintenance according to IEEE 1188 is retained to collect more field data and improve the model's accuracy. In addition, the model's analyzed information, such as battery runtime and DoD, has revealed the hidden actual design margin of the power system; platform CAPEX can thus be reduced by removing the excess margin.
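The runtime-from-DoD calculation reduces to simple bookkeeping. A minimal sketch, assuming a constant load and a linear capacity model (real VRLA banks derate with temperature, discharge rate, and age, which the project's trained model would capture):

```python
def remaining_runtime_hours(capacity_ah, dod_fraction, load_current_a):
    """Hours of battery runtime left, estimated from depth of discharge.

    capacity_ah    : nominal bank capacity in amp-hours
    dod_fraction   : fraction of capacity already discharged (0..1)
    load_current_a : assumed constant load in amperes
    """
    if not 0.0 <= dod_fraction <= 1.0:
        raise ValueError("DoD must be a fraction between 0 and 1")
    if load_current_a <= 0:
        raise ValueError("load current must be positive")
    remaining_ah = capacity_ah * (1.0 - dod_fraction)
    return remaining_ah / load_current_a
```

For example, a 100 Ah bank at 40% DoD supplying a steady 5 A load has roughly 12 hours of operation left before full discharge, which is the kind of figure the early-alarm logic would compare against the time to the next charging window.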

