Vulnerability Prediction Models: A Case Study on the Linux Kernel

Author(s):  
Matthieu Jimenez ◽  
Mike Papadakis ◽  
Yves Le Traon
2014 ◽  
Vol 69 (3) ◽  
pp. 443-450 ◽  
Author(s):  
Louis Anthony (Tony) Cox ◽  
Douglas Popken ◽  
M. Sue Marty ◽  
J. Craig Rowlands ◽  
Grace Patlewicz ◽  
...  

2018 ◽  
Vol 2018 ◽  
pp. 1-21 ◽  
Author(s):  
Xiaomei Xu ◽  
Zhirui Ye ◽  
Jin Li ◽  
Mingtao Xu

Bicycle-sharing systems (BSSs) have become a prominent feature of the transportation network in many cities. Along with the boom of BSSs, cities face the challenge of bicycle unavailability and dock shortages. It is essential to conduct rebalancing operations, the success of which largely depends on the prediction of users’ demand. The objective of this study is to develop users’ demand prediction models based on rental data, which will serve rebalancing operations. First, methods to collect and process the relevant data are presented. Bicycle usage patterns are then examined from both trip-based and station-based aspects to provide guidance for users’ demand prediction. After that, a methodology combining cluster analysis, a back-propagation neural network (BPNN), and comparative analysis is proposed to predict users’ demand: cluster analysis identifies different service types of stations, the BPNN method establishes demand prediction models for each service type, and comparative analysis determines whether the accuracy of the prediction models is improved by distinguishing among stations and between working and nonworking days. Finally, a case study is conducted to evaluate the performance of the proposed methodology. Results indicate that making a distinction among stations and between working and nonworking days when predicting users’ demand improves the accuracy of the prediction models.
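The cluster-then-predict pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the station profiles, cluster count, network size, and the 3-hour input window are all invented assumptions, and a plain k-means plus a one-hidden-layer network stand in for whatever cluster analysis and BPNN configuration the study actually used.

```python
# Hypothetical sketch: cluster stations by hourly rental profile, then train a
# small back-propagation network per cluster. All data and hyperparameters
# here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly rental counts for 20 stations (24 hours each): half peak
# in the morning, half in the evening -- two hypothetical "service types".
morning = rng.poisson(lam=np.r_[np.full(12, 8.0), np.full(12, 2.0)], size=(10, 24))
evening = rng.poisson(lam=np.r_[np.full(12, 2.0), np.full(12, 8.0)], size=(10, 24))
profiles = np.vstack([morning, evening]).astype(float)

def kmeans(X, k=2, iters=50):
    """Plain k-means to identify station service types."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def train_bpnn(X, y, hidden=8, lr=0.01, epochs=2000):
    """One-hidden-layer network trained by back-propagation (MSE loss)."""
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - y                      # gradient of 0.5*MSE w.r.t. pred
        dW2 = h.T @ err / len(X)
        dh = err @ W2.T * (1 - h ** 2)      # back-propagate through tanh
        dW1 = X.T @ dh / len(X)
        W2 -= lr * dW2; b2 -= lr * err.mean(0)
        W1 -= lr * dW1; b1 -= lr * dh.mean(0)
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

labels = kmeans(profiles)

# Per-cluster model: predict next-hour demand from the previous 3 hours,
# using only stations of one service type.
cluster0 = profiles[labels == 0]
X = np.vstack([p[i:i + 3] for p in cluster0 for i in range(21)])
y = np.vstack([p[i + 3] for p in cluster0 for i in range(21)])
model = train_bpnn(X, y)
rmse = float(np.sqrt(((model(X) - y) ** 2).mean()))
print("in-sample RMSE for cluster 0:", round(rmse, 2))
```

Training one network per cluster, rather than one network for all stations, is precisely the distinction the study's comparative analysis evaluates; a working/nonworking-day split would simply partition each cluster's training samples once more.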


2019 ◽  
Vol 26 (12) ◽  
pp. 1448-1457 ◽  
Author(s):  
Sharon E Davis ◽  
Robert A Greevy ◽  
Christopher Fonnesbeck ◽  
Thomas A Lasko ◽  
Colin G Walsh ◽  
...  

Abstract

Objective: Clinical prediction models require updating as performance deteriorates over time. We developed a testing procedure to select updating methods that minimizes overfitting, incorporates the uncertainty associated with updating sample sizes, and is applicable to both parametric and nonparametric models.

Materials and Methods: We describe a procedure to select an updating method for dichotomous outcome models by balancing simplicity against accuracy. We illustrate the test’s properties on simulated scenarios of population shift and on 2 models based on Department of Veterans Affairs inpatient admissions.

Results: In simulations, the test generally recommended no update under no population shift, no update or modest recalibration under case mix shifts, intercept correction under changing outcome rates, and refitting under shifted predictor-outcome associations. The recommended updates provided calibration superior or similar to that achieved with more complex updating. In the case study, however, small update sets led the test to recommend simpler updates than may have been ideal based on subsequent performance.

Discussion: Our test’s recommendations highlighted the benefits of simple updating, as opposed to systematic refitting, in response to performance drift. The complexity of the recommended updating methods reflected sample size and the magnitude of performance drift, as anticipated. The case study highlights the conservative nature of our test.

Conclusions: This new test supports data-driven updating of models developed with both biostatistical and machine learning approaches, promoting the transportability and maintenance of a wide array of clinical prediction models and, in turn, a variety of applications relying on modern prediction tools.
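The updating options the abstract names, ordered from simplest to most complex, can be sketched for a dichotomous-outcome model as below. This is a schematic illustration only: the data, the base logistic model, and the gradient-descent fitting routine are invented, and the authors' selection test itself is not reproduced here.

```python
# Hedged sketch of three model-updating options for a dichotomous outcome:
# intercept correction, logistic recalibration, and full refitting. All data
# and the base model are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=3000):
    """Plain gradient-descent logistic regression (intercept included)."""
    Xb = np.c_[np.ones(len(X)), X]
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

def recalibrate(lp, y, fit_slope, lr=0.1, epochs=3000):
    """Fit sigmoid(a + b * lp) on the base model's linear predictor lp:
    b is fixed at 1 for intercept correction, free for recalibration."""
    a, b = 0.0, 1.0
    for _ in range(epochs):
        p = sigmoid(a + b * lp)
        a -= lr * (p - y).mean()
        if fit_slope:
            b -= lr * ((p - y) * lp).mean()
    return a, b

# "Development" data: the original model is fitted here.
X_dev = rng.normal(size=(2000, 2))
y_dev = rng.binomial(1, sigmoid(-1.0 + X_dev @ np.array([1.0, 0.5])))
w_base = fit_logistic(X_dev, y_dev)

# "Update" data drawn with a drifted outcome rate (intercept shift only).
X_new = rng.normal(size=(500, 2))
y_new = rng.binomial(1, sigmoid(0.0 + X_new @ np.array([1.0, 0.5])))
lp = np.c_[np.ones(len(X_new)), X_new] @ w_base  # base linear predictor

a_ic, _ = recalibrate(lp, y_new, fit_slope=False)    # 1) intercept correction
a_rc, b_rc = recalibrate(lp, y_new, fit_slope=True)  # 2) logistic recalibration
w_refit = fit_logistic(X_new, y_new)                 # 3) full refit

print("observed event rate:      ", round(y_new.mean(), 3))
print("base model, mean pred:    ", round(sigmoid(lp).mean(), 3))
print("intercept-corrected, mean:", round(sigmoid(a_ic + lp).mean(), 3))
```

Under a pure outcome-rate drift, the one-parameter intercept correction already restores calibration-in-the-large, which is why the test recommends it over the more complex options in that scenario: refitting all coefficients on a small update set risks exactly the overfitting the procedure is designed to avoid.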

