probabilistic average
Recently Published Documents


TOTAL DOCUMENTS: 4 (five years: 1)
H-INDEX: 1 (five years: 0)

2021, pp. 000313482110111
Author(s): Sydney N. Char, Joshua A. Bloom, Danielle DeMarco, Abhishek Chatterjee

Background: Surgical options for breast cancer are numerous and span multiple surgical disciplines. Decision analyses aid surgeons in making the most cost-effective choice, reducing health care expenditure while maximizing patient outcomes. In this study, we aimed to evaluate the existing breast surgery cost-effectiveness literature against the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) validated scoring system.

Methods: A PRISMA search was performed for cost-effectiveness studies within breast surgery. Articles were scored against the CHEERS criteria on a 0-24 scale, and qualitative data were collected. Subgroup analysis compared pre-CHEERS (published in 2013 or earlier) and post-CHEERS (published in 2014 or later) cohorts. Chi-squared analysis compared where studies lost points between cohorts.

Results: Of 2279 articles screened, 46 were included. The average CHEERS score was 18.18. Points were most often lost for characterizing heterogeneity, followed by discount rate, incremental costs and outcomes, and abstract. Quality-adjusted life year was the most commonly used health outcome, with visual model or analog scales as the most common measure of effectiveness, obtained primarily from surgeons or physicians. Most articles characterized uncertainty by deterministic sensitivity analysis, followed by both deterministic and probabilistic, then probabilistic alone. Average CHEERS scores were similar between the pre- and post-CHEERS cohorts (17.67 vs. 18.40, P > .05). There were, however, several significant differences in where articles lost points between the two cohorts.

Discussion: To standardize the reporting of results, cost-effectiveness studies in breast surgery should adhere to the current CHEERS criteria and aim to better characterize heterogeneity in their analyses.


2018, Vol 25 (1), pp. 123-134
Author(s): Nodari Vakhania

Abstract: The computational complexity of an algorithm is traditionally measured for the worst and the average case. The worst-case estimation guarantees a certain worst-case behavior of a given algorithm, but it might be rough, since in “most instances” the algorithm may perform significantly better. The probabilistic average-case analysis claims to derive the average performance of an algorithm, say, for an “average instance” of the problem in question. That instance, however, may be far from the average of the problem instances arising in a given real-life application, so the average-case analysis would likewise provide an unrealistic estimate. We suggest that, in general, probabilistic models could be used more widely for a more accurate estimation of algorithm efficiency. For instance, the quality of the solutions delivered by an approximation algorithm may also be estimated in the “average” probabilistic case. Such an approach would estimate the quality of the solutions the algorithm delivers for the problem instances most common in a given application. As we illustrate, probabilistic modeling can also be used to derive an accurate time complexity performance measure, distinct from the traditional probabilistic average-case time complexity measure. Such an approach could be particularly useful when the traditional average-case estimation is still rough or is not possible at all.
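The gap the abstract describes between a worst-case bound and a distribution-specific average-case estimate can be illustrated with a minimal sketch (not from the paper): counting comparisons made by a first-element-pivot quicksort on its worst-case input versus on inputs drawn from an assumed application distribution, here taken to be uniformly random permutations.

```python
import random

def quicksort_comparisons(arr):
    """Count element comparisons made by a simple quicksort with a
    first-element pivot (no randomization)."""
    count = 0

    def sort(lst):
        nonlocal count
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        count += len(rest)  # pivot is compared against every other element
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    sort(arr)
    return count

n = 200

# Worst case for a first-element pivot: an already sorted input,
# giving (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons.
worst = quicksort_comparisons(list(range(n)))

# Probabilistic average case: sample inputs from the distribution assumed
# to model the application (uniform random permutations) and average.
random.seed(0)
trials = [quicksort_comparisons(random.sample(range(n), n)) for _ in range(50)]
avg = sum(trials) / len(trials)

print(worst, round(avg))  # the sampled average is far below the Θ(n²) worst case
```

If the application's real input distribution differs from the one sampled here, the empirical average shifts accordingly, which is exactly the abstract's point: an "average instance" is only meaningful relative to a stated distribution.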

