judgmental forecasting
Recently Published Documents


TOTAL DOCUMENTS: 57 (FIVE YEARS: 0)
H-INDEX: 17 (FIVE YEARS: 0)

Omega
2019, Vol 87, pp. 46-56
Author(s): Clint L.P. Pennings, Jan van Dalen, Laurens Rook

2019, Vol 57 (7), pp. 1695-1711
Author(s): Hyo Young Kim, Yun Shin Lee, Duk Bin Jun

Purpose — Forecasting processes in organizational settings rely largely on human judgment, so it is important to examine ways to improve the accuracy of these judgmental forecasts. The purpose of this paper is to test the effect of relative performance feedback on judgmental forecasting accuracy.

Design/methodology/approach — This paper is based on a controlled laboratory experiment.

Findings — The authors show that feedback ranking participants' forecasting performance improves their accuracy relative to participants who receive no such feedback. The effectiveness of relative performance feedback depends on the content of the feedback as well as on whether accurate forecasting is linked to additional financial rewards. Relative performance feedback is more effective when subjects are told they rank behind other participants than when they are told they rank ahead. This finding is consistent with loss aversion: low-ranked individuals view their performance as a loss and work harder to avoid it, while top performers tend to slack off. Finally, the authors find that adding monetary rewards for top performers reduces the effectiveness of relative performance feedback, particularly for individuals ranked near the bottom.

Originality/value — One way to improve forecasting accuracy when forecasts rely on human judgment is to design an effective incentive system. Despite the crucial role of judgmental forecasts in organizations, little attention has been devoted to this topic. This study adds to the literature in this field.
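The ranking mechanism at the heart of this experiment is straightforward to sketch. The Python snippet below is a minimal illustration with invented participant data — not the authors' actual materials — that ranks participants by mean absolute percentage error, the kind of relative performance feedback the study manipulates:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error of one participant's forecasts."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

def rank_feedback(actuals, forecasts_by_participant):
    """Rank participants by accuracy (1 = most accurate)."""
    errors = {p: mape(actuals, f) for p, f in forecasts_by_participant.items()}
    ranked = sorted(errors, key=errors.get)
    return {p: i + 1 for i, p in enumerate(ranked)}, errors

# Hypothetical data: participant A forecasts closely, B more roughly.
actuals = [100, 120, 90, 110]
forecasts = {
    "A": [98, 118, 95, 108],
    "B": [110, 100, 80, 130],
}
ranks, errors = rank_feedback(actuals, forecasts)
print(ranks)  # A ranks ahead of B
```

Under the paper's loss-aversion finding, it is participant "B" — told they rank behind — who would be expected to improve most in subsequent rounds.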


2019, Vol 3 (3), pp. 35
Author(s): Gruetzemacher

In this paper we describe a holistic AI forecasting framework that draws on a broad body of literature from disciplines such as forecasting, technological forecasting, futures studies and scenario planning. A review of this literature leads us to propose a new class of scenario planning techniques that we call scenario mapping techniques. These include scenario network mapping, cognitive maps and fuzzy cognitive maps, as well as a new method we refer to as judgmental distillation mapping. This proposed technique builds on scenario mapping and judgmental forecasting techniques and is intended to integrate a wide variety of forecasts into a technological map with probabilistic timelines. Judgmental distillation mapping is the centerpiece of the holistic forecasting framework, in which it is used both to inform a strategic planning process and to inform future iterations of the forecasting process. Together, the framework and new technique form a holistic rethinking of how we forecast AI. We also discuss the strengths and weaknesses of the framework, its implications for practice and its implications for the research priorities of AI forecasting researchers.
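Of the scenario mapping techniques listed, fuzzy cognitive maps have the most standard computational core: each concept holds an activation level in [0, 1], and each step squashes a weighted sum of the causal influences on it. A minimal sketch follows — the three-concept map and its weights are invented for illustration and are not taken from the paper:

```python
import numpy as np

def fcm_step(state, weights):
    """One update of a fuzzy cognitive map: each concept's next
    activation is a sigmoid-squashed weighted sum of the others."""
    return 1.0 / (1.0 + np.exp(-(weights @ state)))

def run_fcm(state, weights, steps=50, tol=1e-4):
    """Iterate until the concept activations settle (or steps run out)."""
    for _ in range(steps):
        new_state = fcm_step(state, weights)
        if np.max(np.abs(new_state - state)) < tol:
            break
        state = new_state
    return state

# Hypothetical 3-concept map: research funding -> compute -> AI capability.
# Row i holds the causal weights flowing INTO concept i.
W = np.array([[0.0, 0.0, 0.0],
              [0.7, 0.0, 0.0],   # funding drives compute
              [0.2, 0.8, 0.0]])  # funding and compute drive capability
initial = np.array([0.9, 0.1, 0.1])
print(run_fcm(initial, W))
```

The fixed point the iteration reaches — not the initial activations — is what a scenario mapping exercise would read off as the map's qualitative forecast.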


2019, Vol 32 (5), pp. 536-549
Author(s): Paul Goodwin, Sinan Gönül, Dilek Önkal, Ayşe Kocabıyıkoğlu, Celile Itır Göğüş

2018, Vol 25 (3), pp. 402-424
Author(s): Vera Shanshan Lin

This study aims to evaluate the accuracy of different judgmental forecasting tasks, compare the judgmental forecasting behaviour of tourism researchers and practitioners, and explore the validity of experts' judgmental behaviour using Hong Kong visitor arrivals forecasts over the period 2011Q2−2015Q4. A Delphi-based judgmental forecasting procedure was employed through the Hong Kong Tourism Demand Forecasting System, an online forecasting support system, to collect and combine experts' adjusted forecasts. The study evaluates forecasting performance and explores the characteristics of judgmental adjustment behaviour through a group of error measures and statistical tests. The findings suggest a positive correlation between forecast accuracy and the level of data variability: experts' adjustments are more beneficial in achieving higher accuracy for series with higher variability. Industry practitioners' forecasts outperformed those of academic researchers, particularly for short-term forecasts, although no significant difference was found between the two panels in making directionally correct forecasts. Experts' judgmental intervention was found most useful for those series most in need of adjustment. The size of adjustment had a strong and significantly positive association with the direction of forecast adjustment, but no statistically significant relationship was found between accuracy improvement and adjustment size.
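The directional-correctness comparison between the two panels can be illustrated in a few lines of Python. The series below are invented, and the measure is one plausible formalization (forecast movement relative to the last observed actual) rather than necessarily the exact test used in the study:

```python
import numpy as np

def directional_accuracy(actuals, forecasts):
    """Share of periods where the forecast moved in the same direction
    as the actual series, judged against the last observed actual."""
    a, f = np.asarray(actuals, float), np.asarray(forecasts, float)
    actual_dir = np.sign(np.diff(a))          # did the series go up or down?
    forecast_dir = np.sign(f[1:] - a[:-1])    # did the forecast call it?
    return float(np.mean(actual_dir == forecast_dir))

def adjustment_size(statistical, adjusted):
    """Signed expert adjustments to the baseline statistical forecasts."""
    return np.asarray(adjusted, float) - np.asarray(statistical, float)

arrivals = [100, 108, 103, 115]   # hypothetical quarterly actuals
baseline = [ 99, 104, 107, 109]   # hypothetical statistical forecasts
expert   = [101, 107, 104, 114]   # after hypothetical judgmental adjustment
print(directional_accuracy(arrivals, expert))
print(adjustment_size(baseline, expert))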


2016, Vol 32 (1), pp. 44-60
Author(s): Zoe Theocharis, Nigel Harvey

2015, Vol 36 (1), pp. 33-45
Author(s): Matthias Seifert, Enno Siemsen, Allègre L. Hadida, Andreas B. Eisingerich
