Explaining variations in the findings of presenteeism research: A meta-analytic investigation into the moderating effects of construct operationalizations and chronic health.

2018 ◽  
Vol 23 (4) ◽  
pp. 584-601 ◽  
Author(s):  
Alisha McGregor ◽  
Rajeev Sharma ◽  
Christopher Magee ◽  
Peter Caputi ◽  
Donald Iverson

2013 ◽  
Vol 2013 (1) ◽  
p. 14765
Author(s):  
Peihua Fan ◽  
Qiaozhuan Liang ◽  
Heng Liu ◽  
Mingjun Hou

2013 ◽  
Vol 27 (4) ◽  
pp. 283-293 ◽  
Author(s):  
Lars Behrmann ◽  
Elmar Souvignier

Single studies suggest that the effectiveness of certain instructional activities depends on teachers' judgment accuracy. However, sufficient empirical evidence is still lacking. In this longitudinal study (N = 75 teachers and 1,865 students), we assessed whether the effectiveness of teacher feedback in a standardized reading program was moderated by judgment accuracy. For discriminant validation, we also examined moderating effects of teachers' judgment accuracy on their classroom management skills. As expected, multilevel analyses revealed larger gains in reading comprehension when teachers provided students with frequent feedback and simultaneously demonstrated high judgment accuracy. For classroom management skills, neither interactions nor main effects on reading comprehension were found. For gains in reading strategy knowledge, both feedback and classroom management skills showed main effects, but no significant interactions with judgment accuracy. The implications of these results are discussed.


Methodology ◽  
2016 ◽  
Vol 12 (3) ◽  
pp. 89-96 ◽  
Author(s):  
Tyler Hamby ◽  
Robert A. Peterson

Abstract. Using two meta-analytic datasets, we investigated the effect of two scale-item characteristics – the number of item response categories and the item response-category label format – on the reliability of multi-item rating scales. The first dataset contained 289 reliability coefficients harvested from 100 samples measuring Big Five traits. The second contained 2,524 reliability coefficients harvested from 381 samples measuring a wide variety of constructs in psychology, marketing, management, and education. We performed moderator analyses on both datasets using the two item characteristics and their interaction. As expected, reliability increased with the number of item response categories; more importantly, there was a significant interaction between the number of response categories and the label format. Increasing the number of response categories raised reliability more for scale items with all response categories labeled than for items with other label formats. We suggest that this interaction may reflect both statistical and psychological factors. These results help explain why previous findings on the relationships between the two scale-item characteristics and reliability have been mixed.

