How satisfied are you with your job? Estimating the reliability of scores on a single‐item job satisfaction measure

2020 ◽ Vol 28 (3) ◽ pp. 297-309
Author(s): Jisoo Ock

2021
Author(s): Seth Woods

Teacher stress has been studied for decades, and the negative outcomes of excessive stress, such as burnout and poor teacher retention, are well known. The present study focuses on the relationship between teacher stress and teacher job satisfaction. The Transactional Model of stress specifies that coping must be accounted for when considering a person's stress reaction, because a person's coping capacity and resources determine whether a stress reaction occurs. The present study seeks to answer the question: Does coping moderate the relationship between teacher stress and job satisfaction? Moderation analysis was conducted using data from randomized trials examining a leadership training program, and the results showed that coping moderated the relationship between stress and job satisfaction. Adding to the study's practical importance, all three constructs (stress, coping, and job satisfaction) were measured with single items, making them easy for practitioners to assess among their staff.
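As a hedged illustration of the kind of moderation analysis the abstract describes, the sketch below fits a regression with a stress × coping interaction term using Python's statsmodels. The data and column names are hypothetical, not the study's.

```python
# Minimal moderation-analysis sketch (hypothetical data, not the study's).
import pandas as pd
import statsmodels.formula.api as smf

# One row per teacher; single-item measures of each construct (1-5 scale).
df = pd.DataFrame({
    "stress":       [2, 4, 5, 3, 1, 4, 5, 2, 3, 4],
    "coping":       [4, 3, 2, 5, 4, 4, 1, 3, 5, 2],
    "satisfaction": [5, 3, 2, 4, 5, 4, 1, 4, 5, 2],
})

# Mean-center the predictors so each main effect is interpreted
# at the average level of the other predictor.
df["stress_c"] = df["stress"] - df["stress"].mean()
df["coping_c"] = df["coping"] - df["coping"].mean()

# "stress_c * coping_c" expands to stress_c + coping_c + stress_c:coping_c;
# a significant interaction coefficient is the moderation effect.
model = smf.ols("satisfaction ~ stress_c * coping_c", data=df).fit()
print(model.summary())
```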


2016
Author(s): Heather N. Odle-Dusseau, Leslie B. Hammer, Tori L. Crain, Todd E. Bodner

2005 ◽ Vol 19 (3) ◽ pp. 194-198
Author(s): Christyn L. Dolbier, Judith A. Webster, Katherine T. McCalister, Mark W. Mallon, Mary A. Steinhardt

1993
Author(s): Beryl Hesketh, Dianne Gardner

2017 ◽ Vol 10 (2) ◽ pp. 234-257
Author(s): Jeffrey M. Cucina, Philip T. Walmsley, Ilene F. Gast, Nicholas R. Martin, Patrick Curtin

One of the typical roles of industrial–organizational (I-O) psychologists working as practitioners is administering employee surveys that measure job satisfaction/engagement. Traditionally, this work has involved developing (or choosing) the items for the survey, administering them to employees, analyzing the data, and providing stakeholders with summary results (e.g., percentages of positive responses, item means). In recent years, I-O psychologists have moved into uncharted territory via survey key driver analysis (SKDA), which aims to identify the most critical items in a survey for action-planning purposes. Typically, the analysis involves correlating (or regressing) a self-report criterion item (e.g., “considering everything, how satisfied are you with your job”) with (or on) each of the remaining survey items in an attempt to identify which items are “driving” job satisfaction/engagement. An index score (i.e., a scale score formed from several items) can also serve as the criterion instead of a single item. Because the criterion measure (whether a single item or an index) is internal to the same survey from which the predictors are drawn, this practice is distinct from linkage research. The methodology is not widely covered in survey methodology coursework, and there are few peer-reviewed articles on it; yet a number of practitioners are marketing this service to their clients. In this focal article, a group of practitioners with extensive applied survey research experience uncovers several methodological issues with SKDA, using data from a large multiorganizational survey to back up the claims. One issue is that SKDA ignores the psychometric reality that item standard deviations affect which items are chosen as drivers. Another is that the analysis ignores the factor structure of survey item responses. Furthermore, conducting the analysis anew each time a survey is administered conflicts with the evidence for a lack of situational and temporal specificity. It is also problematic to imply causal relationships from the correlational data seen in most surveys. Most surprisingly, randomly choosing items out of a hat yields validities similar to those from conducting the analysis. Thus, we recommend that survey providers stop conducting SKDA until they can produce science that backs up the practice. These issues, together with the scant literature examining it, make rigorous evaluation of SKDA a timely inquiry.
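To make the procedure concrete, here is a hedged sketch of SKDA as the abstract describes it, together with the “items out of a hat” comparison. The survey data are simulated and the item names are hypothetical; this is an illustration of the technique, not the article's analysis.

```python
# Hedged SKDA sketch on simulated survey data (names and data hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# 500 respondents, 10 survey items on a 1-5 scale, plus a criterion item
# ("considering everything, how satisfied are you with your job").
items = pd.DataFrame(rng.integers(1, 6, size=(500, 10)),
                     columns=[f"item_{i:02d}" for i in range(1, 11)])
criterion = items.mean(axis=1) + rng.normal(0, 0.5, size=500)

# SKDA step: correlate the criterion with each remaining item and rank;
# the top correlates are what vendors report as "key drivers".
driver_corrs = items.corrwith(criterion).sort_values(ascending=False)
print("Top 'drivers':\n", driver_corrs.head(3))

def multiple_r(columns):
    """Multiple correlation of the criterion with a set of items."""
    X = np.column_stack([np.ones(len(items)), items[columns].to_numpy()])
    beta, *_ = np.linalg.lstsq(X, criterion.to_numpy(), rcond=None)
    return np.corrcoef(X @ beta, criterion)[0, 1]

# The article's check: a randomly drawn item set often predicts the
# criterion about as well as the "key driver" set does.
random_set = list(rng.choice(items.columns, size=3, replace=False))
print("R, key drivers :", multiple_r(list(driver_corrs.head(3).index)))
print("R, random items:", multiple_r(random_set))
```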


1996 ◽ Vol 78 (2) ◽ pp. 631-634
Author(s): John P. Wanous, Arnon E. Reichers

Single-item measures of employees' attitudes and beliefs are generally discouraged because their internal-consistency reliability cannot be estimated, raising the concern that reliability may be unacceptably low, particularly compared with multi-item scales measuring the same construct. A method for estimating the reliability of a single-item measure is demonstrated on original data that included both a single-item and a multiple-item measure of three constructs: Over-all Job Satisfaction, Perceived Amount of Participation, and Desired Amount of Participation in decision-making. The average minimum estimated reliability for these single-item measures is .57; a realistic, though still conservative, estimate of their likely minimum reliability is at least .70.
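The estimation logic here appears to be the correction for attenuation solved for the single item's reliability: if the single item and the multi-item scale measure the same construct (true-score correlation of 1.0), then the observed correlation satisfies r_observed = sqrt(rel_single × rel_scale), giving rel_single = r_observed² / rel_scale as a lower bound. A minimal sketch, with illustrative numbers rather than the article's data:

```python
# Lower-bound single-item reliability via the correction for attenuation,
# assuming a true-score correlation of 1.0 between the single item and the
# multi-item scale measuring the same construct. Values are illustrative.

def min_single_item_reliability(r_single_scale: float,
                                scale_reliability: float) -> float:
    """r_observed = sqrt(rel_single * rel_scale) when the true-score
    correlation is 1.0, so rel_single >= r_observed**2 / rel_scale."""
    return r_single_scale ** 2 / scale_reliability

# Hypothetical: a single item correlating .68 with a scale of alpha = .85.
print(round(min_single_item_reliability(0.68, 0.85), 2))  # 0.54
```

If the true-score correlation is actually below 1.0, the estimated reliability rises, which is why the formula yields a minimum.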


1997 ◽ Vol 82 (2) ◽ pp. 247-252
Author(s): John P. Wanous, Arnon E. Reichers, Michael J. Hudy
