The Evaluator Effect in Usability Studies: Problem Detection and Severity Judgments

Author(s):  
Niels Ebbe Jacobsen
Morten Hertzum
Bonnie E. John

Usability studies are commonly used in industry and applied in research as a yardstick for other usability evaluation methods. Though usability studies have been studied extensively, one potential threat to their reliability has been left virtually untouched: the evaluator effect. In this study, four evaluators individually analyzed four videotaped usability test sessions. Only 20% of the 93 detected problems were detected by all evaluators, and 46% were detected by only a single evaluator. From the total set of 93 problems the evaluators individually selected the ten problems they considered most severe. None of the selected severe problems appeared on all four evaluators' top-10 lists, and 4 of the 11 problems that were considered severe by more than one evaluator were only detected by one or two evaluators. Thus, both detection of usability problems and selection of the most severe problems are subject to considerable individual variability.

2016
Vol 24 (e1)
pp. e55-e60
Author(s):  
Reza Khajouei
Misagh Zahiri Esfahani
Yunes Jahani

Objective: There are several user-based and expert-based usability evaluation methods that may perform differently according to the context in which they are used. The objective of this study was to compare 2 expert-based methods, heuristic evaluation (HE) and cognitive walkthrough (CW), for evaluating usability of health care information systems. Materials and methods: Five evaluators independently evaluated a medical office management system using HE and CW. We compared the 2 methods in terms of the number of identified usability problems, their severity, and the coverage of each method. Results: In total, 156 problems were identified using the 2 methods. HE identified a significantly higher number of problems related to the “satisfaction” attribute (P = .002). The number of problems identified using CW concerning the “learnability” attribute was significantly higher than those identified using HE (P = .005). There was no significant difference between the number of problems identified by HE, based on different usability attributes (P = .232). Results of CW showed a significant difference between the number of problems related to usability attributes (P < .0001). The average severity of problems identified using CW was significantly higher than that of HE (P < .0001). Conclusion: This study showed that HE and CW do not differ significantly in terms of the number of usability problems identified, but they differ based on the severity of problems and the coverage of some usability attributes. The results suggest that CW would be the preferred method for evaluating systems intended for novice users and HE for users who have experience with similar systems. However, more studies are needed to support this finding.


Author(s):  
Regina Bernhaupt
Kristijan Mihalic
Marianna Obrist

Evaluating mobile applications and devices is particularly challenging given the variability of users, uses, and environments involved. This chapter introduces usability evaluation methods (UEMs) for mobile applications. Over the past decades various usability evaluation methods have been developed and implemented to improve and assure easy-to-use user interfaces and systems. Since most of the so-called ‘classical’ methods have demonstrated shortcomings when used in the field of mobile applications, they were broadened, varied, and changed to meet the demands of testing usability for mobile applications. This chapter presents a selection of these ‘classical’ methods and introduces some methodological variations for testing usability in the area of mobile devices and applications. It argues for a combination of both field evaluation methods and traditional laboratory testing to cover different phases in the user-centered design and development process.


Author(s):  
Merle Conyer

Usability evaluation is the analysis of the design of a product or system in order to evaluate the match between users and a product or system within a particular context. Usability evaluation is a dynamic process throughout the life cycle of a product or system. Conducting evaluation both with and without end-users significantly improves the chances of success. Six usability evaluation methods and six data collection techniques are discussed, including advantages and limitations of each. Recommendations are made regarding the selection of particular evaluation methods and recording techniques to evaluate different elements of usability.


Author(s):  
Hannu Haapala
Piia Nurkka
Kim Kaustell
Tiina Mattila
Juha Suutarinen

One of the primary goals of research has always been to put the newest results into practice. This is especially the case in engineering research. Recently, as productivity has gained importance as a quality measure for research, rapid application of results has become even more important. Consequently, applicability has been raised to a major criterion in applications for public funding, thus promoting product development as an integrated part of research projects. No product (or research result) has any impact if it is not taken into practical use. The end customers of research are expected to take the developed products into active practical use; without that phase, all our efforts are in vain.

Usability is an important part of user acceptance. According to Nielsen (1993), system acceptability includes a social and a practical part. To be practically acceptable, a product should be, for example, economical, compatible, reliable, and useful. To be useful, the product should be functionally suitable for the user’s tasks and usable. Usability includes learnability, efficiency in use, memorability, lack of errors in operation, and subjective pleasure.

In agricultural research there are distinct usability problems in the phase of taking results into practical use. Of course, there are challenges also in the initial phase of research, such as choosing research topics, and later in the initial phase of product development. Usability, however, largely dictates user experience and thus decides whether a product is taken into wide use or not. Consequently, MTT Agricultural Engineering Research has set usability and acceptability research as an important research topic. Usability in agricultural engineering is a complex issue since the context of use is variable; mobile work is typical of agricultural producers. In this study, an example of a usability evaluation is presented.
Mobile work in the context of electronic control of precision combined drilling is evaluated. The research themes were:
1. How great a challenge is usability in Precision Agriculture (PA)? Is it the cause of poor market penetration?
2. Which usability evaluation methods are applicable to PA? Are there special issues in PA, or in agricultural engineering generally, that limit the feasibility of some methods?
3. What kinds of usability problems can be detected with one selected method (heuristic evaluation)? A demonstration of usability evaluation: the Human-Machine Interface (HMI) of a precision combined drill.

According to the literature, usability has not been a central issue in electronics development in agriculture. Poor experiences of unacceptable operation could be one reason why customers do not rely on new electronic control systems such as those of PA. There are multiple potential usability evaluation methods for agricultural engineering. The results of the case study show that heuristic evaluation is a suitable method for detecting design deficiencies in the electronic control of mobile PA. To get a wider picture, further studies with other methods and applications should be done.


Author(s):  
Regina Bernhaupt

In order to develop easy-to-use multimodal interfaces for mobile applications, effective usability evaluation methods (UEMs) are an essential component of the development process. Over the past decades, various usability evaluation methods have been developed and implemented to improve and assure easy-to-use user interfaces and systems. However, most of the so-called ‘classical’ methods exhibit shortcomings when used in the field of mobile applications, especially when addressing multimodal interaction (MMI). Hence, several ‘classical’ methods were broadened, varied, and changed to meet the demands of testing usability for multimodal interfaces and mobile applications. This chapter presents a selection of these ‘classical’ methods and introduces some newly developed methods for testing usability in the area of multimodal interfaces. The chapter concludes with a summary of currently available methods for usability evaluation of multimodal interfaces for mobile devices.


2021
Author(s):  
Mehrdad Farzandipour
Ehsan Nabovati
Hamidreza Tadayon
Monireh Sadeqi Jabali

Background: There are some inconsistencies regarding the selection of the most appropriate usability evaluation method. The present study aimed to compare two expert-based evaluation methods in a nursing module, the most widely used module of a Hospital Information System (HIS). Methods: The Heuristic Evaluation (HE) and Cognitive Walkthrough (CW) methods were used by five independent evaluators to evaluate the nursing module of the Shafa HIS. The number, severity, and distribution of the recognized problems across usability attributes were compared between the two methods. Results: The HE and CW methods identified 104 and 24 unique problems, respectively. The average severity of the recognized problems was 2.32 for HE and 2.77 for CW; the difference between the number and severity of the usability problems recognized by the two methods was significant (P < .001). Problems associated with effectiveness, satisfaction, and error were better recognized by HE, whereas CW was more successful in recognizing problems of learnability, efficiency, and memorability. Conclusion: The HE method recognized more problems with a lower average severity, whereas CW recognized fewer problems with a higher average severity. Depending on the evaluation goal, the HE method can be used to improve effectiveness, increase satisfaction, and decrease the number of errors, while the CW method is recommended for improving the learnability, efficiency, and memorability of a system.


Author(s):  
Niels Ebbe Jacobsen
Anker Helms Jørgensen

In the 1980s, researchers and practitioners developed usability evaluation methods (UEMs) that aimed at identifying usability problems in technological artifacts. Among the best-known UEMs are the Usability Test, Heuristic Evaluation, and Cognitive Walkthrough. The development of UEMs was followed by research aimed at evaluating and further developing them. As the methods have also gained wide acceptance in design practice, the field of UEMs seems to have matured considerably. However, closer inspection reveals that there is still a considerable lack of coherence and agreement. The publication of a controversial paper by Gray & Salzman (1998) underscored this point, in that they questioned the methodological validity of five previously published experimental UEM studies. In addition, the very different opinions of ten distinguished UEM researchers responding to the Gray & Salzman paper left the impression that research in the field of UEMs is far from coherent. To throw light on the current state of the art in the field of UEMs, this paper analyzes the maturity of the field based on Thomas Kuhn's theory of scientific revolutions. We find that the field is currently in the first of three stages, the pre-paradigmatic stage: it lacks a general conceptual framework, basic terms are ill-defined, and researchers “fact-gather” almost randomly in the absence of a reason for seeking some particular form of more recondite information.


2016
Vol 2016
pp. 1-16
Author(s):  
Andrés Solano
César A. Collazos
Cristian Rusu
Habib M. Fardoun

Usability is a fundamental quality characteristic for the success of an interactive system. It is a concept that encompasses a set of metrics and methods for obtaining easy-to-learn and easy-to-use systems. Usability Evaluation Methods (UEM) are quite diverse; their application depends on variables such as cost, time availability, and human resources. A large number of UEM can be employed to assess interactive software systems, but questions arise when deciding which method and/or combination of methods gives more (relevant) information. We propose Collaborative Usability Evaluation Methods (CUEM), following the principles defined by Collaboration Engineering. This paper analyzes a set of CUEM conducted on different interactive software systems. It proposes combinations of CUEM that provide more complete and comprehensive information about the usability of interactive software systems than those evaluation methods conducted independently.

