The State of the Art in Automating Usability Evaluation of User Interfaces

ACM Computing Surveys, Vol. 33, No. 4, December 2001 | Melody Y. Ivory and Marti A. Hearst
The article "Automating Usability Evaluation of User Interfaces" by Melody Y. Ivory and Marti A. Hearst of the University of California, Berkeley, discusses the importance of usability evaluation in user interface design and explores how automation can enhance the process. The authors present a comprehensive survey of usability evaluation methods, organized into a new taxonomy that emphasizes the role of automation. They analyze existing techniques, identify areas where automation can be beneficial, and suggest ways to expand current approaches to better support usability evaluation.

The taxonomy categorizes usability evaluation methods along four dimensions: method class, method type, automation type, and effort level. Method classes include testing, inspection, inquiry, analytical modeling, and simulation. Method types describe how an evaluation is conducted, such as thinking-aloud protocols or surveys. Automation types cover capture, analysis, and critique activities. Effort levels indicate how much human effort a method requires, ranging from minimal effort to model development.

The survey covers 132 methods applied to both WIMP (Windows, Icons, Menus, and Pointer) interfaces and Web interfaces. It highlights how underexplored automation remains in usability evaluation: only 33% of the surveyed methods offer some form of automation. The article discusses techniques for automating capture, analysis, and critique activities, including performance measurement, remote testing, log file analysis, and guideline review.

The authors conclude that while automation has significant potential to reduce the cost and time required for usability evaluation, it should complement, not replace, existing evaluation techniques. They emphasize the need for further research to develop more effective and efficient automated methods for usability evaluation.
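To make the log file analysis idea concrete, here is a minimal sketch of automated analysis of captured usage data: computing per-user task completion times from timestamped events. The log format and event names (`task_start`, `task_end`) are assumptions for illustration only; the article does not prescribe a specific format or tool.

```python
from datetime import datetime

# Hypothetical log format for illustration: "timestamp<TAB>user<TAB>event".
# This format and its event names are assumed, not taken from the article.
LOG = """\
2001-12-01T10:00:00\tu1\ttask_start
2001-12-01T10:02:30\tu1\ttask_end
2001-12-01T10:05:00\tu2\ttask_start
2001-12-01T10:09:15\tu2\ttask_end
"""

def completion_times(log_text):
    """Return seconds elapsed between task_start and task_end, per user."""
    starts, durations = {}, {}
    for line in log_text.splitlines():
        ts, user, event = line.split("\t")
        t = datetime.fromisoformat(ts)
        if event == "task_start":
            starts[user] = t
        elif event == "task_end":
            durations[user] = (t - starts[user]).total_seconds()
    return durations

print(completion_times(LOG))  # {'u1': 150.0, 'u2': 255.0}
```

This is the flavor of performance measurement the survey classifies as automated analysis: the capture step records events without evaluator intervention, and the analysis step reduces them to usability metrics such as task time.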