Abstract
This paper presents one of our practices in conducting usability testing. As a software testing laboratory accredited to ISO/IEC 17025:2005, we use the ISO usability sub-characteristics as the metrics for usability evaluation.
1 Introduction
In practice, user experience practitioners apply many different methods to evaluate a product's usability, depending on the type of product: a website, a system, a standalone program, a hardware device and so on. Our own approach to eliciting the usability of a given product depends on several conditions, such as the number of evaluators and resources available, the project schedule, cost, the stage in the product cycle, etc. Following [1], which defines usability as a subset of the quality in use model consisting of effectiveness, efficiency and satisfaction for consistency with its established meaning, we present one of our usability testing methods, which combines these three important metrics.
2 Usability Testing Practice
2.1 Setting up Tasks to Measure Effectiveness, Efficiency and Satisfaction
To evaluate each usability metric in a practical manner, tasks must be set up with a certain technique [2]. For example, to evaluate the effectiveness of product usage, the task is set up to measure how successfully the user completes it, how often the user produces errors and how easily the user can recover from them. To evaluate efficiency, the task is set up with enough repetitions of typical tasks to create a realistic work rhythm, or the users are observed at their daily work to look for situations that interrupt or slow them down. For satisfaction, an interview or survey is normally part of the evaluation, or a comparative preference test is performed.
2.2 Giving Score
Effectiveness and efficiency are measured by the successful completion of criteria broken down from a scenario or task. For a task that fully matches the set criteria, the moderator marks the score 'Yes'; a 'Yes' mark is given full credit of 100 %. A criterion that is not met at all is given a 'No' mark, with a credit of 0 %. 'No' is normally given for an unsuccessful task criterion, which may include events such as the user giving up, the user requiring a lot of assistance from the moderator, or the user failing to complete the task. Partial credit is given at the moderator's discretion, e.g. when the moderator decides that a mistake should receive a 50 % (partial) rather than a 0 % ('No') mark. Measures of satisfaction are taken using post-task questionnaires with users; the questions appear each time the user completes or abandons a pre-set task.
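As a small illustration of this scoring scheme, the following minimal Python sketch maps each moderator mark to its credit and aggregates them; the names and the fixed 50 % partial credit are our illustrative assumptions, not part of our tooling:

```python
# Credit per moderator mark, as described above. Partial credit is
# fixed at 50 % here for illustration; in practice the moderator
# awards it at their discretion.
CREDIT = {"yes": 1.0, "partial": 0.5, "no": 0.0}

marks = ["yes", "yes", "partial", "no"]  # hypothetical marks for four criteria
score = sum(CREDIT[m] for m in marks) / len(marks) * 100  # 62.5 (%)
```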
2.3 Calculating Individual Metrics and Usability Score
The calculations for each metric score are as follows:

- Effectiveness (%) = (yes + (partial × 0.5)) / total × 100 %
- Efficiency (%) = (yes + (partial × 0.5)) / total × 100 %
- Satisfaction (%) = answer points / total points × 100 %
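To make the arithmetic concrete, here is a minimal Python sketch of the three formulas above; the function and parameter names are ours for illustration and are not taken from our tooling:

```python
def effectiveness_pct(yes: int, partial: int, total: int) -> float:
    """Effectiveness (%) = (yes + partial * 0.5) / total * 100."""
    return (yes + partial * 0.5) / total * 100.0

# Efficiency uses the same completion-based formula; the difference
# lies in how the task is set up (Sect. 2.1), not in the arithmetic.
efficiency_pct = effectiveness_pct

def satisfaction_pct(answer_points: float, total_points: float) -> float:
    """Satisfaction (%) = answer points / total points * 100."""
    return answer_points / total_points * 100.0
```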
The overall usability score is then calculated as follows:

- Usability (%) = (effectiveness % + efficiency % + satisfaction %) / 3

That is, the total usability score is the unweighted average of the three metric scores.
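Continuing the sketch above, the overall score is the plain average of the three metric percentages; the input numbers below are made up for the example:

```python
def usability_pct(effectiveness: float, efficiency: float,
                  satisfaction: float) -> float:
    """Usability (%) = unweighted mean of the three metric scores."""
    return (effectiveness + efficiency + satisfaction) / 3.0

# Hypothetical session: 8 'Yes' and 2 partial marks out of 12
# criteria, and 52 of 60 possible questionnaire points.
e = effectiveness_pct(yes=8, partial=2, total=12)        # 75.0
f = efficiency_pct(yes=8, partial=2, total=12)           # 75.0
s = satisfaction_pct(answer_points=52, total_points=60)  # ~86.7
print(round(usability_pct(e, f, s), 1))                  # 78.9
```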
3 Conclusion
We have applied the method discussed in this paper in many different case studies and have tested the usability of many different types of products. Being confident in the method, we have developed and configured our own software tool, called Mi-UXLab, which incorporates it to evaluate usability.
References
1. Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – System and software quality models, ref. no.: MS ISO/IEC 25010 (2011)
2. Quesenbery, W.: Balancing the 5Es of usability. Cutter IT J. 17(2), 4–11 (2004)