Lab Manual - CSC356 - HCI - 3
Purpose
The purpose of this lab is to learn the technique of evaluating an interactive system design
through expert analysis, in which experts give their feedback directly, based on their
expertise or on a set of design heuristics.
Outcomes
After completing this lab, students will be able to apply this technique to evaluate
interactive system designs. They will gain insight into:
▪ How to evaluate a design with experts, and the different phases involved.
▪ When to get a design critique from experts.
▪ The benefits and drawbacks of evaluation with experts.
Introduction
In this lab, we are going to apply an evaluation technique called heuristic evaluation to
evaluate the design of an interactive system. Heuristic evaluation was created by Jakob
Nielsen and colleagues (Section 9.3.2, Human–Computer Interaction, Third Edition, Alan
Dix et al.) about twenty years ago. The basic idea of heuristic evaluation is to provide a
set of people — often other stakeholders on the design team or outside design experts —
with a set of design heuristics or principles, and ask them to use those to look for
problems in our design. Each of them independently walks through a variety of tasks using
our design, looking for problems. Working independently is important because different
evaluators tend to find different problems. At the end of the process, they get back
together and discuss what they found. This "independent first, gather afterwards" approach
is how multiple evaluators yield a "wisdom of crowds" benefit.
One reason we cover this technique early in the lab is that it can be used either on a
working user interface or on sketches of user interfaces. Heuristic evaluation therefore
works well in conjunction with paper prototypes and other rapid, low-fidelity techniques,
enabling us to get our design ideas out quickly.
1- First, it is valuable to get peer critique before user testing, because that way
you do not waste your users' time on problems that experts would catch anyway.
2- The rich qualitative feedback that peer critique provides can also be valuable
before redesigning your application, because it can show you which parts of your
app you probably want to keep, and which parts are more problematic and deserve
redesign.
To explain how the evaluation takes place, let us consider NTS's website
(www.nts.org.pk) and apply this technique to evaluate it.
During this activity, we are going to walk through the NTS website to get used to its
flow, to get a feel for it, and to get an idea of the features it offers.
Violation 1
Issue: No navigation cues (no breadcrumbs, no current-location highlighting)
Description: Figure 3.1 shows one of the pages visited on the website. The system provides
no clues for users to find their way through the site, or to find their way back if they
accidentally click the wrong link. There are no breadcrumbs, which would give users the
freedom to jump back to previous categories in the sequence without using the Back button,
other navigation bars, or the search engine. The current top-menu item is not highlighted,
so users cannot tell which menu category the current page belongs to.
Figure 3.1: Information Page about Law Practitioners Test on NTS Website
Violation 2
Issue: Inconsistent color scheme
Severity: 1
Description: Figure 3.2 shows two different pages visited on the website. The two pages
follow different color schemes, leading to an inconsistent design.
Violation 3
Issue: NTS logo is not clickable on many pages.
Severity: 2
Description: The NTS logo is clickable on the home page (www.nts.org.pk) but not on many
other pages, so users cannot rely on it to return to the home page.
Activity 3: Aggregation
Once every evaluator has performed the evaluation individually and independently, the
next step is to get together, aggregate the findings, and compile a report of all distinct
usability issues that can be presented to and discussed with the design team. Since
multiple evaluators may find the same problems, each distinct problem should appear only
once in the report rather than as a redundant list. As designers, we should also ask
evaluators to include in this report an Issues-Evaluators matrix (Figure 3), where rows
represent evaluators and columns represent the issues identified. This matrix helps
designers differentiate between very effective and less effective evaluators.
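The aggregation step above can be sketched in code. The following is a minimal illustration, not part of the lab's required deliverables: the evaluator names and issue strings are hypothetical placeholders, and `build_matrix` is an assumed helper name. It deduplicates the issues found across evaluators and builds the Issues-Evaluators matrix, with one row per evaluator and one column per distinct issue.

```python
# Sketch: aggregating heuristic-evaluation findings into an
# Issues-Evaluators matrix. All names and issue strings below are
# illustrative placeholders, not actual lab data.

def build_matrix(findings):
    """findings maps evaluator name -> list of issue strings.

    Returns (issues, matrix), where issues is the sorted list of
    distinct issues and matrix[evaluator][i] is True if that
    evaluator reported issues[i].
    """
    # Deduplicate: each distinct issue appears once, in sorted order.
    issues = sorted({issue for found in findings.values() for issue in found})
    # One row per evaluator, one boolean cell per distinct issue.
    matrix = {
        evaluator: [issue in found for issue in issues]
        for evaluator, found in findings.items()
    }
    return issues, matrix

findings = {
    "Evaluator A": ["No breadcrumbs", "Logo not clickable"],
    "Evaluator B": ["Inconsistent colors", "No breadcrumbs"],
    "Evaluator C": ["Logo not clickable"],
}

issues, matrix = build_matrix(findings)
print(issues)
for evaluator in sorted(matrix):
    print(evaluator, ["X" if hit else "." for hit in matrix[evaluator]])
```

Reading the printed rows, an evaluator who marks few cells relative to the others may have been less thorough — which is exactly how the matrix helps designers compare evaluators.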
Home Activities
Students should work in groups of 3-5 for the following activities. Activities 2 and
3 must be performed individually, while Activities 1 and 4 are performed in the group.
Assignment Deliverables
Students need to submit a report containing the following items:
▪ A list of all the unique issues found, and the Issues-Evaluators matrix, where rows
represent evaluators and columns represent the usability issues.