A Seminar Report on

“HUMAN COMPUTER INTERACTION”


At

“Bhagwan Mahavir College of Computer Application”,


Bharthana-Vesu, Surat
As a Partial Fulfilment for the Degree of
Bachelor of Computer Application

2024-25

Guided By: Asst. Prof. Abhishek Kaushal

Submitted By: Mr. Ayush Akbari

Bhagwan Mahavir College of Computer Application
Bharthana-Vesu, Surat
Constituent College of
Acknowledgement

The successful completion of this project work is the result not just of our own efforts but of the
efforts of many people: the people who trusted us, guided us and encouraged us by every means.
A guide is a person who provides direction towards success, so I feel great pleasure in expressing our
gratitude to our guides, our faculty members, and every person who helped us directly or indirectly
with our project.
We are also indebted to Asst. Prof. Dr. Hetal Modi, who provided constant encouragement, support
and valuable guidance before and during our project. It was her effort that led us to this point in the
project work; her guidance and suggestions were invaluable.
We are also thankful to all our faculty members and especially to our Principal, Dr. Vikram Kaushik,
for giving us the opportunity to take up this project. Thank you very much.

2202020101008

Sr. No.  TOPIC                              Page no.
1        Definition                         5
2        Goals of HCI                       8
3        HCI Benefits                       12
4        Why HCI is important?              16
5        HCI Frameworks/models              18
6        Introduction to Evaluation         19
7        Goals of Evaluation                20
8        Predictive Evaluation              23
9        Predictive Evaluation methods      25
10       Advantages/Disadvantages           26
11       Cognitive Models                   27
12       Predictive Models                  28
13       Keystroke-Level Model (KLM)



Definition
• Human–computer interaction is the study of interaction between humans (users) and computers.
• "Human-computer interaction is a discipline concerned with the design, evaluation and
implementation of interactive computing systems for human use and with the study of major
phenomena surrounding them."

• Human means an individual user, a group of users working together, or a sequence of users in an
organization, each dealing with some part of the task or process. The user is whoever is trying to
get the job done using the technology.
• Computer means any technology, ranging from a general desktop computer to a large-scale
computer system, a mobile phone, or an embedded system.
• By interaction we mean any communication between a user and computer, be it direct or indirect.
 Direct interaction involves a dialog with feedback and control throughout performance of the
task.
 Indirect interaction may involve batch processing or intelligent sensors controlling the
environment.
Goals of HCI
• A basic goal of HCI is to improve the interactions between users and computers by making
computers more usable and receptive to the user's needs.
Specifically, HCI is concerned with:
1. Methodologies and processes for designing interfaces (i.e., given a task and a class of users,
design the best possible interface within given constraints, optimizing for a desired property
such as learning ability or efficiency of use)
2. Methods for implementing interfaces (e.g. software toolkits and libraries; efficient algorithms)
3. Techniques for evaluating and comparing interfaces
4. Developing new interfaces and interaction techniques
5. Developing descriptive and predictive models and theories of interaction

• A long term goal of HCI is to design systems that minimize the barrier between the human’s
cognitive model of what they want to accomplish and the computer’s understanding of the user’s
task.
• Understand the factors that determine how people use technology.
• At the physical level, HCI concerns selecting the most appropriate input and output devices for
a particular interface or task.
• Determine the best type of interaction, such as direct manipulation, natural language, icons,
menus.
• For systems that include computers, develop or improve-
1. Safety- protecting the user from dangerous conditions and undesirable situations. Eg:
2. Utility- It refers to the extent to which the system provides the right kind of functionality so that
users can do what they need or want to do.
3. Effectiveness- It is a very general goal and refers to how good a system is at doing what it is
supposed to do.
4. Efficiency- a measure of how quickly users can accomplish their goals or finish their work using the
system.
5. Usability- usability is generally regarded as ensuring that interactive products are easy to learn,
effective to use, and enjoyable from the user's perspective.
6. Appeal- how well the user likes the system
7. Learnability- It refers to how easy a system is to learn to use.
8. Memorability -It refers to how easy a system is to remember how to use, once learned.

HCI Benefits
1. Gaining market share- People intend to buy/use products with higher usability.
2. Improving productivity- Employees in a company perform their jobs in a faster manner.
3. Lowering support costs- If the product is not usable, calls to customer support can be enormous.
4. Reducing development cost- Avoid implementing features users don’t want and creating features
that are annoying or inefficient.

Why HCI is important?


• User-centred design is playing a crucial role! It is becoming more important today to increase
competitiveness via HCI studies.
• Users lose time with badly designed products and services; users may even give up on a bad
interface.
• In the past, computers were expensive and used by technical people only. Now, computers are cheap
and used by non-technical people (with different backgrounds, needs, knowledge and skills).

HCI Frameworks/models

1. Norman’s model of interaction


Norman’s model of interaction is perhaps the most influential in Human Computer Interaction, possibly
because of its closeness to our intuitive understanding of the interaction between human user and
computer.
The user formulates a plan of action, which is then executed at the computer interface. When the plan,
or part of the plan, has been executed, the user observes the computer interface to evaluate the result
of the executed plan, and to determine further actions.
The interactive cycle can be divided into two major phases: execution and evaluation. These can then
be subdivided into further stages, seven in all. The stages in Norman’s model of interaction are as
follows:
1. The user establishes the goal.
2. Formulates the intention.
3. Specifies the action sequence.
4. Executes the action.

5. Perceives the system state.


6. Interprets the system state.
7. Evaluates the system state with respect to the goals and intentions.

Each stage is, of course, an activity of the user. First the user forms a goal. This is the user’s notion of
what needs to be done and is framed in terms of the domain, in the task language. It is liable to be
imprecise and therefore needs to be translated into the more specific intention, and the actual actions
that will reach the goal, before it can be executed by the user. The user perceives the new state of the
system, after execution of the action sequence, and interprets it in terms of his expectations. If the
system state reflects the user’s goal then the computer has done what he wanted and the interaction
has been successful; otherwise the user must formulate a new goal and repeat the cycle.
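As a rough illustration (a sketch in Python, not from the report), the seven stages and their grouping into the execution and evaluation phases can be written down as a small data structure:

from enum import Enum

class Phase(Enum):
    EXECUTION = "execution"
    EVALUATION = "evaluation"

# The seven stages of Norman's execution-evaluation cycle, grouped by the
# phase each stage belongs to.
NORMAN_STAGES = [
    ("Establish the goal", Phase.EXECUTION),
    ("Form the intention", Phase.EXECUTION),
    ("Specify the action sequence", Phase.EXECUTION),
    ("Execute the action", Phase.EXECUTION),
    ("Perceive the system state", Phase.EVALUATION),
    ("Interpret the system state", Phase.EVALUATION),
    ("Evaluate the outcome against the goal", Phase.EVALUATION),
]

for number, (stage, phase) in enumerate(NORMAN_STAGES, start=1):
    print(f"{number}. {stage} ({phase.value})")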

Advantages and Disadvantages


Norman’s model is a useful means of understanding the interaction, in a way that is clear and intuitive.
It allows other, more detailed, empirical and analytic work to be placed within a common framework.
However, it only considers the system as far as the interface. It concentrates wholly on the user’s view
of the interaction.
It does not attempt to deal with the system’s communication through the interface.

2. Abowd and Beale framework / Interaction framework


The interaction framework attempts a more realistic description of interaction by including the system
explicitly, and breaks it into four main components.
The System, the User, the Input and the Output.
Each component has its own language.
 User- Task language
 System- core language,
 Input language and output languages for both the Input and Output component.
Input and Output together form the Interface.
As the interface sits between the User and the System, there are four steps in the interactive
cycle, each corresponding to a translation from one component to another. The User begins the
interactive cycle with the formulation of a goal and a task to achieve that goal. The only way the user
can manipulate the machine is through the Input, and so the task must be articulated within the input
language. The input language is translated into the core language as operations to be performed by the
System. The System then transforms itself as described by the operations; the execution phase of the
cycle is complete and the evaluation phase now begins. The System is in a new state, which must now
be communicated to the User. The current values of system attributes are rendered as concepts or
features of the Output. It is then up to the User to observe the Output and assess the results of the
interaction relative to the original goal, ending the evaluation phase and, hence, the interactive cycle.
There are four main translations involved in the interaction: articulation, performance,
presentation and observation.

3. HCI Framework
Frameworks provide a means of discussing the details of a particular interaction, as well as providing
a basis for discussing other issues that relate to the interaction.
The ACM SIGCHI Curriculum Development Group presents a framework for HCI and uses it to place
different areas that relate to HCI.
The field of ergonomics addresses issues on the user side of the interface, covering both input
and output, as well as the user’s immediate context. Dialog design and interface styles can be placed
particularly along the input branch of the framework, addressing both articulation and performance.
However, dialog is most usually associated with the computer and so is biased to that side of the
framework.
Presentation and screen design relates to the output branch of the framework. The entire
framework can be placed within a social and organizational context that also affects the interaction.
Each of these areas has important implications for the design of interactive systems and the
performance of the user.


HCI Paradigms


Unit-2
Introduction to Evaluation
– Tests usability and functionality of system
– occurs in laboratory, field and/or in collaboration with users
– evaluates both design and implementation
– should be considered at all stages in the design life cycle.

Goals of Evaluation
Evaluation has three main goals:
1. To assess the extent and accessibility of the system’s functionality.
The system’s functionality is important in that it must accord with the user’s requirements. In other
words, the design of the system should enable users to perform their intended tasks more easily. This
includes not only making the appropriate functionality available within the system, but making it
clearly reachable by the user in terms of the actions that the user needs to take to perform the task. It
also involves matching the use of the system to the user’s expectations of the task.
2. To assess users’ experience of the interaction
This includes considering aspects such as how easy the system is to learn, its usability and the user’s
satisfaction with it. It may also include his enjoyment and emotional response, particularly in the case
of systems that are aimed at leisure or entertainment. It is important to identify areas of the design

that overload the user in some way, perhaps by requiring an excessive amount of information to be
remembered.
3. To identify any specific problems with the system.
The final goal of evaluation is to identify specific problems with the design. These may be aspects of
the design which, when used in their intended context, cause unexpected results, or confusion
amongst users. This is, of course, related to both the functionality and usability of the design
(depending on the cause of the problem). However, it is specifically concerned with identifying
trouble-spots which can then be rectified.

Predictive Evaluation
• Basis:
– Observing users can be time-consuming and expensive
– Try to predict usage rather than observing it directly
– Conserve resources (quick & low cost)
• Approach
– Expert reviews (frequently used)
HCI experts interact with system and try to find potential problems and give prescriptive feedback.
– Best if
• Haven’t used earlier prototype
• Familiar with domain or task
• Understand user perspectives

Predictive Evaluation methods

1. Cognitive Walkthrough


Cognitive walkthrough was originally proposed and later revised by Polson and colleagues as an
attempt to introduce psychological theory into the informal and subjective walkthrough technique.
The main focus of the cognitive walkthrough is to establish how easy a system is to learn.
– usually performed by expert in cognitive psychology.
– expert ‘walks though’ design to identify potential problems using psychological principles.
– forms used to guide analysis.

• For each task the walkthrough considers:
– what impact will the interaction have on the user?
– what cognitive processes are required?
– what learning problems may occur?
• Analysis focuses on goals and knowledge: does the design lead the user to generate the correct
goals?
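As an illustrative sketch (in Python), the kind of form used to record one step of a walkthrough might look like the following; the field names and the example step are assumptions, not a standard template:

from dataclasses import dataclass, field

@dataclass
class WalkthroughStep:
    # One row of a (hypothetical) cognitive walkthrough form.
    action: str                            # the correct action at this step
    will_user_try_right_effect: bool       # does the design prompt the right goal?
    will_user_notice_action: bool          # is the control visible/available?
    will_user_connect_action_to_goal: bool
    will_user_understand_feedback: bool
    notes: list[str] = field(default_factory=list)

step = WalkthroughStep(
    action="Select 'Print' from the File menu",
    will_user_try_right_effect=True,
    will_user_notice_action=True,
    will_user_connect_action_to_goal=True,
    will_user_understand_feedback=False,
    notes=["No confirmation that the document was sent to the printer"],
)
print(step)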

2. Heuristic Evaluation
 A heuristic is a guideline or general principle or rule of thumb that can guide a design decision or
be used to critique a decision that has already been made.


 Heuristic evaluation, developed by Jakob Nielsen and Rolf Molich, is a method for structuring the
critique of a system using a set of relatively simple and general heuristics.
 Heuristic evaluation can be performed on a design specification so it is useful for evaluating early
design. But it can also be used on prototypes, storyboards and fully functioning systems.
 It is therefore a flexible, relatively cheap approach. Hence it is often considered a discount usability
technique.
 The general idea behind heuristic evaluation is that several evaluators independently critique a
system to come up with potential usability problems. It is important that there be several of these
evaluators and that the evaluations be done independently.
 Heuristic evaluation `debugs' design.
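The need for several independent evaluators is often justified with a simple probabilistic argument associated with Nielsen and Landauer: if one evaluator finds a given problem with probability lambda, then n independent evaluators together find it with probability 1 - (1 - lambda)^n. A small sketch in Python, with lambda chosen arbitrarily for illustration:

# Diminishing-returns argument for using several independent evaluators.
# lam is an assumed per-evaluator detection probability; real values vary
# with the system and the evaluators' expertise.
lam = 0.3

for n in range(1, 11):
    proportion_found = 1 - (1 - lam) ** n
    print(f"{n:2d} evaluators -> about {proportion_found:.0%} of problems found")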

• Advantage
– Cheap, good for small companies who can’t afford more
– Getting someone practiced in method is valuable
• Disadvantage (somewhat controversial)
– Very subjective assessment of problems
– Depends on expertise of reviewers

3. Discount Usability Testing


Hybrid of empirical usability testing and heuristic evaluation
Have 2 or 3 think-aloud user sessions with paper or prototype-produced mock-ups.
4. User/Cognitive Modeling
Build a model of user in order to predict usage

Cognitive Models
• Cognitive models represent users of interactive systems.
• Cognitive models capture the user’s thought (cognitive) process during interaction.
• Cognitive models attempt to represent the users as they interact with a system, modeling aspects
of their understanding, knowledge, intentions or processing.
• Although we said before that cognitive models are models of human thinking process, they are not
exactly treated as the same in HCI.
• Since interaction is involved, cognitive models in HCI not only model human cognition
(thinking) alone, but the perception and motor actions also (as interaction requires
‘perceiving what is in front’ and ‘acting’ after decision making). Thus cognitive models in HCI
should be considered as the models of human perception (perceiving the surrounding), cognition
(thinking in the ‘mind’) and motor action (result of thinking such as hand movement, eye
movement etc.)
Predictive Models
Predictive models are widely used in many disciplines. In human-computer interaction, predictive
models allow metrics of human performance to be determined analytically without undertaking time-
consuming and resource-intensive experiments.

GOMS model
• GOMS model, is a description of the knowledge that a user must have in order to carry out tasks on
a device or system.
• It is a representation of the "how to do it" knowledge that is required by a system in order to get
the intended tasks accomplished.
• The model is used to analyze the user’s physical, cognitive and perceptual interactions with the
computer while achieving a task or goal in the best possible way.
• It describes the Goals, Operators, Methods, and Selection rules needed to perform a task.

Goals are what users want to accomplish.


Operators are the basic actions that the user must perform in order to use the system to achieve their
goals.
Methods are the procedures (sequences of subgoals and operators) that can accomplish a goal.
Selection rules: on some occasions there can be more than one method to accomplish a goal; a
selection rule helps to choose the appropriate method in the particular circumstance.
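As a concrete, hypothetical illustration (a sketch in Python, not output from any GOMS tool), a goal, its alternative methods and a selection rule might be written down as follows; the method and operator names are assumptions:

goms = {
    "goal": "delete a file",
    "methods": {
        # Each method is a sequence of operators that accomplishes the goal.
        "menu-method": ["point to file", "click file", "open Edit menu", "click Delete"],
        "keyboard-method": ["point to file", "click file", "press DELETE key"],
    },
    # Selection rule: prefer the keyboard method when the hand is already
    # on the keyboard, otherwise use the menu method.
    "selection_rule": lambda hands_on_keyboard: (
        "keyboard-method" if hands_on_keyboard else "menu-method"
    ),
}

chosen = goms["selection_rule"](hands_on_keyboard=True)
print(chosen, "->", goms["methods"][chosen])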
Characteristics for GOMS
• Combines cognitive aspects with an analysis of a task
• Results in quantitative predictions of time
• Qualitatively
– GOMS can explain the predictions
– focus on methods to accomplish goals

When is GOMS analysis used?


• It applies to situations in which users will be expected to perform tasks they have already mastered.
Advantages of GOMS
1. The GOMS approach to user modeling has strengths and weaknesses. While it is not
necessarily the most accurate method to measure human-computer interface interaction, it
does allow visibility of all procedural knowledge.


2. With GOMS, an analyst can easily estimate a particular interaction and calculate it quickly and
easily. This is only possible if the average Methods-Time Measurement data for each specific
task has previously been measured experimentally to a high degree of accuracy.
Disadvantages of GOMS
1. GOMS only applies to skilled users. It does not work for beginners or intermediate users, as errors
may occur which can alter the data.
2. The model doesn't apply to learning the system, or to a user returning to the system after a long
time of not using it.
3. Mental workload is not addressed in the model, making this an unpredictable variable.
4. GOMS only addresses the usability of a task on a system; it does not address its functionality.

Keystroke-Level Model (KLM)
• The KLM was developed as a practical design tool and allows a designer to ‘predict’ the time it
takes for an average user to execute a task using an interface and interaction method.
• In KLM, it is assumed that any decision making task is composed of a series of ‘elementary’
cognitive (mental) steps, that are executed in sequence.

• These ‘elementary’ steps essentially represent low-level cognitive activities, which can not be
decomposed any further.
The model predicts expert error-free task completion times, given the following input parameters:
• a task or series of sub-tasks
• method used
• command language of the system
• motor skill parameters of the user
• response time parameters of the system

A KLM prediction is the sum of the sub-task times and the required overhead. The model includes
four motor-control operators (K = keystroking, P = pointing, H = homing, D = drawing), one
mental operator (M), and one system response operator (R):
T_execute = t_K + t_P + t_H + t_D + t_M + t_R
Some of the operations above are omitted or repeated, depending on the task.
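As an illustrative sketch (in Python), using the commonly cited operator times of roughly K = 0.20 s for a skilled typist, P = 1.1 s, H = 0.4 s and M = 1.35 s, a prediction for a hypothetical "think, point, click and type a short word" task can be summed as follows; the task breakdown and the exact values are assumptions for illustration only:

# Commonly cited KLM operator times in seconds (values vary by source and
# by user skill; these particular numbers are assumptions for illustration).
OPERATOR_TIME = {
    "K": 0.20,  # keystroke (skilled typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # home the hand between mouse and keyboard
    "M": 1.35,  # mental preparation
    # "D" (drawing) and "R" (system response) are task and system specific.
}

def klm_time(operators):
    # Sum the operator times for an error-free expert execution.
    return sum(OPERATOR_TIME[op] for op in operators)

# Hypothetical task: think, point to a text field, click, home to the
# keyboard, think, then type a four-letter word.
task = ["M", "P", "K", "H", "M", "K", "K", "K", "K"]
print(f"Predicted execution time: {klm_time(task):.2f} s")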
Advantages
The KLM was designed to be a quick and easy to use system design tool, which means that no deep
knowledge about psychology is required for its usage. Also, task times can be predicted (given the
limitations) without having to build a prototype, recruit and test users, which saves time and money.
Limitations
• It measures only one aspect of performance: time, which means execution time and not the time
to acquire or learn a task.
• It considers only expert users. Generally, users differ regarding their knowledge and experience of
different systems and tasks, motor skills and technical ability.
• It considers only routine unit tasks.
• The method has to be specified step by step.
• The execution of the method has to be error-free.
The predictive models are:
• Keystroke-level model (KLM)
• Throughput (TP)
• Fitts' law
Fitts’ Law
• It is one of the earliest predictive models used in HCI (and among the most well-known models in HCI
also)
• First proposed by PM Fitts (hence the name) in 1954.
“The law states that the time it takes to move to a target is a function of the length of the movement
and the size of the Target”.
In other words, the bigger the target and the closer the target, the faster it is acquired.

• Fitts's law is a predictive model of human movement primarily used in human–computer


interaction and ergonomics. This scientific law predicts that the time required to rapidly move to a
target area is a function of the ratio between the distance to the target and the width of the target.

Fitts's law is used to model the act of pointing, either by physically touching an object with a hand
or finger, or virtually, by pointing to an object on a computer monitor using a pointing device.
• Another important thing about Fitts’ law is that it is both a descriptive and a predictive model.
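The report does not give a specific formulation; one commonly used in HCI is the Shannon formulation MT = a + b log2(D/W + 1), where D is the distance to the target, W is its width, and a and b are empirically determined constants. A small sketch in Python, with arbitrary illustrative constants:

import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    # Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    # a and b are empirically fitted constants (seconds and seconds per bit);
    # the defaults here are arbitrary illustrative values.
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A big, close target is acquired faster than a small, distant one.
print(fitts_movement_time(distance=100, width=50))   # low index of difficulty
print(fitts_movement_time(distance=800, width=10))   # high index of difficulty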

Hick-Hyman law
The Hick-Hyman law, is another important model in HCI but not as widely used as Fitts’ law. Hick-
Hyman law can be described as the average reaction time given a set of choices with equal probability.
In other words, it can be said that this law shows the time it takes a user to make a decision based on
the set of choices.

The Hick-Hyman law for choice reaction time takes the form of a prediction equation. Given a set
of n stimuli, associated one-for-one with n responses, the time to react (RT) to the onset of a stimulus
and make the appropriate response is given by
RT = a + b log2(n)
where a and b are empirically determined constants.
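A worked sketch of the equation (in Python), with the constants a and b chosen arbitrarily for illustration:

import math

def hick_hyman_rt(n_choices, a=0.2, b=0.15):
    # Choice reaction time RT = a + b * log2(n) for n equally likely choices.
    # a and b are empirically determined constants; these defaults are
    # arbitrary illustrative values in seconds.
    return a + b * math.log2(n_choices)

# Doubling the number of equally likely choices adds a constant increment
# to the predicted decision time.
for n in (2, 4, 8, 16):
    print(f"{n:2d} choices -> RT of about {hick_hyman_rt(n):.2f} s")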

Descriptive models
Descriptive models are of a vastly different genre than predictive models. Although they generally do
not yield an empirical or quantitative measure of user performance, their utility is by no means second
to predictive models. Simply put, descriptive models provide a framework or context for thinking
about or describing a problem or situation.
– provide a basis for understanding, reflecting, and reasoning about certain facts and interactions
– provide a conceptual framework that simplifies a (potentially real) system
– are used to inspect an idea or a system and make statements about their probable characteristics
– used to reflect on a certain subject
– can reveal flaws in the design and style of interaction

Buxton’s three state model: this is used to determine how easy it is to use a mouse or the wheel and
will measure how much pressure a user puts on it and how much dexterity and speed is used during
this action. There are three states which are used to check this:

1. Out of range: Used to show re-positioning and clutching of the mouse.


2. Tracking: For moving an item around the screen such as a cursor.


3. Dragging: Checking the time it takes you to drag an object across the screen or to group an
amount of items together.
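A minimal sketch (in Python) of the three states as a transition table; the event names are assumptions, since the model itself only names the states:

# Buxton's three-state model as a simple state-transition table.
TRANSITIONS = {
    ("out_of_range", "touch"):   "tracking",      # device back on the surface
    ("tracking",     "lift"):    "out_of_range",  # device lifted away
    ("tracking",     "press"):   "dragging",      # button held while moving
    ("dragging",     "release"): "tracking",
}

def next_state(state, event):
    # Stay in the current state if the event is not meaningful there.
    return TRANSITIONS.get((state, event), state)

state = "tracking"
for event in ["press", "release", "lift", "touch"]:
    state = next_state(state, event)
    print(event, "->", state)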
Guiard’s model: This focuses on how a user will prefer using a specific hand during interactions. For
example, when typing, a keyboard layout is most suitable for left-handed people if the more important
keys are on the left side of the keyboard. Guiard’s model looks at the actual position of the keys on the
keyboard.
The preferred hand is often the right one, but this is not always the case: it will be the left hand for
some people, and for others the preferred hand changes depending on the task (ambidextrous use),
such as preferring to play darts and write with the left hand but play golf with the right hand.
Key-Action model(KAM): This allows you to look at how users will interact with keyboards and
therefore what kind of shortcuts they will use to carry out tasks.
This shows what the user is expecting the computer to do when a certain shortcut is used, and what
the computer will actually do when the shortcut is pressed. An example is gaming: you can use the
arrow keys to move around as well as (W, A, S, D), and if both options were not available then most
gamers wouldn’t like how the controls were laid out, which could decrease the enjoyability of the game.
Keys on a keyboard are described as one of three different things:
• Symbol keys deliver graphic symbols – typically, letters, numbers, or punctuation symbols – to an
application such as a text editor.
• Executive keys - these tend to carry out specific actions, for example the ENTER key or F1.
• Modifier keys - these don’t actually type anything, but they allow you to set up a condition necessary
to modify the effect of a subsequently pressed key, for example the SHIFT key.
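A tiny sketch (in Python) of the three key classes as a lookup table; the concrete key-to-class assignments are illustrative assumptions:

# Key-Action Model: every key belongs to one of three classes.
KEY_CLASS = {
    "a": "symbol",         # delivers a graphic symbol to the application
    "7": "symbol",
    "ENTER": "executive",  # invokes a specific action
    "F1": "executive",
    "SHIFT": "modifier",   # changes the effect of the next key pressed
    "CTRL": "modifier",
}

for key in ["a", "SHIFT", "ENTER"]:
    print(f"{key}: {KEY_CLASS[key]} key")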

 Ubiquitous computing (or "ubicomp") is a concept in software engineering and computer science
where computing is made to appear anytime and everywhere. In contrast to desktop computing,
ubiquitous computing can occur using any device, in any location, and in any format.
Three Waves of Computing
1. Mainframe computing (1960s-70s)
2. PC computing
3. Ubiquitous computing
Ubiquitous computing ( ubicomp) is a post-desktop model of Human Computer Interaction (HCI) in
which information processing has been thoroughly integrated into everyday objects and activities. HCI
is a very recent area of thrust as the human world is getting more and more digital and one wants the
digital systems to behave as close as to a human being. This requires the devices or the information
processing systems to mimic the human behavior precisely.
Older HCI projects focused on software that people used for extended sessions in front of a
screen. Inventing a new menuing system, or analyzing user-error rates, or developing a visualization for
some productivity software would make a great old HCI paper. Ubicomp projects typically involve
some aspect of "context" e.g. something that the system senses in the environment, like GPS location,

or an accelerometer on a fitness bracelet. Some ubicomp work uses social science methods to study
how people use technology out in the world, without building a project.

Some of the key characteristics of HCI in ubiquitous computing-


1. implicit input
2. multi-scale and distributed output
3. Seamless integration of physical and virtual worlds

Pervasive computing:
There seems to be a bit of a debate about whether they are two different things or not. From my
reading and understanding, they aren’t the same but are related. Ubiquitous computing means that
computers are everywhere, embedded, invisible, and/or transparent.
On the other hand pervasive computing means mobile computing - in other words it is computing like
smart phones, and other hand-held devices that can go anywhere.

A tangible user interface (TUI) is a user interface in which a person interacts with digital information
through the physical environment.
• Tangible user interfaces (TUIs) use physical objects to control the computer, most often a
collection of objects arranged on a tabletop to act as ‘physical icons’.
• An immediate problem is that physical objects don’t change their visible state very easily.
• One advantage of the tangible user interface is the user experience, because a physical interaction
occurs between the user and the interface itself (e.g. SandScape: building your own landscape
with sand). Another advantage is usability, because the user intuitively knows how to use the
interface by knowing the function of the physical object. So, the user does not need to learn the
functionality. That is why the tangible user interface is often used to make technology more
accessible to elderly people.
Differences: Tangible and Graphical user interface
The graphical user interface is a form of user interface that allows users to interact with electronic
devices through graphical icons and visual indicators.
where as Tangible user interface (TUI) is a user interface in which a person interacts with digital
information through the physical environment.
A Graphical User Interface exists only in the digital world where as the TUI connects the digital with the
physical world.
A Tangible user interface is usually built for one specific target group where as the Graphical user
interface has a wide range of usages in one interface. Because of that it targets a large group of
possible users.
Differences between a tangible user interface and a graphical user interface, attribute by attribute:
• Amount of possible application areas: a TUI is built for one specific application area; a GUI is built for
many kinds of application areas.
• How the system is driven: a TUI is driven by physical objects, such as a mouse or a keyboard; a GUI is
based on graphical bits, such as pixels on the screen.
• Coupling between cognitive bits and the physical output: unmediated connection (TUI) versus
indirect connection (GUI).
• How the user experience is driven: with a TUI the user already knows the function of the interface by
knowing the physical objects’ function; with a GUI the user explores the functionality of the interface.
• User behaviour when approaching the system: intuition (TUI) versus recognition (GUI).

Universal design is about designing systems so that they can be used by anyone in any circumstance.
Universal design means designing for diversity, including:
– people with sensory, physical or cognitive impairment
– people of different ages
– people from different cultures and backgrounds.
Universal design is the process of designing products so that they can be used by as many people as
possible in as many situations as possible.
This means particularly designing interactive systems that are usable by anyone, with any range of
abilities, using any technology platform. This can be achieved by designing systems either to have built
in redundancy or to be compatible with assistive technologies.
Example of it might be an interface that has both visual and audio access to commands.

7 principles of universal design


1. Equitable use: the design is useful to people with a range of abilities and appealing to all. No
user is excluded or stigmatized. Wherever possible, access should be the same for all; where identical
use is not possible, equivalent use should be supported. Where appropriate, security, privacy and
safety provision should be available to all.
2. Flexibility in use: the design allows for a range of ability and preference, through choice of
methods of use and adaptivity to the user’s pace, precision and custom.
3. Simple and intuitive use: the system should be simple and intuitive to use, regardless of the knowledge, experience, language
or level of concentration of the user. The design needs to support the user’s expectations and
accommodate different language and literacy skills. It should not be unnecessarily complex and should
be organized to facilitate access to the most important areas. It should provide prompting and
feedback as far as possible.
4. Perceptible information: the design should provide effective communication of information
regardless of the environmental conditions or the user’s abilities. Redundancy of presentation is
important: information should be represented in different forms or modes (e.g. graphic, verbal, text,
touch). Essential information should be emphasized and differentiated clearly from the peripheral
content. Presentation should support the range of devices and techniques used to access information
by people with different sensory abilities.

5. Tolerance for error: minimizing the impact and damage caused by mistakes or unintended
behavior. Potentially dangerous situations should be removed or made hard to reach. Potential
hazards should be shielded by warnings. Systems should fail safe from the user’s perspective and users
should be supported in tasks that require concentration.
6. Low physical effort: systems should be designed to be comfortable to use, minimizing physical
effort and fatigue. The physical design of the system should allow the user to maintain a natural
posture with reasonable operating effort. Repetitive or sustained actions should be avoided.
7. Size and space for approach and use: the placement of the system should be such that it can
be reached and used by any user regardless of body size, posture or mobility. Important elements
should be on the line of sight for both seated and standing users. All physical components should be
comfortably reachable by seated or standing users. Systems should allow for variation in hand size and
provide enough room for assistive devices to be used.

Assistive technology is technology used by individuals with disabilities in order to perform functions
that might otherwise be difficult or impossible.
The following technologies help people use computers to access the web:
• Screen readers: Software used by blind or visually impaired people to read the content of the
computer screen. Examples include JAWS for Windows, NVDA, or Voiceover for Mac.
• Screen magnification software: Allow users to control the size of text and or graphics on the
screen. Unlike using a zoom feature, these applications allow the user to have the ability to see
the enlarged text in relation to the rest of the screen.
This is done by emulating a handheld magnifier over the screen.
• Text readers: Software used by people with various forms of learning disabilities that affect
their ability to read text. This software will read text with a synthesized voice and may have a
highlighter to emphasize the word being spoken. These applications do not read things such as
menus or types of elements - they only read the text.
• Speech input software: Provides people with difficulty in typing an alternate way to type text
and also control the computer. Users can give the system some limited commands to perform
mouse actions. Users can tell the system to click a link or a button or use a menu item.
Examples would be Dragon Naturally Speaking for Windows or Mac. Please note both Windows
and Mac have some speech recognition utilities, but they cannot be used to browse the web.
• Alternative input devices: Some users may not be able to use a mouse or keyboard to work on
a computer. These people can use various forms of devices, such as:
o Head pointers: A stick or object mounted directly on the user’s head that can be used
to push keys on the keyboard. This device is used by individuals who have no use of
their hands.
o Motion tracking or eye tracking: This can include devices that watch a target or even
the eyes of the user to interpret where the user wants to place the mouse pointer and
moves it for the user.
o Single switch entry devices: These kinds of devices can be used with other alternative
input devices or by themselves. These are typically used with onscreen keyboards. The
on-screen keyboard has a cursor move across the keys, and when the key the user
wants is in focus, the user will click the switch. This can also work on a webpage: the
cursor can move through the webpage, and if the user wants to click on a link or
button, then when that link or button is in focus, the user can activate the switch.
User modeling is the subdivision of human–computer interaction which describes the process of
building up and modifying a conceptual understanding of the user. The main goal of user modeling is
customization and adaptation of systems to the user's specific needs.
The system needs to "say the 'right' thing at the 'right' time in the 'right' way".[1] To do so it needs an
internal representation of the user. Another common purpose is modeling specific kinds of users,
including modeling of their skills and declarative knowledge, for use in automatic software-tests.[2]
User-models can thus serve as a cheaper alternative to user testing.

A user model is the collection and categorization of personal data associated with a specific user. A
user model is a (data) structure that is used to capture certain characteristics about an individual user,
and a user profile is the actual representation in a given user model. The process of obtaining the user
profile is called user modeling.[3] Therefore, it is the basis for any adaptive changes to the system's
behavior. Which data is included in the model depends on the purpose of the application. It can
include personal information such as users' names and ages, their interests, their skills and knowledge,
their goals and plans, their preferences and their dislikes or data about their behavior and their
interactions with the system.
 User modelling
 All help systems have a model of the user
 single, generic user (non-intelligent)
 user- configured model (adaptable)
 system-configure model (adaptive)
 Approaches
 quantification
 user moves between levels of expertise based on quantitative measure of what
he knows.
 stereotypes
 user is classified into a particular category.
 overlay
 an idealised model of expert use is constructed and actual use compared to it.
Model may contain the commonality between these two or the difference.
 Special case: user behaviour compared to known error catalogue (UT)

There are different design patterns for user models, though often a mixture of them is
used.[2][4]


 Static user models


Static user models are the most basic kinds of user models. Once the main data is gathered they are
normally not changed again, they are static. Shifts in users' preferences are not registered and no
learning algorithms are used to alter the model.
 Dynamic user models
Dynamic user models allow a more up to date representation of users. Changes in their interests, their
learning progress or interactions with the system are noticed and influence the user models. The
models can thus be updated and take the current needs and goals of the users into account.
 Stereotype based user models
Stereotype based user models are based on demographic statistics. Based on the gathered
information users are classified into common stereotypes. The system then adapts to this stereotype.
The application therefore can make assumptions about a user even though there might be no data
about that specific area, because demographic studies have shown that other users in this stereotype
have the same characteristics. Thus, stereotype based user models mainly rely on statistics and do not
take into account that personal attributes might not match the stereotype. However, they allow
predictions about a user even if there is rather little information about him or her.
 Highly adaptive user models
Highly adaptive user models try to represent one particular user and therefore allow a very high
adaptivity of the system. In contrast to stereotype based user models they do not rely on demographic
statistics but aim to find a specific solution for each user. Although users can take great benefit from
this high adaptivity, this kind of model needs to gather a lot of information first.
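A minimal sketch (in Python) of a stereotype-based user model with a hypothetical stereotype table; the groups and attributes are illustrative assumptions and do not reflect any specific user-modeling framework:

from dataclasses import dataclass

# Hypothetical stereotype table: user group -> assumed attributes.
STEREOTYPES = {
    "novice": {"prefers_wizards": True,  "show_tooltips": True},
    "expert": {"prefers_wizards": False, "show_tooltips": False},
}

@dataclass
class UserModel:
    name: str
    stereotype: str

    def attribute(self, key):
        # Fall back to the stereotype when no personal data is available,
        # accepting that an individual may not match the stereotype.
        return STEREOTYPES[self.stereotype][key]

user = UserModel(name="Alice", stereotype="novice")
print(user.attribute("show_tooltips"))   # True, inferred from the stereotype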

What is User Centered Design?


User-Centered Design (UCD) is the process of designing a tool, such as a website’s or application’s user
interface, from the perspective of how it will be understood and used by a human user.
User-centered design is an iterative design process in which designers focus on the users and
their needs in each phase of the design process. UCD calls for involving users throughout the design
process via a variety of research and design techniques so as to create highly usable and accessible
products for them

UCD is an Iterative Process


User-centered design demands that designers employ a mixture of investigative (e.g., surveys and
interviews) and generative (e.g., brainstorming) methods and tools to develop an understanding of
user needs.
Generally, each iteration of the UCD approach involves four distinct phases. First, designers attempt to
understand the context in which users may use a system. Subsequently, we identify and specify the
users’ requirements. A design phase follows, wherein the design team develops solutions. The team
then proceed to an evaluation phase, and assess the outcomes of the evaluation against the users’
context and requirements so as to check how well a design is performing—namely, how close it is to a
level that matches the users’ specific context and satisfies all of their relevant needs. From here, the
team makes further iterations of these four phases, continuing until the evaluation results are
satisfactory.


UCD Drawbacks
• Passive user involvement.
• User’s perception about the new interface may be inappropriate.
• Designers may ask incorrect questions to users.

What is Usability?
Usability is a measure of the interactive user experience associated with a user interface, such as a
website or software application. A user-friendly interface design is easy to learn, supports users’ tasks
and goals efficiently and effectively, and is satisfying and engaging to use.
The official definition of usability is: “the extent to which a product can be used by specified users to
achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”
Jeffrey Rubin describes usability objectives as:
• Usefulness - the product enables users to achieve their goals - the tasks that it was designed to carry
out and/or the wants and needs of the user.


• Effectiveness (ease of use) - quantitatively measured by speed of performance or error rate and
is tied to a percentage of users.
• Learnability - user's ability to operate the system to some defined level of competence after
some predetermined period of training. Also, refers to ability for infrequent users to relearn the
system.
• Attitude (likeability) - user's perceptions, feelings and opinions of the product, usually captured
through both written and oral communication.

A usable interface has three main outcomes:


1. It should be easy for the user to become familiar with and competent in using the user
interface during the first contact with the website. For example, if a travel agent’s website is a
well-designed one, the user should be able to move through the sequence of actions to book a
ticket quickly.
2. It should be easy for users to achieve their objective through using the website. If a user has
the goal of booking a flight, a good design will guide him/her through the easiest process to
purchase that ticket.
3. It should be easy to recall the user interface and how to use it on subsequent visits. So, a
good design on the travel agent’s site means the user should learn from the first time and book
a second ticket just as easily.
Principles of usability

Usefulness
• Value: The system should provide necessary utilities and address the real needs of users.
• Relevance: The information and functions provided to the user should be relevant to the user's
task and context.
Consistency
• Consistency and standards: Follow appropriate standards/conventions for the platform and the
suite of products. Within an application (or a suite of applications), make sure that actions,
terminology, and commands are used consistently.
• Real-world conventions: Use commonly understood concepts, terms and metaphors, follow
real-world conventions (when appropriate), and present information in a natural and logical
order.

Simplicity
• Simplicity: Reduce clutter and eliminate any unnecessary or irrelevant elements.
• Visibility: Keep the most commonly used options for a task visible (and the other options easily
accessible).
• Self-evidency: Design a system to be usable without instruction by the appropriate target user
of the system: if appropriate, by a member of the general public or by a user who has the
appropriate subject-matter knowledge but no prior experience with the system. Display data in
a manner that is clear and obvious to the appropriate user.
Communication
• Feedback: Provide appropriate, clear, and timely feedback to the user so that he sees the
results of his actions and knows what is going on with the system.


• Structure: Use organization to reinforce meaning. Put related things together, and keep
unrelated things separate.
• Sequencing: Organize groups of actions with a beginning, middle, and end, so that users know
where they are, when they are done, and have the satisfaction of accomplishment.
• Help and documentation: Ensure that any instructions are concise and focused on supporting
the user's task.
Error Prevention and Handling
• Forgiveness: Allow reasonable variations in input. Prevent the user from making serious errors
whenever possible, and ask for user confirmation before allowing a potentially destructive
action.
• Error recovery: Provide clear, plain-language messages to describe the problem and suggest a
solution to help users recover from any errors.
• Undo and redo: Provide "emergency exits" to allow users to abandon an unwanted action. The
ability to reverse actions relieves anxiety and encourages user exploration of unfamiliar
options.
Efficiency
• Efficacy: (For frequent use) Accommodate a user’s continuous advancement in knowledge and
skill. Do not impede efficient use by a skilled, experienced user.
• Shortcuts: (For frequent use) Allow experienced users to work more quickly by providing
abbreviations, function keys, macros, or other accelerators, and allowing customization or
tailoring of frequent actions.
• User control: (For experienced users) Make users the initiators of actions rather than the
responders to increase the users’ sense that they are in charge of the system.
Workload Reduction
• Supportive automation: Make the user’s work easier, simpler, faster, or more fun.
Automate unwanted workload.
• Reduce memory load: Keep displays brief and simple, consolidate and summarize data, and
present new information with meaningful aids to interpretation. Do not require the user to
remember information. Allow recognition rather than recall.
• Free cognitive resources for high-level tasks: Eliminate mental calculations, estimations,
comparisons, and unnecessary thinking. Reduce uncertainty.
Usability Judgment
• It depends: There will often be tradeoffs involved in design, and the situation, sound judgment, and
experience should guide how those tradeoffs are weighed.
• A foolish consistency...: There are times when it makes sense to bend or violate some of the
principles or guidelines, but make sure that the violation is intentional and appropriate.


Command Language-based Interface


A command language-based interface – as the name itself suggests, is based on designing a command
language which the user can use to issue the commands. The user is expected to frame the
appropriate commands in the language and type them in appropriately whenever required. A simple
command language-based interface might simply assign unique names to the different commands.
However, a more sophisticated command language-based interface may allow users to compose
complex commands by using a set of primitive commands. Such a facility to compose commands
dramatically reduces the number of command names one would have to remember. Thus, a command
language-based interface can be made concise requiring minimal typing by the user. Command
language-based interfaces allow fast interaction with the computer and simplify the input of complex
commands.
Disadvantages of command language-based interface
Command language-based interfaces suffer from several drawbacks. Usually, command language-
based interfaces are difficult to learn and require the user to memorize the set of primitive commands.
Also, most users make errors while formulating commands in the command language and also while
typing them in. Further, in a command language-based interface, all interaction with the system is
through a keyboard and cannot take advantage of effective interaction devices such as a mouse.
Obviously, for casual and inexperienced users, command language-based interfaces are not suitable.
Issues in designing a command language-based interface
Two overbearing command design issues are to reduce the number of primitive commands that a user
has to remember and to minimize the total typing required while issuing commands. These can be
elaborated as follows:
• The designer has to decide what mnemonics are to be used for the different commands. The
designer should try to develop meaningful mnemonics and yet be concise to minimize the amount
of typing required. For example, the shortest mnemonic should be assigned to the most frequently
used commands.
• The designer has to decide whether the users will be allowed to redefine the command names to
suit their own preferences. Letting a user define his own mnemonics for various commands is a
useful feature, but it increases the complexity of user interface development.
• The designer has to decide whether it should be possible to compose primitive commands to form
more complex commands. A sophisticated command composition facility would require the syntax
and semantics of the various command composition options to be clearly and unambiguously
specified. The ability to combine commands is a powerful facility in the hands of experienced
users, but quite unnecessary for inexperienced users.
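As an illustration of command composition (a sketch in Python, not from the report), a command language can let users chain a small set of primitive commands with a pipe-like operator, so that only a few command names need to be remembered:

# A toy command language with three primitive commands that can be
# composed with '|'. The command names are illustrative assumptions.
PRIMITIVES = {
    "list":  lambda data, arg=None: sorted(data),
    "find":  lambda data, arg=None: [item for item in data if arg in item],
    "count": lambda data, arg=None: [str(len(data))],
}

def run(command_line, data):
    # Run a composed command such as "find .txt | count".
    for part in command_line.split("|"):
        tokens = part.split()
        name, arg = tokens[0], (tokens[1] if len(tokens) > 1 else None)
        data = PRIMITIVES[name](data, arg)
    return data

files = ["notes.txt", "report.docx", "todo.txt"]
print(run("find .txt | count", files))   # -> ['2']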

What is a graphical user interface (GUI)?


A graphical user interface (GUI) is an interface, or interactive system, that allows professionals to
accomplish tasks on their computers through images and icons, rather than text command systems.
Graphical user interfaces appear in computers, tablet devices and mobile devices. These graphical user
interfaces can often be optimized to provide a more positive user experience.
Advantages of graphical user interface:
Easiness for non-technical people:
For non-technical people or beginners, a good GUI tends to make life easier. For example, with a
few clicks on buttons a user can easily get his work done. Software in shops for calculating products
sold and inventory can be managed even by a non-technical person. Similarly, listening to songs in the
car is easy for everyone.
Drag and drop feature:
In most software, we have drag and drop functionality by which complex tasks are managed easily,
like dragging and dropping folders. In mobile games it is also nice to use. In much graphical software,
drag and drop is very convenient.
Looks nicer than a text interface:
In a text interface, we have limited options to choose from and navigation is difficult. For less educated
people, a text interface is difficult to understand and use. In a GUI the user can use any tool by
recognizing symbols or buttons.
Hotkeys usage:
Sometimes we want several functions performed with a single action, so we use hotkeys: certain key
combinations or mouse clicks/movements by which a couple of actions are performed. This is very
handy for speeding up tasks.
User-friendly:
A user can easily navigate the system without knowing a lot of details. Easy setup and being ready to
start working quickly are valuable. Most software hides the complexity of actions from users; displaying
only the required information is key to a good interface.
Disabled people:
Modern systems can detect eye movement and finger movement, which is helpful for disabled people.
Much software now uses this functionality to make life easier for disabled people, who can then use
software and websites easily with a few actions.
Disadvantages of graphical user interface:
Difficult to develop and high cost:
Nice-looking designs are difficult to make and may also require extra hardware support. For example,
high-quality games consume a lot of device space and memory, and they also require very skilled
people to develop.
Slower than command line tools:
In command line tools like MS-DOS, we issue commands which do the work quickly. If we do the same
task in a GUI, it takes extra time to complete.
Extra attention required:
If we are driving a car, then controlling the music/radio in the car requires attention, which disturbs
our driving.
Using flat screens:
Some graphical elements do not display accurately on flat screens. In airplanes, physical sticks are
used to control most things because a flat screen display is not very handy. This is a limitation of GUIs.
Time consumption:
It takes a lot of time to develop and design a good-looking interface. If a bad interface is built, it
becomes difficult for the user to understand and use.
Memory resources:
Good GUIs often consume a lot of memory resources, which can make the system/device slow to
perform.
Implementation:
Testing and implementation take a lot of time, and we may require extra software for running GUIs.


Comparison between CLI and GUI:
• Basic: a command line interface enables a user to communicate with the system through commands;
a graphical user interface permits a user to interact with the system by using graphics, which include
images, icons, etc.
• Device used: keyboard (CLI) versus mouse and keyboard (GUI).
• Ease of performing tasks: in a CLI it is hard to perform an operation and expertise is required; in a
GUI tasks are easy to perform and do not require expertise.
• Precision: high (CLI) versus low (GUI).
• Flexibility: intransigent (CLI) versus more flexible (GUI).
• Memory consumption: low (CLI) versus high (GUI).
• Appearance: cannot be changed (CLI) versus custom changes can be employed (GUI).
• Speed: fast (CLI) versus slow (GUI).
• Integration and extensibility: scope for potential improvements (CLI) versus bounded (GUI).

What is Information Visualization?


Information visualization is the process of representing data in a visual and meaningful way so that a
user can better understand it. Dashboards and scatter plots are common examples of information
visualization.
Information visualization plays an important role in making data digestible and turning raw
information into actionable insights. It draws from the fields of human-computer interaction, visual
design, computer science, and cognitive science, among others. Examples include world map-style
representations, line graphs, and 3-D virtual building or town plan designs.
Information visualization is becoming increasingly interactive, especially when used in a
website or application. Being interactive allows for manipulation of the visualization by users, making it
highly effective in catering to their needs. With interactive information visualization, users are able to
view topics from different perspectives, and manipulate their visualizations of these until they reach
the desired insights. This is especially useful if users require an explorative experience.
It is increasingly applied as a critical component in scientific research, digital libraries, data
mining, financial data analysis, market studies, manufacturing production control, and drug discovery.

Natural language
Perhaps the most attractive means of communicating with computers, at least at first glance, is by
natural language. Users, unable to remember a command or lost in a hierarchy of menus, may long for
the computer that is able to understand instructions expressed in everyday words! Natural language
understanding, both of speech and written input, is the subject of much interest and research.
Unfortunately, however, the ambiguity of natural language makes it very difficult for a machine to

understand. Language is ambiguous at a number of levels. First, the syntax, or structure, of a phrase
may not be clear.

Requirements – what is wanted: The first stage is establishing what exactly is needed.
Analysis: The results of observation and interview need to be ordered in some way to bring out key
issues and communicate with later stages of design.
Design: This is all about design, but there is a central stage when you move from what you want to
how to do it.
Iteration and prototyping: Humans are complex and we cannot expect to get designs right first time.
We therefore need to evaluate a design to see how well it is working and where there can be
improvements.
Implementation and deployment: Finally, when we are happy with our design, we need to create it
and deploy it. This will involve writing code, perhaps making hardware, and writing documentation and
manuals.

Help and Documentation


 Users require different types of support at different times but all user support should
fulfil some basic requirements.
 Implementation and presentation both need to be considered in designing user
support.
 Types of user support
 quick reference
 task specific help (context sensitive)
 full explanation
 tutorial
 These may be provided by help and/or documentation
 help
 problem-oriented and specific


 documentation
 system-oriented and general
 The same design principles apply to both

Characteristics
 Availability
 continuous access concurrent to main application.
 Accuracy and completeness
 help matches actual system behaviour and covers all aspects of system behaviour.
 Consistency
 different parts of the help system and any paper documentation are consistent in content,
terminology and presentation.
 Robustness
 correct error handling and predictable behaviour.
 Flexibility
 allows user to interact in a way appropriate to experience and task.
 Unobtrusiveness
 does not prevent the user continuing with work nor interfere with application
