Human Computer Interaction 001
A Seminar Report on Human Computer Interaction
2024-25
Acknowledgement
The successful completion of this project is the result not only of our own efforts but of the efforts of many people: those who trusted us, guided us and encouraged us by every means. A guide is a person who provides direction towards success, so it is a great pleasure to express our gratitude to our guides, our faculty members and every person who helped us, directly or indirectly, with this project.
We are also indebted to Asst. Prof. Dr. Hetal Modi, who provided constant encouragement, support and valuable guidance before and during our project. It was her effort that led us to this project work, and her guidance and suggestions were invaluable.
We are also thankful to all our faculty members, and especially to our Principal, Dr. Vikram Kaushik, for giving us the opportunity to undertake this project. Thank you very much.
2202020101008
Sr. No.  Topic
1.  Definition
2.  Goals of HCI
3.  HCI Benefits
4.  Why is HCI important?
5.  HCI Frameworks/Models
6.  Introduction to Evaluation
7.  Goals of Evaluation
8.  Predictive Evaluation
9.  Predictive Evaluation Methods
10. Advantages/Disadvantages
11. Cognitive Models
12. Predictive Evaluation
13. Keystroke-Level Model (KLM)
• Human means an individual user, a group of users working together, or a sequence of users in an
organization, each dealing with some part of the task or process. The user is whoever is trying to
get the job done using the technology.
• Computer means any technology, ranging from the general desktop computer to a large-scale
computer system, a mobile phone or an embedded system.
• By interaction we mean any communication between a user and computer, be it direct or indirect.
Direct interaction involves a dialog with feedback and control throughout performance of the
task.
Indirect interaction may involve batch processing or intelligent sensors controlling the
environment.
Goals of HCI
• A basic goal of HCI is to improve the interactions between users and computers by making
computers more usable and receptive to the user's needs.
Specifically, HCI is concerned with:
1. Methodologies and processes for designing interfaces (i.e., given a task and a class of users,
design the best possible interface within given constraints, optimizing for a desired property
such as learning ability or efficiency of use)
2. Methods for implementing interfaces (e.g. software toolkits and libraries; efficient algorithms)
3. Techniques for evaluating and comparing interfaces
4. Developing new interfaces and interaction techniques
5. Developing descriptive and predictive models and theories of interaction
• A long term goal of HCI is to design systems that minimize the barrier between the human’s
cognitive model of what they want to accomplish and the computer’s understanding of the user’s
task.
• Understand the factors that determine how people use technology.
• At physical level, HCI concerns selecting the most appropriate input devices and output devices for
a particular interface or task.
• Determine the best type of interaction, such as direct manipulation, natural language, icons,
menus.
• For systems that include computers, develop or improve-
1. Safety - protecting the user from dangerous conditions and undesirable situations.
2. Utility - refers to the extent to which the system provides the right kind of functionality, so that users can do what they need or want to do.
3. Effectiveness - a very general goal that refers to how good a system is at doing what it is supposed to do.
4. Efficiency - a measure of how quickly users can accomplish their goals or finish their work using the system.
5. Usability - generally regarded as ensuring that interactive products are easy to learn, effective to use, and enjoyable from the user's perspective.
6. Appeal - how well the user likes the system.
7. Learnability - how easy a system is to learn to use.
8. Memorability - how easy it is to remember how to use a system, once learned.
HCI Benefits
1. Gaining market share - people tend to buy/use products with higher usability.
2. Improving productivity - employees in a company perform their jobs faster.
3. Lowering support costs - if the product is not usable, calls to customer support can be enormous.
4. Reducing development cost - avoid implementing features users don't want, and creating features that are annoying or inefficient.
Each stage is, of course, an activity of the user. First the user forms a goal. This is the user’s notion of
what needs to be done and is framed in terms of the domain, in the task language. It is liable to be
imprecise and therefore needs to be translated into the more specific intention, and the actual actions
that will reach the goal, before it can be executed by the user. The user perceives the new state of the
system, after execution of the action sequence, and interprets it in terms of his expectations. If the
system state reflects the user’s goal then the computer has done what he wanted and the interaction
has been successful; otherwise the user must formulate a new goal and repeat the cycle.
The new state of the System must then be communicated to the User. The current values of system attributes are rendered as concepts or
features of the Output. It is then up to the User to observe the Output and assess the results of the
interaction relative to the original goal, ending the evaluation phase and, hence, the interactive cycle.
There are four main translations involved in the interaction: articulation, performance,
presentation and observation.
HCI Framework
Frameworks provide a means of discussing the details of a particular interaction, as well as a basis for discussing other issues that relate to the interaction.
The ACM SIGCHI Curriculum Development Group presents a framework for HCI and uses it to place
different areas that relate to HCI.
The field of ergonomics addresses issues on the user side of the interface, covering both input
and output, as well as the user’s immediate context. Dialog design and interface styles can be placed
particularly along the input branch of the framework, addressing both articulation and performance.
However, dialog is most usually associated with the computer and so is biased to that side of the
framework.
Presentation and screen design relates to the output branch of the framework. The entire
framework can be placed within a social and organizational context that also affects the interaction.
Each of these areas has important implications for the design of interactive systems and the
performance of the user.
Goals of Evaluation
Evaluation has three main goals:
1. To assess the extent and accessibility of the system’s functionality.
The system’s functionality is important in that it must accord with the user’s requirements. In other
words, the design of the system should enable users to perform their intended tasks more easily. This
includes not only making the appropriate functionality available within the system, but making it
clearly reachable by the user in terms of the actions that the user needs to take to perform the task. It
also involves matching the use of the system to the user’s expectations of the task.
2. To assess users’ experience of the interaction
This includes considering aspects such as how easy the system is to learn, its usability and the user’s
satisfaction with it. It may also include his enjoyment and emotional response, particularly in the case
of systems that are aimed at leisure or entertainment. It is important to identify areas of the design
that overload the user in some way, perhaps by requiring an excessive amount of information to be
remembered.
3. To identify any specific problems with the system.
The final goal of evaluation is to identify specific problems with the design. These may be aspects of
the design which, when used in their intended context, cause unexpected results, or confusion
amongst users. This is, of course, related to both the functionality and usability of the design
(depending on the cause of the problem). However, it is specifically concerned with identifying
trouble-spots which can then be rectified.
Predictive Evaluation
• Basis:
– Observing users can be time-consuming and expensive.
– Try to predict usage rather than observing it directly.
– Conserve resources (quick & low cost).
• Approach:
1. Expert reviews (frequently used)
HCI experts interact with the system, try to find potential problems, and give prescriptive feedback. Expert reviews work best if the experts:
• haven't used an earlier prototype,
• are familiar with the domain or task, and
• understand user perspectives.
2. Heuristic Evaluation
A heuristic is a guideline or general principle or rule of thumb that can guide a design decision or
be used to critique a decision that has already been made.
Heuristic evaluation, developed by Jakob Nielsen and Rolf Molich, is a method for structuring the
critique of a system using a set of relatively simple and general heuristics.
Heuristic evaluation can be performed on a design specification so it is useful for evaluating early
design. But it can also be used on prototypes, storyboards and fully functioning systems.
It is therefore a flexible, relatively cheap approach. Hence it is often considered a discount usability
technique.
The general idea behind heuristic evaluation is that several evaluators independently critique a
system to come up with potential usability problems. It is important that there be several of these
evaluators and that the evaluations be done independently.
Heuristic evaluation 'debugs' a design.
• Advantages
– Cheap; good for small companies that can't afford more.
– Having someone practiced in the method is valuable.
• Disadvantages (somewhat controversial)
– Very subjective assessment of problems.
– Depends on the expertise of the reviewers.
Cognitive Models
• Cognitive models represent users of interactive systems.
• Cognitive models capture the user’s thought (cognitive) process during interaction.
• Cognitive models attempt to represent the users as they interact with a system, modeling aspects
of their understanding, knowledge, intentions or processing.
• Although cognitive models are models of the human thinking process, they are not treated in exactly that way in HCI.
• Since interaction is involved, cognitive models in HCI model not only human cognition (thinking), but also perception and motor action (as interaction requires 'perceiving what is in front' and 'acting' after decision making). Thus, cognitive models in HCI should be considered models of human perception (perceiving the surroundings), cognition (thinking in the 'mind') and motor action (the result of thinking, such as hand movement, eye movement etc.).
Predictive Models
Predictive models are widely used in many disciplines. In human-computer interaction, predictive models allow metrics of human performance to be determined analytically, without undertaking time-consuming and resource-intensive experiments.
GOMS model
• The GOMS model is a description of the knowledge that a user must have in order to carry out tasks on a device or system.
• It is a representation of the "how to do it" knowledge that is required in order to get the intended tasks accomplished.
• The model is used to analyze a user's physical, cognitive and perceptual interactions with a computer while achieving a task or goal in the best possible way.
• It describes the Goals, Operators, Methods, and Selection rules needed to perform a task.
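To make the four components concrete, the following sketch encodes a GOMS description of a hypothetical "delete a file" task as plain data. The goal, methods and selection rule are invented for illustration, not taken from this report.

```python
# A minimal sketch of a GOMS description encoded as plain data.
# The goal, methods and selection rule below are hypothetical examples.
goms = {
    "goal": "delete a file",
    "methods": {
        # Each method is a sequence of operators (elementary actions).
        "drag-to-trash": ["point at file", "press button", "drag to trash", "release"],
        "keyboard": ["select file", "press Delete key"],
    },
    # Selection rule: prefer the keyboard method when the hands are
    # already on the keyboard, otherwise use the mouse method.
    "selection_rule": lambda hands_on_keyboard: "keyboard" if hands_on_keyboard else "drag-to-trash",
}

chosen = goms["selection_rule"](True)
print(chosen)                   # the method the selection rule picks
print(goms["methods"][chosen])  # its operator sequence
```

An analyst would then attach a time estimate to each operator, which is exactly what the Keystroke-Level Model below does.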
Advantages of GOMS
2. With GOMS, an analyst can estimate a particular interaction and calculate it quickly and easily. This is only possible if the average Methods-Time Measurement data for each specific task has previously been measured experimentally to a high degree of accuracy.
Disadvantages of GOMS
1. GOMS applies only to skilled users. It does not work for beginners or intermediate users, as errors may occur which can alter the data.
2. The model doesn't apply to learning the system, or to a user returning to the system after a long period of not using it.
3. Mental workload is not addressed in the model, making this an unpredictable variable.
4. GOMS addresses only the usability of a task on a system; it does not address its functionality.
Keystroke-Level Model (KLM)
• The KLM was developed as a practical design tool and allows a designer to ‘predict’ the time it
takes for an average user to execute a task using an interface and interaction method.
• In KLM, it is assumed that any decision making task is composed of a series of ‘elementary’
cognitive (mental) steps, that are executed in sequence.
• These 'elementary' steps essentially represent low-level cognitive activities, which cannot be decomposed any further.
The model predicts expert error-free task completion times, given the following input parameters:
• a task or series of sub-tasks
• method used
• command language of the system
• motor skill parameters of the user
• response time parameters of the system
A KLM prediction is the sum of the sub-task times and the required overhead. The model includes four motor-control operators (K = keystroking, P = pointing, H = homing, D = drawing), one mental operator (M), and one system response operator (R):
T_execute = t_K + t_P + t_H + t_D + t_M + t_R
Some of the operations above are omitted or repeated, depending on the task.
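As a sketch of how such a prediction is computed, the following sums operator times for a hypothetical task. The operator values are approximate textbook figures and the PIN-typing task is invented for illustration; the drawing operator D is omitted here because its time depends on the number and length of drawn segments.

```python
# Sketch of a KLM prediction: sum the operator times for a task.
# Operator times in seconds; these are approximate textbook figures
# (assumptions), not values from this report.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # pointing at a target with a mouse
    "H": 0.40,  # homing the hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "R": 0.00,  # system response (task-dependent; assumed instant here)
}

def klm_time(operator_sequence):
    """Predicted error-free execution time for a string such as 'MKKKK'."""
    return sum(OPERATOR_TIMES[op] for op in operator_sequence)

# Hypothetical task: mentally prepare (M), then type a four-digit PIN (KKKK).
print(round(klm_time("MKKKK"), 2))  # 1.35 + 4 * 0.28 = 2.47 s
```

In practice the hardest part of a KLM analysis is deciding where to place the M operators, not the arithmetic itself.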
Advantages
The KLM was designed to be a quick and easy-to-use system design tool, which means that no deep knowledge of psychology is required for its usage. Also, task times can be predicted (given the limitations) without having to build a prototype or recruit and test users, which saves time and money.
Limitations
• It measures only one aspect of performance: time, which means execution time and not the time
to acquire or learn a task.
• It considers only expert users. Generally, users differ regarding their knowledge and experience of
different systems and tasks, motor skills and technical ability.
• It considers only routine unit tasks.
• The method has to be specified step by step.
• The execution of the method has to be error-free.
The predictive models are:
• Keystroke-level model (KLM)
• Throughput (TP)
• Fitts' law
Fitts’ Law
• It is one of the earliest predictive models used in HCI (and among the best-known models in HCI).
• It was first proposed by P. M. Fitts (hence the name) in 1954.
"The law states that the time it takes to move to a target is a function of the length of the movement and the size of the target."
In other words, the bigger and the closer the target, the faster it is acquired.
Fitts' law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device.
• Another important aspect of Fitts' law is that it is both a descriptive and a predictive model.
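As an illustration, the Shannon formulation commonly used in HCI, MT = a + b log2(D/W + 1), can be computed directly. The constants a and b below are placeholders; in practice they are fitted from experimental data for a given device and user population.

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Predicted movement time (s) in the Shannon formulation:
    MT = a + b * log2(D / W + 1), where D is the distance to the
    target and W its width. The default a and b are placeholders,
    not measured values."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A bigger target at the same distance is predicted to be faster to acquire:
print(fitts_movement_time(512, 64) < fitts_movement_time(512, 16))  # True
```

The log term (the index of difficulty) is what makes the model both descriptive, as a way to characterise pointing tasks, and predictive, once a and b are fitted.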
Hick-Hyman law
The Hick-Hyman law, is another important model in HCI but not as widely used as Fitts’ law. Hick-
Hyman law can be described as the average reaction time given a set of choices with equal probability.
In other words, it can be said that this law shows the time it takes a user to make a decision based on
the set of choices.
Hick-Hyman law for choice reaction time. This law takes the form of a prediction equation. Given a set
of n stimuli, associated one-for-one with n responses, the time to react (RT) to the onset of a stimulus
and make the appropriate response is given by
RT = a + b log2(n)
where a and b are empirically determined constants.
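The equation can be applied directly; the constants below are placeholders rather than measured values, since a and b must be determined empirically for a given interface.

```python
import math

def hick_hyman_rt(n, a=0.2, b=0.15):
    """Average reaction time RT = a + b * log2(n) for n equally
    likely choices. The default a and b are placeholder constants."""
    return a + b * math.log2(n)

# Doubling the number of choices adds a constant b seconds to RT:
print(round(hick_hyman_rt(8) - hick_hyman_rt(4), 2))  # 0.15
```

This logarithmic growth is why, for example, a menu with eight items is predicted to slow decisions only modestly compared with a menu of four.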
Descriptive models
Descriptive models are of a vastly different genre than predictive models. Although they generally do
not yield an empirical or quantitative measure of user performance, their utility is by no means second
to predictive models. Simply put, descriptive models provide a framework or context for thinking
about or describing a problem or situation.
– provide a basis for understanding, reflecting, and reasoning about certain facts and interactions
– provide a conceptual framework that simplifies a (potentially real) system
– are used to inspect an idea or a system and make statements about its probable characteristics
– are used to reflect on a certain subject
– can reveal flaws in the design and style of interaction
Buxton's three-state model: this model describes pointing devices such as a mouse or a stylus in terms of the states the device can be in and the transitions between them, capturing how a user presses, moves and releases the device during an action. There are three states: State 0 (out of range), State 1 (tracking) and State 2 (dragging).
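These state transitions (State 0 = out of range, State 1 = tracking, State 2 = dragging, the standard labels in Buxton's model for a stylus-like device) can be sketched as a small transition table; the event names are illustrative.

```python
# Sketch of Buxton's three-state model as a transition table.
# States: 0 = out of range, 1 = tracking, 2 = dragging.
TRANSITIONS = {
    (0, "stylus touches surface"): 1,
    (1, "stylus lifts"): 0,
    (1, "button press"): 2,
    (2, "button release"): 1,
}

def step(state, event):
    """Return the next state; stay in the current state if the event
    does not apply (e.g. a button press while out of range)."""
    return TRANSITIONS.get((state, event), state)

state = 0
for event in ["stylus touches surface", "button press", "button release"]:
    state = step(state, event)
print(state)  # back to tracking (state 1) after a complete drag
```

A mouse, which is always sensed, only occupies states 1 and 2; a touchscreen finger only states 0 and 2. Comparing which states a device can reach is how the model is used to compare input devices.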
Ubiquitous computing (or "ubicomp") is a concept in software engineering and computer science
where computing is made to appear anytime and everywhere. In contrast to desktop computing,
ubiquitous computing can occur using any device, in any location, and in any format.
Three Waves of Computing
1. Mainframe computing (1960s-70s)
2. PC computing
3. Ubiquitous computing
Ubiquitous computing (ubicomp) is a post-desktop model of human-computer interaction (HCI) in which information processing has been thoroughly integrated into everyday objects and activities. HCI is an area of growing importance, as the world is becoming more and more digital and we want digital systems to behave as closely as possible to a human being. This requires the devices, or information processing systems, to mimic human behaviour precisely.
Older HCI projects focused on software that people used for extended sessions in front of a
screen. Inventing a new menuing system, or analyzing user-error rates, or developing a visualization for
some productivity software would make a great old HCI paper. Ubicomp projects typically involve
some aspect of "context" e.g. something that the system senses in the environment, like GPS location,
or an accelerometer on a fitness bracelet. Some ubicomp work uses social science methods to study
how people use technology out in the world, without building a project.
Pervasive computing:
There seems to be a bit of a debate about whether they are two different things or not. From my
reading and understanding, they aren’t the same but are related. Ubiquitous computing means that
computers are everywhere, embedded, invisible, and/or transparent.
On the other hand, pervasive computing means mobile computing: computing like smart phones and other hand-held devices that can go anywhere.
A tangible user interface (TUI) is a user interface in which a person interacts with digital information
through the physical environment.
• Tangible user interfaces (TUIs) use physical objects to control the computer, most often a
collection of objects arranged on a tabletop to act as ‘physical icons’.
• An immediate problem is that physical objects don’t change their visible state very easily.
• One advantage of the tangible user interface is the user experience, because a physical interaction occurs between the user and the interface itself (e.g. SandScape: building your own landscape with sand). Another advantage is usability, because the user intuitively knows how to use the interface by knowing the function of the physical object, so the user does not need to learn the functionality. That is why the tangible user interface is often used to make technology more accessible for elderly people.
Differences: Tangible and Graphical user interface
The graphical user interface is a form of user interface that allows users to interact with electronic
devices through graphical icons and visual indicators.
whereas a tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment.
A graphical user interface exists only in the digital world, whereas the TUI connects the digital with the physical world.
A tangible user interface is usually built for one specific target group, whereas the graphical user interface has a wide range of usages in one interface; because of that, it targets a large group of possible users.
Tangible vs. graphical user interface, by attribute:
• Amount of possible application areas
– Tangible user interface: built for one specific application area
– Graphical user interface: built for many kinds of application areas
• How the system is driven
– Tangible user interface: physical objects, such as a mouse or a keyboard
– Graphical user interface: based on graphical bits, such as pixels on the screen
• Coupling between cognitive bits and the physical output
– Tangible user interface: unmediated connection
– Graphical user interface: indirect connection
• How user experience is driven
– Tangible user interface: the user already knows the function of the interface by knowing the function of the physical objects
– Graphical user interface: the user explores the functionality of the interface
5. Tolerance for error: minimizing the impact and damage caused by mistakes or unintended
behavior. Potentially dangerous situations should be removed or made hard to reach. Potential
hazards should be shielded by warnings. Systems should fail safe from the user’s perspective and users
should be supported in tasks that require concentration.
6. Low physical effort: systems should be designed to be comfortable to use, minimizing physical
effort and fatigue. The physical design of the system should allow the user to maintain a natural
posture with reasonable operating effort. Repetitive or sustained actions should be avoided.
7. Size and space for approach and use: the placement of the system should be such that it can
be reached and used by any user regardless of body size, posture or mobility. Important elements
should be on the line of sight for both seated and standing users. All physical components should be
comfortably reachable by seated or standing users. Systems should allow for variation in hand size and
provide enough room for assistive devices to be used.
Assistive technology is technology used by individuals with disabilities in order to perform functions
that might otherwise be difficult or impossible.
The following technologies help people use computers to access the web:
• Screen readers: Software used by blind or visually impaired people to read the content of the
computer screen. Examples include JAWS for Windows, NVDA, or Voiceover for Mac.
• Screen magnification software: Allows users to control the size of text and/or graphics on the screen. Unlike a zoom feature, these applications allow the user to see the enlarged text in relation to the rest of the screen. This is done by emulating a handheld magnifier over the screen.
• Text readers: Software used by people with various forms of learning disabilities that affect
their ability to read text. This software will read text with a synthesized voice and may have a
highlighter to emphasize the word being spoken. These applications do not read things such as
menus or types of elements - they only read the text.
• Speech input software: Provides people who have difficulty typing with an alternative way to type text and also to control the computer. Users can give the system limited commands to perform mouse actions, such as telling the system to click a link or a button, or to use a menu item. An example is Dragon Naturally Speaking for Windows or Mac. Note that both Windows and Mac have some built-in speech recognition utilities, but they cannot be used to browse the web.
• Alternative input devices: Some users may not be able to use a mouse or keyboard to work on
a computer. These people can use various forms of devices, such as:
o Head pointers: A stick or object mounted directly on the user’s head that can be used
to push keys on the keyboard. This device is used by individuals who have no use of
their hands.
o Motion tracking or eye tracking: This can include devices that watch a target or even
the eyes of the user to interpret where the user wants to place the mouse pointer and
moves it for the user.
o Single switch entry devices: These kinds of devices can be used with other alternative
input devices or by themselves. These are typically used with onscreen keyboards. The
on-screen keyboard has a cursor move across the keys, and when the key the user
wants is in focus, the user will click the switch. This can also work on a webpage: the cursor can move through the webpage, and when the link or button the user wants to click is in focus, the user can activate the switch.
User modeling is the subdivision of human–computer interaction which describes the process of
building up and modifying a conceptual understanding of the user. The main goal of user modeling is
customization and adaptation of systems to the user's specific needs.
The system needs to "say the 'right' thing at the 'right' time in the 'right' way".[1] To do so it needs an
internal representation of the user. Another common purpose is modeling specific kinds of users,
including modeling of their skills and declarative knowledge, for use in automatic software-tests.[2]
User-models can thus serve as a cheaper alternative to user testing.
A user model is the collection and categorization of personal data associated with a specific user. A
user model is a (data) structure that is used to capture certain characteristics about an individual user,
and a user profile is the actual representation in a given user model. The process of obtaining the user
profile is called user modeling.[3] Therefore, it is the basis for any adaptive changes to the system's
behavior. Which data is included in the model depends on the purpose of the application. It can
include personal information such as users' names and ages, their interests, their skills and knowledge,
their goals and plans, their preferences and their dislikes or data about their behavior and their
interactions with the system.
User modelling
All help systems have a model of the user:
• single, generic user (non-intelligent)
• user-configured model (adaptable)
• system-configured model (adaptive)
Approaches:
• quantification: the user moves between levels of expertise based on a quantitative measure of what he knows.
• stereotypes: the user is classified into a particular category.
• overlay: an idealised model of expert use is constructed, and actual use is compared to it. The model may contain the commonality between these two, or the difference. A special case: user behaviour is compared to a known error catalogue (UT).
There are different design patterns for user models, though often a mixture of them is used.[2][4]
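As a toy illustration of the overlay approach, an idealised expert command set can be compared with a user's observed commands. The command names below are invented for illustration.

```python
# Toy sketch of the "overlay" user-modelling approach: an idealised
# expert model is compared with the user's observed behaviour.
EXPERT_COMMANDS = {"open", "save", "search", "macro", "split-view"}

def overlay_model(observed_commands):
    """Return the commonality with, and difference from, the expert model."""
    observed = set(observed_commands)
    return {
        "commonality": observed & EXPERT_COMMANDS,  # what the user shares with the expert
        "difference": EXPERT_COMMANDS - observed,   # what the user has not (yet) used
    }

model = overlay_model(["open", "save", "open"])
print(sorted(model["difference"]))  # expert commands the user never used
```

A help system could use the "difference" set to decide which features to suggest or explain next.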
UCD Drawbacks
• Passive user involvement.
• User’s perception about the new interface may be inappropriate.
• Designers may ask incorrect questions to users.
What is Usability?
Usability is a measure of the interactive user experience associated with a user interface, such as a website or software application. A user-friendly interface design is easy to learn, supports users' tasks and goals efficiently and effectively, and is satisfying and engaging to use.
The official definition of usability is: “the extent to which a product can be used by specified users to
achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”
Jeffrey Rubin describes usability objectives as:
• Usefulness - the product enables users to achieve their goals: the tasks that it was designed to carry out and/or the wants and needs of the user.
• Effectiveness (ease of use) - quantitatively measured by speed of performance or error rate and
is tied to a percentage of users.
• Learnability - the user's ability to operate the system to some defined level of competence after some predetermined period of training; also refers to the ability of infrequent users to relearn the system.
• Attitude (likeability) - user's perceptions, feelings and opinions of the product, usually captured
through both written and oral communication.
Simplicity
• Simplicity: Reduce clutter and eliminate any unnecessary or irrelevant elements.
• Visibility: Keep the most commonly used options for a task visible (and the other options easily
accessible).
• Self-evidency: Design a system to be usable without instruction by the appropriate target user
of the system: if appropriate, by a member of the general public or by a user who has the
appropriate subject-matter knowledge but no prior experience with the system. Display data in
a manner that is clear and obvious to the appropriate user.
Communication
• Feedback: Provide appropriate, clear, and timely feedback to the user so that he sees the
results of his actions and knows what is going on with the system.
• Structure: Use organization to reinforce meaning. Put related things together, and keep
unrelated things separate.
• Sequencing: Organize groups of actions with a beginning, middle, and end, so that users know
where they are, when they are done, and have the satisfaction of accomplishment.
• Help and documentation: Ensure that any instructions are concise and focused on supporting
the user's task.
Error Prevention and Handling
• Forgiveness: Allow reasonable variations in input. Prevent the user from making serious errors
whenever possible, and ask for user confirmation before allowing a potentially destructive
action.
• Error recovery: Provide clear, plain-language messages to describe the problem and suggest a
solution to help users recover from any errors.
• Undo and redo: Provide "emergency exits" to allow users to abandon an unwanted action. The
ability to reverse actions relieves anxiety and encourages user exploration of unfamiliar
options.
Efficiency
• Efficacy: (For frequent use) Accommodate a user’s continuous advancement in knowledge and
skill. Do not impede efficient use by a skilled, experienced user.
• Shortcuts: (For frequent use) Allow experienced users to work more quickly by providing
abbreviations, function keys, macros, or other accelerators, and allowing customization or
tailoring of frequent actions.
• User control: (For experienced users) Make users the initiators of actions rather than the
responders to increase the users’ sense that they are in charge of the system.
Workload Reduction
• Supportive automation: Make the user’s work easier, simpler, faster, or more fun.
Automate unwanted workload.
• Reduce memory load: Keep displays brief and simple, consolidate and summarize data, and
present new information with meaningful aids to interpretation. Do not require the user to
remember information. Allow recognition rather than recall.
• Free cognitive resources for high-level tasks: Eliminate mental calculations, estimations,
comparisons, and unnecessary thinking. Reduce uncertainty.
Usability Judgment
• It depends: There will often be tradeoffs involved in design, and the situation, sound judgment and experience should guide how those tradeoffs are weighed.
• A foolish consistency...: There are times when it makes sense to bend or violate some of the
principles or guidelines, but make sure that the violation is intentional and appropriate.
sold and inventory can be better managed by an even nontechnical guy. Similarly listening songs in the
car is easy for everyone.
Drag and drop feature:-
Most software offers drag-and-drop functionality, which makes complex tasks easy to manage, such
as dragging and dropping folders. It is also pleasant to use in mobile games, and it works very well
in much graphical software.
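Underneath the visual gesture, drag and drop is a simple operation: remove an item from its source container and add it to the target. A minimal model, with invented folder and file names:

```python
# Minimal model of what a drag-and-drop gesture performs: move an item
# from one container to another. Folder/file names are illustrative.

folders = {"Desktop": ["report.doc", "photo.png"], "Archive": []}

def drag_and_drop(item, source, target):
    """Move `item` from the `source` folder to the `target` folder."""
    folders[source].remove(item)
    folders[target].append(item)
```

The GUI's contribution is that the user expresses this operation by direct manipulation rather than by typing a move command.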
Looks nicer than a text interface:-
A text interface offers limited options and makes navigation difficult. For non-technical users, a
text interface is hard to understand and use, whereas in a GUI the user can operate any tool by
recognizing symbols or buttons.
Hotkey usage:-
Sometimes we want several actions performed with a single keystroke or click, and hotkeys provide
this: certain buttons, key combinations, or mouse movements trigger a whole sequence of actions.
This is very handy for speeding up tasks.
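A hotkey that bundles several actions into one keystroke can be sketched as a dispatch table whose values are lists of actions. The key name and action functions below are invented for illustration:

```python
# Sketch of a macro-style hotkey: one key combination runs a sequence
# of actions. All names here are invented for illustration.

log = []

def spell_check(): log.append("spell-checked")
def save():        log.append("saved")
def close():       log.append("closed")

# One hotkey, several actions performed in order
hotkeys = {"Ctrl+Shift+Q": [spell_check, save, close]}

def run_hotkey(combo):
    for action in hotkeys.get(combo, []):
        action()
```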
User-friendly:-
A user can easily navigate the system without knowing many details. Easy setup and being ready to
start working immediately are valuable. Most software hides the complexity of its actions from the
user, and displaying only the required information is key to a good interface.
Disabled people:-
Modern technology can detect eye and finger movements, which is helpful for disabled people. Most
software now uses this functionality to make life easier for them: they can use software and
websites with only a few actions.
Disadvantages of graphical user interface:-
Difficult to develop and high cost:-
Attractive designs are difficult to build and may require extra hardware support. High-quality
games, for example, consume a lot of device space and memory, and developing them requires very
skilled people.
Slower than command-line tools:-
In a command-line environment such as MS-DOS, a few commands complete the work quickly, but
performing the same task through a GUI often takes extra time.
Extra attention required:-
While driving a car, controlling the music or radio requires attention, which distracts from
driving.
Using flat screens:-
Some graphical elements do not display accurately on flat screens. In airplanes, physical sticks
are used to control most functions because a flat-screen display is not very practical there; this
is a limitation of GUIs.
Time consumption:-
It takes a lot of time to develop and design a good-looking interface, and a badly built interface
makes the system difficult for the user to understand and use.
Memory resources:-
Many good GUIs consume a lot of memory resources, which can make the system or device slow.
Implementation:-
Testing and implementation take a lot of time, and extra software may be required to run a GUI.
Natural language
Perhaps the most attractive means of communicating with computers, at least at first glance, is by
natural language. Users, unable to remember a command or lost in a hierarchy of menus, may long for
the computer that is able to understand instructions expressed in everyday words! Natural language
understanding, both of speech and written input, is the subject of much interest and research.
Unfortunately, however, the ambiguity of natural language makes it very difficult for a machine to
understand. Language is ambiguous at a number of levels. First, the syntax, or structure, of a phrase
may not be clear.
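The syntactic ambiguity mentioned above can be made concrete: the same sentence often admits two valid structures, so a machine cannot choose between them from the words alone. A sketch representing two parses of a classically ambiguous sentence as nested tuples:

```python
# Two valid structures for one sentence: the prepositional phrase can
# attach to the object or to the verb. Parses shown as nested tuples.

sentence = "I saw the man with the telescope"

# Reading 1: the man has the telescope
parse_1 = ("I", ("saw", ("the man", "with the telescope")))

# Reading 2: I used the telescope to see him
parse_2 = ("I", (("saw", "the man"), "with the telescope"))

parses = [parse_1, parse_2]
```

A human resolves the attachment from context; a program sees only the strings, which is exactly the difficulty the paragraph describes.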
Requirements – what is wanted: The first stage is establishing what exactly is needed.
Analysis: The results of observation and interview need to be ordered in some way to bring out key
issues and communicate with later stages of design.
Design: Well, this is all about design, but there is a central stage when you move from what you
want to how to do it.
Iteration and prototyping: Humans are complex and we cannot expect to get designs right first time.
We therefore need to evaluate a design to see how well it is working and where there can be
improvements.
Implementation and deployment: Finally, when we are happy with our design, we need to create it
and deploy it. This will involve writing code, perhaps making hardware, and writing documentation
and manuals.
Help and documentation
Help is problem-oriented and specific, whereas documentation is system-oriented and general. The
same design principles apply to both.
Characteristics
Availability: continuous access, concurrent with the main application.
Accuracy and completeness: the help matches actual system behaviour and covers all aspects of it.
Consistency: different parts of the help system, and any paper documentation, are consistent in
content, terminology, and presentation.
Robustness: correct error handling and predictable behaviour.
Flexibility: allows the user to interact in a way appropriate to their experience and task.
Unobtrusiveness: does not prevent the user from continuing with work, nor interfere with the
application.
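The availability and unobtrusiveness characteristics can be sketched as a help lookup that the application consults alongside the main task, instead of switching the user into a separate mode. Topic names and help text below are invented:

```python
# Sketch of available, unobtrusive help: a lookup the user can consult
# without leaving the main task. Topics and text are illustrative.

HELP = {
    "save": "Ctrl+S writes the document to disk.",
    "print": "Ctrl+P sends the document to the printer.",
}

def help_for(topic):
    # Returning text for display alongside the application (rather than
    # replacing the current screen) keeps the help unobtrusive, and a
    # predictable fallback message keeps it robust.
    return HELP.get(topic, "No help available for '%s'." % topic)
```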