Exploratory Testing - Quick Guide

A short and quick reference on Exploratory testing

Uploaded by

skillsigma8
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
5 views13 pages

Exploratory Testing - Quick Guide

A short and quick reference on Exploratory testing

Uploaded by

skillsigma8
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 13

EXPLORATORY TESTING

A heuristic technique is any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect, or rational.
Exploratory Testing Approaches: Charters and Heuristics
1. Introduction to Exploratory Testing Approaches
   1.1 What are Charters and Heuristics in Exploratory Testing?
   1.2 Importance of Structured Approaches in Exploration
   1.3 Balancing Freedom and Structure in Exploratory Testing
2. Test Charters in Exploratory Testing
   2.1 Definition and Purpose of Test Charters
   2.2 Creating Effective Test Charters
       2.2.1 Understanding Testing Objectives
       2.2.2 Identifying Test Scope and Boundaries
       2.2.3 Defining Test Preconditions
       2.2.4 Outlining Test Steps
   2.3 Test Charter Examples and Templates
   2.4 Adapting Charters to Agile and Rapid Development
3. Heuristics in Exploratory Testing
   3.1 Understanding Heuristics as Testing Strategies
   3.2 Common Testing Heuristics
       3.2.1 Consistency Heuristic
       3.2.2 Domain Knowledge Heuristic
       3.2.3 Simplicity Heuristic
       3.2.4 Familiarity Heuristic
       3.2.5 Error Guessing Heuristic
       3.2.6 Regression Heuristic
       3.2.7 Intuition Heuristic
   3.3 Developing Personal Heuristic Repositories
   3.4 Combining Multiple Heuristics for Comprehensive Testing
4. Applying Charters and Heuristics Together
   4.1 Integration of Charters and Heuristics
   4.2 Creating Test Charters Guided by Heuristics
   4.3 Using Heuristics to Adjust Test Charters On-the-fly
   4.4 Balancing Structure and Creativity for Effective Testing
5. Benefits and Challenges of Charters and Heuristics
   5.1 Advantages of Test Charters
   5.2 Advantages of Heuristic-Based Testing
   5.3 Challenges in Charter and Heuristic Usage
       5.3.1 Over-Reliance on Charters
       5.3.2 Lack of Heuristic Familiarity
       5.3.3 Balancing Scripted and Exploratory Approaches
6. Case Studies and Examples
   6.1 Case Study 1: Web Application Testing Using Charters
   6.2 Case Study 2: Mobile App Testing with Heuristics
   6.3 Case Study 3: Combining Charters and Heuristics in Game Testing
7. Adapting to Project Needs and Team Dynamics
   7.1 Customizing Charters and Heuristics for Different Projects
   7.2 Incorporating Charters and Heuristics into Team Processes
   7.3 Training and Skill Development for Charters and Heuristics
8. Future Trends in Exploratory Testing Approaches
   8.1 AI-Powered Heuristics and Charters
   8.2 Integration with Test Automation
   8.3 Continuous Learning and Evolution of Approaches
9. Conclusion
   9.1 Recap of Key Concepts
   9.2 Emphasizing Flexibility and Learning in Exploration
   9.3 Final Thoughts on Balancing Charters and Heuristics

A test charter in exploratory testing is a focused and goal-oriented mission statement that guides the testing process. It outlines the scope, objectives, and boundaries of a testing session or task, providing testers with a clear direction while still allowing room for exploration and creativity.

Here's a breakdown of what a test charter includes:

1. Objective: The primary goal of the testing session. This could be to
find specific types of defects, explore a particular feature, or assess
the application's performance under certain conditions.
2. Scope: The boundaries of what is to be tested. This defines what parts
of the application or system are in scope and what parts are out of
scope for the current testing session.
3. Context and Preconditions: Any relevant information or conditions
that set the context for the testing. This might include the
environment, the current build version, any data or configurations
required, etc.
4. Test Ideas: High-level ideas or strategies for how to approach testing.
These can be based on heuristics, experience, or initial observations.
5. Test Steps: The specific actions or tasks that the tester plans to
perform during the testing session. These steps can be exploratory in
nature, but having a rough plan helps maintain focus.
6. Timebox: The allocated time for the testing session. Timeboxing helps
in maintaining a balance between thorough testing and the need to
move forward.
The test charter serves as a map for testers, giving them direction without
being overly prescriptive. It encourages testers to think critically, ask
questions, and uncover issues that might not have been explicitly considered
in scripted testing. It's important to note that while a test charter provides
structure, it doesn't mean the testing is entirely scripted; testers can still
explore and adapt as they test.

An example of a test charter might be:

Objective: Identify any UI inconsistencies and usability issues in the checkout process of an e-commerce website.

Scope: Checkout pages (payment, shipping, and order review) on desktop and mobile devices.

Context and Preconditions: Use the latest Chrome browser on desktop and a recent Android device for mobile testing.

Test Ideas:

- Focus on the alignment of elements and fonts.
- Test the responsiveness of the UI elements during different stages of checkout.
- Try different payment methods and observe any issues.

Test Steps:

1. Load the checkout page on desktop Chrome.
2. Verify the alignment of shipping address fields.
3. Attempt to proceed to payment without entering any data.
4. Repeat steps 1-3 on the Android device.
5. Test the payment process with a sample credit card.

Timebox: 60 minutes.

Remember, a test charter provides guidance, but the tester's critical thinking
and creativity still play a crucial role in finding issues that might not be
explicitly outlined in the charter.
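The six charter elements above can be captured in a simple data structure, which makes charters easy to share, review, and reuse across sessions. The following is a minimal illustrative sketch; the class and field names are assumptions for this example, not part of any standard tool.

```python
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    """Hypothetical record of the six charter elements described above."""
    objective: str
    scope: str
    preconditions: str
    test_ideas: list = field(default_factory=list)
    test_steps: list = field(default_factory=list)
    timebox_minutes: int = 60

    def mission(self) -> str:
        # One-line mission statement for the top of a session sheet.
        return f"Explore {self.scope} to {self.objective} ({self.timebox_minutes} min)"

# The e-commerce checkout example from above, expressed as a charter object.
charter = TestCharter(
    objective="identify UI inconsistencies in the checkout process",
    scope="checkout pages (payment, shipping, order review)",
    preconditions="latest Chrome on desktop; recent Android device",
    test_ideas=["check alignment of elements and fonts",
                "test UI responsiveness at each checkout stage"],
    test_steps=["load checkout on desktop Chrome",
                "verify alignment of shipping address fields"],
)
print(charter.mission())
```

Keeping the steps and ideas as short lists, rather than fully scripted cases, preserves the exploratory character of the session while still documenting its intent.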

This is a document that lists out a number of steps for the tester to perform, along with the result we expect to see for each step. However, as we learned when we looked at the explore-script spectrum, we lose some things when we do this. We lose creativity and learning if we over-script our tests. So, how can we stay on the exploratory side of the spectrum and still have the structure that we might need in more regulated environments like insurance, banking, or medicine? One powerful technique for this is session-based test management. It allows you to have structure around your exploratory testing without needing to have each and every step prescribed. This concept was developed and popularized in the work of James and Jon Bach, and it lets you keep some discovery and creativity while setting up the level of rigor and accountability that you need for the context you're working in.

There are a few key parts to session-based test management, so let's walk through them one at a time. The first is charters. A charter is just a simple statement that helps guide the testing. It tells you what you are testing and why you're testing it. An example charter might be: explore the delete functionality in the API to see if any restricted data can be deleted. Another example might be: see if you can find any accessibility issues on the reports page. As you can see, a test charter can be quite broad. It doesn't tell you what tools to use or what steps to follow, but it provides structure and helps you stay focused on what you're trying to do.

The next thing to talk about is time. Sessions are time-boxed. You can use any length you want, but it's often suggested to use somewhere between 45 and 90 minutes. That gives you enough time to go deep and try to find issues, and you'll find you need a bit of time to get into the flow. But if you go much longer than 90 minutes, you'll get tired and fatigued and won't be able to focus anymore. And honestly, given the modern distraction-driven world we live in, I would strongly suggest turning off notifications for this time. In a world of distractions, focused testing time like this will yield enormous benefits. Exploratory testing takes deep thinking and engagement, and you'll need focused time to do it well.

Another important part of session-based test management is the notes. This is where you record your observations and thoughts, and to some extent what you did. What goes into the notes will vary a lot depending on what you need. If they're just there to help you remember what you did, they might be pretty sparse and full of shortcuts. But if they're there to demonstrate to regulators or auditors that you did the required testing, they might describe the actual steps you took in more detail.

The last part of session-based test management is what I'll call metrics. I think this part needs the most customization, and to be honest, I would only put it in place if people in your organization are looking for certain metrics around the testing. A number of metrics were considered in the classic form of this approach; Jon Bach listed some of them in the paper where he first described it. There are metrics around task types and time: task type was broken up into things like setup, test design and execution, and bug investigation and reporting. There are also metrics around coverage, recording things like which area of the product you were working on and what type of test strategy you were using. And there are bug metrics, collected around the bugs you found and other potential issues and concerns that you noted. One thing to note with all of these metrics is to be careful. They might be useful for your purposes, but I suggest thinking about what information you actually need to present before you start implementing too many metrics. Don't just blindly copy the metrics given here. They could be useful for some people, and if you find them helpful, use them. But think carefully about which metrics matter for what you're trying to do.

There are also tools that can automate some session-based test management metrics for you. One example is Rapid Reporter. It can be a fairly complex tool, though, so once again I would say: be careful about the tools you use. You probably want to start out with something simple, like a text document. Experiment, find what works for you, and then look for tools that can help you do what you need to do.

One more important part of session-based test management is the last point I added here: the debrief. This is basically a review of the test session, and it can be a helpful learning exercise if you're trying to see how you're doing on your testing. In a debrief, you just talk to another team member or your manager about the work you did in a session. This can help you see additional areas you might want to consider and other ways you could have approached your testing.

Now, if you're in a regulated environment, you might need to put a bit more structure around the sessions you do. But for this challenge, I want you to take the three key parts of session-based testing and put them into practice. Start a text document, and fill it out: define a simple charter to test some part of your application in some way, choose how long you're going to spend on it, turn off notifications in your email and other applications, and then dive in, taking whatever notes you feel are necessary. It really is that easy to add some structure to your exploratory testing.
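The session sheet described above (charter, timebox, notes, and optional task-type metrics) can be sketched as a small record. This is an illustrative example only; the names and the three task-type buckets follow the classic breakdown mentioned above, but any real sheet should be adapted to your context.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Hypothetical session-based test management record."""
    charter: str
    timebox_minutes: int = 60
    notes: list = field(default_factory=list)
    # Minutes spent per task type, in the spirit of the classic metrics:
    # setup, test design and execution, and bug investigation/reporting.
    task_minutes: dict = field(default_factory=lambda: {
        "setup": 0, "design_and_execution": 0, "bug_investigation": 0})

    def log(self, note: str) -> None:
        self.notes.append(note)

    def time_breakdown(self) -> dict:
        """Fraction of recorded time spent on each task type."""
        total = sum(self.task_minutes.values()) or 1
        return {k: v / total for k, v in self.task_minutes.items()}

session = Session(
    charter="Explore the delete functionality in the API to see if any "
            "restricted data can be deleted",
    timebox_minutes=90,
)
session.task_minutes.update(setup=10, design_and_execution=60, bug_investigation=20)
session.log("DELETE on another user's order returned success unexpectedly")
print(session.time_breakdown())
```

Even a plain text document with these four headings gives you the same structure; the point of the sketch is only that a session is a charter plus a timebox plus notes, with metrics layered on only if someone actually needs them.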

Creating effective test charters involves careful planning and consideration. Here's a step-by-step guide on how to create them:

1. Understand Testing Objectives: Before you start creating a test
charter, you need to understand the overarching goals of the testing
effort. What are you trying to achieve with this testing session? Are you
focusing on a specific feature, performance, or compatibility?
2. Identify Test Scope and Boundaries: Clearly define what is within
the scope of the testing and what is not. This helps in focusing the
testing effort and avoiding unnecessary exploration in unrelated areas.
3. Define Test Preconditions: Specify any preconditions or
prerequisites needed for the testing. This might include the specific
version of the application, required data setups, test environments,
and any other contextual information that testers need to know.
4. Outline Test Steps: Break down the testing process into actionable
steps. These steps should guide the tester through the testing process
without being overly restrictive. List the specific actions you expect the
tester to take during the testing.
5. Use Clear and Concise Language: Write the test charter using clear
and concise language. Avoid ambiguity or vague terms that might lead
to misinterpretation.
6. Incorporate Test Ideas: Integrate test ideas or strategies that
testers can use to guide their exploration. These could be based on
common testing heuristics, past experience, or initial observations.
7. Allocate Time: Assign a time limit or timebox for the testing session.
This helps testers manage their time effectively and prevents them
from getting stuck on a single issue for too long.
8. Balance Specificity and Flexibility: While the charter provides
direction, ensure that there's room for flexibility and exploration.
Testers should feel empowered to investigate unexpected issues that
might arise during testing.
9. Consider Different User Scenarios: If applicable, consider various
scenarios that different types of users might encounter. This helps in
testing the application from different perspectives.
10. Collaborate with Team Members: If you're working in a team,
it's valuable to collaborate on creating test charters. This brings in
diverse viewpoints and ensures that important aspects aren't
overlooked.
11. Review and Refine: Before finalizing the test charter, review it
for clarity, completeness, and alignment with testing objectives. Refine
any vague or unclear sections.
12. Document the Charter: Once the test charter is finalized,
document it in a format that's easily accessible to the testers. This
could be a shared document, a testing tool, or any other preferred
platform.
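For step 12, documenting the charter in a shareable format can be as simple as rendering it from a plain-text template. The template layout and helper below are assumptions for illustration, not a prescribed format.

```python
# Hypothetical template for a finalized, documented charter (step 12).
CHARTER_TEMPLATE = """\
CHARTER: {title}
Objective: {objective}
Scope: {scope}
Preconditions: {preconditions}
Timebox: {timebox} minutes
"""

def render_charter(**fields) -> str:
    """Fill the charter template with the reviewed, refined values."""
    return CHARTER_TEMPLATE.format(**fields)

doc = render_charter(
    title="Checkout UI review",
    objective="find UI inconsistencies and usability issues",
    scope="checkout pages on desktop and mobile",
    preconditions="latest Chrome build; recent Android device",
    timebox=60,
)
print(doc)
```

A rendered document like this can live in a shared folder or a testing tool, so the whole team reviews the same wording before the session starts.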

Differences from error guessing and ad-hoc testing

Error Guessing and Ad-hoc Testing are both informal testing techniques
in software testing, but they have different approaches and purposes. Here
are the key differences between the two:

Error Guessing:
1. Purpose:
   - Error Guessing: This technique is based on the tester's intuition, experience, and familiarity with common mistakes made during development. Testers "guess" potential error-prone areas based on their knowledge.
   - Ad-hoc Testing: Ad-hoc testing aims to explore the system without any specific test cases in mind. Testers try different actions without a formal test plan.
2. Methodology:
   - Error Guessing: Testers actively imagine scenarios where defects might occur. This could include common coding errors, misunderstood requirements, or other issues that might not be obvious.
   - Ad-hoc Testing: Testers perform testing randomly, without any predefined test cases. They interact with the application based on their intuition and exploration.
3. Structured vs. Unstructured:
   - Error Guessing: While it's not as structured as formal testing, error guessing involves more structured thinking than ad-hoc testing. Testers specifically target areas they believe are likely to have errors.
   - Ad-hoc Testing: This is inherently unstructured. Testers perform actions based on whatever occurs to them at the moment, without a predefined plan.
4. Predictability:
   - Error Guessing: It's more predictable than ad-hoc testing because it's based on a tester's knowledge of common errors.
   - Ad-hoc Testing: Since it's random and unplanned, the outcomes and defects discovered can be quite unpredictable.
5. Experience Requirement:
   - Error Guessing: It requires a good understanding of potential programming and logic errors, which comes from experience in coding and debugging.
   - Ad-hoc Testing: While experience can help, it's not as crucial as in error guessing. Ad-hoc testing is often used by testers with various levels of expertise.

Ad-hoc Testing:

1. Exploration Focus:
   - Error Guessing: Error guessing is based on identifying specific errors; ad-hoc testing focuses more on exploration without a specific goal in mind.
   - Ad-hoc Testing: It's all about exploring the software in a less structured and more informal way, mimicking how an end user might interact with it.
2. Variety:
   - Error Guessing: Testers generally focus on a particular type of error based on their experience.
   - Ad-hoc Testing: Testers can encounter a wide range of issues as they explore the system in different ways.
3. Creativity:
   - Error Guessing: This technique relies more on analysis and experience.
   - Ad-hoc Testing: It's more creative and dynamic, as testers come up with new actions and scenarios on the fly.

In summary, error guessing involves predicting potential errors based on experience, while ad-hoc testing is about exploring the system in a less structured manner. Both techniques have their place in uncovering defects in software applications.
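Error guessing in particular translates naturally into code: a tester's experience suggests inputs that commonly break input-handling logic (empty strings, whitespace, zero, leading zeros, out-of-range and non-integer values). The function below is a made-up stand-in used only to show the technique.

```python
def parse_quantity(text: str) -> int:
    """Illustrative target: parse an order quantity between 1 and 99."""
    value = int(text.strip())
    if value < 1 or value > 99:
        raise ValueError("quantity out of range")
    return value

# Error-guessed inputs: values a tester suspects are error-prone,
# based on experience with common parsing and boundary mistakes.
guesses = ["", "  7  ", "0", "007", "100", "-1", "3.5"]

results = {}
for g in guesses:
    try:
        results[g] = parse_quantity(g)
    except ValueError:
        results[g] = "rejected"

print(results)
```

Unlike ad-hoc testing, each input here was chosen deliberately: the tester is targeting specific suspected weaknesses rather than interacting with the system at random.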

Procedure of exploratory testing

Exploratory testing is a dynamic and flexible approach to software testing where testers actively design and execute test cases simultaneously, allowing for adaptation and learning during the testing process. Here is a step-by-step procedure for conducting exploratory testing:

1. Understanding Testing Objectives:
   - Clearly define the goals of the exploratory testing session. What are you aiming to achieve? What areas or aspects of the software do you intend to explore?
2. Test Environment Setup:
   - Set up the test environment, ensuring that you have the necessary tools, access, and data to begin testing.
3. Exploratory Testing Charter Creation:
   - Create a test charter that outlines the objectives, scope, context, and any initial test ideas for the testing session. This provides a basic structure and direction for your testing.
4. Test Idea Generation:
   - Based on your charter and initial understanding of the software, brainstorm potential test scenarios and ideas. These could be based on common testing heuristics, your intuition, and your experience.
5. Test Execution:
   - Begin testing by executing your initial test ideas. Interact with the software as a user would, looking for issues, unexpected behaviors, and potential areas of concern.
6. Exploration and Learning:
   - While testing, pay attention to what you observe and learn about the software. Be open to unexpected issues and behaviors that might lead you to new test ideas.
7. Adapting and Iterating:
   - As you uncover issues or find interesting areas to explore, adapt your testing approach. Modify your test ideas and test execution based on your findings.
8. Bug Reporting:
   - If you encounter defects during testing, document them with clear and concise descriptions. Include relevant information about the steps to reproduce, the environment, and the observed behavior.
9. Regression and Deepening:
   - After identifying issues, perform additional testing to ensure that reported defects are not isolated occurrences. Deepen your exploration in the areas where issues were found.
10. Time Management:
    - Keep track of time to ensure that you're balancing thorough exploration with the available testing time.
11. Debriefing and Collaboration:
    - After the testing session, discuss your findings with the team. Share insights, observations, and the defects you discovered. Collaboration helps in validating issues and understanding the software better.
12. Documentation:
    - Record your testing activities, observations, and findings. This documentation can be useful for future reference and for maintaining a record of the testing process.
13. Session Closure:
    - Conclude the exploratory testing session. Review your charter, assess the achieved objectives, and determine if any further action is needed.
14. Continuous Learning and Improvement:
    - Reflect on the testing session. What did you learn about the software? How can you improve your exploratory testing approach for future sessions?
Remember, exploratory testing is iterative and adaptive. The key is to
maintain a balance between structured testing (following the charter) and
creative exploration. Adjust your approach based on what you discover
during the testing process.
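The bug-reporting step in the procedure above (documenting steps to reproduce, the environment, and observed behavior) can be kept consistent with a small helper. The field names and example values here are purely illustrative.

```python
def report_defect(summary, steps, environment, observed, expected):
    """Hypothetical defect record matching the bug-reporting step above."""
    return {
        "summary": summary,
        "steps_to_reproduce": steps,
        "environment": environment,
        "observed": observed,
        "expected": expected,
    }

# Example defect discovered during an exploratory checkout session.
bug = report_defect(
    summary="Payment proceeds with empty shipping address",
    steps=["open checkout page",
           "leave all address fields blank",
           "click the Pay button"],
    environment="desktop Chrome, staging build",
    observed="payment page loads with no validation message",
    expected="validation error shown on the empty address fields",
)
print(bug["summary"])
```

Recording observed versus expected behavior explicitly, even in an otherwise informal session, makes the later regression and debriefing steps much easier.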

Application areas of exploratory testing

Exploratory testing is a versatile testing approach that can be applied to many application areas and types of software. Here are some key application areas where exploratory testing is particularly beneficial:

1. New Feature Testing:
   - When a new feature is introduced, exploratory testing can help uncover issues that might not have been considered during formal testing phases. Testers can freely explore the feature and its interactions.
2. Usability and User Experience Testing:
   - Exploratory testing is excellent for evaluating the user experience. Testers can mimic real user interactions and provide valuable insights into usability issues.
3. Ad-hoc Testing:
   - For situations where formal test cases are not available or feasible, such as quick sanity checks before a release, exploratory testing can be a valuable tool.
4. Bug Hunting and Regression Testing:
   - Exploratory testing is effective in hunting down elusive bugs, especially those that might have slipped through formal testing processes. It's also useful for verifying that previously reported issues have been resolved, without relying solely on scripted test cases.
5. Early Testing in Agile and DevOps:
   - In agile and DevOps environments, where requirements can change rapidly, exploratory testing complements scripted testing by allowing testers to quickly adapt to changes and explore new functionality.
6. Compatibility Testing:
   - Exploratory testing is great for testing software compatibility across different devices, browsers, and operating systems. Testers can explore different configurations and identify compatibility issues.
7. Security Testing:
   - In security testing, testers explore the application's vulnerabilities and attempt to identify potential security loopholes and weaknesses that might not be covered by scripted test cases.
8. Data Validation Testing:
   - Exploratory testing can be particularly useful for data validation. Testers can explore the application with various inputs to identify unexpected behaviors related to data handling.
9. Load and Performance Testing:
   - Exploratory testing can uncover performance bottlenecks and scalability issues as testers interact with the application under different load conditions.
10. Localization and Internationalization Testing:
    - Exploratory testing can be employed to explore various language settings and configurations to identify issues related to localization and internationalization.
11. Integration Testing:
    - For complex systems with multiple integrations, exploratory testing can help testers identify issues in integration points and interactions.
12. Mobile App Testing:
    - Exploratory testing is well suited for mobile apps, allowing testers to interact with the app in ways that mimic real-world user behavior across various devices.
13. Emerging Technologies and Platforms:
    - When working with emerging technologies or platforms, where established testing practices might be limited, exploratory testing can help explore the unknown.
14. Third-Party Component Testing:
    - Exploratory testing can uncover issues related to third-party components and libraries by exploring how they interact with the software.

Exploratory testing's adaptability and flexibility make it a valuable approach in many contexts. It encourages testers to think critically and creatively, allowing them to uncover issues that might not be addressed through traditional scripted testing.

Performing exploratory tests

Performing exploratory tests involves a dynamic and adaptive approach to testing in which testers simultaneously design and execute test cases, allowing them to adapt and learn as they test. Here's a step-by-step guide to performing exploratory tests:

1. Understand the Context and Objective:
   - Clarify the context of the testing. What part of the application are you testing? What are the goals or objectives of this testing session?
2. Set Up the Testing Environment:
   - Ensure that you have access to the necessary test environments, tools, and data required for testing.
3. Create a Test Charter:
   - Develop a test charter that outlines the testing objectives, scope, context, and initial test ideas. This provides a guiding structure for your testing.
4. Generate Test Ideas:
   - Based on your charter and understanding of the application, brainstorm potential test scenarios and ideas. Consider using common testing heuristics.
5. Execute Test Cases:
   - Start executing your initial test ideas. Interact with the application, attempting to find defects, unexpected behaviors, and areas of concern.
6. Observe and Learn:
   - While testing, pay attention to your observations and learnings about the application. Be open to discovering issues beyond what you initially expected.
7. Adapt and Iterate:
   - As you identify issues or come across interesting areas to explore, adapt your testing approach. Modify your test ideas and actions based on your findings.
8. Report Defects:
   - If you encounter defects during testing, document them with clear descriptions. Include steps to reproduce, the observed behavior, and any relevant information.
9. Regression and Deepening:
   - After discovering issues, perform additional testing to ensure that reported defects are consistent and not isolated incidents. Dive deeper into areas with problems.
10. Manage Time:
    - Keep track of time to balance thorough testing with the available time for the testing session.
11. Collaborate and Communicate:
    - Share your findings and insights with your team. Collaboration can help validate issues and provide a more holistic understanding of the software.
12. Reflect and Learn:
    - After the testing session, reflect on what you learned about the application. Consider how your insights could be used in future testing.
13. Document Your Testing Activities:
    - Record your testing process, observations, and any defects you found. This documentation is valuable for future reference and communication.
14. Conclude the Testing Session:
    - Review your charter and the testing objectives. Assess whether you've achieved what you set out to do in this session.
15. Continuous Improvement:
    - Continuously refine your exploratory testing skills. Learn from each session and use your insights to enhance your testing approach.

Remember, exploratory testing is iterative and adaptive. It's about striking a balance
between structured testing (guided by the charter) and creative exploration. Adapt your
approach based on your discoveries as you test.
