Welcome to SpecFlow’s documentation!
SpecFlow is a test automation solution for .NET built upon the BDD paradigm. Use SpecFlow to define, manage and
automatically execute human-readable acceptance tests in .NET projects (Full Framework and .NET Core).
SpecFlow tests are written using Gherkin, which allows you to write test cases using natural languages. SpecFlow
uses the official Gherkin parser, which supports over 70 languages. These tests are then tied to your application code
using so-called bindings, allowing you to execute the tests using the testing framework of your choice. You can also
execute your tests using SpecFlow’s own dedicated test runner, SpecFlow+ Runner.
SpecFlow components
• SpecFlow (open-source): This is the core of SpecFlow, providing the functions for binding Gherkin feature files.
• SpecFlow+ Runner (closed-source): This is SpecFlow’s dedicated test runner, and provides additional features
such as advanced execution options and execution reports (HTML, XML, JSON). SpecFlow+ Runner is free of
charge, and only requires a free SpecFlow Account.
• SpecFlow+ LivingDoc (closed-source): This is a set of tools that renders your Gherkin Feature Files in an easily
readable format with syntax highlighting and allows you to quickly share and collaborate on Gherkin Feature
Files with stakeholders that are not familiar with developer tools (such as Visual Studio).
• SpecFlow+ LivingDoc Generator is a set of plugins and tools for SpecFlow that generates local or self-hosted
documentation from your Gherkin feature files, which can be easily shared. No SpecFlow account is needed.
• SpecFlow+ LivingDoc Azure DevOps is an extension for Azure DevOps/TFS. You can view the output directly
in Azure DevOps/TFS, meaning that anyone with access to the system can easily review your specifications
when needed. SpecFlow+ LivingDoc Azure DevOps is free of charge, and only requires a free SpecFlow
Account.
SpecFlow also includes a Visual Studio extension that adds a number of helpful features to Visual Studio (e.g. IntelliSense, feature file templates, context menu entries). However, SpecFlow is not tied to Visual Studio; you can use SpecFlow with Mono or VSCode as well.
This guide assumes you are working on Windows with Visual Studio.
The SpecFlow extension for Visual Studio provides several helpful features, such as syntax highlighting for Gherkin (feature) files, a Visual Studio project template, and multiple item templates, which help you create executable specifications with SpecFlow. This extension is not required to use SpecFlow, but we recommend you install it if you are using Visual Studio.
To install the extension, download the extension for your version of Visual Studio:
• Visual Studio 2019
• Visual Studio 2017
• Visual Studio 2015
Either choose to open the download directly, or double-click the extension once it has downloaded to install it in Visual
Studio.
This section guides you through the first steps of setting up a SpecFlow project within Visual Studio and defining and
executing your first test scenario. In this example, we will be using SpecFlow+ Runner, but you can use several other
test execution frameworks, including NUnit, xUnit, or MSTest.
SpecFlow+ Runner is available free of charge. Learn more about how to sign up for your free account. After the successful sign-up, you can execute your scenarios for the first time.
SpecFlow tests are usually placed into one or more separate projects in your solution, and these projects are referred to
as a “specification project” below. The easiest and most convenient way to set up these projects is to use the SpecFlow
project template provided by our SpecFlow for Visual Studio extension.
To set up your specification project:
1. In Visual Studio, create a new project and search for SpecFlow.
2. Configure your .NET version and unit test framework and press Create.
3. All NuGet packages for the newly created SpecFlow project should be automatically restored. If not, do a manual restore.
Note: Your project folder should not be too deep in the filesystem, as you will run into problems with the Windows 255-character limit on file paths.
SpecRun.SpecFlow Package
This package is added to your project automatically when you create the project using the SpecFlow Visual Studio project template with default settings, or you can install it manually. It configures SpecFlow+ Runner as your unit test provider.
Note: Instead of SpecFlow+ Runner, you can also use other unit test providers, like MsTest, xUnit or NUnit. Simply
choose a different Test Framework than SpecFlow+ Runner. However, to follow all the steps in this guide, you
need to install SpecFlow+ Runner.
Microsoft.NET.Test.Sdk
This package is also added by the project template. It is the test SDK required so that the tests can be discovered and executed by the Visual Studio Test Explorer and dotnet test.
We have already added your first Feature file in order to help you get started.
The Feature File includes a default scenario written in Gherkin for adding two numbers.
Feature: Calculator
  In order to avoid silly mistakes
  As a math idiot
  I want to be told the sum of two numbers

@mytag
Scenario: Add two numbers
  Given the first number is 50
  And the second number is 70
  When the two numbers are added
  Then the result should be 120
Our SpecFlow project template also includes your first step definitions for the Feature File shown in the previous section. They look similar to this:
using TechTalk.SpecFlow;

namespace GettingStarted.Steps
{
    [Binding]
    public sealed class CalculatorStepDefinitions
    {
        private readonly ScenarioContext _scenarioContext;

        public CalculatorStepDefinitions(ScenarioContext scenarioContext)
        {
            _scenarioContext = scenarioContext;
        }

        // The steps are not implemented yet, so each one marks the scenario as pending.
        [Given(@"the first number is (.*)")]
        public void GivenTheFirstNumberIs(int p0) { _scenarioContext.Pending(); }

        [Given(@"the second number is (.*)")]
        public void GivenTheSecondNumberIs(int p0) { _scenarioContext.Pending(); }

        [When(@"the two numbers are added")]
        public void WhenTheTwoNumbersAreAdded() { _scenarioContext.Pending(); }

        [Then(@"the result should be (.*)")]
        public void ThenTheResultShouldBe(int p0) { _scenarioContext.Pending(); }
    }
}
To add further step definitions to your project, please take a look at how to create step definitions skeleton code.
The next step is to build the solution. After that, the business readable scenario titles will show up in Visual Studio
Test Explorer:
1. Build your solution.
2. Select Test | Windows | Test Explorer to open the Test Explorer:
Scenarios are displayed with their plain text scenario title instead of a generated unit test name.
3. Click on Run All to run your test.
4. You will be asked to sign up for a SpecFlow account or to sign in with your existing account. To see the output of the SpecFlow+ Runner, please open the "Output" pane and select "Tests" in the "Show output from" dropdown:
5. Open the URL in the message in your browser. In Visual Studio you can click the link while pressing the
CTRL-key.
6. A "Welcome Page" is displayed. Click on Sign in with Microsoft to continue.
7. Sign in with your Microsoft account. It can be a personal or corporate/enterprise account. If you are already
signed in, this should happen automatically – you might need additional permissions from your Active Directory
admin. Learn more about admin consents
8. You will be taken to a setup page where you can set up your SpecFlow account. Enter your details to sign up for
a free SpecFlow account.
9. Return to Visual Studio and click on “Run all” again.
10. As the automation and application code has not yet been implemented, the test will not pass successfully.
Note: If you cannot see your tests, make sure there are no spaces or dashes in your project name!
In order for your tests to pass, you need to implement both the application code (the code in your application you are
testing) and the automation code (binding the test scenario to the automation interface). This involves the following
steps, which are covered in this section:
1. Reference the assembly or project containing the interface you want to bind the automation to (including APIs,
controllers, UI automation tools, etc.).
2. Extend the step definition skeleton with the automation code.
3. Implement the missing application code.
4. Verify that the scenario passes the test.
The application code that implements the actual functions performed by the calculator should be defined in a separate
project from your specification project. This project should include a class for the calculator and expose methods for
initializing the calculator and performing the addition:
1. Right-click on your solution in the Solution Explorer and select Add | Project from the context menu. Choose
to add a new class library and give your project a name (e.g. “Example”).
2. Right-click on the .cs file in the new project and rename it (e.g. “Calculator.cs”), and choose to rename all
references.
3. Your new class should be similar to the following:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Example
{
public class Calculator
{
}
}
1. Right-click your specification project and select Add | Reference from the context menu.
2. Click on Projects on the left of the Reference Manager dialogue. The projects in your solution are listed.
3. Enable the checkbox next to the Example project to reference it from the specifications project.
4. Click on OK. A reference to the Example project is added to the References node in the Solution Explorer.
5. Add a using directive for the namespace (e.g. "Example") of your Calculator class to the CalculatorStepDefinitions.cs file in your specification project:
using Example;
1. Define a variable of the type Calculator in the CalculatorStepDefinitions class before the step definitions:
Defining a variable outside of the individual steps allows the variable to be accessed by each of the individual steps
and ensures the variable is persistent between steps.
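For example, a sketch (the field name _calculator is illustrative):

    private readonly Calculator _calculator = new Calculator();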
Now that the step definitions can reference the Calculator class, you need to extend the step definitions and implement
the application code.
The first Given statement in the scenario needs to initialize the calculator with the first of the two numbers defined in
the scenario (50). To implement the code:
1. Open CalculatorStepDefinitions.cs if it is not already open. The value defined in the scenario is passed as a parameter to the corresponding step definition method.
2. To initialize the calculator with this number, replace _scenarioContext.Pending(); in the step definition as follows:
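A sketch of how the updated step definition could look (the regular expression comes from the generated skeleton; the p0 parameter has been renamed to number):

    [Given(@"the first number is (.*)")]
    public void GivenTheFirstNumberIs(int number)
    {
        // store the first number from the scenario in the calculator
        _calculator.FirstNumber = number;
    }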
3. Switch to the file containing your Calculator class (e.g. Calculator.cs) and add a public integer member to the class:
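For example:

    public int FirstNumber { get; set; }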
You have now ensured that the FirstNumber member of the Calculator class is initialized with the value defined in the scenario when the test is executed.
The second Given statement in the scenario needs to initialize the second number with the second value defined in the
scenario (70). To implement the code:
1. Open CalculatorStepDefinitions.cs if it is not already open.
2. Locate the function corresponding to the second Given statement and rename the p0 parameter to "number", as before.
3. To initialize the calculator with the second number, replace _scenarioContext.Pending(); in the step definition as follows:
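A sketch of the second step definition, analogous to the first one:

    [Given(@"the second number is (.*)")]
    public void GivenTheSecondNumberIs(int number)
    {
        // store the second number from the scenario in the calculator
        _calculator.SecondNumber = number;
    }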
4. Switch to the file containing your Calculator class and add another public integer member to the class:
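For example:

    public int SecondNumber { get; set; }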
You have now ensured that the SecondNumber member of the Calculator class is initialized with the value defined in the scenario when the test is executed.
The step for the When statement needs to call the method that performs the actual addition and store the result. This result needs to be available to the final Then step in the automation code in order to verify that it is the expected result defined in the test scenario.
To implement the code:
1. Open CalculatorStepDefinitions.cs if it is not already open.
2. Define a variable to store the result at the start of the CalculatorStepDefinitions class (before any of the steps):
Defining a variable outside of the individual steps allows the variable to be accessed by each of the individual steps.
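For example (the field name _result is illustrative):

    private int _result;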
3. Locate the function corresponding to the When statement and edit it as follows:
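A sketch of the edited When step definition, assuming the Add() method returns the sum (see the next step):

    [When(@"the two numbers are added")]
    public void WhenTheTwoNumbersAreAdded()
    {
        // perform the addition and keep the result for the Then step
        _result = _calculator.Add();
    }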
4. Switch to the file containing your Calculator class and define the Add() method:
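A possible implementation:

    public int Add()
    {
        return FirstNumber + SecondNumber;
    }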
You have now ensured that the Add() method of the Calculator class is called once the initial Given steps have been performed.
The step for the Then statement needs to verify that the result returned by the Add() method in the previous step is the
same as the expected result defined in the test scenario. To implement the code:
1. Open CalculatorStepDefinitions.cs if it is not already open. As the result will be verified using Assert, you need to add "using Microsoft.VisualStudio.TestTools.UnitTesting;" to the top of your automation code.
2. Locate the function corresponding to the Then statement. Rename the p0 parameter in the function call (this
time to “expectedResult”) and edit the step definition as follows:
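A sketch of the edited Then step definition, using the MSTest Assert class mentioned above:

    [Then(@"the result should be (.*)")]
    public void ThenTheResultShouldBe(int expectedResult)
    {
        // compare the expected result from the scenario with the actual result
        Assert.AreEqual(expectedResult, _result);
    }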
You have now implemented the final piece of the jigsaw – testing that the result returned by your application matches
the expected result defined in the scenario.
After these changes, the two classes should look similar to the following (condensed):

// CalculatorStepDefinitions.cs (specification project)
using Example;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;
namespace GettingStarted.Steps
{
    [Binding]
    public sealed class CalculatorStepDefinitions
    {
        private readonly ScenarioContext _scenarioContext;
        private readonly Calculator _calculator = new Calculator();
        private int _result;

        public CalculatorStepDefinitions(ScenarioContext scenarioContext) => _scenarioContext = scenarioContext;

        [Given(@"the first number is (.*)")]
        public void GivenTheFirstNumberIs(int number) => _calculator.FirstNumber = number;
        [Given(@"the second number is (.*)")]
        public void GivenTheSecondNumberIs(int number) => _calculator.SecondNumber = number;
        [When(@"the two numbers are added")]
        public void WhenTheTwoNumbersAreAdded() => _result = _calculator.Add();
        [Then(@"the result should be (.*)")]
        public void ThenTheResultShouldBe(int expectedResult) => Assert.AreEqual(expectedResult, _result);
    }
}

// Calculator.cs (Example project)
namespace Example
{
    public class Calculator
    {
        public int FirstNumber { get; set; }
        public int SecondNumber { get; set; }
        public int Add() => FirstNumber + SecondNumber;
    }
}
Now that the test steps have been bound to your application code, you need to rebuild your solution and execute the
tests again (click on Run All in the Test Explorer). You should see that the test now passes (green).
Click on Output in the Test Explorer to display a summary of the test. This example is very simple; at this point, you would want to refactor your code before proceeding with the implementation of your remaining scenarios.
To get the most out of SpecFlow, get feedback early on, and provide the basis for further discussions about the behavior of your system, we recommend sharing your Gherkin Feature Files with all your stakeholders and team members.
An easy way to share your Gherkin Feature Files is to use the free SpecFlow+ LivingDoc:
• Generator for local or self-hosted documentation
• Azure DevOps Extension to quickly generate a living documentation from your Gherkin Feature Files on Azure
DevOps.
Demo: Try out our SpecFlow+ LivingDoc Generator Demo which is hosted on GitHub Pages.
The generated documentation can then be shared via email, Microsoft Teams or Slack, without the need for Visual Studio.
Sounds interesting? Let’s get started with SpecFlow+ LivingDoc.
Follow our step by step guide to get started, learn, explore and experiment with a simple web application project using
SpecFlow and the SpecFlow+ Runner.
The SpecFlow sample applications are publicly available in the SpecFlow-Examples GitHub repository.
You can clone the repository in Visual Studio 2019 by selecting the “Clone a repository” option on the start screen. Use
the GitHub URL https://github.com/SpecFlowOSS/SpecFlow-Examples.git as repository location.
Alternatively you can clone the repository from the command line:
git clone https://github.com/SpecFlowOSS/SpecFlow-Examples.git
This guide will walk you through the BookShop example that you can find in the ASP.NET-MVC/BookShop
folder.
The example application is a web application where users can search for and buy BDD books. The implementation focuses on the first steps of the following user journey.
Feel free to explore the application: try to search for a book, check the details of a selected book, add it to the shopping cart, and manipulate the quantity.
Architecture
The application is implemented as an ASP.NET Core MVC web application and it uses Entity Framework Core for
the database access.
Note: To keep the setup simple the Bookshop application uses an in-memory database.
Feature files
With SpecFlow you formulate your acceptance criteria in .feature files in Given-When-Then style, using the Gherkin
language. Using SpecFlow these acceptance criteria can be validated with Automated Acceptance Tests.
In this example the BookShop.AcceptanceTests project contains the feature files for the Bookshop application.
These describe the implemented behaviour of the Bookshop in the form of Features and Scenarios.
Open the Book Details.feature file to see the acceptance criteria of the Displaying book details feature.
Step definitions
Step definitions are implemented as .NET code in plain old .NET classes (see the .cs files in the folder StepDefinitions).
These step definitions (also known as “bindings”) define, how the individual scenario steps should be automated.
In Visual Studio you can easily navigate from the scenario steps to the step definition that automates the step using the
standard “Go To Definition” command (default hotkey: “F12”).
In the Book Details.feature file put the caret in the line "Given the following books" and press "F12" to jump to the step definition of this step. You can see Given/When/Then attributes on the C# methods and a Binding attribute on the class that establish the connection between the Gherkin steps and the step definitions.
Executable tests
When you build the solution SpecFlow generates executable tests from the acceptance criteria scenarios. The generated
tests use the step definitions that you need to implement.
In Visual Studio you can find the generated tests files as sub-items under each feature file (see e.g. the Book
Details.feature.cs under the Book Details.feature file).
Note: The tests in the feature.cs files are always generated by SpecFlow from the feature files. You
should never manually modify the generated tests.
As SpecFlow is not a unit test runner on its own, it can generate tests for a number of third party unit test runners like
MsTest, NUnit, XUnit and SpecFlow+ Runner.
The Bookshop example project is configured to generate unit tests for SpecFlow+ Runner, which is a test runner
provided by the SpecFlow team specialized for running acceptance/integration tests.
You could easily switch to other unit test providers (such as NUnit, XUnit, etc.) by uninstalling the current test
provider NuGet package (SpecRun.SpecFlow) and installing another (e.g. SpecFlow.MsTest). However, the
Bookshop example leverages some unique features of SpecFlow+ Runner, hence changing to another unit test provider
would require some additional changes in the step definitions.
In this example we use SpecFlow+ Runner to execute the SpecFlow tests, but you can use a number of other test
execution frameworks, including NUnit, xUnit or MSTest. SpecFlow+ Runner’s advantages include integration with
Visual Studio Test Runner and extensive integrated reports available from within Visual Studio.
SpecFlow+ Runner is available free of charge. Only a quick initial activation is necessary to run your scenarios.
1. Build your solution.
2. Select Test | Windows | Test Explorer in Visual Studio to open the Test Explorer
3. Click on Run All to run your test.
4. You will be asked to sign up for a SpecFlow account or to sign in with your existing account. To see the output of the SpecFlow+ Runner, please open the Output window (View -> Output) and select "Tests" in the "Show output from" dropdown:
5. Open the URL in the message in your browser. In Visual Studio you can also click the link while pressing the
CTRL-key, in this case Visual Studio opens the link in your default browser.
Note: Depending on your local system configuration the link might open a new tab in an already
running browser instance and it might be not “brought to front” by Visual Studio. If seemingly
nothing happens when CTRL-clicking the link switch to your running browser instance and check if
the page was opened there.
6. In the browser, a "Welcome Page" is displayed. Click on Sign in with Microsoft to continue.
7. Sign in with your Microsoft account. It can be a personal or corporate/enterprise account. If you are already
signed in, this should happen automatically – you might need additional permissions from your Active Directory
admin. Learn more about admin consents
8. You will be taken to a setup page where you can set up your SpecFlow account. Enter your details to sign up for
a free SpecFlow account.
9. Return to Visual Studio and click on “Run all” again.
10. The acceptance tests should all pass.
When you execute your acceptance tests with SpecFlow+ Runner, a special test execution report is generated automatically.
To see the output of the SpecFlow+ Runner please open the Output window (View- > Output) and select “Tests” in the
“Show output from” dropdown. The hyperlink to the HTML execution report should be shown there.
The report contains information about the overall test results as well as a break down of each individual scenario
execution.
SpecFlow is completely independent of what level or which interface of the system is automated. When you implement
the step bindings you have to decide what the Given/When/Then steps should do to exercise the system and to validate
the acceptance criteria.
In a project where complex business logic is encapsulated in a bunch of classes there might be even an option to
validate some acceptance criteria on “unit level”. This level can be also automated with SpecFlow, writing the step
definitions accordingly. In this case the Given step could instantiate those classes based on the given preconditions,
the When step could execute a method on those classes performing some key business logic, and the Then step could
check if the result of the method call meets the expectations.
However, unit tests usually focus on implementation details far below the abstraction level of an acceptance criterion, so it is not feasible to automate those unit tests with SpecFlow.
In the Bookshop example we added some classic unit tests in the BookShop.UnitTest project. These are imple-
mented with xUnit and are NOT bound to SpecFlow scenarios.
The Bookshop example automates the tests directly through the Controller of the MVC web application with SpecFlow
(sometimes called automation below the skin).
Automating below the skin provides several benefits: less brittle tests, less effort for automation, and better performance of the test suite.
Let’s examine the scenario in Book Details.feature and navigate to the step definitions of the steps (shortcut
“F12”).
The Given the following books step is bound to the GivenTheFollowingBooks step definition method
in the BookStep class. The step definition classes use the Driver pattern and Dependency Injection to better structure
the code into reusable layers and parts. Following the flow of execution to the DatabaseDriver the books are
inserted into the Entity Framework DatabaseContext (using an in-memory database):
_databaseContext.Books.Add(book);
...
_databaseContext.SaveChanges();
The When I open the details of 'Analysis Patterns' step is bound to the
WhenIOpenTheDetailsOfBook step definition method in the BookSteps class, passing the name of the
book as parameter. The implementation is delegated to an IBookDetailsDriver implementation, and with the
default configuration the IntegratedBookDetailsDriver is used. We’re calling the OpenBookDetails
method. Here we can see that our automation directly instantiates the Controller class, calls the action method, and
stores the result for the subsequent assertions.
It is important that Controller is instantiated with appropriate dependencies, to ensure that the Given/When/Then steps
rely on the same database context and other shared resources.
Finally the Then the book details should show step is bound to the
ThenTheBookDetailsShouldShow method in the BookSteps class, that again delegates to the
IntegratedBookDetailsDriver, where we can assert on the previously stored action result.
Note that the reason why these tests run relatively fast is that the automation steps perform cheaper in-memory operations, basically working with .NET objects within a single process.
Sometimes the behaviour that should be validated cannot be observed on the controller level, but only on the UI. This
might range from client side javascript behavior up to server side middleware that is not executed when calling the
action methods of the controller classes directly. In those cases the automation of the user interface might be a solution.
In case of e2e UI automation the Given steps can open a browser with Selenium and perform the necessary preparation
steps. Still, the boundaries of automation are not necessarily strict. Sometimes ensuring all preconditions through
the user interface would be very hard, and it is a feasible tradeoff to manipulate the database or other underlying
components directly. The When steps typically perform those key user actions on the UI that are in the focus of the
scenario. And finally the Then steps can either validate the results on the UI or, again, could look into the database or
internal component directly to validate the expected result.
To demonstrate this approach as well, the Bookshop example contains an alternative automation implementation for
all scenarios using Selenium.
To enable the tests using Selenium UI automation, you need to add (uncomment) the Chrome target in the Default.
srprofile configuration file, while you need to remove (comment) the Integrated target.
<Target name="Chrome">
<DeploymentTransformationSteps>
<EnvironmentVariable variable="Mode" value="Chrome"/>
</DeploymentTransformationSteps>
</Target>
You also need to have a version of Chrome installed that can be driven by the Selenium version used in this example. It might be necessary to update Chrome or the Selenium version used in this example to make the UI automation work.
Execute the acceptance tests from the Test Explorer. This time the tests will open a Chrome window and automate the
application through Selenium. Notice, however, that the execution of the tests takes significantly longer.
Note: You can also enable multiple targets at the same time. In this case SpecFlow+ Runner will generate a unique test for each combination of target and scenario. In the Test Explorer the name of the test will be the same (the title of the scenario), and you can distinguish the tests by their "Traits" (e.g. Target [Chrome] vs. Target [Integrated]).
Note: You can also experiment with headless Chrome or Firefox by uncommenting the corresponding targets in the Default.srprofile configuration file. However, while the headless Chrome automation is faster than Chrome, the Firefox automation runs very slowly.
Let’s examine the same scenario in Book Details.feature again and compare the Selenium automation with
the Controller automation.
We have seen before that the Given the following books step is bound to the
GivenTheFollowingBooks step definition method and at the end the DatabaseDriver inserts the
books into the database. There is no difference here.
However, in case of the When I open the details of 'Analysis Patterns' step now a different
implementation of IBookDetailsDriver interface is configured due to our changes in the configuration file.
Instead of the IntegratedBookDetailsDriver the SeleniumBookDetailsDriver is used. In the
OpenBookDetails method of SeleniumBookDetailsDriver we can see that our automation interacts with
the BrowserDriver and WebServerDriver, where the first one automates the browser opening the appropriate
URL, while the second one automates the web server starting a new instance of the Bookshop application with Kestrel.
The Then the book details should show step is also routed to the SeleniumBookDetailsDriver.
In the ShowBookDetails method the result is validated in the browser. We use the page object pattern to encapsulate the UI details in the BookDetailPageObject class, e.g. how the title of the book can be found in the rendered page with Selenium. This way the driver can formulate the expectations on a higher level:
if (expectedBook.Title != null)
{
bookDetailPageObject.Title.Should().Be(expectedBook.Title);
}
Notice that the phrasing of the scenarios didn’t have to be changed, in order to automate on a different layer. This
is a good practice, as SpecFlow scenarios shouldn’t express technical details of the automation, but the intention and
behaviour to be validated.
The Bookshop example extends the SpecFlow+ Runner execution report with screenshots from the user interface taken
during the UI automation. This is especially useful if a UI automated scenario breaks, because the screenshot might
provide an immediate clue about the root cause of the failure.
After each scenario step a screenshot is taken from the browser and saved into the output directory as a new file. For the
implementation details see the Screenshots.MakeScreenshotAfterStep method with the [AfterStep]
attribute. The name of the screenshot file is written into the trace output using Console.WriteLine.
The default report template is overridden in the Default.srprofile configuration in the TestProfile/
Report/Template element. The customized ReportTemplate.cshtml replaces the screenshot text in the
trace output with the image link.
While Visual Studio provides several convenience features when working with SpecFlow (syntax coloring, navigation,
integration with the Test Explorer, etc.), you can easily run the automated tests from the command line too.
• Open a command line terminal where you can execute .NET Core CLI commands
• Set the current directory to the root directory of the Bookshop example, where the BookShop.sln solution
file is located:
cd SpecFlow-Examples\ASP.NET-MVC\BookShop
dotnet build
dotnet test
Note: You can also skip the dotnet build step and run the tests immediately with dotnet test,
because this command also (re-)builds the project. However it hides the details of the build output. We
outlined the build as a separate step here as a best practice when examining a new project, because
separating the steps makes the understanding of the output and potential troubleshooting easier.
Note that if you run dotnet test for the entire Bookshop solution then both the unit tests and the acceptance tests
are executed.
The SpecFlow+ Runner execution reports and logs are generated in the “results directory” of the dotnet test
command. The default is the TestResults folder in the directory of the solution/project, but it can be overridden
with the -r|--results-directory <PATH> option of dotnet test.
Please consult the documentation of the dotnet test command for further details.
The following examples guide you through some typical questions/scenarios when running the Bookshop acceptance
tests from the command line using dotnet test. Feel free to experiment with other combinations of parameters
and consult the documentation of dotnet test.
Run only the acceptance tests (and ignore the unit tests) from the root folder of the Bookshop sample:
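For example, a command along these lines runs only the acceptance test project (a sketch; adjust the path if your checkout differs):

dotnet test BookShop.AcceptanceTests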
Note: the default TestResults test results directory of dotnet test is relative to
the project, hence in this case the reports and logs are generated into the BookShop.
AcceptanceTests\TestResults folder.
Alternatively you can run the tests for the entire solution and use a filter to include the acceptance tests only:
Note: in this case dotnet test still discovers both the unit test and acceptance test projects separately
and emits a warning for the unit tests that “no test matches the given testcase filter”:
You can also specify the project file explicitly.
Running the tests with the --no-build option speeds up the test execution command, as the build step is skipped; it is also useful to limit the output of the command to the test execution details.
Run tests with more detailed output (similar detail level like the Visual Studio output):
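One possible form, assuming the solution has already been built (the console logger settings are an illustrative choice):

dotnet test BookShop.AcceptanceTests --no-build --logger "console;verbosity=detailed"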
Note: if you omit the --no-build option the output will also contain the detailed output of the build.
Filter tests
Please also consult the documentation of filter options of the dotnet test command for more insights. With
SpecFlow+ Runner the Name and TestCategory properties can be used to filter the acceptance tests.
Tip: You can list all acceptance tests with the dotnet test BookShop.AcceptanceTests -t
command. The tests are listed by the Name property. This can help to check the naming convention and to
construct the desired filter by Name (e.g. --filter Name~"Author should be matched in
Searching for books").
Run scenarios by tag: see the @WI12 and @WI13 tags on the scenarios in Features\Shopping Cart\Add to.feature.
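A sketch of such a command, filtering on the test categories generated from the tags:

dotnet test BookShop.AcceptanceTests --filter "TestCategory=WI12|TestCategory=WI13"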
We combined two filter expressions with the | (OR) operator. See the filter options documentation of dotnet test
for the string matching and conditional operators.
To run all scenarios where the feature or the scenario title contains the term "shopping cart", filter on the Name property, for example with the command sketched below.
Note: you have to use the ~ (contains) operator to match the Name property.
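For example (following the Name~ syntax shown in the tip above):

dotnet test BookShop.AcceptanceTests --filter Name~"shopping cart"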
Run all scenarios of the feature “Adding books to the shopping card”
See the feature in Features\Shopping Cart\Add to.feature. Note: you have to use the ~ (contains)
operator to match the Name property. Note: in practice feature titles and scenario titles are so unique that it is unlikely
that another scenario/feature title contains the whole title of your selected feature.
Run a single scenario “Author should be matched” in the “Searching for books” feature
Let’s look at 3 different solutions, as the used matching strategy gets more and more strict.
• Filter by scenario title only
– Note: you have to use the ~ (contains) operator to match the Name property.
– Note: in practice feature titles and scenario titles are so unique that it is unlikely that another scenario/feature title contains the whole title of your selected scenario.
• Filter by scenario title AND feature title
– Note: you have to add the in word between the scenario title and feature title. This is how the Name
property of the test is built.
– Note: you have to use the ~ (contains) operator to match the Name property.
• Filter by scenario title AND feature title AND target (= the full name)
– When using the targets feature of SpecFlow+ Runner the same scenario can be executed on different
targets, hence the target is also included in the name of the test.
– Note: For this example the Integrated target must be enabled in the Default.srprofile.
– Note: here you can filter with exact match using the = (equals) operator to match the Name property,
because you use the full name in the filter.
– Note: the filter syntax of dotnet test recognizes parenthesis to enclose conditional operators. To
match the string (target: Integrated) in the name we have to escape the parenthesis with a
preceding \ (backslash) character.
– Provided that you enabled also the Chrome target in the Default.srprofile you can execute the
same test with the Chrome UI automation as:
Run all scenarios with the Controller level automation (and skip UI automation targets) to get a quick
test result
Filter by FullyQualifiedName
# NOT WORKING:
dotnet test BookShop.AcceptanceTests --filter "Author should be matched"
The following command exactly matches the FullyQualifiedName of the scenario to demonstrate the structure
of the FullyQualifiedName property:
...AcceptanceTests/Feature:Searching+for+books/Scenario:Author+should+be+matched"
To get the most out of SpecFlow, get feedback early on, and provide the basis for further discussions about the behavior of your system, we recommend sharing your Gherkin Feature Files with all your stakeholders and team members.
An easy way to share your Gherkin Feature Files is to use the free SpecFlow+ LivingDoc:
• Generator for local or self-hosted documentation
• Azure DevOps Extension to quickly generate a living documentation from your Gherkin Feature Files on Azure
DevOps.
Demo: Try out our SpecFlow+ LivingDoc Generator Demo which is hosted on GitHub Pages.
The generated documentation can then be shared via email, Microsoft Teams or Slack, without the need for Visual Studio.
Sounds interesting? Let’s get started with SpecFlow+ LivingDoc.
This guide explains how to update your SpecFlow 2.* project to the latest SpecFlow 3.* version
Before upgrading to the latest version, ensure you have a backup of your project (either locally or in a source control
system).
The Visual Studio integration for SpecFlow has been updated for SpecFlow 3. You will need to update the extension
in order to upgrade. If you previously set the extension to not update automatically, please enable automatic upgrades
once your projects have been migrated to SpecFlow 2.3.2 or higher.
In previous versions of SpecFlow, the unit test provider used to execute tests was configured in your app.config file. As of SpecFlow 3, you need to configure your unit test provider by installing one of the available packages (see below).
specflow.json
Moving forward, we recommend using specflow.json to configure SpecFlow, rather than app.config. .NET Core
projects require specflow.json (app.config is not supported). While using specflow.json is optional for Full Framework
projects, we recommend migrating to the new format. For more details, see Configuration in the documentation.
If you want to update both SpecFlow and SpecFlow+ Runner to version 3, the easiest way to do this is to simply
upgrade the SpecRun package. This automatically updates SpecFlow as well.
To update SpecFlow and SpecFlow+ Runner:
1. Open your solution, and check that it compiles, all tests are discovered and that all source files have been
committed.
2. Right-click on your solution and select Manage NuGet Packages for Solution.
3. Uninstall any SpecRun.SpecFlow.--* packages you have installed.
4. Install/update the following packages:
• SpecFlow
• SpecRun.SpecFlow
5. Remove “SpecFlowSingleFileGenerator” from the Custom Tool field in the Properties of your feature files.
If you have customized the SpecFlow+ runner templates, a small change needs to be made to the template for SpecFlow
3:
Open the CSHTML file in the editor of your choice. Replace the first line with the following:
@inherits SpecFlow.Plus.Runner.Reporting.CustomTemplateBase<TestRunResult>
2.4 Requirements
2.4.1 .NET
.NET Core
.NET Framework
• Project folder should not be too deep in the filesystem, as you will get problems with Windows 255 character
limit in file paths
2.5 Installation
Note: If you are new to SpecFlow, we recommend checking out the Getting Started guide first. It will take you through
the process of installing SpecFlow and setting up your first project and tests in Visual Studio.
SpecFlow consists of three components:
• The IDE Integration that provides a customized editor and test generation functions within your IDE.
• The generator that can turn Gherkin specifications into executable test classes, available from NuGet.
• The runtime required for executing the generated tests. There are different runtime assemblies compiled for
different target platforms. These packages are also available from NuGet.
In order to install everything you need, you first have to install the IDE integration and then set up your project to work
with SpecFlow using the NuGet packages.
The process of installing the IDE Integration packages depends on your IDE.
Visual Studio
We recommend installing the SpecFlow Visual Studio extension (IDE Integration), as this is the most convenient way
of working with SpecFlow. An overview of the features provided by the integration can be found here.
If you are using Deveroom, do not install the SpecFlow Visual Studio extension; you should only install one of
these 2 extensions.
The easiest way to install the IDE integration is to select Tools | Extensions and Updates from the menu and search for "SpecFlow" in the online gallery.
The integration packages can also be downloaded and installed separately from the Visual Studio Gallery:
• VS2019 integration
• VS2017 integration
• VS2015 integration
We don't maintain our own extension for MonoDevelop/XamarinStudio/Visual Studio for Mac, but our amazing community created one at https://github.com/straighteight/SpecFlow-VS-Mac-Integration.
VSCode
Rider
The generator and runtime are usually installed together for each project. To install the NuGet packages:
1. Right-click on your project in Visual Studio, and select Manage NuGet Packages from the menu.
2. Switch to the Browse tab.
3. Enter “SpecFlow” in the search field to list the available packages for SpecFlow.
4. Install the required NuGet packages. Depending on your chosen unit test provider, you have to use different
packages. See this list to find the correct package
The SpecFlow.CustomPlugin NuGet package can be used to implement custom plugins for SpecFlow.
SpecFlow
https://www.nuget.org/packages/SpecFlow/
This is the main package of SpecFlow and contains all parts needed at Runtime.
SpecFlow.Tools.MsBuild.Generation
https://www.nuget.org/packages/SpecFlow.Tools.MsBuild.Generation/
This package enables the code-behind file generation at build time.
>= 3.0
It is mandatory for projects to use this package. After SpecFlow 3.3.30 it is a dependency of the SpecFlow.xUnit,
SpecFlow.NUnit, SpecFlow.MSTest and SpecRun.SpecFlow.3-3-0 packages, hence the package is
automatically installed with the unit test provider packages and you don't have to install it manually.
< 3.0
This package is optional if the code-behind file generation is enabled in the Visual Studio extension. However, we recommend upgrading to the MSBuild code-behind file generation.
SpecFlow.xUnit
https://www.nuget.org/packages/SpecFlow.xUnit/
>= 3.0
If you want to use SpecFlow with xUnit, you have to use this package, as it does the configuration for this.
We don’t support older versions than xUnit 2.4.0.
< 3.0
This package is optional to use, as all steps can be done manually. It automatically changes the app.config to use xUnit for you and has a dependency on xUnit (>= 2.0).
SpecFlow.MsTest
https://www.nuget.org/packages/SpecFlow.MsTest/
>= 3.0
If you want to use SpecFlow with MsTest V2, you have to use this package, as it does the configuration for this.
We don’t support older versions than MsTest V2 1.3.2.
< 3.0
This package is optional to use, as all steps can be done manually. It automatically changes the app.config to use MsTest. No additional dependencies are added.
We support MsTest V1 and V2.
SpecFlow.NUnit
https://www.nuget.org/packages/SpecFlow.NUnit/
> 3.0
If you want to use SpecFlow with NUnit, you have to use this package, as it does the configuration for this.
We don’t support older versions than NUnit 3.11.0.
< 3.0
This package is optional to use, as all steps can be done manually. It automatically changes the app.config to use NUnit and has a dependency on NUnit (>= 3.0). If you want to use an earlier version of NUnit, you have to make the changes manually.
We support NUnit 2 & NUnit 3.
SpecFlow.NUnit.Runners
https://www.nuget.org/packages/SpecFlow.NUnit.Runners/
This is a meta-package to install the NUnit.Console package additionally.
SpecFlow.CustomPlugin
https://www.nuget.org/packages/SpecFlow.CustomPlugin/
This package is for writing your own runtime or generator plugins.
2.7 Configuration
SpecFlow’s behavior can be configured extensively. How to configure SpecFlow depends on the version of SpecFlow
you are using.
Starting with SpecFlow 3, you can use the specflow.json file to configure it. It is mandatory for .NET Core projects and it is recommended for .NET Framework projects. When using the .NET Framework, you can still use the app.config file, as with earlier versions of SpecFlow.
If both the specflow.json and app.config files are available in a project, specflow.json takes precedence.
SpecFlow 2 is configured in your standard .NET configuration file, app.config, which is automatically added to
your project. This method is not supported by .NET Core, and SpecFlow 2 does not include .NET Core support.
We recommend using specflow.json in new projects.
Both configuration methods use the same options and general structure. The only difference is that SpecFlow 2
only uses the app.config file (XML) and SpecFlow 3 requires the specflow.json file (JSON) for .NET Core
projects.
Configuration examples
The following 2 examples show the same option defined in the specflow.json and app.config formats:
specflow.json example:
{
"language": {
"feature": "de-AT"
}
}
app.config example:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="specFlow" type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow" />
  </configSections>
  <specFlow>
    <language feature="de-AT" />
  </specFlow>
</configuration>
You can find more examples in the sample projects for SpecFlow.
All SpecFlow configuration options have a default setting. Simple SpecFlow projects may not require any further
configuration.
SpecFlow 3
You can only configure your unit test provider by adding the corresponding packages to your project. You will therefore need to add one of the following NuGet packages to your project to configure the unit test provider:
• SpecRun.SpecFlow-3.3.0
• SpecFlow.xUnit
• SpecFlow.MsTest
• SpecFlow.NUnit
Note: Make sure you do not add more than one of the unit test plugins to your project. If you do, an error
message will be displayed.
SpecFlow 2
SpecFlow 2 is configured using the app.config file (Full Framework only). Enter your unit test provider in the
unitTestProvider element in the specflow section, e.g.:
<specFlow>
<unitTestProvider name="MsTest" />
</specFlow>
The same configuration elements are available in both the XML (app.config) and JSON (specflow.json)
formats.
language
Use this section to define the default language for feature files and other language-related settings. For more details
on language settings, see Feature Language.
bindingCulture
Use this section to define the culture for executing binding methods and converting step arguments. For more details
on language settings, see Feature Language.
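For example, a specflow.json entry could look like this (de-AT is just an illustrative culture; the structure mirrors the language example above):

{
  "bindingCulture": {
    "name": "de-AT"
  }
}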
generator
runtime
trace
stepAssemblies
This section can be used to register additional assemblies that contain external bindings. The assembly of the SpecFlow project (the project containing the feature files) is automatically included. The binding assemblies must be placed in the output folder (e.g. bin/Debug) of the SpecFlow project, for example by adding a reference to the assembly from the project.
The following example registers an additional binding assembly (MySharedBindings.dll).
specflow.json example:
{
"stepAssemblies": [
{ "assembly": "MySharedBindings" }
]
}
app.config example:
<specFlow>
<stepAssemblies>
<stepAssembly assembly="MySharedBindings" />
</stepAssemblies>
</specFlow>
The <stepAssemblies> section can contain multiple <stepAssembly> elements (one for each assembly). Each element has an assembly attribute that contains the name of the binding assembly to load.
Project Template
You can find the SpecFlow project template in the New project dialog when you search for SpecFlow.
After the normal configuration of your project (location, name, ...), you will get to a step where you choose your .NET version and test framework for the new project.
Clicking on Next creates a new project with all required NuGet packages.
Item Template
Note: The .NET Core SDK is required to be installed in order to use project templates.
• SpecFlow Feature File: Gherkin Feature file with an example
• SpecFlow Feature File (Dutch/Nederlands): empty Gherkin feature file in Dutch
• SpecFlow Feature File (English): empty Gherkin feature file in English
• SpecFlow Feature File (French/français): empty Gherkin feature file in French
• SpecFlow Feature File (German/Deutsch): empty Gherkin feature file in German
• SpecFlow Hooks (event bindings): template class with BeforeScenario and AfterScenario hooks
• SpecFlow Json Configuration: template for specflow.json
• SpecFlow Step Definition: step definitions class
• SpecFlow+ Runner Profile (.srProfile): configuration file for SpecFlow+ Runner
The templates are also available for the .NET CLI; they are distributed as the SpecFlow.Templates.DotNet NuGet package and can be installed with dotnet new -i SpecFlow.Templates.DotNet. The template is installed locally. Once the installation is complete, you can use the template to create new SpecFlow projects.
After installing the templates, you can create a new project using the following command:
dotnet new specflowproject
By default, a .NET Core project is created with SpecFlow+ Runner configured as the test runner. You can create a
project for a different test runner and/or target framework using the following optional parameters:
• framework: Determines the target framework for the project. The following options are available:
– netcoreapp3.0 (default): .NET Core 3.0
– netcoreapp3.1 (default if .NET Core 3.1 is installed; the .NET Core 3.x options are mutually exclusive)
– netcoreapp2.2: .NET Core 2.2
– net472: .NET Framework 4.7.2
• unittestprovider: Determines the test runner. The following options are available:
– specflowplusrunner (default): SpecFlow+ Runner
– xunit: XUnit
– nunit: NUnit
– mstest: MSTest
Example: dotnet new specflowproject --unittestprovider xunit --framework netcoreapp2.2
This creates a new project with xUnit as the unit test provider, targeting .NET Core 2.2. The project is created with a pre-defined structure that follows best practices. The project includes a single feature file (in the Features folder) and its associated steps (in the Steps folder).
Item Templates
SpecFlow supports several unit test frameworks you can use to execute your tests.
To use a specific unit test provider, you have to add its dedicated NuGet package to your project. You can only have one of these packages added to your project at once.
The unit test provider is no longer configured in app.config, but by using plugins for the various test frameworks. It is therefore mandatory to use SpecRun.SpecFlow-3.0.0, SpecFlow.xUnit, SpecFlow.MsTest or SpecFlow.NUnit in your project to configure the unit test provider.
Generator plugins are no longer configured in app.config. To load a plugin, you have to add it to the
SpecFlowGeneratorPlugins item group via MSBuild.
The easiest way is to package your Generator Plugin in a NuGet package and use the automatic import of the props
and target files in them. A good example are the plugins for the different test frameworks.
For example, if we look at the xUnit plugin: the generator plugin is located at /Plugins/TechTalk.SpecFlow.xUnit.Generator.SpecFlowPlugin.
In the /Plugins/TechTalk.SpecFlow.xUnit.Generator.SpecFlowPlugin/build/SpecFlow.xUnit.props file you can add your entry to the SpecFlowGeneratorPlugins ItemGroup. Be careful, because depending on the version of MSBuild you use (Full Framework or .NET Core version), you have to put different assemblies in the ItemGroup. The best way to do this is in the /Plugins/TechTalk.SpecFlow.xUnit.Generator.SpecFlowPlugin/build/SpecFlow.xUnit.targets file. You can use the MSBuildRuntimeType property to decide which assembly you want to use.
Runtime plugins are also no longer configured in app.config. SpecFlow now loads all files in the folder of the test assembly and in the current working directory that end with .SpecFlowPlugin.dll.
Because .NET Core doesn't copy references to the target directory, you have to add the plugin assembly to the None ItemGroup and set its CopyToOutputDirectory to PreserveNewest.
You have to make the same decisions as with the generator plugins.
The specflow.exe tool was removed. To generate the code-behind files, please use the SpecFlow.Tools.MsBuild.Generation NuGet package.
Reports were removed from the main code base and aren't currently available for SpecFlow 3.0. For details on why this was done and where to find the extracted code, see the GitHub issue https://github.com/techtalk/SpecFlow/issues/1036.
Configuring SpecFlow via app.config is only available when you are using the Full Framework. If you are using .NET
Core, you have to use the new specflow.json configuration file.
Gherkin uses a set of special keywords to give structure and meaning to executable specifications. Each keyword is
translated to many spoken languages; in this reference we’ll use English.
Most lines in a Gherkin document start with one of the keywords.
Comments are only permitted at the start of a new line, anywhere in the feature file. They begin with zero or more
spaces, followed by a hash sign (#) and some text.
Block comments are currently not supported by Gherkin.
Either spaces or tabs may be used for indentation. The recommended indentation level is two spaces. Here is an
example:
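A minimal illustration (the feature below is only a sketch, not part of the calculator or Bookshop examples):

Feature: Guess the word

  Scenario: Breaker guesses a word
    Given the Maker has chosen a word
    When the Breaker makes a guess
    Then the Maker is asked to score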
The trailing portion (after the keyword) of each step is matched to a code block, called a step definition.
2.11.1 Keywords
Each line that isn’t a blank line has to start with a Gherkin keyword, followed by any text you like. The only exceptions
are the feature and scenario descriptions.
The primary keywords are:
• Feature
• Rule (as of Gherkin 6)
• Example (or Scenario)
• Given, When, Then, And, But for steps (or *)
• Background
• Scenario Outline (or Scenario Template)
• Examples
There are a few secondary keywords as well:
• """ (Doc Strings)
• | (Data Tables)
• @ (Tags)
• # (Comments)
Localisation: Gherkin is localised for many spoken languages; each has its own localised equivalent of these keywords.
Feature
The purpose of the Feature keyword is to provide a high-level description of a software feature, and to group related
scenarios.
The first primary keyword in a Gherkin document must always be Feature, followed by a : and a short text that
describes the feature.
You can add free-form text underneath Feature to add more description.
These description lines are ignored by SpecFlow at runtime, but are available for reporting (They are included by
default in html reports).
The name and the optional description have no special meaning to SpecFlow. Their purpose is to provide a place
for you to document important aspects of the feature, such as a brief explanation and a list of business rules (general
acceptance criteria).
The free format description for Feature ends when you start a line with the keyword Background, Rule,
Example or Scenario Outline (or their alias keywords).
You can place tags above Feature to group related features, independent of your file and directory structure.
Tags
Tags are markers that can be assigned to features and scenarios. Assigning a tag to a feature is equivalent to assigning
the tag to all scenarios in the feature file.
If supported by the unit test framework, SpecFlow generates categories from the tags. The generated category name
is the same as the tag’s name, but without the leading @. You can filter and group the tests to be executed using these
unit test categories. For example, you can tag crucial tests with @important, and then execute these tests more
frequently.
If your unit test framework does not support categories, you can still use tags to implement special logic for tagged
scenarios in bindings by querying the ScenarioContext.Current.ScenarioInfo.Tags property.
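For example, a hook could branch on the tags of the current scenario. The sketch below uses a context-injected ScenarioContext (the class name is illustrative, and the @important tag is the one mentioned above):

using System.Linq;
using TechTalk.SpecFlow;

[Binding]
public sealed class TagHooks
{
    private readonly ScenarioContext _scenarioContext;

    public TagHooks(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [BeforeScenario]
    public void HandleImportantScenarios()
    {
        // ScenarioInfo.Tags contains the tags of the current scenario without the leading @
        if (_scenarioContext.ScenarioInfo.Tags.Contains("important"))
        {
            // special logic for @important scenarios
        }
    }
}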
SpecFlow treats the @ignore tag as a special tag. SpecFlow generates an ignored unit test method from scenarios
with this tag.
Descriptions
Free-form descriptions (as described above for Feature) can also be placed underneath Example/Scenario,
Background, Scenario Outline and Rule.
You can write anything you like, as long as no line starts with a keyword.
Rule
The (optional) Rule keyword has been part of Gherkin since v6.
Support in SpecFlow: Currently the Visual Studio extension does not support the Rule keyword; if you use it, your syntax highlighting will be broken.
The purpose of the Rule keyword is to represent one business rule that should be implemented. It provides additional
information for a feature. A Rule is used to group together several scenarios that belong to this business rule. A
Rule should contain one or more scenarios that illustrate the particular rule.
For example:
# -- FILE: features/gherkin.rule_example.feature
Feature: Highlander
Scenario
This is a concrete example that illustrates a business rule. It consists of a list of steps.
The keyword Scenario is a synonym of the keyword Example.
You can have as many steps as you like, but we recommend you keep the number at 3-5 per example. Having too many steps in an example will cause it to lose its expressive power as specification and documentation.
In addition to being a specification and documentation, an example is also a test. As a whole, your examples are an
executable specification of the system.
Examples follow this same pattern:
• Describe an initial context (Given steps)
• Describe an event (When steps)
• Describe an expected outcome (Then steps)
Steps
This might seem like a limitation, but it forces you to come up with a less ambiguous, more clear domain language:
Given
Given steps are used to describe the initial context of the system - the scene of the scenario. It is typically something
that happened in the past.
When SpecFlow executes a Given step, it will configure the system to be in a well-defined state, such as creating and
configuring objects or adding data to a test database.
The purpose of Given steps is to put the system in a known state before the user (or external system) starts interacting with the system (in the When steps). Avoid talking about user interaction in Given's. If you were creating use cases, Given's would be your preconditions.
It’s okay to have several Given steps (use And or But for number 2 and upwards to make it more readable).
Examples:
• Mickey and Minnie have started a game
• I am logged in
• Joe has a balance of £42
When
When steps are used to describe an event, or an action. This can be a person interacting with the system, or it can be
an event triggered by another system.
It’s strongly recommended you only have a single When step per Scenario. If you feel compelled to add more, it’s
usually a sign that you should split the scenario up into multiple scenarios.
Examples:
• Guess a word
• Invite a friend
• Withdraw money
Imagine it's 1922
Most software does something people could do manually (just not as efficiently).
Try hard to come up with examples that don’t make any assumptions about technology or user interface. Imagine it’s
1922, when there were no computers.
Implementation details should be hidden in the step definitions.
Then
Then steps are used to describe an expected outcome, or result. The step definition of a Then step should use an
assertion to compare the actual outcome (what the system actually does) with the expected outcome (what the step
says the system is supposed to do).
And, But
If you have several successive Given's, When's or Then's, you could simply repeat the keyword for each step. Or, you
could make the example read more fluidly by replacing the successive Given's, When's, or Then's with And's and But's:
Gherkin also supports using an asterisk (*) in place of any of the normal step keywords. This can be helpful when you
have some steps that are effectively a list of things, so you can express them more like bullet points.
Background
Occasionally you’ll find yourself repeating the same Given steps in all of the scenarios in a Feature.
Since it is repeated in every scenario, this is an indication that those steps are not essential to describe the scenarios;
they are incidental details. You can literally move such Given steps to the background, by grouping them under a
Background section.
A Background allows you to add some context to the scenarios that follow it. It can contain one or more Given
steps, which are run before each scenario, but after any Before hooks.
A Background is placed before the first Scenario/Example, at the same level of indentation.
For example:
Background:
Given a global administrator named "Greg"
And a blog named "Greg's anti-tax rants"
And a customer named "Dr. Bill"
And a blog named "Expensive Therapy" owned by "Dr. Bill"
Scenario: Dr. Bill tries to post to somebody else's blog, and fails
Given I am logged in as Dr. Bill
When I try to post to "Greg's anti-tax rants"
Then I should see "Hey! That's not your blog!"
Rule: Users are notified about overdue tasks on first use of the day
Background:
Given I have overdue tasks
You can only have one set of Background steps per Feature or Rule. If you need different Background steps
for different scenarios, consider breaking up your set of scenarios into more Rules or more Features.
For a less explicit alternative to Background, check out scoped step definitions.
• Don’t use Background to set up complicated states, unless that state is actually something the client needs
to know.
– For example, if the user and site names don’t matter to the client, use a higher-level step such as Given
I am logged in as a site owner.
• Keep your Background section short.
– The client needs to actually remember this stuff when reading the scenarios. If the Background is more
than 4 lines long, consider moving some of the irrelevant details into higher-level steps.
• Make your Background section vivid.
– Use colourful names, and try to tell a story. The human brain keeps track of stories much better than it
keeps track of names like "User A", "User B", "Site 1", and so on.
• Keep your scenarios short, and don’t have too many.
– If the Background section has scrolled off the screen, the reader no longer has a full overview of what's
happening. Think about using higher-level steps, or splitting the *.feature file.
Scenario Outline
The Scenario Outline keyword can be used to run the same Scenario multiple times, with different combi-
nations of values.
Examples:
| start | eat | left |
| 12 | 5 | 7 |
| 20 | 5 | 15 |
A Scenario Outline must contain an Examples (or Scenarios) section. Its steps are interpreted as a tem-
plate which is never directly run. Instead, the Scenario Outline is run once for each row in the Examples
section beneath it (not counting the first header row).
The steps can use <> delimited parameters that reference headers in the examples table. SpecFlow will replace these
parameters with values from the table before it tries to match the step against a step definition.
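For illustration, assuming the outline contains a step such as Given there are <start> cucumbers, it could be bound like this (the step text and method are hypothetical):
[Given(@"there are (\d+) cucumbers")]
public void GivenThereAreCucumbers(int start)
{
    // SpecFlow substitutes <start> with the value from the current Examples row
    // (12 or 20) before matching, so the step definition receives a plain number
}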
Hint: In certain cases, when generating method names using the regular expression method, SpecFlow is unable to
generate the correct parameter signatures for unit test logic methods without a little help. Placing single quotation
marks (') around placeholders (e.g. '<placeholder>') improves SpecFlow's ability to parse the scenario outline
and generate more accurate regular expressions and test method signatures.
You can also use parameters in multiline step arguments.
In some cases you might want to pass more data to a step than fits on a single line. For this purpose Gherkin has Doc
Strings and Data Tables.
Doc Strings
Doc Strings are handy for passing a larger piece of text to a step definition.
The text should be offset by delimiters consisting of three double-quote marks on lines of their own:
In your step definition, there’s no need to find this text and match it in your pattern. It will automatically be passed as
the last argument in the step definition.
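A sketch of such a step definition (the step text is illustrative); the Doc String content arrives in the trailing string parameter:
[Given(@"a blog post with the following body")]
public void GivenABlogPostWithTheFollowingBody(string body)
{
    // "body" contains the full multi-line text between the """ delimiters
}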
Indentation of the opening """ is unimportant, although common practice is two spaces in from the enclosing step.
The indentation inside the triple quotes, however, is significant. Each line of the Doc String will be dedented according
to the opening """. Indentation beyond the column of the opening """ will therefore be preserved.
Data Tables
Data Tables are handy for passing a list of values to a step definition:
Just like Doc Strings, Data Tables will be passed to the step definition as the last argument.
SpecFlow provides a rich API for manipulating tables from within step definitions. See the Assist Namespace reference
for more details.
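A sketch of a step definition receiving a Data Table (the step text and column name are illustrative):
[Given(@"the following users exist")]
public void GivenTheFollowingUsersExist(Table table)
{
    foreach (var row in table.Rows)
    {
        // each row is a TableRow; cells are accessed by header name, e.g. row["name"]
    }
}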
The language you choose for Gherkin should be the same language your users and domain experts use when they talk
about the domain. Translating between two languages should be avoided.
This is why Gherkin has been translated to over 70 languages.
Here is a Gherkin scenario written in Norwegian:
# language: no
Funksjonalitet: Gjett et ord
A # language: header on the first line of a feature file tells SpecFlow what spoken language to use - for example
# language: fr for French. If you omit this header, SpecFlow will default to English (en).
You can also define the language in the configuration file.
Gherkin Dialects
In order to allow Gherkin to be written in a number of languages, the keywords have been translated into multi-
ple languages. To improve readability and flow, some languages may have more than one translation for any given
keyword.
Overview
You can find all translations of Gherkin on GitHub. This is also the place to add or update translations.
Note: Large parts of this page were taken from https://cucumber.io/docs/gherkin/reference/.
To avoid communication errors introduced by translations, it is recommended to keep the specification and the accep-
tance test descriptions in the language of the business. The Gherkin format supports many natural languages besides
English, like German, Spanish or French. More details on the supported natural languages are available here.
The language of the feature files can either be specified globally in your configuration (see the language element), or in the
feature file's header using the #language syntax. Specify the language using the ISO language names used by the
CultureInfo class of the .NET Framework (e.g. en-US).
#language: de-DE
Funktionalität: Addition
...
SpecFlow uses the feature file language to determine the set of keywords used to parse the file, but the language
setting is also used as the default setting for converting parameters by the SpecFlow runtime. The culture for binding
execution and parameter conversion can be specified explicitly, see bindingCulture element.
As data conversion can only be done using a specific culture in the .NET Framework, we recommend using the specific
culture name (e.g. en-US) instead of the neutral culture name (e.g. en). If a neutral culture is used, SpecFlow uses a
specific default culture to convert data (e.g. en-US is used to convert data if the en language was used).
2.13 Bindings
The Gherkin feature files are closer to free-text than to code – they cannot be executed as they are. The automation
that connects the specification to the application interface has to be developed first. The automation that connects
the Gherkin specifications to source code is called a binding. The binding classes and methods can be defined in the
SpecFlow project or in external binding assemblies.
There are several kinds of bindings in SpecFlow.
The most important one is the step definition, which automates the scenario at the step level. This means that instead
of providing automation for the entire scenario, it has to be done for each separate step. The benefit of this model is
that the step definitions can be reused in other scenarios, making it possible to (partly) construct further scenarios from
existing steps with less (or no) automation effort.
2.13.2 Hooks
Hooks can be used to perform additional automation logic on specific events, e.g. before executing a scenario.
The step definitions provide the connection between your feature files and application interfaces. For better reusability,
the step definitions can include parameters. This means that it is not necessary to define a new step definition for each
step that just differs slightly. For example, the steps When I perform a simple search on 'Domain'
and When I perform a simple search on 'Communication' can be automated with a single step def-
inition, with ‘Domain’ and ‘Communication’ as parameters.
The following example shows a simple step definition that matches to the step When I perform a simple
search on 'Domain':
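A minimal sketch of such a step definition (the method body is illustrative):
[When(@"I perform a simple search on '(.*)'")]
public void WhenIPerformASimpleSearchOn(string searchTerm)
{
    // searchTerm receives 'Domain', 'Communication', etc. from the matched step text
}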
Here the method is annotated with the [When] attribute, and includes the regular expression used to match the step's
text. This regular expression uses the group (.*) to define a parameter for the method.
This is the classic and most used way of specifying the step definitions. The step definition method has to be annotated
with one or more step definition attributes with regular expressions.
Most of the step definitions can also be specified without a regular expression: the method should be named with
underscored naming style (or pascal-case, see below) and should be annotated with an empty [Given], [When],
[Then] or [StepDefinition] attribute. You can refer to the method parameters with either the parameter name
(ALL-CAPS) or the parameter index (zero-based) with the P prefix, e.g. P0.
[Given]
public void Given_I_have_entered_NUMBER_into_the_calculator(int number)
{
...
}
Matching rules:
• The match is case-insensitive.
• The underscore character matches one or more non-word characters (e.g. whitespace, punctuation): \W+.
• If the step contains accented characters, the method name should also contain the accented characters (no sub-
stitution).
• The step keyword (e.g. Given) can be omitted: [Given] public void
I_have_entered_NUMBER_....
• The step keyword can be specified in the local Gherkin language, or English. The default language
can be specified in the app config as the feature language or binding culture. The following step
definition is therefore a valid "Given" step with German language settings: [When] public void
Wenn_ich_addieren_drücke()
Similarly to the underscored naming style, you can also specify the step definitions with Pascal-case method names.
[Given]
public void GivenIHaveEnteredNUMBERIntoTheCalculator(int number)
{
...
}
Matching rules:
• All rules of the "Method name - underscores" style apply.
• Any potential word boundary (e.g. number-letter, uppercase-lowercase, uppercase-uppercase) matches zero or
more non-word characters (e.g. whitespace, punctuation): \W*.
• You can mix this style with underscores. For example, the parameter placeholders can be highlighted this way:
GivenIHaveEntered_P0_IntoTheCalculator(int number)
F# allows almost any character in method names, so if you use F# bindings you can use the regular expression itself
as the method name.
• Step definitions can specify parameters. These will match to the parameters of the step definition method.
• The method parameter type can be string or another .NET type. In the latter case a configurable conversion is
applied.
• With regular expressions
– The match groups ((...)) of the regular expression define the arguments for the method based on the
order (the match result of the first group becomes the first argument, etc.).
– You can use non-capturing groups (?:regex) in order to use groups without a method argument.
• With method name matching
– You can refer to the method parameters with either the parameter name (ALL-CAPS) or the parameter
index (zero-based) with the P prefix, e.g. P0.
If the step definition method should match for steps having table or multi-line text arguments, additional Table and/or
string parameters have to be defined in the method signature to be able to receive these arguments. If both table
and multi-line text argument are used for the step, the multi-line text argument is provided first.
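A sketch of a step definition that receives both a multi-line text argument and a table (the step text and parameter names are illustrative); per the rule above, the multi-line text parameter comes first:
[Given(@"the following document with attachments")]
public void GivenTheFollowingDocumentWithAttachments(string documentBody, Table attachments)
{
    // documentBody receives the Doc String, attachments receives the Data Table
}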
2.15 Hooks
Hooks (event bindings) can be used to perform additional automation logic at specific times, such as any setup required
prior to executing a scenario. In order to use hooks, you need to add the Binding attribute to your class:
[Binding]
public class MyClass
{
...
}
Hooks are global, but can be restricted to run only for features or scenarios by defining a scoped binding, which can
be filtered with tags. The execution order of hooks for the same type is undefined, unless specified explicitly.
When running tests in multiple threads with SpecFlow+ Runner, Before and After hooks such as BeforeTestRun
and AfterTestRun are executed once for each thread. This is a limitation of the current architecture.
If you need to execute specific steps once per test run, rather than once per thread, you can do this using deployment
transformations. An example can be found here.
By default the hooks of the same type (e.g. two [BeforeScenario] hooks) are executed in an unpredictable order.
If you need to ensure a specific execution order, you can specify the Order property in the hook’s attributes.
[BeforeScenario(Order = 0)]
public void CleanDatabase()
{
// we need to run this first...
}
[BeforeScenario(Order = 100)]
public void LoginUser()
{
    // ... this hook runs after CleanDatabase because of its higher Order value
}
The number indicates the order, not the priority, i.e. the hook with the lowest number is always executed first.
If no order is specified, the default value is 1000. However, we do not recommend relying on this default value to
order your hooks; specify the order explicitly for each hook instead.
Note: If a hook throws an unhandled exception, subsequent hooks of the same type are not executed. If you want to
ensure that all hooks of the same types are executed, you need to handle your exceptions manually.
Most hooks support tag scoping. Use tag scoping to restrict hooks to only those features or scenarios that have at least
one of the tags in the tag filter (tags are combined with OR). You can specify the tag in the attribute or using scoped
bindings.
If you have code that executes an asynchronous task, you can define asynchronous bindings to execute the correspond-
ing code using the async and await keywords.
A sample project using asynchronous bindings can be found here. The When binding in WebSteps.cs is asynchronous,
and defined as follows:
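A minimal sketch of an asynchronous When binding (the step text and the awaited helper are illustrative):
[When(@"the results have been loaded")]
public async Task WhenTheResultsHaveBeenLoaded()
{
    // requires "using System.Threading.Tasks;"
    await LoadResultsAsync();   // hypothetical asynchronous operation awaited inside the binding
}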
Bindings (step definitions, hooks) are global for the entire SpecFlow project. This means that step definitions bound
to a very generic step text (e.g. “When I save the changes”) become challenging to implement. The general solution
for this problem is to phrase the scenario steps in a way that the context is clear (e.g. “When I save the book details”).
In some cases however, it is necessary to restrict when step definitions or hooks are executed based on certain condi-
tions. SpecFlow’s scoped bindings can be used for this purpose.
You can restrict the execution of scoped bindings by:
• tag
• feature (using the feature title)
Navigation from feature files to scoped step definitions is currently not supported by the Visual Studio extension.
The following example starts Selenium for scenarios marked with the @web tag.
[BeforeScenario("web")]
public static void BeforeWebScenario()
{
StartSelenium();
}
The following example defines a different scope for the same step depending on whether UI automation (“web” tag)
or controller automation (“controller” tag) is required:
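A sketch of such a pair of scoped step definitions (the step text is illustrative):
[Given(@"I have placed an order")]
[Scope(Tag = "web")]
public void GivenIHavePlacedAnOrder_ThroughTheUi()
{
    // UI automation for scenarios tagged with @web
}

[Given(@"I have placed an order")]
[Scope(Tag = "controller")]
public void GivenIHavePlacedAnOrder_ThroughTheController()
{
    // controller-level automation for scenarios tagged with @controller
}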
The following example shows a way to "ignore" executing the scenarios marked with @manual. However, SpecFlow's
tracing will still display the steps, so you can work through the manual scenarios by following the steps in the report.
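A sketch of such a binding class (the class name is illustrative): a catch-all, scoped step definition with an empty body is matched for every step of a @manual scenario.
[Binding]
[Scope(Tag = "manual")]
public class ManualSteps
{
    [Given(@".*"), When(@".*"), Then(@".*")]
    public void EmptyStep()
    {
        // intentionally empty: steps of @manual scenarios are traced but not automated
    }
}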
You can define more complex filters using the ScenarioContext class. The following example starts selenium if the
scenario is tagged with @web and @automated.
[Binding]
public class Binding
{
    private readonly ScenarioContext _scenarioContext;

    public Binding(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [BeforeScenario("web")]
    public void BeforeWebScenario()
    {
        if (_scenarioContext.ScenarioInfo.Tags.Contains("automated"))
            StartSelenium();
    }
}
In many cases, different bindings need to exchange data during execution. SpecFlow provides several ways of sharing
data between bindings.
If the binding is an instance method, SpecFlow creates a new instance of the containing class for every scenario
execution. Following the entity-based step organization rule, defining instance fields in the binding classes is an
efficient way of sharing data between different steps of the same scenario that are related to the same entity.
The following example saves the result of the MVC action to an instance field in order to make assertions for it in a
“then” step.
[Binding]
public class SearchSteps
{
    private ActionResult actionResult;
    // ... the When step stores the MVC action result in this field,
    // and the Then step reads it back to perform assertions
SpecFlow supports a very simple dependency framework that is able to instantiate and inject class instances for the
scenarios. With this feature you can group the shared state to context-classes, and inject them into every binding class
that is interested in that shared state.
[Binding]
public class BookSteps
{
    private readonly CatalogContext _catalogContext;
    // the context instance is injected via the constructor (see Context Injection below)
Generally, using static fields can cause synchronization and maintenance issues and makes unit testing harder. As
SpecFlow tests are executed synchronously and people usually don't write unit tests for the test code itself, these
arguments are only partly valid for binding code.
In some cases, sharing state through static fields can be an efficient solution.
SpecFlow supports a very simple dependency framework that is able to instantiate and inject class instances for sce-
narios. This feature allows you to group the shared state in context classes, and inject them into every binding class
that needs access to that shared state.
To use context injection:
1. Create your POCOs (simple .NET classes) representing the shared data.
2. Define them as constructor parameters in every binding class that requires them.
3. Save the constructor argument to instance fields, so you can use them in the step definitions.
Rules:
• The life-time of these objects is limited to a scenario’s execution.
• If the injected objects implement IDisposable, they will be disposed after the scenario is executed.
• The injection is resolved recursively, i.e. the injected class can also have dependencies.
• Resolution is done using public constructors only.
• If there are multiple public constructors, SpecFlow takes the first one.
The container used by SpecFlow can be customized, e.g. you can include object instances that have already been
created, or modify the resolution rules. See the Advanced options section below for details.
2.19.1 Examples
In the first example we define a POCO for holding the data of a person and use it in a given and a then step that are
placed in different binding classes.
[Binding]
public class MyStepDefs
{
private readonly PersonData personData;
public MyStepDefs(PersonData personData) // use it as ctor parameter
{
this.personData = personData;
}
[Given]
public void The_person_FIRSTNAME_LASTNAME(string firstName, string lastName)
{
personData.FirstName = firstName; // write into the shared data
personData.LastName = lastName;
//... do other things you need
}
}
[Binding]
public class OtherStepDefs   // a second binding class; the name is illustrative
{
    private readonly PersonData personData;

    public OtherStepDefs(PersonData personData) // the same PersonData instance is injected
    {
        this.personData = personData;
    }
[Then]
public void The_person_data_is_properly_displayed()
{
var displayedData = ... // get the displayed data from the app
// read from shared data, to perform assertions
Assert.AreEqual(personData.FirstName + " " + personData.LastName,
displayedData, "Person name was not displayed properly");
}
}
The following example defines a context class to store referred books. The context class is injected into a binding
class.
public class CatalogContext
{
    public CatalogContext()
    {
        ReferenceBooks = new ReferenceBookList();
    }

    public ReferenceBookList ReferenceBooks { get; private set; }
}

[Binding]
public class BookSteps
{
    private readonly CatalogContext _catalogContext;

    public BookSteps(CatalogContext catalogContext)
    {
        _catalogContext = catalogContext;
    }
}
The container used by SpecFlow can be customized, e.g. you can include object instances that have already been
created, or modify the resolution rules.
You can customize the container from a plugin or a before scenario hook. The class customizing the injection rules
has to obtain an instance of the scenario execution container (an instance of BoDi.IObjectContainer). This can
be done through constructor injection (see example below).
The following example adds the Selenium web driver to the container, so that binding classes can specify
IWebDriver dependencies (a constructor argument of type IWebDriver).
[Binding]
public class WebDriverSupport
{
    private readonly IObjectContainer objectContainer;

    public WebDriverSupport(IObjectContainer objectContainer) // obtained through constructor injection
    {
        this.objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var webDriver = new FirefoxDriver();
        objectContainer.RegisterInstanceAs<IWebDriver>(webDriver);
    }
}
As mentioned above, the default SpecFlow container is IObjectContainer which is recommended for most
scenarios. However, you may have situations where you need more control over the configuration of the dependency
injection, or make use of an existing dependency injection configuration within the project you are testing, e.g. pulling
in service layers for assisting with assertions in Then stages. For these cases, the following plugins allow you to use another DI framework with SpecFlow:
• SpecFlow.Autofac
• SpecFlow.Unity
• SpecFlow.Ninject (currently not on nuget)
To make use of these plugins, you need to add a reference and add the plugin to your configuration in the specflow
section:
<specFlow>
<plugins>
<add name="SpecFlow.Autofac" type="Runtime" />
</plugins>
<!-- Anything else -->
</specFlow>
This tells SpecFlow to load the runtime plugin and allows you to create an entry point to use this functionality, as
shown in the autofac example. Once set up, your dependencies are injected into steps and bindings like they are with
the IObjectContainer, but behind the scenes it will be pulling those dependencies from the DI container you
added.
One thing to note here is that each plugin has its own conventions for loading the entry point. This is
often a static class with a static method containing an attribute that is marked by the specific plugin. You
should check the requirements of the plugins you are using.
You can load all your dependencies within this handler section, or you can inject the relevant IoC container into your
binding classes like this:
[Binding]
public class WebDriverPageHooks
{
private readonly IKernel _kernel;

    public WebDriverPageHooks(IKernel kernel) // the container instance is injected into the binding class
    {
        _kernel = kernel;
    }
[BeforeScenario]
public void BeforeScenario()
{
var webdriver = SetupWebDriver();
_kernel.Bind<IWebDriver>().ToConstant(webdriver);
}
[AfterScenario]
public void AfterScenario()
{
var webDriver = _kernel.Get<IWebDriver>();
webDriver.Close();
webDriver.Dispose();
}
}
This gives you the option of either loading types up front or creating types within your binding sections so you can
dispose of them as necessary.
We recommend looking at the autofac example and plugins documentation and following these conventions.
Remember to adhere to the plugin documentation and have your assembly end in .SpecFlowPlugin
e.g. SpecFlow.AutoFac.SpecFlowPlugin. Internal namespaces can be anything you want, but
the assembly name must follow this naming convention or SpecFlow will be unable to locate it.
2.20 ScenarioContext
You have probably seen the ScenarioContext in the code that SpecFlow generates when a missing step definition
is found: ScenarioContext.Pending();
ScenarioContext provides access to several functions, which are demonstrated using the following scenarios.
in Bindings
[Binding]
public class Binding
{
    private readonly ScenarioContext _scenarioContext;
    public Binding(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }
Now you can access the ScenarioContext in all your bindings via the _scenarioContext field.
in Hooks
Before/AfterTestRun
Accessing the ScenarioContext is not possible, as no Scenario is executed when the hook is called.
Before/AfterFeature
Accessing the ScenarioContext is not possible, as no Scenario is executed when the hook is called.
Before/AfterScenario
Before/AfterStep
2.20.2 ScenarioContext.Pending
This is the default behavior for a missing step definition, but you can also use it directly if you want to:
in the .feature:
ScenarioContext helps you store values in a dictionary between steps. This helps you to organize your step definitions
better than using private variables in step definition classes.
There are some type-safe extension methods that help you to manage the contents of the dictionary in a safer way. To
do so, you need to include the namespace TechTalk.SpecFlow.Assist, since these methods are extension methods of
ScenarioContext.
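A sketch of storing and retrieving values with these helpers (the step texts and key are illustrative, assuming _scenarioContext is an injected ScenarioContext):
// requires "using TechTalk.SpecFlow.Assist;"
[When(@"I remember the current user")]
public void WhenIRememberTheCurrentUser()
{
    _scenarioContext.Set("John Galt", "userName");            // store a value under a key
}

[Then(@"the remembered user is displayed")]
public void ThenTheRememberedUserIsDisplayed()
{
    var userName = _scenarioContext.Get<string>("userName");  // type-safe retrieval
    // ... assert that userName is displayed
}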
2.20.4 ScenarioContext.ScenarioInfo
ScenarioContext.ScenarioInfo allows you to access information about the scenario currently being exe-
cuted, such as its title and tags:
In the .feature file:
@showUpInScenarioInfo @andThisToo
var si = _scenarioContext.ScenarioInfo;

// Assertions
si.Title.Should().Equal(fromStep.Title);
for (var i = 0; i < si.Tags.Length - 1; i++)
{
    si.Tags[i].Should().Equal(fromStep.Tags[i]);
}
}
Another use is to check if an error has occurred, which is possible with the ScenarioContext.TestError
property, which simply returns the exception.
You can use this information for "error handling". Here is a simple example:
in the .feature file:
[AfterScenario("showingErrorHandling")]
public void AfterScenarioHook()
{
if(_scenarioContext.TestError != null)
{
var error = _scenarioContext.TestError;
Console.WriteLine("An error occurred:" + error.Message);
Console.WriteLine("It was of type:" + error.GetType().Name);
}
}
[AfterScenario]
public void AfterScenario()
{
if(_scenarioContext.TestError != null)
{
WebBrowser.Driver.CaptureScreenShot(_scenarioContext.ScenarioInfo.Title);
}
}
In this case, MvcContrib is used to capture a screenshot of the failing test and name the screenshot after the title of the
scenario.
2.20.5 ScenarioContext.CurrentScenarioBlock
Use ScenarioContext.CurrentScenarioBlock to query the “type” of step (Given, When or Then). This
can be used to execute additional setup/cleanup code right before or after Given, When or Then blocks.
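A sketch of a hook using it (assuming an injected ScenarioContext in the _scenarioContext field):
[BeforeStep]
public void BeforeEachStep()
{
    // ScenarioBlock is the TechTalk.SpecFlow enum of block types (Given, When, Then, ...)
    if (_scenarioContext.CurrentScenarioBlock == ScenarioBlock.When)
    {
        // extra setup to run right before the steps of the When block
    }
}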
in the .feature file:
2.20.6 ScenarioContext.StepContext
Sometimes you need to access the currently executed step, e.g. to improve tracing. Use the
_scenarioContext.StepContext property for this purpose.
2.21 FeatureContext
SpecFlow provides access to the current test context using both FeatureContext and the more commonly
used ScenarioContext. FeatureContext persists for the duration of the execution of an entire feature, whereas
ScenarioContext only persists for the duration of a scenario.
in Bindings
[Binding]
public class Binding
{
    private readonly FeatureContext _featureContext;
    public Binding(FeatureContext featureContext)
    {
        _featureContext = featureContext;
    }
Now you can access the FeatureContext in all your bindings via the _featureContext field.
in Hooks
Before/AfterTestRun
Accessing the FeatureContext is not possible, as no feature is being executed when the hook is called.
Before/AfterFeature
You can get the FeatureContext via parameter of the static method.
Example:
[Binding]
public class Hooks
{
[BeforeFeature]
public static void BeforeFeature(FeatureContext featureContext)
{
Console.WriteLine("Starting " + featureContext.FeatureInfo.Title);
}
[AfterFeature]
public static void AfterFeature(FeatureContext featureContext)
{
Console.WriteLine("Finished " + featureContext.FeatureInfo.Title);
}
}
Before/AfterScenario
Before/AfterStep
FeatureContext implements Dictionary<string, object>. So you can use the FeatureContext like a property
bag.
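For example (a sketch, assuming an injected FeatureContext in the _featureContext field of the binding class):
[BeforeFeature]
public static void RememberStartTime(FeatureContext featureContext)
{
    featureContext["startTime"] = DateTime.Now;               // store a value for the whole feature
}

[AfterScenario]
public void ReportElapsedTime()
{
    var startTime = (DateTime)_featureContext["startTime"];   // read it back later
    Console.WriteLine("Elapsed: " + (DateTime.Now - startTime));
}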
2.21.3 FeatureContext.FeatureInfo
FeatureInfo provides more information than ScenarioInfo, but it works the same:
In the .feature file:
var fi = _featureContext.FeatureInfo;
// Assertions
fi.Title.Should().Equal(fromStep.Title);
fi.GenerationTargetLanguage.Should().Equal(fromStep.TargetLanguage);
fi.Description.Should().StartWith(fromStep.Description);
fi.Language.IetfLanguageTag.Should().Equal(fromStep.Language);
for (var i = 0; i < fi.Tags.Length - 1; i++)
{
    fi.Tags[i].Should().Equal(fromStep.Tags[i]);
}
FeatureContext exposes a BindingCulture property that simply points to the culture the feature is written in
(en-US in our example).
Note: This feature will be deprecated with SpecFlow 3.1 and removed in a future version (probably 4.0).
It is possible to call steps from within step definitions:
[Binding]
public class CallingStepsFromStepDefinitionSteps : Steps
{
[Given(@"the user (.*) exists")]
public void GivenTheUserExists(string name)
{
// ...
}
Invoking steps from step definitions is practical if you have several common steps that you want to perform in several
scenarios, or simply if you want to make your scenarios shorter and more declarative. This allows you to do the
following in a Scenario:
Instead of having a lot of repetition:
Note: When using this approach to remove duplications from your feature files, the console output will contain both
the master step and the delegated steps as follows:
Sometimes you want to call a step that has been designed to take multiline step arguments (see Using Gherkin
Language in SpecFlow), for example:
Tables
This can easily be called from a plain text step like this:
But what if you want to call this from a step definition? There are a couple of ways to do this:
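One way is the following sketch, assuming the binding class derives from the Steps base class and a step named "the following books exist" takes a table:
[Given(@"a typical book catalogue")]
public void GivenATypicalBookCatalogue()
{
    var table = new Table("Title", "Price");
    table.AddRow("Analysis Patterns", "50.20");
    table.AddRow("Domain Driven Design", "46.34");

    // the Steps base class exposes Given/When/Then methods that accept a Table argument
    Given("the following books exist", table);
}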
Step bindings can use parameters to make them reusable for similar steps. The parameters are taken from either the
step's text or from the values in the examples of a scenario outline. These arguments are provided as either strings or
TechTalk.SpecFlow.Table instances.
To avoid cumbersome conversions in the step binding methods, SpecFlow can perform an automatic conversion from
the arguments to the parameter type in the binding method. All conversions are performed using the culture of the
feature file, unless the bindingCulture element is defined in your app.config file (see Feature Language). The
following conversions can be performed by SpecFlow (in the following precedence):
• no conversion, if the argument is an instance of the parameter type (e.g. the parameter type is object or
string)
• step argument transformation
• standard conversion
Step argument transformations can be used to apply a custom conversion step to the arguments in step definitions.
The step argument transformation is a method that converts from text (specified by a regular expression) or a Table
instance to an arbitrary .NET type.
A step argument transformation is used to convert an argument if:
• The return type of the transformation is the same as the parameter type
• The regular expression (if specified) matches the original (string) argument
Note: If multiple matching transformations are available, a warning is output in the trace and the first transformation
is used.
The following example transforms a relative period of time (in 3 days) into a DateTime structure.
[Binding]
public class Transforms
{
[StepArgumentTransformation(@"in (\d+) days?")]
public DateTime InXDaysTransform(int days)
{
return DateTime.Today.AddDays(days);
}
}
The following example transforms any string input (no regex provided) into an XmlDocument.
[Binding]
public class Transforms
{
[StepArgumentTransformation]
public XmlDocument XmlTransform(string xml)
{
XmlDocument result = new XmlDocument();
result.LoadXml(xml);
return result;
}
}
The following example transforms a table argument into a list of Book entities (using the SpecFlow Assist Helpers).
[Binding]
public class Transforms
{
[StepArgumentTransformation]
public IEnumerable<Book> BooksTransform(Table booksTable)
{
return booksTable.CreateSet<Book>();
}
}
[Binding]
public class Transforms
{
[Given(@"Show messenger""(.*)""")]
public void GiveShowMessenger()
{
string chave = nfe.Tab();
Assert.IsNotNull(chave);
}
}
• The parameter type is Guid and the argument contains a full GUID string or a GUID string prefix. In the latter
case, the value is filled with trailing zeroes.
Bindings can be defined in the main SpecFlow project or in other assemblies (external binding assemblies). If the
bindings are used from external binding assemblies, the following notes have to be considered:
• The external binding assembly can be another project in the solution or a compiled library (dll).
• The external binding assembly can also use a different .NET language, e.g. you can write bindings for your C#
SpecFlow project also in F# (As an extreme case, you can use your SpecFlow project with the feature files only
and with all the bindings defined in external binding assemblies).
• The external binding assembly has to be referenced from the SpecFlow project to ensure it is copied to the target
folder and listed in the app.config of the SpecFlow project (see below).
• The external binding assemblies can contain all kind of bindings: step definition, hooks and also step argument
transformations.
• The bindings from assembly references are not fully supported in the Visual Studio integration of SpecFlow
v1.8 or earlier: the step definitions from these assemblies will not be listed in the autocompletion lists.
• The external binding file must be in the root of the project being referenced. If it is in a folder in the project, the
bindings will not be found.
2.24.1 Configuration
In order to use bindings from an external binding assembly, you have to list it (with the assembly name) in the
app.config of the SpecFlow project. The SpecFlow project is always included implicitly. See more details on the
configuration in the <stepAssemblies> section of the configuration guide.
<specFlow>
<stepAssemblies>
<stepAssembly assembly="MySharedBindings" />
</stepAssemblies>
</specFlow>
You can globally rename steps and update the associated bindings automatically. To do so:
1. Open the feature file containing the step.
2. Right-click on the step you want to rename and select Rename from the context menu.
3. Enter the new text for the step in the dialog and confirm with OK.
4. Your bindings and all feature files containing the step are updated.
Note: If the rename function is not affecting your feature files, you may need to restart Visual Studio to flush the
cache.
To use these helpers, you need to add the TechTalk.SpecFlow.Assist namespace to the top of your file:
using TechTalk.SpecFlow.Assist;
2.26.1 CreateInstance
CreateInstance<T> is an extension method of Table that will convert the table data to an object. For example,
if you list data in a table that lists the values of your object like this:
Given I entered the following data into the new account form:
| Field | Value |
| Name | John Galt |
| Birthdate | 2/2/1902 |
| HeightInInches | 72 |
| BankAccountBalance | 1234.56 |
Given I entered the following data into the new account form:
| Name | Birthdate | HeightInInches | BankAccountBalance |
| John Galt | 2/2/1902 | 72 | 1234.56 |
. . . you can convert the data in the table to an instance of an object like this:
[Given(@"Given I entered the following data into the new account form:")]
public void x(Table table)
{
var account = table.CreateInstance<Account>();
// account.Name will equal "John Galt", HeightInInches will equal 72, etc.
}
The CreateInstance<T> method will create the Account object and set properties according to what can be read
from the table. It also uses the appropriate casting or conversion to turn your string into the appropriate type.
The headers in this table can be anything you want, e.g. “Field” and “Value”. What matters is that the first column
contains the property name and the second column the value.
Alternatively you can use ValueTuples and destructuring:
[Given(@"Given I entered the following data into the new account form:")]
public void x(Table table)
{
var account = table.CreateInstance<(string name, DateTime birthDate, int heightInInches, decimal bankAccountBalance)>();
// account.name will equal "John Galt", heightInInches will equal 72, etc.
}
Important: In the case of tuples, you need to have the same number of parameters and types; parameter names
do not matter, as ValueTuple field names are not available at runtime via reflection. Tables with more than 7
properties are not currently supported, and you will receive an exception if you try to map more properties.
2.26.2 CreateSet
CreateSet<T> is an extension method of Table that converts table data into a set of objects. For example, assume
you have the following step:
You can convert the data in the table to a set of objects like this:
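A sketch, assuming an Account class and an illustrative step text:
[Given(@"I entered the following accounts")]
public void GivenIEnteredTheFollowingAccounts(Table table)
{
    // one Account instance is created per table row, with properties set from matching columns
    var accounts = table.CreateSet<Account>();
}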
The CreateSet<T> method returns an IEnumerable<T> based on the matching data in the table. It contains the
values for each object, making appropriate type conversions from string to the related property.
2.26.3 CompareToInstance
CompareToInstance<T> makes it easy to compare the properties of an object against a table. For example, you
have a class like this:
You can assert that the properties match with this simple step definition:
[Binding]
public class Binding
{
    [Then(@"the person details should be")]       // illustrative step text
    public void ThenThePersonDetailsShouldBe(Table table)
    {
        var person = GetDisplayedPerson();        // hypothetical helper returning the Person under test
        table.CompareToInstance<Person>(person);
    }
}
If FirstName is not “John”, LastName is not “Galt”, or YearsOld is not 54, a descriptive error showing the differences
is thrown.
If the values match, no exception is thrown, and SpecFlow continues to process your scenario.
2.26.4 CompareToSet
CompareToSet<T> makes it easy to compare the values in a table to a set of objects. For example, you have a class
like this:
You want to test that your system returns a specific set of accounts, like this:
[Binding]
public class Binding
{
    [Then(@"these accounts should be returned")]  // illustrative step text
    public void ThenTheseAccountsShouldBeReturned(Table table)
    {
        var accounts = GetReturnedAccounts();     // hypothetical helper returning the accounts under test
        table.CompareToSet<Account>(accounts);
    }
}
In this example, CompareToSet<T> checks that two accounts are returned, and only tests the properties you defined
in the table. It does not test the order of the objects, only that one was found that matches. If no record matching
the properties in your table is found, an exception is thrown that includes the row number(s) that do not match up.
The SpecFlow Assist helpers use the values in your table to determine what properties to set in your object. However,
the names of the columns do not need to match exactly - whitespace and casing are ignored. For example, the following
two tables are treated as identical:
2.26.6 Aliasing
Sometimes a column is known by more than one name; the TableAliases attribute lets you map aliases (specified as regular expressions) to a property, for example:
public class Person   // illustrative entity
{
    [TableAliases("Last[ ]?Name", "Family[ ]?Name")]
    public string Surname { get; set; }
}
Test writers can then refer to this property as Surname, Last Name, Lastname, Family Name or FamilyName, and it
will still be mapped to the correct column.
The TableAliases attribute can be applied to a field, a property as a single attribute with multiple regular expres-
sions, or as multiple attributes, depending on your preference.
2.26.7 Extensions
Out-of-the-box, the SpecFlow table helpers know how to handle most C# base types. Types like String,
Bool, Enum, Int, Decimal, DateTime, etc. are all covered. The covered types can be found here. If you
want to cover more types, including your own custom types, you can do so by registering your own instances of
IValueRetriever and IValueComparer.
For example, you have a complex object like this:
| Name | Color |
| XL | Blue |
| L | Red |
If you want to map Blue and Red to the appropriate instance of the Color class, you need to create an instance of
IValueRetriever that can convert the strings to the Color instance.
You can register your custom IValueRetriever (and/or an instance of IValueComparer if you want to com-
pare colors) like this:
[Binding]
public static class Hooks1
{
[BeforeTestRun]
public static void BeforeTestRun()
{
Service.Instance.ValueRetrievers.Register(new ColorValueRetriever());
Service.Instance.ValueComparers.Register(new ColorValueComparer());
}
}
There is also a NullValueRetriever that you can register to treat a specific marker text (here "<null>") as a null value:
[Binding]
public static class Hooks1
{
[BeforeTestRun]
public static void BeforeTestRun()
{
Service.Instance.ValueRetrievers.Register(new NullValueRetriever("<null>"));
}
}
The SpecFlow.Assist CompareToSet Table extension method only checks for equivalence of collections, which is a
reasonable default. The SpecFlow.Assist namespace also contains extension methods for LINQ-based set and instance comparisons.
Scenario: Match
When I have a collection
| Artist | Album |
| Beatles | Rubber Soul |
| Pink Floyd | Animals |
| Muse | Absolution |
Then it should match
| Artist | Album |
| Beatles | Rubber Soul |
| Pink Floyd | Animals |
| Muse | Absolution |
And it should match
| Artist | Album |
| Beatles | Rubber Soul |
| Muse | Absolution |
| Pink Floyd | Animals |
And it should exactly match
| Artist | Album |
| Beatles | Rubber Soul |
| Pink Floyd | Animals |
| Muse | Absolution |
But it should not match
| Artist | Album |
| Beatles | Rubber Soul |
| Queen | Jazz |
| Muse | Absolution |
And it should not match
| Artist | Album |
| Beatles | Rubber Soul |
| Muse | Absolution |
And it should not exactly match
| Artist | Album |
| Beatles | Rubber Soul |
| Muse | Absolution |
| Pink Floyd | Animals |
With LINQ-based operations each of the above comparisons can be expressed using a single line of code:
[Binding]
public class Binding
{
    ScenarioContext _scenarioContext;
    // ... step definitions performing the ToProjection-based comparisons described below
}
Scenario: Containment
When I have a collection
| Artist | Album |
| Beatles | Rubber Soul |
| Pink Floyd | Animals |
| Muse | Absolution |
Then it should contain all items
| Artist | Album |
| Beatles | Rubber Soul |
| Muse | Absolution |
But it should not contain all items
| Artist | Album |
| Beatles | Rubber Soul |
| Muse | Resistance |
And it should not contain any of items
| Artist | Album |
| Beatles | Abbey Road |
| Muse | Resistance |
[Binding]
public class Binding
{
    ScenarioContext _scenarioContext;
    // ... step definitions performing the containment checks
}
What if Artist and Album are properties of different entities? Look at this piece of code:
SpecFlow.Assist has a generic class EnumerableProjection. If a type “T” is known at compile time, ToProjection
method converts a table or a collection straight to an instance of EnumerableProjection:
table.ToProjection<Item>();
But if we need to compare a table with the collection of anonymous types from the example above, we need to express
this type in some way so ToProjection will be able to build an instance of specialized EnumerableProjection. This
is done by sending a collection as an argument to ToProjection. And to support both sets and instances and avoid
naming ambiguity, corresponding methods are called ToProjectionOfSet and ToProjectionOfInstance:
table.ToProjectionOfSet(collection);
table.ToProjectionOfInstance(instance);
Here are the definitions of SpecFlow Table extensions methods that convert tables and collections of IEnumerables to
EnumerableProjection:
Note that the last arguments of the ToProjectionOfSet and ToProjectionOfInstance methods are not used in the method
implementation. Their only purpose is to bring type information about "T", so that the specialized EnumerableProjection
adapter class can be built properly. Now we can perform the following comparisons with collections and instances of anonymous types:
[Test]
public void Table_with_subset_of_columns_with_matching_values_should_match_collection()
{
var table = CreateTableWithSubsetOfColumns();
table.AddRow(1.ToString(), "a");
table.AddRow(2.ToString(), "b");
Assert.AreEqual(0, table.ToProjectionOfSet(query).Except(query.ToProjection()).Count());
}
[Test]
public void Table_with_subset_of_columns_should_be_equal_to_matching_instance()
{
var table = CreateTableWithSubsetOfColumns();
table.AddRow(1.ToString(), "a");
Assert.AreEqual(table.ToProjectionOfInstance(instance), instance);
}
2.27 F# Support
Bindings for SpecFlow can also be written in F#. This lets you take advantage of the F# language when writing
step definitions: you can define regex-named F# functions for your steps by simply putting the regular expression
between double backticks.
Although regex method names are only relevant for step definitions, you can also define hooks and step argument
conversions in F# binding projects.
Note: You need to create a C# or VB project for hosting the feature files and configure your F# project(s) as external
binding assemblies:
<specFlow>
<stepAssemblies>
<stepAssembly assembly="MyFSharpBindings" />
</stepAssemblies>
</specFlow>
SpecFlow provides item templates for creating new F# step definitions or hooks in Visual Studio.
Note: The navigation and the binding analysis features of the SpecFlow editor provide only limited support for F#
projects.
2.27.2 Examples
In order to execute your SpecFlow tests, you need to define the tests as Gherkin feature files, bind the steps defined in
your feature files to your code, and configure a unit test provider to execute the tests. SpecFlow generates executable
unit tests from your Gherkin files.
We recommend that you add a separate project to your solution for your tests.
Tests are executed using a unit test provider. Add the corresponding NuGet package to your project to define your unit
test provider:
• SpecRun.Runner
• SpecFlow.xUnit
• SpecFlow.MsTest
• SpecFlow.NUnit
Configure your unit test provider in your project’s app.config file, e.g.:
<specFlow>
<unitTestProvider name="MsTest" />
</specFlow>
Since SpecFlow 3.1 you can skip scenarios programmatically using the IUnitTestRuntimeProvider.
[Binding]
public sealed class CalculatorStepDefinitions
{
    private readonly IUnitTestRuntimeProvider _unitTestRuntimeProvider;

    public CalculatorStepDefinitions(IUnitTestRuntimeProvider unitTestRuntimeProvider)
    {
        _unitTestRuntimeProvider = unitTestRuntimeProvider;
    }
[When("your binding")]
public void YourBindingMethod()
{
_unitTestRuntimeProvider.TestIgnore("This scenario is always skipped");
}
}
Ignoring is like skipping the scenario. Be careful, as it behaves a little differently for the different unit test runners
(xUnit, NUnit, MSTest, SpecFlow+ Runner).
2.29.2 Limitations
Currently this works only in step definitions. It is not possible to use it in hooks. See GitHub Issue #2059
When SpecFlow tests are executed, the execution engine processes the test steps, executing the necessary test logic
and either finishing successfully or failing for various reasons.
While executing the tests, the engine outputs information about the execution to the test output. In some cases it makes
sense to investigate the test output even if the test passes.
By default, the test output includes the executed test steps, the invoked test logic methods (bindings) and the execution
time for longer operations. You can configure the information displayed in the test output.
A test can fail because it causes an error. The test output contains more detailed information, e.g. a stack trace.
A test can fail if the test logic (bindings) have not yet been implemented (or are configured improperly). By default,
this is reported as an “inconclusive” result, although you can configure how SpecFlow behaves in this case.
Note: Some unit test frameworks do not support inconclusive result. In this case the problem is reported as an error
instead.
The test output can be very useful if you are missing bindings, as it contains a step definition method skeleton you can
copy to your project and extend with the test logic.
Just like with normal unit tests, you can also ignore SpecFlow tests. To do so, tag the feature or scenario with the
@ignore tag. Don’t forget that ignoring a test will not solve any problems with your implementation. . . ;-)
SpecFlow is mainly used to drive integration tests that have external dependencies and applications with a complex
internal architecture. Because of this, it is generally not easy to execute these tests in parallel.
Note: This page only covers parallel execution with SpecFlow. For options when executing tests with SpecFlow+
Runner, refer to the SpecFlow+ documentation
If there are no external dependencies or they can be cloned for parallel execution, but the application architecture
depends on static state (e.g. caches), the best way is to execute tests in parallel isolated by AppDomain. This
ensures that every test execution thread is hosted in a separate AppDomain and that each thread’s memory (e.g. static
fields) is isolated. In such scenarios, SpecFlow can be used to execute tests in parallel without any extra considerations.
SpecFlow+ Runner supports parallel execution with AppDomain, SharedAppDomain and Process isolation.
Note: The [BeforeTestRun] and [AfterTestRun] hooks are executed for each individual test execution
thread (AppDomain), so you can use them to initialize/reset shared memory.
If your tests do not depend on any static states (i.e. do not store any test-specific information in static fields), you can
run the tests in parallel without AppDomain isolation. Executing tests this way has a smaller initialization footprint
and lower memory requirements.
To run SpecFlow tests in parallel without memory isolation, the following requirements must be met:
• You must be using a test runner that supports this feature (currently NUnit v3, xUnit v2, MSTest and SpecFlow+
Runner)
• You may not be using the static context properties ScenarioContext.Current, FeatureContext.Current or
ScenarioStepContext.Current (see examples below).
Execution Behaviour
• [BeforeTestRun] and [AfterTestRun] hooks (events) are only executed once, on the first thread that
initializes the framework. Executing tests in the other threads is blocked until the hooks have been fully executed
on the first thread.
• Each thread manages its own enter/exit feature execution workflow. The [BeforeFeature] and
[AfterFeature] hooks may be executed multiple times in different threads if the different threads run sce-
narios from the same feature file. The execution of these hooks do not block one another, but the Before/After
feature hooks are called in pairs within a single thread (the [BeforeFeature] hook of the next scenario is
only executed after the [AfterFeature] hook of the previous one). Each thread has a separate (and isolated)
FeatureContext.
• Scenarios and their related hooks (Before/After scenario, scenario block, step) are isolated in the differ-
ent threads during execution and do not block each other. Each thread has a separate (and isolated)
ScenarioContext.
• All scenarios in a feature are executed on the same thread.
• The test trace listener (that outputs the scenario execution trace to the console by default) is invoked
asynchronously from the multiple threads and the trace messages are queued and passed to the lis-
tener in serialized form. If the test trace listener implements TechTalk.SpecFlow.Tracing.
IThreadSafeTraceListener, the messages are sent directly from the threads.
• The binding registry (that holds the step definitions, hooks, etc.) and some other core services are shared across
test threads.
The NUnit v3 unit test provider (nunit) does not currently generate [Parallelizable] attributes on feature
classes or scenario methods. Parallelisation must be configured by setting an assembly-level attribute in the SpecFlow
project.
[assembly: Parallelizable(ParallelScope.Fixtures)]
Context injection is a type safe state sharing method that is thread-safe, and is also recommended for non-parallel
execution scenarios.
When using parallel execution, the context classes have to be injected into the binding class instead of accessing the
ScenarioContext.Current, FeatureContext.Current or ScenarioStepContext.Current static properties;
alternatively, the instance properties of the Steps base class can be used. Accessing the static properties during
parallel execution throws a SpecFlowException.
[Binding]
public class StepsWithScenarioContext
{
    private readonly ScenarioContext scenarioContext;

    public StepsWithScenarioContext(ScenarioContext scenarioContext)
    {
        this.scenarioContext = scenarioContext;
    }
}
You can inject FeatureContext in a similar manner, and use the StepContext property of the injected
ScenarioContext to access the ScenarioStepContext.
[Binding]
public class StepsWithScenarioContext : Steps
{
[Given(@"I put something into the context")]
public void GivenIPutSomethingIntoTheContext()
{
this.ScenarioContext.Set("test-value", "test-key");
}
}
The other contexts can be accessed with the FeatureContext and the StepContext properties.
2.32 Debugging
SpecFlow Visual Studio integration also supports debugging the execution of your tests. Just like in the source code
files of your project, you can place breakpoints in the SpecFlow feature files. Whenever you execute the generated
tests in debug mode, the execution will stop at the specified breakpoints and you can execute the steps one-by-one
using “Step Over” (F10), or follow the detailed execution of the bindings using “Step Into” (F11).
If the execution of a SpecFlow test is stopped at a certain point of the binding (e.g. because of an exception), you can
navigate to the current step in the feature file from the “Call Stack” tool window in Visual Studio.
By default, you cannot debug inside the generated .feature.cs files. You can enable debugging for these files by setting
generator allowDebugGeneratedFiles=”true”.
SpecFlow+ Runner (formerly “SpecRun”) is a dedicated test execution framework for SpecFlow. SpecFlow+ Runner
integrates more tightly with Visual Studio’s testing infrastructure and Team Foundation Server (TFS) Build. The
documentation for SpecFlow+ can be found here.
2.33.1 Installation
SpecFlow+ Runner is provided as a NuGet package (SpecRun.SpecFlow). Detailed setup instructions can be found
here.
SpecFlow+ Runner allows you to run and debug your scenarios as first class citizens:
• Run/debug individual scenarios or scenario outline examples from the feature file editor (choose “Run/Debug
SpecFlow Scenarios” from the context menu)
• View scenarios in the Visual Studio Test Explorer window with the scenario title
• Use the Test Explorer to:
– Group scenarios by tags (choose “Traits” grouping) and features (choose “Class”)
– Filter scenarios by different criteria
– Run/debug selected/all scenarios
– Jump to the corresponding scenario in the feature file
– View test execution results
• You can specify processor architecture (x86/x64), .NET platform and many other details for the test execution,
including special config file transformations used for the test execution only.
The SpecRun NuGet package contains all necessary integration components for Team Foundation Server Build, and
you do not need to make any additional configuration or build process template modifications for TFS Build to execute
your scenarios. You can also:
• Display scenario titles in the execution result
• Generate detailed and customizable HTML report
• Filter scenarios in the TFS build definition
More information on using SpecFlow+ Runner with build servers can be found here.
See the short introduction video about the configurable test execution environments and about parallel test execution.
2.34 MSTest
SpecFlow supports MsTest V2. It no longer works with the old MsTest V1.
Documentation for MSTest can be found here.
You can access the MsTest TestContext in your bindings via context injection (a sketch; the class name is illustrative):
using Microsoft.VisualStudio.TestTools.UnitTesting;
[Binding]
public class TestContextHooks
{
    private readonly TestContext _testContext;
    public TestContextHooks(TestContext testContext)
    {
        _testContext = testContext;
    }
    [BeforeScenario()]
    public void BeforeScenario()
    {
        //now you can access the TestContext
    }
}
The MsTest Generator can generate test class attributes from tags specified on a feature.
Owner
Tag:
@Owner:John
Output:
[Microsoft.VisualStudio.TestTools.UnitTesting.OwnerAttribute("John")]
WorkItem
Tag:
@WorkItem:123
Output:
[Microsoft.VisualStudio.TestTools.UnitTesting.WorkItemAttribute(123)]
DeploymentItem
Example 1 : Copy a file to the same directory as the deployed test assemblies
Tag:
@MsTest:DeploymentItem:test.txt
Output:
[Microsoft.VisualStudio.TestTools.UnitTesting.DeploymentItemAttribute("test.txt")]
Example 2 : Copy a file to a specific output directory
Tag:
@MsTest:DeploymentItem:Resources\DeploymentItemTestFile.txt:Data
Output:
[Microsoft.VisualStudio.TestTools.UnitTesting.DeploymentItemAttribute("Resources\\DeploymentItemTestFile.txt", "Data")]
2.35 NUnit
• Microsoft.NET.Test.Sdk
2.36 xUnit
The xUnit ITestOutputHelper is registered in the scenario container. You can access it via context injection.
Example
using System;
using TechTalk.SpecFlow;

[Binding]
public class BindingClass
{
    private Xunit.Abstractions.ITestOutputHelper _testOutputHelper;

    public BindingClass(Xunit.Abstractions.ITestOutputHelper testOutputHelper)
    {
        _testOutputHelper = testOutputHelper;
    }

    [When(@"I do something")]
    public void WhenIDoSomething()
    {
        _testOutputHelper.WriteLine("EB7C1291-2C44-417F-ABB7-A5154843BC7B");
    }
}
The easiest way to execute SpecFlow scenarios on Azure DevOps (formerly Team Foundation Server (TFS)) Build is to use
SpecFlow+ Runner as the unit test provider (see [SpecFlow+ Runner Integration]). The SpecRun NuGet package contains
all necessary integration components, and you don't need to do any additional configuration or build process template
modification to let the TFS build execute your scenarios. In addition, you can:
• Display scenario titles in the execution result
• Generate detailed and customizable HTML reports
• Filter scenarios in the TFS build definition
• Use the integration with the hosted Azure DevOps Server (http://tfs.visualstudio.com/)
Legacy Integration
As SpecFlow generates unit test code from the scenarios, the tests can be executed on any build server (as unit tests).
The configuration depends on the unit test provider used.
2.39 Browserstack
If you want to perform web testing on multiple browsers and operating systems, it can be quite complicated to maintain
machines for each of the target environments. BrowserStack provides “remote web browsers as a service”, making it
easy to do this sort of matrix testing without having to maintain the multiple browser installations yourself. SpecFlow
provides easy integration with BrowserStack.
Follow Kenneth Truyers' blog for further information.
Code sample can be found here.
2.40 Coded UI
2.40.1 Introduction
Note: Coded UI is no longer supported with SpecFlow 3. We recommend using Appium or WinAppDriver instead.
The following is preserved for legacy users.
The Microsoft Coded UI API can be used to create automated tests in Visual Studio, but is not directly compatible
with SpecFlow as each Test Class requires the [CodedUITest] attribute, which SpecFlow does not generate by
default.
Big thanks go to Thomy Kay for pointing us in the right direction.
2.40.2 Solution
You need to ensure SpecFlow generates the [CodedUITest] attribute by creating a custom test generator provider,
copying the DLL file into the tools directory where the SpecFlow NuGet package is installed, and ensuring that any
SpecFlow hooks initialize the Coded UI API.
1. Create a new VS project to generate an assembly that contains the class below. This will require a reference to
TechTalk.SpecFlow.Generator.dll in the SpecFlow directory. If you are using version 1.7 or higher
you will also need to add a reference to TechTalk.SpecFlow.Utils.dll.
2. Add the following class to your new VS project:
SpecFlow version 1.6
namespace My.SpecFlow
{
    using System.CodeDom;
    using TechTalk.SpecFlow.Generator.UnitTestProvider;

    public class MsTest2010CodedUiGeneratorProvider : MsTest2010GeneratorProvider
    {
        public override void SetTestFixture(CodeTypeDeclaration typeDeclaration, string title, string description)
        {
            base.SetTestFixture(typeDeclaration, title, description);
            foreach (CodeAttributeDeclaration customAttribute in typeDeclaration.CustomAttributes)
            {
                if (customAttribute.Name == "Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute")
                {
                    typeDeclaration.CustomAttributes.Remove(customAttribute);
                    break;
                }
            }
            typeDeclaration.CustomAttributes.Add(new CodeAttributeDeclaration(new CodeTypeReference("Microsoft.VisualStudio.TestTools.UITesting.CodedUITestAttribute")));
        }
    }
}
For later SpecFlow 1.x versions, the provider overrides SetTestClass instead:

using System.CodeDom;
using TechTalk.SpecFlow.Generator.UnitTestProvider;

namespace SpecflowCodedUIGenerator
{
    public class MsTest2010CodedUiGeneratorProvider : MsTest2010GeneratorProvider
    {
        public override void SetTestClass(TechTalk.SpecFlow.Generator.TestClassGenerationContext generationContext, string featureTitle, string featureDescription)
        {
            base.SetTestClass(generationContext, featureTitle, featureDescription);
            foreach (CodeAttributeDeclaration customAttribute in generationContext.TestClass.CustomAttributes)
            {
                if (customAttribute.Name == "Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute")
                {
                    generationContext.TestClass.CustomAttributes.Remove(customAttribute);
                    break;
                }
            }
            generationContext.TestClass.CustomAttributes.Add(new CodeAttributeDeclaration(new CodeTypeReference("Microsoft.VisualStudio.TestTools.UITesting.CodedUITestAttribute")));
        }
    }
}
For versions where the generator provider requires a CodeDomHelper constructor argument, the provider looks like this:

using System.CodeDom;
using TechTalk.SpecFlow.Generator.UnitTestProvider;
using TechTalk.SpecFlow.Utils;

namespace SpecflowCodedUIGenerator
{
    public class MsTest2010CodedUiGeneratorProvider : MsTest2010GeneratorProvider
    {
        public MsTest2010CodedUiGeneratorProvider(CodeDomHelper codeDomHelper)
            : base(codeDomHelper)
        {
        }

        public override void SetTestClass(TechTalk.SpecFlow.Generator.TestClassGenerationContext generationContext, string featureTitle, string featureDescription)
        {
            base.SetTestClass(generationContext, featureTitle, featureDescription);
            foreach (CodeAttributeDeclaration customAttribute in generationContext.TestClass.CustomAttributes)
            {
                if (customAttribute.Name == "Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute")
                {
                    generationContext.TestClass.CustomAttributes.Remove(customAttribute);
                    break;
                }
            }
            generationContext.TestClass.CustomAttributes.Add(new CodeAttributeDeclaration(new CodeTypeReference("Microsoft.VisualStudio.TestTools.UITesting.CodedUITestAttribute")));
        }
    }
}
3. Build the project to generate an assembly (.dll) file. Make sure this is built against the same version of .NET
as SpecFlow, and copy this file to your SpecFlow installation directory.
4. Add a config item to your Coded UI project's App.config file:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="specFlow"
type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler,
TechTalk.SpecFlow"/>
</configSections>
<specFlow>
<unitTestProvider name="MsTest.2010"
generatorProvider="My.SpecFlow.MsTest2010CodedUiGeneratorProvider,
My.SpecFlow"
runtimeProvider="TechTalk.SpecFlow.UnitTestProvider.MsTest2010RuntimeProvider,
TechTalk.SpecFlow"/>
</specFlow>
</configuration>
5. Now when you generate a new feature file, it will add the appropriate attributes.
Getting SpecFlow to generate the [CodedUITest] attribute with Visual Studio 2013+ and MSTest
using System.CodeDom;
using TechTalk.SpecFlow.Generator.UnitTestProvider;
using TechTalk.SpecFlow.Utils;

namespace TechTalk.SpecFlow.CodedUI.MsTest
{
    public class SpecFlowCodedUITestGenerator : MsTestGeneratorProvider
    {
        public SpecFlowCodedUITestGenerator(CodeDomHelper codeDomHelper)
            : base(codeDomHelper)
        {
        }

        public override void SetTestClass(TechTalk.SpecFlow.Generator.TestClassGenerationContext generationContext, string featureTitle, string featureDescription)
        {
            base.SetTestClass(generationContext, featureTitle, featureDescription);
            foreach (CodeAttributeDeclaration customAttribute in generationContext.TestClass.CustomAttributes)
            {
                if (customAttribute.Name == "Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute")
                {
                    generationContext.TestClass.CustomAttributes.Remove(customAttribute);
                    break;
                }
            }
            generationContext.TestClass.CustomAttributes.Add(new CodeAttributeDeclaration(new CodeTypeReference("Microsoft.VisualStudio.TestTools.UITesting.CodedUITestAttribute")));
        }
    }
}
1. Right-click the Project in the Solution Explorer pane, and click “Properties”.
2. Go to the “Build Events” tab.
3. In the "Post-build event command line" box, enter a command that copies the built generator assembly into the tools directory of the SpecFlow NuGet package. You need to have the SpecFlow NuGet package installed in the Visual Studio solution that contains your SpecFlow tests in order for this to work.
4. Point the unitTestProvider in the <specFlow> section of your App.config at the new generator provider, keeping the default MSTest runtime provider (the plugin assembly name below is a placeholder for your own assembly name):

<specFlow>
  <unitTestProvider name="MsTest"
      generatorProvider="TechTalk.SpecFlow.CodedUI.MsTest.SpecFlowCodedUITestGenerator, YourCodedUIGeneratorAssembly"
      runtimeProvider="TechTalk.SpecFlow.UnitTestProvider.MsTestRuntimeProvider, TechTalk.SpecFlow" />
</specFlow>
If Visual Studio prompts you to regenerate the feature files, do so. If not, right-click on the project containing your
SpecFlow tests and click “Regenerate Feature Files”.
If you want to use any of the SpecFlow hooks such as [BeforeTestRun], [BeforeFeature], [BeforeScenario],
[AfterTestRun], [AfterFeature] or [AfterScenario], you will receive the following error: Microsoft.
VisualStudio.TestTools.UITest.Extension.TechnologyNotSupportedException: The
browser is currently not supported
Solve this by adding a Playback.Initialize(); call in your [BeforeTestRun] hook, and a Playback.
Cleanup(); call in your [AfterTestRun] hook.
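A minimal pair of hooks could look like this (a sketch; the class name is ours):

using Microsoft.VisualStudio.TestTools.UITesting;
using TechTalk.SpecFlow;

[Binding]
public class CodedUiPlaybackHooks
{
    [BeforeTestRun]
    public static void InitializePlayback()
    {
        // Required so the Coded UI API can be used from SpecFlow hooks.
        Playback.Initialize();
    }

    [AfterTestRun]
    public static void CleanupPlayback()
    {
        Playback.Cleanup();
    }
}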
But how does SpecFlow match the values in the table with the values in the object? It does so with Value Retrievers.
There are value retrievers defined for almost every C# base type, so mapping most basic POCOs can be done with
SpecFlow without any modification.
Often you might have a more complicated POCO type, one that is not comprised solely of C# base types — for example,
a type with a property of a custom Color type. A simple example is processing the human-readable color "red" into its
hex value. The table will be processed, and code along the following lines can be used to capture the table translation
and customize it (the method follows the CanRetrieve shape of the TechTalk.SpecFlow.Assist value retriever API; Color
is the custom type from the example):

public bool CanRetrieve(KeyValuePair<string, string> keyValuePair, Type targetType, Type propertyType)
{
    if (!keyValuePair.Key.Equals("ShirtColor"))
    {
        return false;
    }
    Color value;
    if (Color.TryParse(keyValuePair.Value, out value))
    {
        return true;
    }
    return false;
}
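As a fuller illustration, a complete custom value retriever and its registration could look roughly like this (a sketch, assuming a user-defined Color type with a static TryParse method; the class and hook names are illustrative):

using System;
using System.Collections.Generic;
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

public class ColorValueRetriever : IValueRetriever
{
    public bool CanRetrieve(KeyValuePair<string, string> keyValuePair, Type targetType, Type propertyType)
    {
        // Only handle values destined for the custom Color type.
        return propertyType == typeof(Color);
    }

    public object Retrieve(KeyValuePair<string, string> keyValuePair, Type targetType, Type propertyType)
    {
        // Convert the human-readable value (e.g. "red") into a Color instance.
        Color value;
        Color.TryParse(keyValuePair.Value, out value);
        return value;
    }
}

[Binding]
public class ValueRetrieverHooks
{
    [BeforeTestRun]
    public static void RegisterValueRetrievers()
    {
        // Register the custom retriever so CreateInstance<T>() / CreateSet<T>() can use it.
        Service.Instance.ValueRetrievers.Register(new ColorValueRetriever());
    }
}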
2.42 Plugins
This information only applies to SpecFlow 3. For legacy information on plugins for previous versions, see Plug-
ins (Legacy). With SpecFlow 3, we have changed how you configure which plugins are used. They are no longer
configured in your app.config (or specflow.json).
Runtime plugins need to target .NET Framework 4.6.1 and .NET Standard 2.0. SpecFlow searches for files that end
with .SpecFlowPlugin.dll in the following locations:
• The folder containing your TechTalk.SpecFlow.dll file
• Your working directory
SpecFlow loads plugins in the order they are found in the folder.
RuntimePluginsEvents
• RegisterGlobalDependencies - registers a new interface in the global container, see Available Contain-
ers & Registrations
• CustomizeGlobalDependencies - overrides registrations in the global container, see Available Contain-
ers & Registrations
• ConfigurationDefaults - adjust configuration values
• CustomizeTestThreadDependencies - overrides or registers a new interface in the test thread con-
tainer, see Available Containers & Registrations
• CustomizeFeatureDependencies - overrides or registers a new interface in the feature container, see
Available Containers & Registrations
• CustomizeScenarioDependencies - overrides or registers a new interface in the scenario container, see
Available Containers & Registrations
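For orientation, a minimal runtime plugin that subscribes to one of these events could look roughly like this (a sketch only; the plugin class and namespace names are ours, and the namespaces in the using directives should be verified against your SpecFlow version):

using TechTalk.SpecFlow.Plugins;
using TechTalk.SpecFlow.UnitTestProvider;

[assembly: RuntimePlugin(typeof(SampleRuntimePlugin.SamplePlugin))]

namespace SampleRuntimePlugin
{
    public class SamplePlugin : IRuntimePlugin
    {
        public void Initialize(RuntimePluginEvents runtimePluginEvents,
                               RuntimePluginParameters runtimePluginParameters,
                               UnitTestProviderConfiguration unitTestProviderConfiguration)
        {
            // Override or add registrations in the global container.
            runtimePluginEvents.CustomizeGlobalDependencies += (sender, args) =>
            {
                // args.ObjectContainer gives access to the global container here.
            };
        }
    }
}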
A complete example of a Runtime plugin can be found here. It packages a Runtime plugin as a NuGet package.
SampleRuntimePlugin.csproj
<TargetFrameworks>net461;netstandard2.0</TargetFrameworks>
We set a different <AssemblyName> to add the required .SpecFlowPlugin suffix to the assembly name. You
can also simply name your project with .SpecFlowPlugin at the end.
<AssemblyName>SampleRuntimePlugin.SpecFlowPlugin</AssemblyName>
<GeneratePackageOnBuild> is set to true so that the NuGet package is generated on build. We use a NuSpec
file (SamplePlugin.nuspec) to provide all information for the NuGet package. This is set with the <NuspecFile>
property.
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
<NuspecFile>$(MSBuildThisFileDirectory)SamplePlugin.nuspec</NuspecFile>
<ItemGroup>
<PackageReference Include="SpecFlow" Version="3.0.199" />
</ItemGroup>
build/SpecFlow.SamplePlugin.targets
<_SampleRuntimePluginPath>$(MSBuildThisFileDirectory)\..\lib\$(_SampleRuntimePluginFramework)\SampleRuntimePlugin.SpecFlowPlugin.dll</_SampleRuntimePluginPath>
build/SpecFlow.SamplePlugin.props
<ItemGroup>
  <None Include="$(_SampleRuntimePluginPath)">
    <Link>%(Filename)%(Extension)</Link>
  </None>
</ItemGroup>
Directory.Build.targets
</PropertyGroup>
</Target>
SamplePlugin.nuspec
<files>
  <file src="build\**\*" target="build"/>
  <file src="bin\$config$\net45\SampleRuntimePlugin.SpecFlowPlugin.*" target="lib\net45"/>
  <file src="bin\$config$\netstandard2.0\SampleRuntimePlugin.SpecFlowPlugin.dll" target="lib\netstandard2.0"/>
  <file src="bin\$config$\netstandard2.0\SampleRuntimePlugin.SpecFlowPlugin.pdb" target="lib\netstandard2.0"/>
</files>
Generator plugins need to target .NET Framework 4.7.1 and .NET Core 2.1. The MSBuild task needs
to know which generator plugins it should use. You therefore have to add your generator plugin to the
SpecFlowGeneratorPlugins ItemGroup. This is passed to the MSBuild task as a parameter and later used
to load the plugins.
GeneratorPluginsEvents
A complete example of a generator plugin can be found here. It packages a Generator plugin into a NuGet package.
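For orientation, a minimal generator plugin skeleton could look roughly like this (a sketch only; the class and namespace names are ours, and the exact namespaces should be checked against the SpecFlow.CustomPlugin package you reference):

using TechTalk.SpecFlow.Generator.Plugins;
using TechTalk.SpecFlow.UnitTestProvider;

[assembly: GeneratorPlugin(typeof(SampleGeneratorPlugin.SamplePlugin))]

namespace SampleGeneratorPlugin
{
    public class SamplePlugin : IGeneratorPlugin
    {
        public void Initialize(GeneratorPluginEvents generatorPluginEvents,
                               GeneratorPluginParameters generatorPluginParameters,
                               UnitTestProviderConfiguration unitTestProviderConfiguration)
        {
            // Override or add registrations used during code-behind generation.
            generatorPluginEvents.CustomizeDependencies += (sender, args) =>
            {
                // args exposes the generator's object container here.
            };
        }
    }
}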
SampleGeneratorPlugin.csproj
<PropertyGroup>
<TargetFrameworks>net471;netstandard2.0</TargetFrameworks>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
<NuspecFile>$(MSBuildThisFileDirectory)SamplePlugin.nuspec</NuspecFile>
<AssemblyName>SampleGeneratorPlugin.SpecFlowPlugin</AssemblyName>
</PropertyGroup>
For a generator plugin you need a reference to the SpecFlow.CustomPlugin NuGet package.
<ItemGroup>
<PackageReference Include="SpecFlow.CustomPlugin" Version="3.0.199" />
</ItemGroup>
build/SpecFlow.SamplePlugin.targets
<PropertyGroup>
  <_SampleGeneratorPluginFramework Condition=" '$(MSBuildRuntimeType)' == 'Core'">netcoreapp2.1</_SampleGeneratorPluginFramework>
  <_SampleGeneratorPluginPath>$(MSBuildThisFileDirectory)\$(_SampleGeneratorPluginFramework)\SampleGeneratorPlugin.SpecFlowPlugin.dll</_SampleGeneratorPluginPath>
</PropertyGroup>
build/SpecFlow.SamplePlugin.props
<ItemGroup>
<SpecFlowGeneratorPlugins Include="$(_SampleGeneratorPluginPath)" />
</ItemGroup>
Directory.Build.targets
</PropertyGroup>
</Target>
SamplePlugin.nuspec
<files>
  <file src="build\**\*" target="build"/>
  <file src="bin\$config$\net471\SampleGeneratorPlugin.SpecFlowPlugin.*" target="build\net471"/>
  <file src="bin\$config$\netstandard2.0\SampleGeneratorPlugin.SpecFlowPlugin.dll" target="build\netcoreapp2.1"/>
</files>
If you need to update generator and runtime plugins with a single NuGet package (as we are doing with the
SpecFlow.xUnit, SpecFlow.NUnit and SpecFlow.MsTest packages), you can do so.
As with the separate plugins, you need two projects: one for the runtime plugin, and one for the generator plugin. As
you only want one NuGet package, the NuSpec files must only be present in the generator project. This is because
the generator plugin is built with a higher .NET Framework version (.NET 4.7.1), meaning you can add a dependency
on the runtime plugin (which only requires .NET 4.6.1). This does not work the other way around.
You can simply combine the contents of the .targets and .props file to a single one.
Example
A complete example of a NuGet package that contains a runtime and generator plugin can be found here.
We have set up a Gitter channel for plugin developers here. If you have questions regarding the development of plugins for
SpecFlow, this is the place to ask them.
2.43.1 Introduction
SpecFlow provides a plugin infrastructure, allowing customization. You can develop SpecFlow plugins that change
the behavior of the built-in generator and runtime components. For example, a plugin could provide support for a new
unit testing framework.
To use a custom plugin, it has to be enabled in the configuration (app.config) of your SpecFlow project:
<specFlow>
<plugins>
<add name="MyPlugin" />
</plugins>
</specFlow>
By default, SpecFlow assumes plugins are combined generator and runtime plugins. If your plugin is only a Runtime or
only a Generator plugin, you need to add this information to the configuration.
Example for a Runtime plugin:
<specFlow>
<plugins>
<add name="MyPlugin" type="Runtime"/>
</plugins>
</specFlow>
The steps required to create plugins of all types are similar. All plugins require the suffix “.SpecFlowPlugin”.
Generator Plugin
public void Initialize(GeneratorPluginEvents generatorPluginEvents, GeneratorPluginParameters generatorPluginParameters)
{
    generatorPluginEvents.CustomizeDependencies += CustomizeDependencies;
}
In order for your new library to be picked up by the SpecFlow plugin loader, you need to flag your assembly with the
GeneratorPlugin attribute. The following example assumes that the class implementing the IGeneratorPlugin
interface is called MyNewPlugin.
[assembly: GeneratorPlugin(typeof(MyNewPlugin))]
Runtime Plugin
[assembly: RuntimePlugin(typeof(MyNewPlugin))]
In order to load your plugin in your SpecFlow project, you need to reference your plugin in the app.config file without
the ".SpecFlowPlugin" suffix. It is also handy to know that the path attribute treats the project root as the path root.
The following example loads a plugin assembly called "MyNewPlugin.SpecFlowPlugin.dll" that is located
in a folder called "Binaries" at the same level as the current project.
<specFlow>
<plugins>
<add name="MyNewPlugin" path="..\Binaries" />
</plugins>
</specFlow>
For reference, here are some sample implementations of an IRuntimePlugin and IGeneratorPlugin:
SpecFlow.FsCheck
SpecFlow.Autofac
The global container captures global services for test execution and the step definition, hook and transformation
discovery result (i.e. what step definitions you have).
• IRuntimeConfigurationProvider
• ITestRunnerManager
• IStepFormatter
• ITestTracer
• ITraceListener
• ITraceListenerQueue
• IErrorProvider
• IRuntimeBindingSourceProcessor
• IRuntimeBindingRegistryBuilder
• IBindingRegistry
• IBindingFactory
• IStepDefinitionRegexCalculator
• IBindingInvoker
• IStepDefinitionSkeletonProvider
• ISkeletonTemplateProvider
• IStepTextAnalyzer
• IRuntimePluginLoader
• IBindingAssemblyLoader
• IBindingInstanceResolver
• RuntimePlugins
– RegisterGlobalDependencies event
– CustomizeGlobalDependencies event
The test thread container captures the services and state for executing scenarios on a particular test thread. For parallel
test execution, multiple test runner containers are created, one for each thread.
• ITestRunner
• IContextManager
• ITestExecutionEngine
• IStepArgumentTypeConverter
• IStepDefinitionMatchService
• ITraceListener
• ITestTracer
• RuntimePlugins
– CustomizeTestThreadDependencies event
The feature container captures a feature’s execution state. It is disposed after the feature is executed.
• FeatureContext (also available from the test thread container through IContextManager)
• RuntimePlugins
– CustomizeFeatureDependencies event
The scenario container captures the state of a scenario execution. It is disposed after the scenario is executed.
• (step definition classes)
• (dependencies of the step definition classes, aka context injection)
• ScenarioContext (also available from the Test Thread Container through IContextManager)
• RuntimePlugins
– CustomizeScenarioDependencies event
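To make this concrete: continuing the runtime plugin sketch from the runtime plugins section above, a plugin could register an additional service in the scenario container so that step definition classes receive it via context injection (IClock and SystemClock are purely illustrative types):

// Illustrative service to be injected into step definition classes.
public interface IClock { System.DateTime Now { get; } }
public class SystemClock : IClock { public System.DateTime Now => System.DateTime.Now; }

public void Initialize(RuntimePluginEvents runtimePluginEvents,
                       RuntimePluginParameters runtimePluginParameters,
                       UnitTestProviderConfiguration unitTestProviderConfiguration)
{
    runtimePluginEvents.CustomizeScenarioDependencies += (sender, args) =>
    {
        // args.ObjectContainer is the scenario container; registrations made here
        // can be resolved by step definition classes through context injection.
        args.ObjectContainer.RegisterTypeAs<SystemClock, IClock>();
    };
}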
2.46 Tools
Note that this tool has been removed in SpecFlow 3. This topic only applies to earlier versions of SpecFlow.
Besides executing your tests and verifying that acceptance criteria are met, SpecFlow provides a number of other tools
to support the development process. These tools are accessed using the specflow.exe command line tool located
in the /packages/SpecFlow.x.y.a/tools directory of your project.
A number of commands are available. If you start specflow.exe without any additional options, the available
commands are listed.
2.46.3 help
Use the help command to display the detailed information on a specific command’s syntax. For example,
specflow.exe help generateall displays the options available for the generateall command.
2.46.4 generateall
Use the generateall command to re-generate outdated unit test classes based on your feature files. This can be
useful when upgrading to a newer SpecFlow version, or if feature files are modified outside of Visual Studio.
The following arguments are available.
The following command line regenerates unit tests for the BookShop sample application.
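An invocation of roughly this shape could be used (the specflow.exe path, package version, and project file name below are assumptions for illustration; adjust them to your setup):

packages\SpecFlow.2.2.0\tools\specflow.exe generateall BookShop.AcceptanceTests\BookShop.AcceptanceTests.csproj /force /verbose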
SpecFlow’s generateall function can also be used with MSBuild, allowing you to update the files before compil-
ing the solution. This can be particularly useful if feature files are regularly modified outside of Visual Studio. See
Generate Tests from MsBuild for details on configuring MSBuild to use the generateall function.
Please note that the following reports are only supported prior to SpecFlow 3. As of SpecFlow 3, these options are no
longer available.
stepdefinitionreport
Generates a report that includes information on the usage and binding status of the steps in the entire project. For more
details, see Reporting.
nunitexecutionreport
Generates an execution report based on the output of the NUnit Console Runner. For more details, see Reporting.
mstestexecutionreport
Generates an execution report based on the TRX output generated by VSTest. For more details, see Reporting.
2.47.1 General
You need to use the MSBuild code behind file generation for SpecFlow 3.0.
As of SpecFlow 3.3.30, you no longer need to add the SpecFlow.Tools.MsBuild.Generation package to your
project if you are using one of our unit test provider NuGet packages.
Note: You will need at least VS2017/MSBuild 15 to use this package.
Configuration
1. Add the NuGet package SpecFlow.Tools.MsBuild.Generation with the same version as SpecFlow
to your project.
2. Remove all SpecFlowSingleFileGenerator custom tool entries from your feature files.
3. Select Tools | Options | SpecFlow from the menu in Visual Studio, and set Enable SpecFlowSingleFileGenerator
CustomTool to “false”.
Please use the SpecFlow 2.4.1 NuGet package or higher, as this version fixes an issue with previous versions (see
Known Issues below)
The TechTalk.SpecFlow.targets file defines a number of default options in the following section:
<PropertyGroup>
  <ShowTrace Condition="'$(ShowTrace)'==''">false</ShowTrace>
  <OverwriteReadOnlyFiles Condition="'$(OverwriteReadOnlyFiles)'==''">false</OverwriteReadOnlyFiles>
  <ForceGeneration Condition="'$(ForceGeneration)'==''">false</ForceGeneration>
  <VerboseOutput Condition="'$(VerboseOutput)'==''">false</VerboseOutput>
</PropertyGroup>
To change these options, add the corresponding element to your project file before the <Import> element you added
earlier.
Example:
<PropertyGroup>
<ShowTrace>true</ShowTrace>
<VerboseOutput>true</VerboseOutput>
</PropertyGroup>
...
</ItemGroup>
<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
<Import Project="..\packages\SpecFlow.2.2.0\tools\TechTalk.SpecFlow.tasks" Condition="Exists('..\packages\SpecFlow.2.2.0\tools\TechTalk.SpecFlow.tasks')" />
<Import Project="..\packages\SpecFlow.2.2.0\tools\TechTalk.SpecFlow.targets" Condition="Exists('..\packages\SpecFlow.2.2.0\tools\TechTalk.SpecFlow.targets')" />
...
</Project>
When using SpecFlow NuGet packages prior to SpecFlow 2.4.1, Visual Studio sometimes does not recognize that a
feature file has changed. To generate the code-behind file, you therefore need to rebuild your project. We recommend
upgrading your SpecFlow NuGet package to 2.4.1 or higher, where this is no longer an issue.
When using the classic project system, the MSBuild target described above may no longer be located at the end of your
project file: NuGet ignores manually added entries and places its MSBuild imports at the end. However, the
AfterUpdateFeatureFilesInProject target needs to be defined after the imports, otherwise it will be
overwritten with an empty definition. If this happens, your code-behind files are not compiled as part of the assembly.
If you link feature files into a project, no code-behind file is generated for them (see GitHub Issue 1295).
The Visual Studio integration includes a number of features that make it easier to edit Gherkin files and navigate to and
from bindings in Visual Studio. You can also generate skeleton code including step definition methods from feature
files. The Visual Studio integration also allows you to execute tests from Visual Studio’s Test Explorer.
You can install the integration from Visual Studio Gallery or from the online search in Visual Studio under
Tools/Extensions and Updates (search for “SpecFlow”). Detailed instructions can be found here.
The integration provides the following features:
• Editor
– Gherkin syntax highlighting in feature files, highlight unbound steps and parameters
– IntelliSense (auto-completion) for keywords and steps
2.48.1 Troubleshooting
If you are having trouble with the Visual Studio integration, refer to the Troubleshooting page first.
The Visual Studio integration includes the following features to make it easier to edit feature files and identify which
steps have already been bound.
Various default styles have been defined for the Gherkin syntax. You can customise these colours in Visual Studio’s
settings (Tools | Options | Environment | Fonts and Colors). The names of the corresponding Display items in the
list begin with “Gherkin”.
In addition to highlighting keywords, comments, tags etc., unbound steps and parameters in feature files are highlighted
when editing the file in Visual Studio. The following syntax highlighting is used by default:
• Purple: unbound steps
• Black: bound steps
• Grey italics: parameters in bound steps
You can thus tell immediately which steps in a feature file have been bound.
IntelliSense makes SpecFlow easy to use when integrated with Visual Studio. IntelliSense uses find-as-you-type to
restrict the list of suggested entries.
Gherkin Files
The IntelliSense suggestions for the Given step include the two existing Given
steps in “GetProducts.feature” and “AddProducts.feature”. Step definition methods have been defined for these steps;
the entries in the list contain “–>” to indicate that the step has been bound.
Code Files
IntelliSense is also available for the Gherkin keywords in your code files.
Most of the items in the Edit menu work well with SpecFlow feature files, for example:
• You can comment and uncomment selected lines (‘#’ character) with the default shortcut for comments (Ctrl+K
Ctrl+C/Ctrl+K Ctrl+U) or from the menu
• You can use the options in the Edit | Outlining menu to expand and contract sections of your feature
files
Tables in SpecFlow are also expanded and formatted automatically as you enter column names and values:
You can navigate between the methods in your bindings and the associated steps in your Gherkin feature files.
To navigate from a step definition method to the matching step(s) in your Gherkin feature file(s):
1. Place your cursor in a step definition method.
2. Right-click and select Go To SpecFlow Step Definition Usages from the context menu, or press
Ctrl+Shift+Alt+S (configurable shortcut).
3. If only one match exists, the feature file is opened. If more than one matching step is defined in your feature
files, select the corresponding feature file from the list to switch to it.
To navigate from a step in a Gherkin feature file to the corresponding step definition method:
1. Place your cursor in the step in your feature file.
2. Right-click and select Go To Step Definition from the menu (Alt+Ctrl+Shift+S).
3. The file containing the binding is opened at the appropriate step definition method. Note: If the step definition
does not exist, a message is displayed instead. Click on Yes to copy the skeleton code for your step to the clipboard,
so you can paste it in the corresponding code file.
The Visual Studio integration supports executing SpecFlow scenarios from the Visual Studio Test Explorer. The basic
Test Explorer features work with all unit test providers, although you may need to install additional Visual Studio
connectors, depending on the unit test framework. Full integration is provided for SpecFlow+ Runner, meaning you
can run and debug your scenarios as first class citizens:
• View scenarios listed by title in the Test Explorer
• Group scenarios in the Test Explorer by tag (choose “Traits” grouping) or feature (choose “Class”)
• Filter scenarios by various criteria using the search field
• Run/debug selected or all scenarios
• Double-click an entry in the list to switch to the scenario in the feature file
• View test execution results
You can automatically create a suitable class with skeleton bindings and methods in Visual Studio. To do so:
1. Open your feature file.
2. Right-click in the editor and select Generate Step Definitions from the menu.
3. A dialog is displayed with a list of the steps in your feature file. Use the check boxes to determine which steps
to generate skeleton code for.
4. Enter a name for your class in the Class name field.
5. Choose your desired step definition style, which includes formats without regular expressions. Click on Preview
to preview the output.
6. Either
• Click on Generate to add a new .cs file with your class to your project. This file will contain the skeleton code
for your class and the selected steps.
• Click on Copy methods to clipboard to copy the generated skeleton code to the clipboard. You can then paste
it to the file of your choosing. Use this method to extend your bindings if new steps have been added to a feature
file that already contains bound steps.
The most common parameter usage patterns (quotes, apostrophes, numbers) are detected automatically when creating
the code and are used by SpecFlow to generate methods and regular expressions.
For more information on the available options and custom templates, refer to the Step Definition Styles page.
Cucumber messages provide a set of standardised messages across all Cucumber implementations. These messages
are emitted when running your scenarios.
A standardised set of messages makes it possible to write tools (e.g. report generators) that work for all Cucumber
implementations, such as SpecFlow, Cucumber JVM, Cucumber Ruby, Cucumber.js etc.
Cucumber messages are sent to sinks, which you can configure. If Cucumber messages are enabled, but no sinks are
configured, a file sink with the default path cucumbermessages\messages is used.
2.53.1 Configuration
You can configure Cucumber messages in your specflow.json configuration file, or in the SpecFlow section of your
App.config (.NET Framework only).
The App.config specifications can be found here.
The specflow.json specifications can be found here.
Enabled
You can enable or disable the generation of Cucumber Messages by setting enabled to “true” or “false” in your
configuration. The following defaults apply depending on the version of SpecFlow you are using:
• SpecFlow 3.1: Cucumber messages are disabled by default.
• SpecFlow 3.2 and later: Cucumber messages are enabled by default.
specflow.json
{
"cucumber-messages": {
"enabled": true
}
}
App.config
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="specFlow" type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow" />
</configSections>
<specFlow>
<cucumber-messages enabled="true" />
</specFlow>
</configuration>
Sinks
Sinks determine where Cucumber messages are sent. If Cucumber messages are enabled, but no sinks are configured,
a file sink with the default path cucumbermessages\messages is used.
Use the type property to determine the type of sink.
File Sinks
When using file sinks (type="file"), Cucumber messages are written to the file specified using the path property.
specflow.json
{
"cucumber-messages": {
"enabled": true,
"sinks": [
{
"type": "file",
"path": "custom_cucumber_messages_file.cm"
}
]
}
}
App.config
<specFlow>
<cucumber-messages enabled="true">
<sinks>
<sink type="file" path="custom_cucumber_messages_file.cm" />
</sinks>
</cucumber-messages>
</specFlow>
2.54 Reporting
These reports are only available prior to SpecFlow 3! They have been removed in SpecFlow 3.
SpecFlow provides various options to generate reports related to the acceptance tests.
Note: The specflow.exe command line tool that is used to generate reports can be found in the
packages\SpecFlow.{version number}\tools directory if you installed SpecFlow through NuGet.
Start the tool with no parameters or use the --help option to display an overview of the available options.
You can find a repository containing the old report code here. For information on why the reports were moved to a
separate repo, please see this GitHub issue.
This report generates an HTML test execution report. The report contains a summary of the executed tests and their
results, as well as a detailed report of the execution of individual scenarios.
The following sub-sections cover generating the test execution report for different unit test providers.
NUnit 2
In order to generate this report, execute the acceptance tests with the nunit-console runner. This tool generates
an XML summary of the test executions. To include detailed scenario execution traces, you need to capture the test
output using the /out and the /labels options, e.g.
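A call of roughly this shape could be used (the assembly path and output file names are assumptions for illustration):

nunit-console.exe bin\Debug\BookShop.AcceptanceTests.dll /labels /out=TestResult.txt /xml=TestResult.xml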
NUnit 3
In order to generate this report, execute the acceptance tests with the nunit3-console runner, and set it to output
the results in nunit2 format. To include detailed scenario execution traces, you need to capture the test output using
the --out and the --labels=All options (see the example below).
Important: The NUnit.Extension.NUnitV2ResultWriter package must be included in your project, oth-
erwise you will receive the message: “Unknown result format: nunit2”.
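An illustrative call (the assembly path and file names are assumptions) could look like this:

nunit3-console.exe bin\Debug\BookShop.AcceptanceTests.dll --labels=All --out=TestResult.txt "--result=TestResult.xml;format=nunit2"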
Note: The examples and parameters are for version 2.4.* and higher. Older versions can be found by viewing past
revisions in the GitHub wiki.
The report generation step is the same for both versions of NUnit. The two generated files can be used to invoke
the SpecFlow report generation. If you use the default output file names shown above, you only need to specify
information about the C# test project containing the *.feature files. SpecFlow uses the default TestResult.xml
and TestResult.txt files to produce TestResult.html; a different output file (e.g. CustomSpecflowTestReport.html)
can be specified via the OutputFile setting.
The following table contains the possible arguments for this command.
This report shows the usage and binding status of the steps in your entire project. You can use this report to find both
unused code in the automation layer and scenario steps with no binding defined.
• Steps with a red background are steps that exist in the automation layer but are not used in any feature files.
• Steps with a yellow background are steps that exist in a feature file but do not have a corresponding binding
defined.
• Steps without a special background are steps that exist both in feature files and the automation layer. Ideally,
all your steps are like this.
The following table contains the possible arguments for this command.
2.55 Prerequisite
MSBUILDDISABLENODEREUSE
You have to set MSBUILDDISABLENODEREUSE to 1. The reason is that SpecFlow has an MSBuild task that
is used in the TechTalk.SpecFlow.Specs project. Because MSBuild reuses its processes, the task assembly is loaded
and locked by MSBuild, which breaks the next build.
This environment variable controls whether MSBuild reuses processes; setting it to 1 disables this behaviour.
See https://github.com/Microsoft/msbuild/wiki/MSBuild-Tips-&-Tricks for more info about it.
2.57.1 Runtime
2.57.2 GeneratorTime
For every feature file, SpecFlow generates a code-behind file, which contains the code for the various test
frameworks. It is generated when the project gets compiled. This is done by the SpecFlow.Tools.MsBuild.Generation
MSBuild task.
2.58 Projects
2.58.1 TechTalk.SpecFlow
2.58.2 TechTalk.SpecFlow.Generator
2.58.3 TechTalk.SpecFlow.Parser
This contains the parser for feature files. We use Gherkin and added some additional features to the object model
and parser.
2.58.4 TechTalk.SpecFlow.Utils
2.58.5 SpecFlow.Tools.MsBuild.Generation
This project contains the MSBuild task that generates the code-behind files.
2.58.6 TechTalk.SpecFlow.GeneratorTests
This contains unit tests that are about the generation of the code-behind files
2.58.7 TechTalk.SpecFlow.RuntimeTests
This contains unit tests that are about the runtime of Scenarios
2.58.8 TechTalk.SpecFlow.MSTest.SpecFlowPlugin
This is the plugin for MSTest. It contains all specific implementations for MSTest.
2.58.9 TechTalk.SpecFlow.NUnit.SpecFlowPlugin
This is the plugin for NUnit. It contains all specific implementations for NUnit.
2.58.10 TechTalk.SpecFlow.xUnit.SpecFlowPlugin
This is the plugin for xUnit. It contains all specific implementations for xUnit.
2.58.11 TechTalk.SpecFlow.TestProjectGenerator
This project provides an API for generating projects, compiling them and running tests.
2.58.12 TechTalk.SpecFlow.TestProjectGenerator.Tests
2.58.13 TechTalk.SpecFlow.Specs
2.58.14 TechTalk.SpecFlow.Specs.Generator.SpecFlowPlugin
This is a generator plugin that generates the integration tests for various combinations. They can differ in framework
version, testing framework, project format and programming language.
2.59.1 Directory.Build.props
Explanation: https://docs.microsoft.com/en-us/visualstudio/msbuild/customize-your-build#directorybuildprops-and-
directorybuildtargets
We set general MSBuild properties that apply to all projects here.
The PropertyGroup for the different framework versions is important: it controls which part of SpecFlow is
compiled for which .NET (Framework) version.
2.59.2 TestRunCombinations.cs
This file controls the combinations for which the integration tests are generated.
2.60.1 Error “No templates matched the input template name” when executing tests
This error occurs when your local template cache has problems. It is located in
C:\Users\%username%\.templateengine\dotnetcli\<used .NET Core SDK Version>\. To
fix it, simply delete the cache; it will be regenerated at the next execution.
We prefer instance methods, even if they could be made static because they do not use instance members. Converting a
static method into an instance method happens relatively often and can entail a lot of work.
The test class should be named like the class it is testing, with a Tests suffix. So for example: if a class is named
Calculator, then the test class is called CalculatorTests.
Each test method name has three parts, separated by an underscore: the method or property under test, the
scenario, and the expected result. For example, if we want to test the Add method with a small
positive and a big negative argument and the result should be negative, then the test method would be called
Add_SmallPositiveAndBigNegativeArgument_ResultShouldBeNegative.
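For illustration, assuming xUnit is used and a hypothetical Calculator class is under test, such a test would look like this:

using Xunit;

public class CalculatorTests
{
    [Fact]
    public void Add_SmallPositiveAndBigNegativeArgument_ResultShouldBeNegative()
    {
        // "Calculator" is a hypothetical class under test, used only for illustration.
        var calculator = new Calculator();
        var result = calculator.Add(5, -1000);
        Assert.True(result < 0);
    }
}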
2.62 Troubleshooting
If Visual Studio displays the error message Cannot find custom tool
'SpecFlowSingleFileGenerator' on this system. when right-clicking on a feature file and
selecting Run Custom Tool, make sure the SpecFlow extension is installed and enabled.
To enable the extension in Visual Studio, select Tools | Extensions and Updates. . . , select the “SpecFlow for Visual
Studio” extension, then select Enable.
You can enable traces for SpecFlow. Once tracing is enabled, a new SpecFlow pane is added to the output window
showing diagnostic messages.
To enable tracing, select Tools | Options | SpecFlow from the menu in Visual Studio and set Enable Tracing to
‘True’.
Steps are not recognised even though there are matching step definitions
The SpecFlow Visual Studio integration caches the binding status of step definitions. If the cache is corrupted, steps
may be unrecognised and the highlighting of your steps may be wrong (e.g. bound steps showing as being unbound).
To delete the cache:
1. Close all Visual Studio instances.
2. Navigate to your %TEMP% folder and delete any files that are prefixed with specflow-stepmap-, e.g.
specflow-stepmap-SpecFlowProject-607539109-73a67da9-ef3b-45fd-9a24-6ee0135b5f5c.
cache.
3. Reopen your solution.
You may receive a more specific error message if you enable tracing (see above).
Tests are not displayed in the Test Explorer window when using SpecFlow+ Runner
Note: As of Visual Studio 2017 15.7 the temporary files are no longer used. The following only applies to earlier
versions of Visual Studio.
The Visual Studio Test Adapter cache may also get corrupted, causing tests to not be displayed. If this happens, try
clearing your cache as follows:
1. Close all Visual Studio instances
2. Navigate to your %TEMP%\VisualStudioTestExplorerExtensions\ folder and delete any sub-
folders related to SpecFlow/SpecRun, i.e. that have “SpecFlow” or “SpecRun” in their name.
3. Reopen your solution and ensure that it builds.
"Unable to find plugin in the plugin search path: SpecRun" when saving / generating feature files
SpecFlow searches for plugins in the NuGet packages folder. This is detected relative to the reference to TechTalk.
SpecFlow.dll. If this DLL is not loaded from the NuGet folder, the plugins will not be found.
A common problem is that the NuGet folder is not yet ready (e.g. not restored) when opening the solution, but
TechTalk.SpecFlow.dll is located in the bin\Debug folder of the project. In this case, Visual Studio may
load the assembly from the bin\Debug folder instead of waiting for the NuGet folder to be properly restored. Once
this has happened, Visual Studio remembers that it loaded the assembly from bin\Debug, so reopening the solution
may not solve this issue. The best way to fix this issue is as follows:
1. Make sure the NuGet folders are properly restored.
2. Close Visual Studio.
3. Delete the bin\Debug folder from your project(s).
4. Reopen your solution in Visual Studio.
Tests are not displayed in the Test Explorer window when using SpecFlow+ Runner, even after
restoring the NuGet package
The SpecRun.Runner NuGet package that contains the Visual Studio Test Explorer adapter is a solution-level
package (registered in the .nuget\packages.config file of the solution). In some situations, NuGet package
restore on build does not restore solution-level packages.
To fix this, open the NuGet console or the NuGet references dialog and click on the restore packages button. You may
need to restart Visual Studio after restoring the packages.
VS2015: Tests are not displayed in the Test Explorer window when using SpecFlow+ Runner
It seems that VS2015 handles solution-level NuGet packages differently (those registered in the .
nuget\packages.config file of the solution). As a result, solution-level NuGet packages must be listed in
the projects that use them, otherwise Test Explorer cannot recognise the test runner.
To fix this issue, either re-install the SpecFlow+ Runner NuGet packages, or add the dependency on the SpecRun.
Runner package (<package id="SpecRun.Runner" version="1.2.0" />) to the packages.config file
of your SpecFlow projects. You might need to restart Visual Studio to see your tests.
2.62.3 When trying to run my tests in Visual Studio, I receive a Missing [assem-
bly:GeneratorPlugin] attribute error. How can I solve this?
Sample output:
Missing [assembly:GeneratorPlugin] attribute in SpecFlow.Plus.Excel.SpecFlowPlugin.dll
#error TechTalk.SpecFlow.Generator
#error Server stack trace:
#error at TechTalk.SpecFlow.Generator.Plugins.GeneratorPluginLoader.
˓→LoadPlugin(PluginDescriptor pluginDescriptor)
...
If you are receiving this error, try setting the Generation Mode in SpecFlow to “OutOfProcess”. To do so:
1. Select Tools | Options from the menu in Visual Studio.
2. Select SpecFlow from the list on the left.
3. Locate the Generation Mode setting and set it to “OutOfProcess”.
2.62.4 After upgrading to SpecFlow 2 from 1.9, I get the message “Trace listener
failed. -> The ScenarioContext.Current static accessor cannot be used in
multi-threaded execution. Try injecting the scenario context to the binding
class”
Make sure you have regenerated the .feature.cs files after upgrading. If you do not do this, you will receive this
exception when accessing ScenarioContext.Current.
To regenerate these files:
• Open a feature file in your solution. If you see a popup informing you that the feature files were generated with
an earlier version of SpecFlow, click on Yes to regenerate these files. Depending on the size of your project, this
may take a while.
• If you are using an earlier version of Visual Studio, you need to force the feature files to be regenerated. Right-
click on your project, and select Regenerate Feature Files from the menu.
2.63.1 Using the “Rule” Gherkin keyword breaks syntax highlighting in Visual Stu-
dio
The Visual Studio extension does not yet support the “Rule” Gherkin keyword, and using this keyword will stop
syntax highlighting from working in Visual Studio. Syntax highlighting for the “Rule” keyword will be added in a
future release.
If you are using Deveroom, do not install the SpecFlow Visual Studio extension; you should only install one of
these 2 extensions.
If Visual Studio displays the error message Cannot find custom tool
'SpecFlowSingleFileGenerator' on this system. when right-clicking on a feature file and
selecting Run Custom Tool, make sure the SpecFlow extension is enabled.
To enable the extension in Visual Studio, select Tools | Extensions and Updates. . . , select the “SpecFlow for Visual
Studio” extension, then select Enable.
If the error still occurs, select Tools | Options | SpecFlow and set Enable SpecFlowSingleFileGenerator Custom
Tool to ‘True’.
You can enable traces for SpecFlow. Once tracing is enabled, a new SpecFlow pane is added to the output window
showing diagnostic messages.
To enable tracing, select Tools | Options | SpecFlow from the menu in Visual Studio and set Enable Tracing to
‘True’.
2.64.4 Troubleshooting
Steps are not recognised even though there are matching step definitions
The SpecFlow Visual Studio integration caches the binding status of step definitions. If the cache is corrupted, steps
may be unrecognised and the highlighting of your steps may be wrong (e.g. bound steps showing as being unbound).
To delete the cache:
1. Close all Visual Studio instances.
2. Navigate to your %TEMP% folder and delete any files that are prefixed with specflow-stepmap-, e.g.
specflow-stepmap-SpecFlowProject-607539109-73a67da9-ef3b-45fd-9a24-6ee0135b5f5c.
cache.
3. Reopen your solution.
You may receive a more specific error message if you enable tracing (see above).
Tests are not displayed in the Test Explorer window when using SpecFlow+ Runner
Note: As of Visual Studio 2017 15.7 the temporary files are no longer used. The following only applies to earlier
versions of Visual Studio.
The Visual Studio Test Adapter cache may also get corrupted, causing tests to not be displayed. If this happens, try
clearing your cache as follows:
1. Close all Visual Studio instances
2. Navigate to your %TEMP%\VisualStudioTestExplorerExtensions\ folder and delete any sub-
folders related to SpecFlow/SpecRun, i.e. that have “SpecFlow” or “SpecRun” in their name.
3. Reopen your solution and ensure that it builds.
"Unable to find plugin in the plugin search path: SpecRun" when saving / generating feature files
SpecFlow searches for plugins in the NuGet packages folder. This is detected relative to the reference to TechTalk.
SpecFlow.dll. If this DLL is not loaded from the NuGet folder, the plugins will not be found.
A common problem is that the NuGet folder is not yet ready (e.g. not restored) when opening the solution, but
TechTalk.SpecFlow.dll is located in the bin\Debug folder of the project. In this case, Visual Studio may
load the assembly from the bin\Debug folder instead of waiting for the NuGet folder to be properly restored. Once
this has happened, Visual Studio remembers that it loaded the assembly from bin\Debug, so reopening the solution
may not solve this issue. The best way to fix this issue is as follows:
1. Make sure the NuGet folders are properly restored.
2. Close Visual Studio.
3. Delete the bin\Debug folder from your project(s).
4. Reopen your solution in Visual Studio.
Tests are not displayed in the Test Explorer window when using SpecFlow+ Runner, even after
restoring the NuGet package
The SpecRun.Runner NuGet package that contains the Visual Studio Test Explorer adapter is a solution-level
package (registered in the .nuget\packages.config file of the solution). In some situations, NuGet package
restore on build does not restore solution-level packages.
To fix this, open the NuGet console or the NuGet references dialog and click on the restore packages button. You may
need to restart Visual Studio after restoring the packages.
VS2015: Tests are not displayed in the Test Explorer window when using SpecFlow+ Runner
It seems that VS2015 handles solution-level NuGet packages differently (those registered in the .
nuget\packages.config file of the solution). As a result, solution-level NuGet packages must be listed in
the projects that use them, otherwise Test Explorer cannot recognise the test runner.
To fix this issue, either re-install the SpecFlow+ Runner NuGet packages, or add the dependency on the SpecRun.
Runner package (<package id="SpecRun.Runner" version="1.2.0" />) to the packages.config file
of your SpecFlow projects. You might need to restart Visual Studio to see your tests.
In order to improve the quality of SpecFlow and understand how it is used, we collect anonymous usage data via
Azure Application Insights, an analytics platform provided by Microsoft. We do not collect any personally identifiable
information, but information such as the Visual Studio version, operating system, MSBuild version, target frameworks
etc.
For more details on the information collected by Application Insights, see our privacy policy.
You can disable these analytics as follows:
• Select Tools | Options from the menu in Visual Studio, and navigate to SpecFlow in the list on the left. Set
Opt-Out of Data Collection to “True”. This disables analytics collected by the Visual Studio extension (see
“SpecFlow Visual Studio Extension” in the privacy policy for details).
• Define an environment variable called SPECFLOW_TELEMETRY_ENABLED and set its value to 0. This
disables all analytics, i.e. those collected by both the extension and SpecFlow itself (see “SpecFlow” and
“SpecFlow Visual Studio Extension” in the privacy policy for details)